{"page_content": "\n\nWhat's new in Striim Cloud 4.2.0Skip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 What's new in Striim Cloud 4.2.0PrevNextWhat's new in Striim Cloud 4.2.0Looking for the Striim Platform documentation? Click here.The following features are new in Striim Cloud 4.2.0.Web UIResource usage policies can help prevent web UI slowdown (see Resource usage policies).The Apps page has improvements in filtering, sorting, monitoring, and organizing apps.ROUTER components may be created in the Flow Designer (see CREATE ROUTER).Application developmentWizards support initial schema creation for Salesforce, Salesforce Pardot, and ServiceNow sources.Schema evolution supports TRUNCATE TABLE for additional sources and targets (see Handling schema evolution).Schema evolution supports ALTER TABLE ... ADD PRIMARY KEY. and ALTER TABLE ... ADD UNIQUE for MariaDB and MySQL (see Handling schema evolution).Sources and targetsDatabase Reader can use Azure private endpoints for Azure Database for MySQL (see Using Azure private endpoints).Database ReaderGCS Reader reads from Google Cloud Storage.Salesforce Pardot Reader reads Salesforce Pardot sObjects.Salesforce Reader supports JWT Bearer Flow authentication.Azure Event Hub Writer can use Azure private endpoints (see Using Azure private endpoints).ADLS Gen2 Writer can use Azure private endpoints (see Using Azure private endpoints).BigQuery Writer supports parallel requests when using the Storage Write API and allows specifying HttpTransportOptions timeouts in TQL (see BigQuery Writer properties).Database Writer can use Azure private endpoints for Azure Database for MySQL (see Using Azure private endpoints).Database WriterMongoDB Cosmos DB Writer can use Azure private endpoints (see Using Azure private endpoints).MongoDB Writer:Supports exactly-once processing (see notes for the Checkpoint Collection property).Shard key updates have been added to the available Ignorable Exception Code property values.Can use Azure private endpoints (see Using Azure private endpoints).ServiceNow Writer writes to tables in ServiceNow.Change data captureMongoDB Reader:Supports MongoDB versions up to 6.3.x.In Incremental mode, a single MongoDB Reader can read from an entire cluster using a +srv connection URL.With MongoDB 4.2 and later, reads from change streams (see MongoDB Manual > Change Streams) instead of the oplog.When reading from change streams, supports transactions and unset operations, and provides additional metadata.Can select documents based on queries (see Selecting documents using MongoDB Config).Can use Azure private endpoints (see Using Azure private endpoints).MSJet supports compressed tables and indexes (see Learn / SQL / SQL Server / Enable Compression on a Table or Index).MySQL Reader can use Azure private endpoints (see Using Azure private endpoints).Oracle Reader supports Oracle Database 21c.Administration, monitoring, and alertsResource usage policies can help prevent issues such as running out of memory or disk space to cause applications to halt (see Resource usage policies).Cluster-level Smart 
Alerts can be modified in the web UI (see Managing Smart Alerts).Vaults support Google Secrets Manager (see Using vaults).You can configure access to the following endpoints using Google Private Service Connect (see Using Private Service Connect with Google Cloud adapters and Connecting to VMs or databases in Google Cloud using Private Service Connect):Cloud SQL for MySQLCloud SQL for PostgresGoogle SpannerGoogle BigQueryGoogle Cloud StorageIn this section: What's new in Striim Cloud 4.2.0Web UIApplication developmentSources and targetsChange data captureAdministration, monitoring, and alertsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-07-07\n", "metadata": {"source": "https://www.striim.com/docs/en/what-s-new-in-striim-cloud-4-2-0.html", "title": "What's new in Striim Cloud 4.2.0", "language": "en"}} {"page_content": "\n\nWhat is Striim?Skip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 What is Striim?PrevNextWhat is Striim?Striim is a complete, end-to-end, in-memory platform for collecting, filtering, transforming, enriching, aggregating, analyzing, and delivering data in real time. Built-in adapters collect data from and deliver data to SQL and no-SQL databases, data warehouses, applications, files, messaging systems, sensors, and more (see Sources and Targets for a complete list), on your premises or in the cloud. Integrated tools let you visualize live data in dashboards, explore it with SQL-like queries, and trigger alerts of anomalous conditions or security violations.Collecting dataStriim ingests real-time streaming data from a variety of sources including databases, logs, other files, message queues, and devices. Sources are defined and configured using our TQL scripting language or the web UI via a simple set of properties. We also provide wizards to simplify creating data flows from common sources to popular targets.Striim does not wait for files to be completely written before processing them in a batch-oriented fashion. Instead, the reader waits at the end of the file and streams out new data as it is written to the file. As such, it can turn any set of log files into a real-time streaming data source.Similarly, Striim's database readers do not have to wait for a database to completely ingest, correlate, and index new data before reading it by querying tables. Instead, using a technology known as Change Data Capture (CDC), Striim non-intrusively captures changes to the transaction log of the database and ingests each insert, update, and delete as it happens.A range of other sources are also available, including support for IoT and device data through TCP, UDP, HTTP, MQTT, and AMQP, network information through NetFlow and PCAP, and other message buses such as JMS, MQ Series, and Flume. 
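To make this concrete, here is a minimal TQL sketch of a file-based source of the kind described above. The directory, file pattern, and stream name are hypothetical, and the exact adapter and parser property names should be checked against the FileReader and DSVParser references.

    -- Tail a set of CSV access logs and emit each new line as an event
    CREATE SOURCE AccessLogSource USING FileReader (
        directory: '/var/log/web',
        wildcard: 'access*.log',
        positionByEOF: false
    )
    PARSE USING DSVParser (
        columndelimiter: ',',
        header: true
    )
    OUTPUT TO RawAccessStream;

A CDC source such as MySQL Reader or Oracle Reader follows the same CREATE SOURCE ... OUTPUT TO pattern, with adapter-specific connection properties in place of the file and parser settings.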
Processing data

Typically, you will want to filter your source data to remove everything but that which matches certain criteria. You may need to transform the data through string manipulation or data conversion, or send only aggregates to prevent data overload. You may need to add additional context to the data: a lot of raw data may need to be joined with additional data to make it useful.

Striim simplifies these crucial data processing tasks (filtering, transformation, aggregation, and enrichment) by using in-memory continuous queries defined in TQL, a language with constructs familiar to anyone with experience using SQL. Filtering is just a WHERE clause. Transformations are simple and can utilize a wide selection of built-in functions, CASE statements, custom Java functions, and other mechanisms. Aggregations utilize flexible windows that turn unbounded data streams into continuously changing bounded sets of data. The queries can reference these windows and output data continuously as the windows change. This means a one-minute moving average is just an average function over a one-minute sliding window.

Enrichment uses external data introduced into Striim through the use of distributed caches (also known as data grids). Caches can be loaded with large amounts of reference data, which is stored in memory across the cluster. Queries can reference caches in a FROM clause the same way they reference streams or windows, so joining against a cache is simply a join in a TQL query.

Multiple stream sources, windows, and caches can be used and combined together in a single query, and queries can be chained together in directed graphs, known as data flows. All of this can be built through the UI or our scripting language, and can be easily deployed and scaled across a Striim cluster, without having to write any additional code.
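As a sketch of how these pieces fit together in TQL, the fragment below shows a WHERE-clause filter, a one-minute moving average over a sliding window, and a cache join. All type, stream, window, and cache names are hypothetical, output streams are assumed to be created implicitly by the queries that insert into them, and the property names should be verified against the adapter references.

    CREATE TYPE OrderType ( merchantId String, amount double, ts DateTime );
    CREATE STREAM OrderStream OF OrderType;

    -- Filtering is just a WHERE clause
    CREATE CQ FilterLargeOrders
    INSERT INTO LargeOrderStream
    SELECT o.merchantId, o.amount, o.ts
    FROM OrderStream o
    WHERE o.amount > 100;

    -- A one-minute moving average is an aggregate over a one-minute sliding window
    CREATE WINDOW OrdersLastMinute
    OVER LargeOrderStream KEEP WITHIN 1 MINUTE ON ts
    PARTITION BY merchantId;

    CREATE CQ MerchantOneMinuteAverage
    INSERT INTO MerchantAverageStream
    SELECT w.merchantId, AVG(w.amount) AS avgAmount
    FROM OrdersLastMinute w
    GROUP BY w.merchantId;

    -- Enrichment: join the stream against a cache of reference data in the FROM clause
    CREATE TYPE MerchantInfo ( merchantId String, merchantName String, zip String );
    CREATE CACHE MerchantLookup USING FileReader (
        directory: '/data/reference',
        wildcard: 'merchants.csv'
    )
    PARSE USING DSVParser ( header: true )
    QUERY ( keytomap: 'merchantId' )
    OF MerchantInfo;

    CREATE CQ EnrichOrders
    INSERT INTO EnrichedOrderStream
    SELECT o.merchantId, m.merchantName, m.zip, o.amount, o.ts
    FROM LargeOrderStream o, MerchantLookup m
    WHERE o.merchantId = m.merchantId;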
Analyzing data

Striim enables you to analyze data in memory, the same way you process it: through SQL-like continuous queries. These queries can join data streams together to perform correlation, and look for patterns (or specific sequences of events over time) across one or more data streams utilizing an extensive pattern-matching syntax.

Continuous statistical functions and conditional logic enable anomaly detection, while built-in regression algorithms enable predictions into the future based on current events.

Analytics can also be rooted in understanding large datasets. Striim customers have integrated machine learning into data flows to perform real-time inference and scoring based on existing models. This utilizes Striim in two ways. First, Striim can prepare and deliver source data to targets in your desired format, enabling the real-time population of the raw data used to generate machine learning models. Then, once a model has been constructed and exported, you can easily call the model from our SQL, passing real-time data into it, to infer outcomes continuously. The end result is a model that can be frequently updated from current data, and a real-time data flow that matches new data to the model, spots anomalies or unusual behavior, and enables faster responses.

Visualizing data

The final piece of analytics is visualizing and interacting with data. Striim includes a dashboard builder that lets you easily build custom, use-case-specific visualizations to highlight real-time data and the results of analytics. With a rich set of visualizations and simple query-based integration with analytics results, dashboards can be configured to continually update and to enable drill-down and in-page filtering.

Delivering data

Striim can write continuously to a broad range of data targets, including databases, files, message queues, Hadoop environments, and cloud data stores such as Azure Blob Storage, Azure SQL DB, Amazon Redshift, and Google BigQuery (see Targets for a complete list). For targets that don't require a specific format, you may choose to format the output as Avro, delimited text, JSON, or XML. As with sources, targets are defined and configured using our TQL scripting language or the web UI via a simple set of properties, and wizards are provided for creating apps with many source-target combinations.

A single data flow can write to multiple targets at the same time in real time, with rules encoded as queries in between. For example, you can source data from Kafka and write some or all of it to Hadoop, Azure SQL DB, and your enterprise data warehouse simultaneously.
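Continuing the hypothetical example above, delivery in TQL is just one or more targets reading from the same stream. The adapter and formatter names below (FileWriter, JSONFormatter, DSVFormatter) and their properties are illustrative only; check the target adapter references for the exact property lists, and substitute the cloud or database writers you actually use.

    -- Write the enriched events as JSON to files
    CREATE TARGET EnrichedOrdersToJson USING FileWriter (
        filename: 'enriched_orders'
    )
    FORMAT USING JSONFormatter ()
    INPUT FROM EnrichedOrderStream;

    -- The same stream can feed a second target in the same data flow,
    -- for example as delimited text for a downstream batch process
    CREATE TARGET EnrichedOrdersToCsv USING FileWriter (
        filename: 'enriched_orders_csv'
    )
    FORMAT USING DSVFormatter ()
    INPUT FROM EnrichedOrderStream;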
Putting it all together

Enabling all of these things in a single platform requires multiple major pieces of in-memory technology that have to be integrated seamlessly and tuned in order to be enterprise-grade. This means you have to consider the scalability, reliability, and security of the complete end-to-end architecture, not just a single piece.

Joining streaming data with data cached in an in-memory data grid, for example, requires careful architectural consideration to ensure all pieces run in the same memory space so joins can be performed without expensive and time-consuming remote calls. Continually processing and analyzing hundreds of thousands, or millions, of events per second across a cluster in a reliable fashion is not a simple task, and can take many years of development time.

Striim has been architected from the ground up to scale, and Striim clusters are inherently reliable, with failover, recovery, and exactly-once processing guaranteed end to end, not just in one slice of the architecture.

Security is also treated holistically, with a single role-based security model protecting everything from individual data streams to complete end-user dashboards.

With Striim, you don't need to design and build a massive infrastructure, or hire an army of developers to craft your required processing and analytics. Striim enables data scientists, business analysts, and other IT and data professionals to get right to work without having to learn and code against APIs.

See our web site for additional information about what Striim is and what it can do for you: http://www.striim.com/products

Deploying and managing Striim Cloud

Subscribe to Striim in the AWS Marketplace

1. In the AWS Marketplace, search for Striim Cloud and click it.
2. To evaluate Striim Cloud, select Try for free > Create contract > Set up your account. (Alternatively, to sign up for a one-year contract, select View purchase options and follow the instructions.)
3. In the Sign up for Striim Cloud dialog, enter your name, email address, company name, your desired sub-domain (part of the URL where you will access Striim Cloud), and password, then click Sign up.
4. When you receive the Striim Cloud | Activate your account email, open it and click the activation link.
Subscribe to Striim in the Microsoft Azure Marketplace

1. In the Azure Marketplace, search for Striim Cloud and click it.
2. Click Get It Now, check the box to accept Microsoft's terms, and click Continue.
3. Select a plan, then click Subscribe.
4. Select one of your existing resource groups or create a new one, enter a name for this subscription, and click Review + subscribe.
5. Click Subscribe.
6. When you receive an "Activate your Striim Cloud Enterprise" email from Microsoft AppSource, open it and click Activate now.

Subscribe to Striim in the Google Cloud Platform Marketplace

1. In the Google Cloud Platform Marketplace, search for Striim Cloud and click it.
2. Scroll down to Pricing, select a plan, and click Select.
3. Scroll down to Additional terms, check to accept them all, and click Subscribe.
4. Click Register with Striim Inc., then follow the instructions to complete registration. Make a note of the domain and password you enter.
5. When you receive the Striim Cloud | Activate your account email, open it and click the activation link.
6. Enter your email address and password, then click Sign up.
7. You will receive another email with information about your subscription.
Create a Striim Cloud service

1. Select the Marketplace tab and under Striim Cloud click Create.
2. Enter a Service Name for your service.
3. Select the Region appropriate for your location.
4. Optionally, click Show Advanced Options.
5. Optionally, for Cluster Type, select a larger instance with more cores and memory.
6. Optionally, check Create a Kafka persistent stream cluster. This is necessary for Running the CDC demo apps and for using Kafka streams. If you leave this unchecked at this time, you may add Kafka later as detailed in Using Kafka-persisted streams in Striim Cloud.
7. When done setting options, click Create.

Using an SSH tunnel to connect to a source or target

Note: This feature is available only in Striim Cloud, not in Striim Platform.

When you need to connect to a source or target through a jump server, set up an SSH tunnel as follows.

1. In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.
2. Next to the service, click ... and select Security.
3. Click Create New Tunnel and enter the following:
   - Name: choose a descriptive name for this tunnel
   - Jump Host: the IP address or DNS name of the jump server
   - Jump Host Port: the port number for the tunnel
   - Jump Host Username: the jump host operating system user account that Striim Cloud will use to connect
   - Database Host: the IP address or DNS name of the source or target database
   - Database Port: the port for the database
4. Click Create Tunnel. Do not click Start yet.
5. Under Public Key, click Get Key > Copy Key.
6. Add the copied key to your jump server's authorized keys file, then return to the Striim Cloud Security page and click Start. The SSH tunnel will now be usable in the source or target settings.
7. Give the user specified for Jump Host Username the necessary file system permissions to access the key.
8. Under Tunnel Address, click Copy to get the string to provide as the host name.
Using Azure private endpoints

Striim can connect with Microsoft Azure services using private endpoints in Azure. For services managed in Azure you connect using a resource ID, while for external services you connect through the Azure Private Link service. You can also connect with on-premise databases that are connected to Azure using Private Link.

For an introduction to Azure private endpoints and Azure Private Link, see the Microsoft documentation topics:
- What is a private endpoint?
- What is Azure Private Link?
- What is Azure Load Balancer?

Using private endpoints has been certified with the following Microsoft services:
- Azure Data Lake Storage Gen2
- Azure Cosmos DB for MongoDB
- Azure Database for MySQL
- Azure Event Hub

Using private endpoints has been certified with the following non-Microsoft service:
- MongoDB Atlas

Note: Your Striim Cloud bill can increase when you enable Azure Private Link as a result of increased compute and data transfer costs. For details, contact your Striim account representative.

Prerequisites

You may need permissions in Azure to create a database, virtual machine, standard load balancer, Azure Private Link service, or private endpoint. You may also need permission to approve the endpoints created. Some Microsoft services auto-approve private endpoints.

Before configuring Striim Cloud, do the following in Azure.

For Microsoft services: Get the Resource ID for the Azure-managed service. The Resource ID can be obtained by navigating to the resource in the Azure Portal, selecting Properties, and copying the ID field (its tooltip says Resource ID).

For MongoDB Atlas: Create a private endpoint from the MongoDB Atlas endpoint page. This creates a Private Link service that has a Resource ID attached to it. Once you configure the resource ID in the Striim Console, you will receive an email from Striim that contains the Resource ID and IP address of the private endpoint. You will use these values to configure the private endpoint in MongoDB Atlas. See Quickstart: Create a Private Link service by using the Azure portal, and see What is a private endpoint? and related topics.

Configuring an Azure private endpoint in Striim Cloud

1. Make sure the Striim Cloud service is running.
2. In the Striim Cloud Console, select the Services tab, then select More > View Details > Secure connection for the Striim Cloud service.
3. In the Private Endpoints section, click Create Private Endpoint and enter the following:
   - Name: a unique name for your private endpoint.
   - Service Alias: for Microsoft services, enter the resource ID for the service. For MongoDB Atlas, enter the alias from the Private Link Service page in the Azure Portal (see the Microsoft documentation topic What is Azure Private Link service?, under Alias).
4. Click Create Private Endpoint.

The new private endpoint will be in the Creating state while connecting to Azure. For MongoDB Atlas only, it will then be in the Pending state until you provide the Resource ID and IP address that you receive in an email from Striim, at which point the private endpoint is auto-approved and moves to the Running state. Other services may require approval before going to the Running state.

Specifying Azure private endpoints in sources and targets

For ADLS Gen2 Writer or Azure Event Hub Writer, if a running Striim Cloud private endpoint is associated with the same service as the SAS key specified in the adapter properties, the adapter will use it automatically.

For MongoDB Reader or MongoDB Writer with MongoDB Atlas, obtain the connection string URL from MongoDB Atlas and use this URL in the TQL to connect with the private endpoint. In the MongoDB Atlas Database home page, click Connect, click Connect with MongoDB Compass, and copy the provided connection string.

For Database Reader, Database Writer, or MySQL Reader with Azure Database for MySQL:

1. In the Striim Cloud Console, select the Services tab, then select More > View Details > Secure connection for the Striim Cloud service.
2. In the Private Endpoints section, copy the appropriate FQDN value and use it in place of the IP address, host name, or network name in the adapter's Connection URL property value (see the sketch below).
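For example, a Database Reader source for Azure Database for MySQL might reference the private endpoint's FQDN in its Connection URL roughly as follows. The FQDN, database, table, and credential values are placeholders, and the full property list is documented in the Database Reader reference.

    -- Initial load over an Azure private endpoint; replace <private-endpoint-FQDN>
    -- with the FQDN copied from the Private Endpoints section
    CREATE SOURCE MySQLOverPrivateEndpoint USING DatabaseReader (
        ConnectionURL: 'jdbc:mysql://<private-endpoint-FQDN>:3306/inventory',
        Username: 'striim_user',
        Password: '********',
        Tables: 'inventory.%'
    )
    OUTPUT TO InventoryStream;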
Private Service Connect support

Google's Private Service Connect allows private services to be securely accessed from Virtual Private Cloud (VPC) networks without exposing the services to the public internet (for more information, see Virtual Private Cloud > Documentation > Guides > Private Service Connect). You can use Private Service Connect to access managed services across VPCs or to access Google APIs and services.

For information on how Striim supports Private Service Connect, see Using Private Service Connect with Google Cloud adapters.

Adding users to a Striim Cloud service

1. In the Striim Cloud Console, go to the Users page and click Invite User.
2. Enter the new user's email address, select the appropriate role (see the text of the drop-down for details), and click Save.
3. The new user will receive an email with a signup link. Once they have signed up, their status will change from Pending to Activated.
4. Once the new user is activated, select ... > Edit, add the service(s) you want them to have access to, and click Save.
5. If you want the new user to share existing namespaces or applications, launch the service, go to the Users page, and assign the user the appropriate roles (see Managing users, permissions, and roles).
Using Kafka-persisted streams in Striim Cloud

When you create a Striim Cloud service you can select whether to include a managed Kafka instance, which is required to use Kafka-persisted streams (see Persisting a stream to Kafka). If you chose not to, you can create it later by going to the Striim Cloud Console's Services page and selecting More > View Details > Persistent Streams > Attach (this will increase your charges).

Enabling OJet on Striim Cloud

1. Contact Striim support to enable your Striim Cloud instance to run OJet.
2. Download setupOjet.tar.gz from Striim Cloud: select Download Utilities from the menu.
3. Follow the instructions in Running the OJet setup script on Oracle.
4. To connect your Oracle instance to Striim using an SSH tunnel, see Using an SSH tunnel to connect to a source or target.
5. To connect your Oracle instance to Striim using a reverse SSH tunnel or VPN, contact Striim support.
Scheduling service stop and restart times

You can define the timeframes when your Striim Cloud service is scheduled to automatically stop and restart. This allows you to run the service on a specific schedule; for example, you may want to configure your service to run between 9am and 5pm on weekdays and be in a stopped state for the remainder of the week. The minimum time between the start and stop of a service is one hour.

Stopping or restarting a service takes a few minutes. For example, when you schedule a service to stop at 5pm, Striim initiates the stop at the specified time, and the stop takes a few minutes to complete. Similarly, when you schedule a service to restart at 7am, Striim starts creating the service at 7am; creation takes a few minutes, and the service is ready to use after it goes into the Running state.

To configure your stop and restart schedule, follow these steps:

1. In the Striim Cloud Console, if the service is not running, start it and wait for its status to change to Running.
2. On the Services page, select More > View Details > Advanced configuration > Schedule your service.
3. In the graph, select a time period, or click Edit to adjust the stop and restart days and times.
4. Add any additional stop and restart periods as needed.
5. Click Save.

Metering in Striim Cloud

Striim Cloud subscriptions are prepaid for the term of your commitment. Each subscription includes a certain number of credits that are debited based on your usage. How fast those credits are consumed depends on how many events you read and write, the amount of data you transfer, the compute resources that you use, and other factors.
You can track your usage history and monitor usage using budget alerts to avoid exceeding your budget. For more information on usage charges, see Understanding usage charges.

Information available in the Metering page

The Metering page allows you to monitor the usage of your account and services. To access it, select Metering from the Striim Cloud Console. The Usage overview tab provides a summary of current usage. You can access more granular data from the Usage history tab, which drills down into the detailed usage within a Striim Cloud service. Finally, you can track your Striim Cloud credit usage with Budget alerts.

You can find the following information in the tabs on the Metering page:

- Usage overview: Provides important information about your account's usage for the current and previous billing cycles, including:
  - Striim Enterprise credit account: Provides the monthly usage for the current and previous month, the dates of those billing cycles, the current subscription expiration date, and the remaining Striim Credits.
  - Usage details: Provides a graph of usage for the selected time period for each of your services.
- Usage history: Provides usage statistics for the selected time period for each of your services. You can filter to see the history for a specific service. Selecting a time period and then choosing Download PDF generates a file containing a breakdown of the usage charges for that time period. For more information on the types of charges in the report, see Understanding usage charges.
- Budget alerts: Allow you to monitor your Striim Cloud credit usage and set alerts at both a service and account level. You receive alert notifications when your usage passes the thresholds you configure. For more information on setting budget alerts, see Creating budget alerts.

Viewing Striim Credit usage details

You can use the Usage history page to access a detailed report that shows how you use Striim Credits. For more information on Striim Credits, see Understanding usage charges.

1. Click Metering, then select the Usage history tab.
2. Click View detail in the Action column.
3. Review the number of Striim Credits you used for the current and previous months.
4. Scroll down to see usage by adapter, and the number of read and write events.

Creating budget alerts

Budget alerts allow you to track your credit usage in Striim Cloud and notify you when a budget crosses a threshold. Budgets are a usage limit that you expect the service or account to use. Budgets let you track your actual spending against your planned spending. When the budget crosses a threshold that you configure, email and in-app notifications are triggered. None of your services are affected by the triggering of an alert, and their consumption isn't stopped.

By creating budget alerts for your account or services, you are alerted as soon as your Striim Credit usage exceeds the thresholds you've configured for your account or service. You can set budget alerts at both the credit-account level and the service level. You can create a maximum of one budget alert per account or service. When your usage exceeds the alert threshold you configured for your account or service, you receive an in-app notification and each of your Striim Cloud admins receives a notification by email.

Setting budget alerts for your account or service

To set budget alerts for your Striim Cloud credit account or service:

1. In the Striim Cloud Console, go to the Metering page.
2. Select the Budget alerts tab.
3. Click Create new.
4. Configure the following options for the budget alert:
   - Budget Name: Enter a unique name for the budget alert.
   - Select Budget Scope: Specify Credit Account Level (for your entire credit account) or Service Level (for a single service).
   - Select Credit Account: Select the credit account for which you want budget alerts.
   - Select Service: (Configured only when you select Service Level for your budget scope.) Select the service in your account for which you want budget alerts.
   - Select Budget Frequency: Select Daily or Monthly frequency.
   - Budget (Striim Credits): Specify your total Striim Credit budget for the daily or monthly frequency. The budget alert will trigger when your usage exceeds a specified threshold of this budget.
   - Add Threshold for Alerts: Specifies the usage threshold of your budget above which you want to receive an alert. The default threshold is 50%. When you select the Credit Account Level budget scope, this threshold represents the total usage for all services in your account. For the Service Level scope, it represents the usage for that single service. For example, a customer wants to use no more than 500 credits for all services in their Striim Cloud credit account. The customer sets the budget scope to Credit Account Level and sets a threshold of 50% on a budget of 1,000 Striim Credits. The customer will receive a budget alert when the usage for all the services within their Striim Cloud credit account exceeds 500 credits.
5. Click Create. You will see the message Successfully created budget, and the alert will be listed in the Budget alerts tab.

Understanding usage charges

Striim Cloud subscriptions are prepaid for the term of your commitment. Each subscription includes a certain number of credits that are debited based on your usage. How fast those credits are consumed depends on how many events you read and write, the amount of data you transfer, the compute resources that you use, and other factors.

On the AWS Marketplace, Azure Marketplace, or Google Cloud Platform Marketplace, you can view pricing details for the available Striim Cloud Enterprise credit plans. The Pricing tab of each plan lists the rates at which events are billed for standard and premium adapters, the charges for the machine used at a rate per core per hour, and charges for inbound and outbound data transfer.

Comparing plans

The plans vary based not only on the number of Striim credits provided, but also on the rates at which you are charged for events processed, which can be a significant component of your cost. For example, the Enterprise-3500 plan charges you 0.50 Striim credits per core per hour for compute, 0.10 Striim credits per GB for inbound or outbound data transfer, 80 Striim credits to process 1 million events for standard adapters, and 160 Striim credits to process 1 million events for premium adapters. By comparison, the Enterprise-7000 plan would be appropriate if you needed to process a greater number of events: not only does it provide a greater number of Striim credits, but its rates to process events for standard and premium adapters are significantly lower, at 22 credits and 44 credits per 1 million events, respectively.

A Striim credit is equal to one dollar. Striim plans are based on a yearly subscription with a fixed number of Striim credits. As you compare your estimated usage to your actual usage, you can choose to increase your subscription to a larger plan at any time.

Types of charges

The following prices are examples only and subject to change.

Compute charges

Compute charges are based on a rate per core per hour and will vary based on the machine type selected for your Striim Cloud instances and how much your instances run over a month. For example, an XS (extra small) machine with 4 vCPUs and 32 GB memory costs 2 Striim credits per hour (calculated as 4 cores x 0.5 credits per core). If that machine runs for an entire month (calculated as 730 hours for an average month), the compute cost will be $1,460 per month, not including the cost of processing events or transferring data.

Machine types are available in sizes XS, S, M, and L. When selecting a machine size, consider the number of monthly events you expect to process. Striim can provide guidance on the appropriate machine type. You can upgrade your machine type when needed in your Striim Cloud account.

When pricing out the compute cost, you may want to calculate machine costs for both a production machine (which runs workloads for a known number of hours per month) and a non-production machine (which runs for a custom number of hours per month based on need).

Events charges

You also pay for the events that you read and write. Suppose that you have an Oracle source and a Snowflake target, with 1,000,000 events being read and 1,000,000 events being written. The cost would be 100 credits for the standard adapter and 200 credits for the premium adapter, for a total charge of 300 Striim credits for monthly events.

Data transfer charges

For the above example, suppose that the data transfer size of an event is 5 KB. With 2,000,000 events, the data transferred monthly is 10 GB. The data transfer cost for this example would be 1 Striim credit. Data transfer charges are often not significant compared to event charges or compute charges.

Additional feature charges

Using or enabling certain Striim Cloud features, such as persistence, can incur an additional charge and can have a significant impact on your total monthly cost. These costs can be calculated in the Striim Cloud pricing calculator.

Reviewing your bill

Your monthly bill from Striim itemizes all your charges, including usage charges, overages if any, and fixed charges for your plan and any additional features. By regularly reviewing your bill you can determine whether you are subscribed to an appropriate plan or have additional capacity in your plan for more data processing.

Reviewing current usage

You can track your current credit usage in your Striim Cloud account and compare it to your estimate. If you are going to exceed the number of Striim credits in your plan, you may want to subscribe to a higher-tier plan. You will receive an email notice when your remaining credits are less than 20%. You can also configure budget alerts, as described in Creating budget alerts.

Upgrading Striim Cloud

On the Services tab of your Striim Cloud subscription, Upgrade available will appear next to your service to indicate that you may upgrade to the latest release.

Before upgrading:
- Quiesce and undeploy all running applications with persisted streams.
- Stop and undeploy all other running and deployed applications.

To upgrade, on the Services tab of your Striim Cloud subscription, select ... > Upgrade. The service will be unavailable until the upgrade is complete.

Patching Striim Cloud

On occasion, Striim makes available a patch that changes some behavior of your Striim Cloud service. For example, a patch may contain a resolution to an issue identified by Striim support and a customer.
Patches are associated with a specific version of a Striim Cloud service. When a patch is available you are notified by a banner in the Striim Cloud console and can select the patch for installation to your service.

Once you select a patch for installation, the patching operation installs the patch in an automated process, and when it is complete your service's details will show description and version information about the patch you installed. You can revert a patch if you determine you do not want it.

Patching your service

When a patch is available, the Patch Available banner appears next to your service. Before deciding to install a patch, review the patch description and patch version by selecting the Patch Available banner. To patch your service:

1. Locate the Patch Available banner next to a running service.
2. Select Apply Patch from the More menu.
3. In the confirmation dialog, select Apply Patch. The service state changes to Patching.
4. To track progress, go to the service details page. The service enters the Patching state; a message and progress bar indicate the progress.
5. Wait for the service to return to the Running state.
6. Select the Patch Applied banner next to your service to see the details of the patch. Review the details of the applied patch under the Overview tab and Advanced Configuration.

Reverting a patch of your service

Once you have applied a patch, you can revert it if needed. Reverting a patch returns the service to the base version, not to any intermediate patch. For example, if you apply patch1, patch2, and patch3 from base version 1.0.0, when you revert, the service returns to the 1.0.0 version.

1. Locate a service that has a Patch Applied banner next to it.
2. Select Remove Patch from the More menu.
3. In the confirmation dialog, select Remove Patch. The service state changes to Removing Patch.
4. To track progress, go to the service details page. The service enters the Patching state; a message and progress bar indicate removal progress.
5. Wait for the service to return to the Running state. You receive a notification that the patch was successfully removed.

Restoring a patched service to a snapshot

When you have a patched service and want to restore that service to a snapshot, the correct procedure is to first revert the patch for that service and then restore the snapshot of that service. The service is then restored without the patch.
Using Active Directory with Striim Cloud

You can configure Striim Cloud to allow users in your organization to log in using Azure AD single sign-on (SSO). This requires you to create a SAML application in Azure AD, assign that application to your users, and configure Striim Cloud to trust Azure AD as an identity provider (IdP). For more information, see Enable single sign-on for an enterprise application.

To add an enterprise application to your Azure AD tenant, you need an Azure AD user account with one of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.

Creating a SAML application

1. Log into the Azure Portal and choose the Azure Active Directory service.
2. In the left menu, select Enterprise applications and the New application option.
3. Choose the Create your own application option. Enter the name for the Striim application and select the last radio button option, Integrate any other application you don't find in the gallery (Non-gallery). Click Create.
4. On the left panel, under Manage, choose Single sign-on.
5. Choose SAML.
6. Edit the Basic configuration as follows, then click Save:
   - Identifier (Entity ID)
   - Reply URL (Assertion Consumer Service URL): /auth/saml/callback
7. Edit the Attributes and Claims, and click Add a new claim. Create the following attribute statements for first name, last name, and email, then click Next:
   Name        Value
   firstName   user.givenname
   lastName    user.surname
   email       user.mail
8. Download the X509 certificate in base64 format and extract the public certificate. One way to do so is to export the downloaded certificate as a PEM file using the Keychain Access app on a Mac and extract the public certificate from that PEM file.
9. Note the Login URL and AD Identifier URL (you will enter these values in your Striim application).
10. Assign users to the Striim application in Azure AD:
    - Choose the Users and groups option in the left panel under Manage. Select Add user/group, and click the None Selected link.
    - In the left form, enter the name of the user you want to assign the application to and choose Select. Ensure that the user you are assigning has their firstName, lastName, and email (Striim mail ID) attributes set up in their user profile.
    - On the Add assignment page, choose the Assign button.

Configure Striim Cloud to trust Azure AD as an IdP

1. Log into your Striim Cloud account and click User Profile at the top right of the screen.
2. Go to the Login & Provisioning tab.
3. In the Single sign-on section, paste the values noted in the previous procedure into the SSO URL, IDP Issuer, and Public Certificate fields. The Azure Login URL is the SSO URL, the AD Identifier URL is the IDP Issuer, and the X509 certificate you downloaded is the Public Certificate.
4. Click Update configuration.
5. Enable the Single sign-on (SSO) toggle near the top of the page.
6. Test logging in to your Striim Cloud account through Azure AD: log out, then go to the login page and select Sign in with SAML. You will be logged in through Azure AD. Users can access Striim Cloud through the Striim Cloud login page.

Get support for Striim Cloud

Select Create ticket from the menu, fill out the Contact Support form, and click Submit.

Getting Started

This section of the documentation provides an introduction to the platform for new and prospective users.

First, follow the instructions in Deploying and managing Striim Cloud. Once you have done that, you may take the Hands-on quick tour, explore Striim on your own, or run the following demo and sample applications:

- The CDC (change data capture) demo apps highlight Striim's initial load and CDC replication capabilities. A Docker container with a PostgreSQL database is provided to try fast, high-volume data loading to another database, Kafka, or file storage. See Running the CDC demo apps.
See Running the CDC demo apps.The PosApp sample application demonstrates how a credit card payment processor might use Striim to generate reports on current transaction activity by merchant and send alerts when transaction counts for a merchant are higher or lower than average for the time of day. This application is explained in great detail, so is useful for developers who want to learn how to write Striim applications.The MultiLogApp sample application demonstrates how Striim could be used to monitor and correlate logs from web and application server logs from the same web application. For developers who want to learn how to write Striim applications, this builds on the concepts covered in PosApp.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-09\n", "metadata": {"source": "https://www.striim.com/docs/en/getting-started.html", "title": "Getting Started", "language": "en"}} {"page_content": "\n\nCommon Striim use casesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Getting StartedCommon Striim use casesPrevNextCommon Striim use casesStriim is a distributed data integration and intelligence platform that can be used to design, deploy, and run data movement and data streaming pipelines. The following are common business applications for the Striim platform. (Note that these examples include just a small fraction of the thousands of source-target combinations Striim supports.)Cloud adoption, including database migration, database replication, and data distribution. Popular data pipelines for this scenario include:RDBMS to RDBMS, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to homogeneous or heterogeneous databases running on AWS, Google Cloud Platform, Microsoft Azure, or Oracle Cloud.RDBMS to data warehouse, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to Amazon Redshift, Azure Synapse, Databricks, Google BigQuery, or Snowflake.Hybrid cloud data integration, including on-premise to cloud, on-premise to on-premise, cloud to cloud, and cloud to on-premise topologies. 
Popular data pipelines for this scenario include:RDBMS to RDBMS, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to homogeneous or heterogeneous databases running on AWS, Google Cloud Platform, Microsoft Azure, or Oracle Cloud.RDBMS to queuing systems, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to Kafka or cloud-based messaging systems such as Amazon Kinesis, Azure Event Hub, or Google PubSub.Queuing systems to RDBMS, including from Kafka to MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server.RDBMS to cloud-based storage systems, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to Amazon S3, Azure Data Lake Storage, or Google Cloud Storage.Cloud-based storage systems to RDBMS, including from Amazon S3 to MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server.Digital transformation, including real-time data distribution, real-time reporting, real-time analytics, stream processing, operational monitoring, and machine learning. Popular use cases for this scenario include:Real-time alerting and notification for CDC workloads (see the discussion of alerts in Running the CDC demo apps).Running the CDC demo appsStreaming analytics using data windows (see Sample applications for programmers).Running SQL-based continuous queries on moving data pipelines.Creating real-time dashboards on CDC or Kafka workloads.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-02\n", "metadata": {"source": "https://www.striim.com/docs/en/common-striim-use-cases.html", "title": "Common Striim use cases", "language": "en"}} {"page_content": "\n\nRunning the CDC demo appsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Getting StartedRunning the CDC demo appsPrevNextRunning the CDC demo appsThe CDC demo applications demonstrate Striim's data migration capabilities using a PostgreSQL instance in a Docker container.About the CDC demo appsThere are three groups of applications:SamplesDB demonstrates a SQL CDC source to database target pipeline, replicating data from one set of PostgreSQL tables to another set. The two applications that are similar to their real-world equivalents are:PostgresToPostgresInitialLoad150KRows uses Database Reader and Database Writer to replicate 150,000 existing records from the customer, nation, and region tables to the customertarget, nationtarget, and regiontarget tables. In a real-world application, the source and target would typically be different databases. 
For example, the source might be Oracle and the target might be Amazon Redshift; Azure SQL Data Warehouse, PostgreSQL, or SQL DB; Google BigQuery, Cloud SQL, or Spanner; or Snowflake.DatabaseReaderDatabaseWriterPostgresToPostgresCDC uses PostgreSQLReader (see PostgreSQL) and Database Writer to continuously update the target tables with changes to the source.DatabaseWriterSamplesDB2Kafka demonstrates a typical SQL CDC source to database target pipeline, replicating data from a set of PostgreSQL tables to a Kafka topic. The two applications that are similar to their real-world equivalents are:PostgresToKafkaInitialLoad150KRows uses Database Reader and Kafka Writer to replicate 150,000 existing records from the PostgreSQL customer, nation, and region tables to messages in a Kafka topic called kafkaPostgresTopic. In a real-world application, the target would be an external Kafka instance, either on-premise or in the cloud.DatabaseReaderPostgresToKafkaCDC uses PostgreSQLReader (see PostgreSQL) and Kafka Writer to continuously update the Kafka topic with changes to the PostgreSQL source tables. Note that updates and deletes in PostgreSQL create new messages in Kafka rather than updating or deleting previous messages relating to those rows.SamplesDB2File demonstrates a typical SQL CDC source to file target pipeline, replicating data from a set of PostgreSQL tables to files. The two applications that are similar to their real-world equivalents are:PostgresToFileInitialLoad150KRows uses Database Reader and File Writer to replicate 150,000 existing records from the PostgreSQL customer, nation, and region tables to files in striim/SampleOutput.. In a real-world application, the target directory would typically be on another host, perhaps in AWS S3, Azure Blob Storage or HD Insight Hadoop, or Google Cloud Storage.DatabaseReaderPostgresToFileCDC uses PostgreSQLReader (see PostgreSQL) and File Writer to continuously update the files with changes to the PostgreSQL source tables. Note that updates and deletes in PostgreSQL add new entries to the target files rather than updating or deleting previous entries relating to those rows.Striim provides wizards to help you create similar applications for many source-target combinations (see Creating apps using templates).Creating apps using templatesThe other applications use open processors (see Creating an open processor component) and other custom components to manage the PostgreSQL instance and generate inserts, updates, and deletes. In a real-world application, the source database would be updated by users and other applications.ValidatePostgres, ValidateKafka, and ValidateFile verify that the sources and targets used by the other apps are available.Execute250Inserts adds 250 rows to the source tables and stops automatically.Execute250Updates changes 250 rows in the source tables and stops automatically.Execute250Deletes removes 250 rows from the source tables and stops automatically.ResetPostgresSample, ResetKafkaSample, and ResetFileSample clear all the data created by the other apps, leaving the apps, PostgreSQL tables, Kafka, and SampleOutput directory in their original states.Running the applicationsWhen Striim, the PostgreSQL instance in Docker, and Kafka are running, you can use the PostgreSQL demo applications. 
The process is the same for all three sets of applications.Deploy and start the ValidatePostgres, ValidateKafka, and ValidateFile applications and leave them running.In the SamplesDB group, deploy and start the SamplesDB.PostgresToPostgresInitialLoad150KRows application.When you see the alert above, that means initial load has completed. Stop and undeploy the InitialLoad application.Deploy and start the SamplesDB.PostgresToPostgresCDC application.Once the CDC application is running, deploy and start the SamplesDB.Execute250Inserts application. It will add 250 rows to the customer table, give you an alert, and stop automatically. The CDC app will replicate the rows to the target.Deploy and start SamplesDB.Execute250Updates. It will update a random range of 250 rows in the customer table, give you an alert, and stop automatically. The PostgreSQL CDC app will replicate the changes to the corresponding rows in the customertarget table. PostgresToKafkaCDC will add messages describing the updates to the target topic. PostgresToFileCDC will add entries describing the updates to the files in SampleOutput.Deploy and start SamplesDB.Execute250Deletes. It will delete the first 250 rows in the customer table, give you an alert, and stop automatically. The PostgreSQL CDC app will delete the corresponding rows in the customertarget table. PostgresToKafkaCDC will add messages describing the deletes to the target topic. PostgresToFileCDC will add entries describing the deletes to the files in SampleOutput.Verifying PostgreSQL to PostgreSQL replicationTo view the results of the load, insert, update, and delete commands in the PostgreSQL target, use any PostgreSQL client to log in to localhost:5432 with username striim and password striim.Alternatively, you can access virtual machine's command line and run psql:In a Docker Quickstart, OS X, or Linux terminal, enter:docker exec -it striimpostgres /bin/bashWhen you see the bash prompt, enter:psql -U striim -d webactionBefore running PostgresToPostgresInitialLoad150KRows, the customer table has 150,000 rows and customertarget has none:webaction=# select count(*) from customer;\n\u00a0count \u00a0\n--------\n\u00a0150000\n(1 row)\n\nwebaction=# select count(*) from customertarget;\n\u00a0count\u00a0\n-------\n\u00a0\u00a0 \u00a0 0\n(1 row)\nAfter running PostgresToPostgresInitialLoad150KRows, customertarget has 150,000 rows:webaction=# select count(*) from customertarget;\n\u00a0count \u00a0\n--------\n\u00a0150000\n(1 row)\nAfter stopping PostgresToPostgresInitialLoad150KRows, starting PostgresToPostgresCDC, and running Execute250Inserts:webaction=# select count(*) from customertarget;\n\u00a0count \u00a0\n--------\n\u00a0150250\n(1 row)\nAfter running Execute250Updates:webaction=# select * from customer where c_custkey=113981;\n\u00a0c_custkey | \u00a0 \u00a0 \u00a0 c_name \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 c_address\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | c_nationkey ... \u00a0 \u00a0\n-----------+--------------------+------------------------------------+------------ ...\n\u00a0 \u00a0 113981 | Customer#000113981 | kpxLWwaZh3DpOr Qudn1OKolRYyIlFshOG | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 4 ...\n(1 row)\n\nwebaction=# select * from customertarget where c_custkey=113981;\n\u00a0c_custkey | \u00a0 \u00a0 \u00a0 c_name \u00a0 \u00a0 \u00a0 | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 c_address\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 | c_nationkey ... 
\u00a0 \u00a0\n-----------+--------------------+------------------------------------+------------ ...\n\u00a0 \u00a0 113981 | Customer#000113981 | kpxLWwaZh3DpOr Qudn1OKolRYyIlFshOG | \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 4 ...\n(1 row)After running Execute250Deletes:webaction=# select * from customer where c_custkey=1;\n\u00a0c_custkey | c_name | c_address | c_nationkey | c_phone | c_acctbal | c_mktsegment | c_comment\u00a0\n-----------+--------+-----------+-------------+---------+-----------+--------------+-----------\n(0 rows)\n\nwebaction=# select * from customertarget where c_custkey=1;\n\u00a0c_custkey | c_name | c_address | c_nationkey | c_phone | c_acctbal | c_mktsegment | c_comment\u00a0\n-----------+--------+-----------+-------------+---------+-----------+--------------+-----------\n(0 rows)\nViewing Kafka target dataTo see the output of PostgresToKafkaCDC, use Kafka Tool or a similar viewer. The Kafka cluster name is the same as your Striim cluster name. The Kafka version is 0.11.Viewing file target dataThe output of PostgresToFileCDC is in striim/SampleOutput.Running the applications again at a later timedocker start striimpostgresWarningOn Windows, Zookeeper and Kafka do not shut down cleanly. (This is a well-known problem.) Before you restart Kafka, you must delete the files they leave in\u00a0c:\\tmp.Deploy and start the ValidatePostgres, ValidateKafka, and ValidateFile applications and leave them running.Deploy and start the ResetPostgresSample, ResetKafkaSample, and ResetFileSample apps, then when they have completed undeploy them.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-28\n", "metadata": {"source": "https://www.striim.com/docs/en/running-the-cdc-demo-apps.html", "title": "Running the CDC demo apps", "language": "en"}} {"page_content": "\n\nHands-on quick tourSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Getting StartedHands-on quick tourPrevNextHands-on quick tourThis tour will give you a quick hands-on look at Striim's dashboards, Source Preview, Flow Designer, and more.Viewing dashboardsSelect Apps > View All Apps.If you don't see PosApp, select Create App > Import TQL file, navigate to\u00a0Striim/Samples/PosApp, double-click PosApp.tql, enter Samples as the namespace, and click Import.At the bottom right corner of the PosApp tile, select ... > Deploy > Deploy.When deployment completes, select ... > Start. The counter on the bell (alert) icon at the top right should start counting up, indicating that alerts are being generated.From the top left menu, select Dashboards > View All Dashboards > PosAppDash.It may take a minute for enough data to load that your display looks like the following. 
The PosApp sample application shows credit card transaction data for several hundred merchants (for more information, see PosApp).Hover the mouse over a map or scatter plot point, bar, or heat-map segment to display a pop-up showing more details.Click a map or plot point to drill down for details on a particular merchant:To return to the main page, click Samples.PosAppDash in the breadcrumbs:You can filter the data displayed in the dashboard using page-level or visualization-level text search or time-range filters.With the above text search, the dashboard displays data only for Recreational Equipment Inc.Click the x in the search box to clear the filter.To try the time-range filter, click the filter icon at the top right of the scatter chart, select StartTime, and set the dialog as shown below:Filter: select is betweenValue: Enter 2013-03-12 and 8:45pm as the from start date and time, and 2013-03-12 and 9:00pm as the to date and time.Click Apply.Click Clear to clear the filter.When you are through exploring the dashboard, continue with Creating sources and caches using Source Preview.Creating sources and caches using Source PreviewSource Preview is a graphical alternative to defining sources and caches using TQL. With it, you:browse regular or HDFS volumes accessible by the Striim serverselect the file you wantselect the appropriate parser (Apache, structured text, unstructured text, or XML)choose settings for the selected parser, previewing the effects on how the data is parsedgenerate a new application containing the source or cache, or add it to an existing applicationFor sources, Source Preview will also create:a CQ to filter the raw data and convert the fields to Striim data typesa stream of type WAEvent linking the source and CQan output stream of a new type based on the parser settings you chose in Source PreviewCreate a sourceThe following steps create a source from the sample data used by PosApp:Select Apps > Create New > Source Preview > Samples > PosDataPreview.csv > Preview.Check Use first line for column names and set columndelimiter to , (comma).PosApp uses only the MERCHANTID, DATETIME, AUTHAMOUNT, and ZIP columns, so uncheck the others.Set the data types for DATETIME to DateTime (check Unix Timestamp) and for AUTHAMOUNT to Double. Leave MERCHANTID and ZIP set to String.The data is now parsed correctly, the columns have been selected, and their names and data types have been set, so click Save.For Name enter PosSourceApp.If you are logged in as admin, for Namespace enter PosSourceNS. Otherwise, select your personal namespace. 
Then click Next.For Name enter PosSource, then click Save.The new PosSourceApp application appears in the flow editor.At this point you could add additional components such as a window, CQ, and target to refine the application, or export it to TQL for use in manually coded applications.Add a cacheThe following steps will add a cache to the PosSourceApp application:Download USAddressesPreview.zip from github.com/striim/doc-downloads and unzip it.Select > My Files, click Select File next to No File Selected (not the one next to Cancel), navigate to and double-click USAddressesPreview.txt, and click Upload and SelectSelect Apps > Create New > Source Preview > Browse, select USAddressesPreview.txt, and click Select File.Check Use first line for column names, set columndelimiter to \\t (tab), set the data type for latVal and longVal to Double, and click Save.Select Use Existing, select PosSourceApp, and click Next.Select Create cache, for Name enter ZipCache, for Cache Key select Zip, leave Cache Refresh blank, and click Save.WarningIf you save as a cache and deploy the application, the entire file will be loaded into memory.Continue with Modifying an application using the Flow Designer.Modifying an application using Flow DesignerThe instructions in this topic assume you have completed the steps in Creating sources and caches using Source Preview and are looking at PosSourceApp in Flow Designer:We will enhance this application with a query to join the source and cache and populate a target and WActionStore.Collapse Sources and expand Base Components.Click WActionStore, drag it into the workspace, and drop.Set the name to PosSourceData.Click in the Type field and enter PosSourceContext as a new type.Click Add Field four times.Set the fields and data types as shown below. Click the key icon next to MerchantId to set it as the key for PosSourceContext.Add four more fields as shown below.Click the Save just below the types (not the one at the bottom of the property editor).Set Event Types to PosSourceContext, set Key Field to Merchant ID, and click Save (the one at the bottom of the property editor).Drag a continuous query (CQ) into the workspace.Set the name to GenerateWactionContext.Enter or paste the following in the Query field:SELECT p.MERCHANTID,\n p.DATETIME,\n p.AUTHAMOUNT,\n z.Zip,\n z.City, \n z.State,\n z.LatVal,\n z.LongVal\nFROM PosSource_TransformedStream p, ZipCache z\nWHERE p.ZIP = z.ZipSet Output to Existing Output and PosSourceData. The configuration dialog should look like this:Click Save. The application should look like this:The status should now show Created. Select Deploy App > Deploy.When the status changes to Deployed, select the stream icon below\u00a0GenerateWactionContext, then click the eye icon or Preview On Run. The data preview pane will appear at the bottom of the window.Click\u00a0Deployed and select Start App. Counts will appear above each of the application's components indicating how many events it is processing per second. (Since this application has a small amount of data, these counts may\u00a0return to zero before they are refreshed. Run MultiLogApp for a larger data set where the counts will be visible for longer.)The first 100 events from the GenerateWactionContext output stream\u00a0will be displayed in the preview pane.At this point, the WActionStore contains data, so we can query or visualize it. 
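For reference, the components you just created in Flow Designer correspond roughly to the following TQL. This is only a sketch: the component and stream names come from this procedure, and the PosSourceContext fields are inferred from the ad-hoc query output in the next topic, but the exact CREATE TYPE and CREATE WACTIONSTORE clauses that Flow Designer generates may differ (see CREATE WACTIONSTORE in the TQL reference, or export the app as described later in this tour to see the authoritative version).
CREATE TYPE PosSourceContext (
  MerchantId String KEY,
  DateTime DateTime,
  Amount Double,
  Zip String,
  City String,
  State String,
  LatVal Double,
  LongVal Double
);

CREATE WACTIONSTORE PosSourceData
CONTEXT OF PosSourceContext
EVENT TYPES (PosSourceContext KEY (MerchantId));

CREATE CQ GenerateWactionContext
INSERT INTO PosSourceData
SELECT p.MERCHANTID, p.DATETIME, p.AUTHAMOUNT,
       z.Zip, z.City, z.State, z.LatVal, z.LongVal
FROM PosSource_TransformedStream p, ZipCache z
WHERE p.ZIP = z.Zip;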
Continue with Browsing data with ad-hoc queries.Browsing data with ad-hoc queriesAd-hoc queries let you do free-form queries on WActionStores, caches, or streams in real time by entering select statements in the Tungsten console. The syntax is the same as for queries in TQL applications (see CREATE CQ (query)) .The following example assumes you performed the steps in Modifying an application using the Flow Designer, including deploying and starting the application.Open a terminal window and start the Tungsten console. If Striim is installed in /opt, the command is: /opt/Striim/bin/console.shLog in with username admin and the password you provided when you installed Striim.At the W (admin) > prompt, enter the following: select * from PosSourceNS.PosSourceData; You should see something like the following:[\n MerchantId = Mpc6ZXJBAqw7fOMSSj8Fnlyexx6wsDY7A4E\n DateTime = 2607-11-27T09:22:53.210-08:00\n Amount = 23.33\n Zip = 12228\n City = Albany\n State = NY\n LatVal = 42.6149\n LongVal = -73.9708\n]\n[\n MerchantId = Mpc6ZXJBAqw7fOMSSj8Fnlyexx6wsDY7A4E\n DateTime = 2607-11-27T09:22:53.210-08:00\n Amount = 34.26\n Zip = 23405\n City = Machipongo\n State = VA\n LatVal = 37.4014\n LongVal = -75.9082\n]Press Enter to exit the query.If you prefer, you can see the data in a tabular format. To try that, enter: set printformat=row_format;Press cursor up twice to recall the query, then press Enter to run it again. You should see the following (if necessary, widen the terminal window to format the table correctly):To switch back to the default format:set printformat=json;Continue with Creating a dashboard.Creating a dashboardIn Viewing dashboards you saw the dashboard of the PosApp sample application. Now you will create one from scratch.The following instructions assume you completed the steps in Modifying an application using the Flow Designer and Browsing data with ad-hoc queries and that the application is still running.From the main menu, select Dashboards > View All Dashboards.Click Add Dashboard, for Dashboard Name enter PosSourceDash, for Namespace select PosSourceNS as the namespace, and click Create Dashboard. A blank dashboard will appear.To add a visualization to the dashboard, drag a Vector Map from the visualization palette and drop it on the grid.The first step in configuring a dashboard is to specify its query: click Edit Query.In the Query Name field, enter PosSourceNS.PosSourceDataSelectAll, edit the query to read select * from PosSourceData; and click Save Query.Click Configure (the pencil icon).Set the map properties as shown above, then click Save Visualization.Since the data is all in the continental United States, you might want to edit the settings to center it there. You could also change the Bubble Size settings so that the dots on the map vary depending on the amount.Click Configure again, change the settings as shown above, click Save Visualization, then refresh your browser to apply the new zoom settings.Experiment with the settings or try more visualizations if you like. For more information on this subject, see Dashboard Guide.Continue with Exporting applications and dashboardsExporting applications and dashboardsTo save the work you have done so far, you can export the application and dashboard to files.From the upper-left menu, select Apps.From PosSourceApp's ... 
menu, select Export.Click Export (since the app contains no Encrypted passwords, do not specify a passphrase).Optionally, change the file name or directory, then click Save.From the top menu, select Dashboards > View All Dashbaords.Click PosSourceDash.Select Export. Optionally, change the file name or directory, then click Save.You may import the exported application TQL file and dashboard JSON file to any namespace. Note that for the dashboard to work you must import it to the same namespace as the application.You may edit the exported TQL file as discussed in Programmer's Guide.What next?See\u00a0Web UI Overview for a look at additional Striim features.Run the CDC demo applications to explore Striim's data migration capabilities (see Running the CDC demo apps).If you do not plan to write Striim applications but would like to create or modify dashboards, continue with the Dashboard Guide\u00a0 and\u00a0PosAppDash in the Programmer's Guide.NoteThe Striim platform's TQL programming language is in many ways similar to SQL, particularly as regards SELECT statements. The Programmer's Guide assumes basic knowledge of SQL.To learn to write Striim applications, continue with Programmer's Guide.In this section: Hands-on quick tourViewing dashboardsCreating sources and caches using Source PreviewModifying an application using Flow DesignerBrowsing data with ad-hoc queriesCreating a dashboardExporting applications and dashboardsWhat next?Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-28\n", "metadata": {"source": "https://www.striim.com/docs/en/hands-on-quick-tour.html", "title": "Hands-on quick tour", "language": "en"}} {"page_content": "\n\nResource usage policiesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Resource usage policiesPrevNextResource usage policiesThe core Striim application, your applications, and your users logged in and working on Striim \u2014 together, these consume the same set of CPU, memory and storage resources. How much Striim can do\u2014how many applications it can run at a time, how many users can log in at once, and so on\u2014is limited by the CPU cores, memory, and disk space available to it. If you try to do more than these available resources can handle effectively, it can lead to issues such as excessive CPU usage and out-of-memory errors, and these can ultimately degrade the performance of your Striim applications and environment. 
We recommend the policy defaults discussed below to avoid accidentally running into such problems.
When a resource policy limit is reached, a ResourceLimitException is displayed in the web UI and logged in striim.server.log, and the action that exceeded the limit (such as deploying an application or logging in) fails.
Note: In this release, resource usage policies can only be viewed and managed using the console.
The default resource usage limits are in effect for all new Striim clusters. Resource usage limits are disabled for Striim clusters upgraded from previous versions. You can use the default limits, adjust them, or disable them as needed. You must log in with Striim admin credentials to modify the resource usage policy limits.
Resource usage policy limits
active_users_limit (number of active non-system users). Default: 30. Limit checked: when you create a new user. Scope: cluster. A greater number of active users can mean more applications and may lead to overloading the system.
api_call_rate_limit (rate limit for REST API calls). Default: 500 per second. Limit checked: when you make a REST API call. Scope: server. Serving REST API calls consumes resources on the server backend; bursts of REST API calls can also indicate an uncontrolled client application or, at worst, a DDoS-like pattern. After altering or disabling this limit, you must restart the service.
apps_per_cpu_limit (number of running applications based on CPU cores). Default: 4 applications per available core on the server. Limit checked: during deployment. Scope: server. Applications are the primary consumers of resources; to maintain a given throughput level, the number of running applications must be limited as a function of the available vCPUs.
apps_per_gb_limit (number of running applications based on memory). Default: 2 applications per 1 GB of memory available to the Java virtual machine running Striim. Limit checked: during deployment. Scope: server. The number of running applications is a combination of both the CPU core and memory limits. See Application resource usage policies.
cluster_size_limit (number of servers in the cluster). Default: 7. Limit checked: when you add a new server to the cluster. Scope: cluster. The probability of a server failure increases as the number of servers grows.
num_queries_limit (number of ad-hoc and named (dashboard) queries). Default: 50. Limit checked: when you run an ad-hoc query or a dashboard runs a named query. Scope: server. Limits the number of unmanaged queries that could otherwise consume system resources and destabilize running applications.
ui_sessions_limit (number of concurrent active web UI sessions). Default: 10. Limit checked: when a user logs in through the UI. Scope: server. The Striim web UI is an active page that, when loaded and open, continuously receives various types of data; having too many of these pages open may lead to memory pressure.
Application resource usage policies
Applications are the primary consumers of resources. Your Striim environment may be constrained by CPU resources or memory resources depending on the configuration of your underlying infrastructure.
Thus, there are two separate policy limits that apply to the maximum number of applications that can run concurrently in your environment:
Number of running applications based on CPU cores
Number of running applications based on memory available to the Java virtual machine running Striim
The maximum number of concurrently running applications is determined by the combination of these two resource policy limits; that is, the effective limit is the lower of the two. For example, a server with 8 CPU cores and 16 GB of memory can be considered constrained by memory. If you configure the application resource policies to allow a maximum of 1 application per GB of memory and 4 applications per CPU core, Striim will allow a maximum of 16 applications to run at any time, because that is the lower of the two limits: 16 applications on the basis of memory versus 32 on the basis of CPU cores.
Viewing resource policies
The following command shows the current value for a given resource limit policy:
describe resource_limit_policy ;
The following command lists the names of all the resource limit policies:
list resource_limit_policies;
Enabling or disabling resource policies as a group
You can enable or disable resource limits as a group using the alter cluster command:
alter cluster { enable | disable } resource_limit_policy;
After enabling or disabling resource limits, you must restart Striim before the change takes effect (see Starting and stopping Striim Cloud).
Disabling the resource_limit_policy turns off all resource limit checks. Enabling the resource_limit_policy turns resource limit checks back on, with the values of the limits reverting to those set by the user before disabling, or to the default values if the defaults were never changed.
Modifying individual resource usage policies
You can enable or disable resource limits individually or change their values using the alter resource_limit_policy command. Policies apply to all servers in the cluster; you cannot set different policies for each server.
If you change api_call_rate_limit, you must restart Striim before the change takes effect (see Starting and stopping Striim Cloud).
To disable an individual resource usage policy or several enumerated policies:
alter resource_limit_policy unset <"resourcelimitname">, ...;
For example: alter RESOURCE_LIMIT_POLICY unset "CLUSTER_SIZE_LIMIT", "APPS_PER_CPU_LIMIT";
To enable a resource usage policy or set a new value for its limit:
alter resource_limit_policy set <"resourcelimitname">, ...;
For example: alter RESOURCE_LIMIT_POLICY set (CLUSTER_SIZE_LIMIT : 3, APPS_PER_CPU_LIMIT : 14);
The value must be a positive integer.
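As a worked example combining the commands above (using only the policy names and syntax documented in this topic), you could raise the memory-based application limit and then re-enable resource limit checks for the cluster. First list the policy names to confirm the spelling:
list resource_limit_policies;
Then allow three running applications per GB of JVM memory instead of the default two:
alter RESOURCE_LIMIT_POLICY set (APPS_PER_GB_LIMIT : 3);
Finally, if checks had been disabled (for example, on an upgraded cluster), re-enable them and restart Striim so the change takes effect:
alter cluster enable resource_limit_policy;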
Last modified: 2023-06-22\n", "metadata": {"source": "https://www.striim.com/docs/en/resource-usage-policies.html", "title": "Resource usage policies", "language": "en"}} {"page_content": "\n\nPipelines
As discussed in Common Striim use cases, Striim applications can do many different things. When the primary purpose of an application is to move or copy data from a source to a target, we call that a "pipeline" application. For an introduction to the subject, see What is a Data Pipeline.
Common source-target combinations
The following examples are just the most popular among Striim's customers. There are many other possibilities.
Database to database, for example, from MySQL, Oracle, or SQL Server to MariaDB, PostgreSQL, or Spanner in the cloud. See Change Data Capture (CDC) for a full list of supported sources and Targets for a full list of supported targets.
The most common use for this kind of pipeline is to allow a gradual migration from on-premise to cloud. Applications built on top of the on-premise database can be gradually replaced with new applications built on the cloud database. Once all the legacy applications are replaced, the pipeline can be shut down and the on-premise database can be retired.
In this model, insert, update, and delete operations on the source tables are replicated to the target with no duplicates or missing data (that is, exactly-once processing, or E1P). This consistency is ensured even after events such as a server crash require restarting the application (see Recovering applications).
Database to data warehouse, for example, from Oracle, PostgreSQL, or SQL Server (on premise or in the cloud) to Google BigQuery, Amazon Redshift, Azure Synapse, or Snowflake. See Sources for a full list of supported sources and Targets for a full list of supported targets.
The primary use for this kind of pipeline is to update data warehouses with new data in near real time rather than in periodic batches.
Typically, data warehouses retain all data so that business intelligence reports can be generated from historical data. Consequently, when rows are updated or deleted in the source tables, instead of overwriting the old data in the target, Striim appends a record of the update or delete operation.
Striim ensures that all data is replicated to the target, though after events such as a server crash require restarting the application there may be duplicates in the target (that is, \"at least once processing\" or A1P).Supported sources and targets for pipeline appsThe following sources (all SQL databases) and targets may be directly connected by a WAEvent stream.Supported WAEvent sourcesSupported targetsCosmos DB ReaderGCS ReaderHP NonStop SQL/MX using Database Reader or Incremental Batch ReaderHP NonStop Enscribe, SQL/MP, and SQL/MX readers (CDC)MariaDB Reader (CDC)MariaDB using Database Reader or Incremental Batch ReaderMongo Cosmos DB ReaderMySQL Reader (CDC)MySQL using Database Reader or Incremental Batch ReaderOracle Reader (CDC)OJetOracle Database using Database Reader or Incremental Batch ReaderPostgreSQL Reader (CDC)PostgreSQL using Database Reader or Incremental Batch ReaderSalesforce Pardot ReaderServiceNow Reader (in this release, supports insert and update operations only, not deletes)SQL Server using MSJet (CDC)SQL Server CDC using MS SQL Reader (CDC)SQL Server using Database Reader or Incremental Batch ReaderSybase using Database Reader or Incremental Batch ReaderTeradata using Database Reader or Incremental Batch ReaderAzure Synapse using Azure SQL DWH WriterBigQuery WriterCassandra Cosmos DB WriterCassandra WriterCloudera Hive WriterCosmos DB WriterDatabricks WriterHazelcast WriterHBase WriterHP NonStop SQL/MX using Database WriterHortonworks Hive WriterKafka WriterKudu WriterMariaDB using Database WriterMongo Cosmos DB WriterMongoDB WriterMySQL using Database WriterOracle Database using Database WriterPostgreSQL using Database WriterRedshift WriterSalesforce Writer (in MERGE mode)SAP HANA using Database WriterServiceNow WriterSinglestore (MemSQL) using Database WriterSnowflake WriterSpanner WriterSQL Server using Database WriterThe following sources and targets may be directly connected by a JSONNodeEvent stream.Supported JSONNodeEvent sourcesSupported targetsCosmos DB ReaderJMX ReaderMongoDB ReaderMongo Cosmos DB ReaderADLS Writer (Gen1 and Gen2)Azure Blob WriterAzure Event Hub WriterCosmos DB WriterFile WriterGCS WriterGoogle PubSub WriterHDFS WriterJMS WriterKafka WriterKinesis WriterMapR FS WriterMapR Stream WriterMongoDB Cosmos DB WriterMongoDB WriterS3 WriterMapping and filteringThe simplest pipeline applications simply replicate the data from the source tables to target tables with the same names, column names, and data types. If your requirements are more complex, see the following:Using database event transformersMasking functionsMasking functionsModifying and masking values in the WAEvent data array using MODIFYModifying the WAEvent data array using replace functionsMapping columnsModifying output using ColumnMapValidating table mappingSchema evolutionFor some CDC sources, Striim can capture DDL changes. Depending on the target, it can replicate those changes to the target tables, or take other actions, such as quiescing or halting the application, For mote information, see Handling schema evolution:Initial load versus continuous replicationTypically, setting up a data pipeline occurs in two phases.The first step is the initial load, copying all existing data from the source to the target. You may write a Striim application or use a third-party tool for this step. 
If the source and target are homogenous (for example, MySQL to MariaDB, Oracle to Oracle Exadata, or SQL Server to Azure SQL Server managed instance), it is usually fastest and easiest to use the native copy or backup-restore tools.
Depending on the amount and complexity of data in the source tables, this may take minutes, hours, days, or weeks. You may monitor progress by Creating a data validation dashboard.
Once the initial load is complete, start the Striim pipeline application to pick up where the initial load left off. See Switching from initial load to continuous replication for technical details.
Monitoring your pipeline
You may monitor your pipeline by Creating a data validation dashboard.
You should also set up alerts to let you know if anything goes wrong. See Sending alerts about servers and applications.
Setting up alerts for your pipeline
System alerts for potential problems are automatically enabled. You may also create custom alerts. For more information, see Sending alerts about servers and applications.
Scaling up for better performance
When a single reader cannot keep up with the data being added to your source, create multiple readers. Use the Tables property to distribute tables among the readers:
Assign each table to only one reader.
When tables are related (by primary or foreign key), or to ensure transaction integrity among a set of tables, assign them all to the same reader.
When dividing tables among readers, distribute them according to how busy they are rather than simply by the number of tables. For example, if one table generates 50% of the entries in the CDC log, you might assign it and any related tables to one reader and all the other tables to another.
The following is a simple example of how you could use two Oracle Readers, with one reading a very busy table and the other reading the rest of the tables in the same schema:
CREATE SOURCE OracleSource1 USING OracleReader (
  FetchSize: 1,
  Compression: false,
  Username: 'myname',
  Password: '7ip2lhUSP0o=',
  ConnectionURL: '198.51.100.15:1521:orcl',
  ReaderType: 'LogMiner',
  Tables: 'MYSCHEMA.VERYBUSYTABLE'
)
OUTPUT TO OracleSource_ChangeDataStream;

CREATE SOURCE OracleSource2 USING OracleReader (
  FetchSize: 1,
  CommittedTransactions: true,
  Compression: false,
  Username: 'myname',
  Password: '7ip2lhUSP0o=',
  ConnectionURL: '198.51.100.15:1521:orcl',
  ReaderType: 'LogMiner',
  Tables: 'MYSCHEMA.%',
  ExcludedTables: 'MYSCHEMA.VERYBUSYTABLE'
)
OUTPUT TO OracleSource_ChangeDataStream;
When a single writer cannot keep up with the data it is receiving from the source (that is, when it is backpressured), create multiple writers. For many writers, you can simply use the Parallel Threads property to create additional instances and Striim will automatically distribute data among them (see Creating multiple writer instances and the sketch below). For other writers, use the same approach as for sources, described above.
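As an illustration of the Parallel Threads approach, the following sketch shows a Database Writer target configured to run several parallel instances, taking its input from the Oracle Reader stream in the example above. The connection details and table mapping are placeholders, and the property is written here as ParallelThreads on the assumption that the TQL name matches the UI label; check Creating multiple writer instances and the writer's property reference before relying on it.
CREATE TARGET MySQLTarget USING DatabaseWriter (
  ConnectionURL: 'jdbc:mysql://203.0.113.10:3306/targetdb',
  Username: 'striim',
  Password: '********',
  Tables: 'MYSCHEMA.%,TARGETDB.%',
  ParallelThreads: 4
)
INPUT FROM OracleSource_ChangeDataStream;
With this setting, Striim distributes events among the writer instances automatically; writers that do not support the property need the table-splitting approach used for the readers above.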
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/en/pipelines.html", "title": "Pipelines", "language": "en"}} {"page_content": "\n\nSources
In a Striim application, readers collect data from external sources such as Oracle, SQL Server, MySQL, PostgreSQL, Kafka, Hadoop, Amazon S3, Azure Storage, or Google Cloud Storage. For a complete list of supported sources, see Readers overview.
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-20\n", "metadata": {"source": "https://www.striim.com/docs/en/sources.html", "title": "Sources", "language": "en"}} {"page_content": "\n\nReaders overview
The following is a summary of reader capabilities. For more information, see:
Using source and target adapters in applications
Sample applications for programmers for an introduction to WAEvent
WAEvent contents for change data.
HP NonStop reader WAEvent fields, MySQL Reader WAEvent fields, Oracle Reader and OJet WAEvent fields, PostgreSQL Reader WAEvent fields, SQL Server readers WAEvent fields,MongoDBReader JSONNodeEvent fieldsParsers for discussion of many of the supported inputsSupported reader-parser combinationsAvro Parser for discussion of AvroEventJSON Parser for discussion of user-defined JSONSQL CDC replication examples and Replicating MongoDB data to Azure CosmosDBHow update and delete operations are handled in writers for discussion of \"insert only\"How update and delete operations are handled in writersRecovering applications for detailed information about recovery and its limitationsRecovering applicationsReaders summary tablereaderinput(s)output stream type(s)supports replicationrecoverableCosmos DB Reader (see Azure Cosmos DB using Core (SQL) API)Initial Load mode: Cosmos DB documents using Microsoft Azure Cosmos SDK for Azure CosmosDB SQL APIIncremental mode: Cosmos DB documents using Cosmos DB's change feedJSONNodeEventrequires upsert support in writer, so only to Cosmos DB Writer and MongoDB Writeryes, but see Cosmos DB Reader limitationsDatabase ReaderDatabase ReaderJDBC from a supported DBMS (see Database Reader)Database ReaderWAEventinsert onlyif output is persisted to a Kafka stream (or use Incremental Batch Reader instead)File ReaderApache access log, Avro, binary, delimited text, free-form text (using RegEx), GoldenGate trail file, JSON, name-value pairs, Parquet, XMLAvroEvent (when input is Avro), user-defined JSON (when input is JSON), ParquetEvent (when input is Parquet) or WAEventfor GoldenGate onlyyesGCS ReaderApache access log, Avro, binary, delimited text, free-form text (using RegEx), JSON, name-value pairs, Parquet, XMLJSONNodeEvent, ParquetEvent, user-defined, WAEvent, XMLNodeEventnoA1PHDFS ReaderApache access log, binary, delimited text, free-form text (using RegEx), JSON, name-value pairs, Parquet, XMLuser-defined JSON (when input is JSON), ParquetEvent (when input is Parquet) or WAEventnoyesHP NonStop Enscribe, SQL/MP, and SQL/MX ReadersNonStop TMF audit trailWAEventyesyesHTTP ReaderApache access log, binary, free-form text (using RegEx), JSON, name-value pairs, XMLWAEvent or user-defined JSON (when input is JSON)noif output is persisted to a Kafka streamIncremental Batch ReaderIncremental Batch ReaderJDBC from same sources as Database ReaderWAEventinsert onlyyesJMS ReaderJMS ReaderApache access log, delimited text, free-form text (using RegEx), JSON, name-value pairs, XMLWAEvent or user-defined JSON (when input is JSON)noyesJMX ReaderJava Management Extensions (JMX)JSONNodeEventnoyesKafka ReaderKafka ReaderApache access log, Avro, delimited text, free-form text (using RegEx), JSON, name-value pairs, XMLWAEvent, AvroEvent (when input is Avro), or user-defined JSON (when input is JSON)noyesMapR FS ReaderApache access log, binary, delimited text, free-form text (using RegEx), JSON, name-value pairs, XMLWAEvent or user-defined JSON (when input is JSON)noyesMariaDB Reader (see MariaDB / SkySQL)MariaDB Galera Cluster binary log (binlog)WAEventyesyesMongo Cosmos DB Reader (see Azure Cosmos DB using Cosmos DB API for MongoDB)Initial Load mode: Azure Cosmos DB documents using mongo-driver-syncIncremental mode: Cosmos DB documents using Azure Cosmos DB API for MongoDB's change streamJSONNodeEventinsert and delete onlyrequires upsert support in writer, so only to Cosmos DB Writer and MongoDB Writersee Mongo Cosmos DB Reader limitationsMongoDB Reader (see MongoDB)MongoDBMongoDB replica 
set operations log (oplog.rs)JSONNodeEventyesyesMQTT ReaderAvro, delimited text, JSON, name-value pairs, XMLWAEvent, AvroEvent (when input is Avro), or user-defined JSON (when input is JSON)noyesMS SQL Reader / MSJet (see SQL Server)SQL ServerSQL Server transaction logWAEventyesyesMultiFile ReaderApache access log, Avro, binary, delimited text, free-form text (using RegEx), JSON, name-value pairs, XMLWAEvent, AvroEvent (when input is Avro), or user-defined JSON (when input is JSON)noif output is persisted to a Kafka streamMySQL Reader (see MySQL)MySQL / MariaDBMySQL binary log (binlog)WAEventyesyesOPCUA Readeran OPC-UA serverOPCUA Data Change EventnoyesOJet (see Oracle Database)Oracle DatabaseOracle logsWAEventyesyesOracle Reader (see Oracle Database)Oracle DatabaseOracle logsWAEventyesyesPostgreSQL Reader (see PostgreSQL)PostgreSQL logical replication slotWAEventyesyesS3 ReaderApache access log, Avro, binary, delimited text, free-form text (using RegEx), JSON, name-value pairs, Parquet, XMLAvroEvent (when input is Avro), user-defined JSON (when input is JSON) ParquetEvent (when input is Parquet), or WAEventnoyesSalesforce Pardot ReaderForce.com REST APIWAEventyesyesSalesforce ReaderSalesforce ReaderForce.com REST APIWAEventyesyesSalesforce Platform Event ReaderSalesforce platform event message subscriptionWAEventinsert onlyyesSalesforce Push Topic ReaderSalesforce Streaming APIWAEventyesyesServiceNow ReaderServiceNow tablesWAEventinsert and update onlyyesSQL Serversee MS SQL Reader / MS Jet, aboveTCP ReaderApache access log, binary, delimited text, free-form text (using RegEx), JSON, name-value pairs, XMLWAEvent or user-defined JSON (when input is JSON)noif output is persisted to a Kafka streamTeradatasee Database Reader, aboveUDP ReaderApache access log, binary, collectd, delimited text, free-form text (using RegEx), JSON, Kafka stream, name-value pairs, NetFlow v5 or v9, SNMP, XMLWAEvent, CollectdEvent (when input is collectd), or user-defined JSON (when input is JSON)noif output is persisted to a Kafka streamWindows Event Log ReaderWindows Application, Security, or System event logWindowsLogEventnoyesIn this section: Readers overviewReaders summary tableSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
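Several readers in the summary table above are recoverable only if their output is persisted to a Kafka stream. As a rough sketch of what that means in TQL (hedged: KafkaProps is a placeholder for a Kafka property set you would define separately, the Global.WAEvent type name is assumed, and the exact PERSIST options are documented under CREATE STREAM), the stream is declared as persisted before the source writes to it:
CREATE STREAM HttpRawStream OF Global.WAEvent PERSIST USING KafkaProps;
The source's OUTPUT TO clause then names HttpRawStream, and events written to it survive a restart, which is what makes recovery possible for those readers.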
Last modified: 2023-06-16\n", "metadata": {"source": "https://www.striim.com/docs/en/readers-overview.html", "title": "Readers overview", "language": "en"}} {"page_content": "\n\nSupported reader-parser combinationsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesSupported reader-parser combinationsPrevNextSupported reader-parser combinationsThe following reader-parser combinations are supported:Apache access log (AAL Parser)AvrobinaryDSVfree-form textJSONname-value pairParquetXMLFile Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713GCS Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713HDFS Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713HTTP Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713JMS Reader\u2713\u2713\u2713\u2713\u2713\u2713Kafka Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713MapR FS Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713MQTT Reader\u2713\u2713\u2713\u2713\u2713MultiFile Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713S3 Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713\u2713TCP Reader\u2713\u2713\u2713\u2713\u2713\u2713\u2713UDP Reader*\u2713\u2713\u2713\u2713\u2713\u2713\u2713*Collectd Parser, NetFlow Parser, SNMP Parser, and Striim Parser require the UDP Reader and are not usable with other readers.No parser is selected or specified for the following readers:all CDC readersDatabase ReaderIncremental Batch ReaderIncremental Batch ReaderOPCUA ReaderSalesforce ReaderWindows Event Log ReaderIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
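To show how a reader is paired with one of the parsers in the matrix above, here is a hedged sketch of File Reader with DSV Parser reading comma-delimited files. The directory, wildcard pattern, and property spellings (directory, wildcard, positionbyeof, header, columndelimiter) are illustrative; confirm the exact property names and defaults in the File Reader and DSV Parser references.
CREATE SOURCE CsvFileSource USING FileReader (
  directory: '/opt/data/incoming',
  wildcard: 'transactions*.csv',
  positionbyeof: false
)
PARSE USING DSVParser (
  header: true,
  columndelimiter: ','
)
OUTPUT TO CsvRawStream;
As the readers summary table indicates, the output is a WAEvent stream; reader-parser pairs without a check mark in the matrix are not supported.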
Last modified: 2023-06-16\n", "metadata": {"source": "https://www.striim.com/docs/en/supported-reader-parser-combinations.html", "title": "Supported reader-parser combinations", "language": "en"}} {"page_content": "\n\nDatabase ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesDatabase ReaderPrevNextDatabase ReaderReturns data from a JDBC query against one or more tables in one of the following:HP NonStop SQL/MX (and SQL/MP via aliases in SQL/MX)MariaDBMariaDB Galera ClusterMySQLOraclePostgreSQLSQL ServerSybaseTeradataNoteSources supported by Database Reader are also supported by Incremental Batch Reader.If the connection to the database is interrupted, the application will halt.NoteFor all databases, when this adapter is deployed to a Forwarding Agent, the appropriate JDBC driver must be installed as described in Installing third-party drivers in the Forwarding Agent.NoteIf you are using Database Reader to perform an initial load before running an application using a CDC reader, enable CDC on the source and perform the setup tasks for the CDC reader before starting the initial load.Database Reader propertiespropertytypedefault valuenotesConnection URLStringThe following databases are supported.for HP NonStop SQL/MX: jdbc:t4sqlmx://: or jdbc:t4sqlmx://:/catalog=;schema=for MariaDB: jdbc:mariadb://:/for MariaDB Galera Cluster: specify the IP address and port for each server in the cluster, separated by commas: jdbc:mariadb://:,:,...; optionally, append /for MySQL: jdbc:mysql://:/To use an Azure private endpoint to connect to Azure Database for MySQL, see Specifying Azure private endpoints in sources and targets.for Oracle: jdbc:oracle:thin:@:: (using\u00a0Oracle 12c with PDB, use the SID for the PDB service) or jdbc:oracle:thin:@:/; if one or more source tables contain LONG or LONG RAW columns, append ?useFetchSizeWithLongColumn=truefor PostgreSQL, jdbc:postgresql://:/for SQL Server: jdbc:sqlserver://:;DatabaseName= or jdbc:sqlserver://\\\\:;DatabaseName=for Sybase: jdbc:jtds:sybase::/for Teradata: jdbc:teradata:///DBS_PORT=,DATABASE=Database Provider TypeStringDefaultControls which icon appears in the Flow Designer.Excluded TablesStringWhen a wildcard is specified for Tables, you may specify here any tables you wish to exclude from the query. Specify the value exactly as for Tables. For example, to include data from all tables whose names start with HR except HRMASTER:Tables='HR%',\nExcludedTables='HRMASTER'Fetch SizeInteger100maximum number of records to be fetched from the database in a single JDBC method execution (see the discussion of fetchsize in the documentation for the your JDBC driver)JAAS ConfigurationStringThis is not supported in Striim Cloud.Passwordencrypted passwordThe password for the specified user. See Encrypted passwords.QueryStringSQL statement specifying the data to return. You may query tables, aliases, synonyms, and views.When Query is specified and Tables is not, the WAEvent TableName metadata field value will be QUERY. 
When both Query and Tables are specified, the data specified by Query will be returned, and the Tables setting will be used only to populate the TableName field.If the query includes a synonym containing a period, it must be enclosed in escaped quotes. For example: select * from \\\"synonym.name\\\"If using a query when the output of a DatabaseReader source is the input of a DatabaseWriter target, specify the target table name as the value of DatabaseReader's Tables field.Quiesce on IL CompletionBooleanFalseWith the default value of False, you must stop the application manually after all data has been read.Set to True to automatically quiesce the application after all data has been read (see discussion of QUIESCE in Console commands). When you see on the Apps page that the application is in the Quiescing state, it means that all the data that existed when the query was submitted has been read and that the target(s) are writing it. When you see that the application is in the Quiesced state, you know that all the data has been written to the target(s). At that point, you can undeploy the initial load application and then start another application for continuous replication of new data.Console commandsNoteSet to True only if all targets in the application support auto-quiesce (see Writers overview).Writers overviewReturn DateTime AsStringJodaSet to\u00a0String to return timestamp values as strings rather than Joda timestamps. The primary purpose of this option is to avoid losing precision when microsecond timestamps are converted to Joda milliseconds. The format of the string is yyyy-mm-dd hh:mm:ss.ffffff.SSL ConfigStringIf the source is Oracle and it uses SSL, specify the required SSL properties (see the notes on SSL Config in Oracle Reader properties).TablesStringThe table(s) or view(s) to be read. MySQL, Oracle, and PostgreSQL names are case-sensitive, SQL Server names are not. Specify names as . for MySQL, .
for Oracle and PostgreSQL, and <database>.<schema>.<table>
for SQL Server.You may specify multiple tables and views as a list separated by semicolons or with the % wildcard. For example, HR% would read all the tables whose names start with HR. You may use the % wildcard only for tables, not for schemas or databases. The wildcard is allowed only at the end of the string: for example, mydb.prefix% is valid, but mydb.%suffix is not.If you are using the Query property, specify QUERY as the table name.UsernameStringthe DBMS user name the adapter will use to log in to the server specified in ConnectionURLFor all databases, this user must have SELECT permission or privileges on the tables specified in the Tables property. For Oracle, this user must also have SELECT privileges on DBA_TAB_COLS and ALL_COLL_TYPES.The output type is WAevent.NoteTo read from tables in both Oracle CDB and PDB databases, you must create two instances of DatabaseReader, one for each.Database Reader sample codeThe following example creates a cache of data retrieved from a MySQL table:CREATE TYPE RackType(\n rack_id String KEY,\n datacenter_id String,\n rack_aisle java.lang.Integer,\n rack_row java.lang.Integer,\n slot_count java.lang.Integer\n);\nCREATE CACHE ConfiguredRacks USING DatabaseReader (\n ConnectionURL:'jdbc:mysql://10.1.10.149/datacenter',\n Username:'username',\n Password:'passwd',\n Query: \"SELECT rack_id,datacenter_id,rack_aisle,rack_row,slot_count FROM RackList\"\n)\nQUERY (keytomap:'rack_id') OF RackType;\nThe following example creates a cache of data retrieved from an Oracle table:\nCREATE TYPE CustomerType (\n IPAddress String KEY,\n RouterId String,\n ConnectionMode String,\n CustomerId String,\n CustomerName String\n);\nCREATE CACHE Customers USING DatabaseReader (\n Password: 'password',\n Username: 'striim',\n ConnectionURL: 'jdbc:oracle:thin:@node05.example.com:1521:test5',\n Query: 'SELECT ip_address, router_id, connection_mode, customer_id, customer_name FROM customers',\n FetchSize: 1000\n)\nQUERY (keytomap:'IPAddress') OF CustomerType;DatabaseReader data type support and correspondenceJDBC column typeTQL typenotesTypes.ARRAYjava.lang.StringTypes.BIGINTjava.lang.LongTypes.BITjava.lang.BooleanTypes.CHARjava.lang.StringTypes.DATEorg.joda.time.LocalDateTypes.DECIMALjava.lang.StringTypes.DOUBLEjava.lang.DoubleTypes.FLOATjava.lang.DoubleTypes.INTEGERjava.lang.IntegerTypes.NUMERICjava.lang.StringTypes.REALjava.lang.FloatTypes.SMALLINTjava.lang.ShortTypes.TIMESTAMPorg.joda.time.DateTimeTypes.TINYINTjava.lang.ShortFor MySQL, if the source tables contain columns of this type, append ?tinyInt1isBit=false to the connection URL (jdbc:mysql://:/?tinyInt1isBit=false).Types.VARCHARCHARjava.lang.Stringother typesjava.lang.StringDatabaseReader can not read Oracle RAW or LONG RAW columns (Oracle Reader can).Sample Database ReaderWAEventFor the following row:id first_name last_name phone street city state zip_code\n1 Deborah Burks NULL 9273 Thorne AV Orchard Park NY 14127The WAEvent would be similar to:WAEvent{\n data: [1,\"Deborah\",\"Burks\",null,\"9273 Thorne AV\",\"Orchard Park\",\"NY\",\"14127\"]\n metadata: {\"TableName\":\"BikeStores.sales.customers\",\"ColumnCount\":8,\n \"OperationName\":\"SELECT\",\"OPERATION_TS\":1681412863364}\n userdata: null\n before: null\n dataPresenceBitMap: \"fwM=\"\n beforePresenceBitMap: \"AAA=\"\n typeUUID: {\"uuidstring\":\"01edda2e-77f7-9b21-83c2-8e859085da65\"}\n};The operation name for Database Reader WAEvents is always SELECT.In this section: Database ReaderDatabase Reader propertiesDatabase Reader sample codeDatabaseReader data 
type support and correspondenceSample Database ReaderWAEventSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-21\n", "metadata": {"source": "https://www.striim.com/docs/en/database-reader.html", "title": "Database Reader", "language": "en"}} {"page_content": "\n\nFile ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesFile ReaderPrevNextFile ReaderReads files from disk using a compatible parser.You can create FileReader sources in the web UI using\u00a0Source Preview.See\u00a0Supported reader-parser combinations) for parsing options.File Reader propertiespropertytypedefault valuenotesBlock SizeInteger64amount of data in KB for each read operationCompression TypeStringSet to gzip when wildcard specifies a file or files in gzip format. Otherwise, leave blank.DirectoryStringSpecify the path to the directory containing the file(s). The path may be relative to the Striim installation directory (for example, Samples/PosApp/appdata) or from the root.Include SubdirectoriesBooleanFalseSet to True if the files are written to subdirectories of the\u00a0Directory path, for example, if each day's files are in a subdirectory named by date.Position By EOFBooleanTrueIf set to True, reading starts at the end of the file, so only new data is acquired.If set to False, reading starts at the the beginning of the file and then continues with new data.When FileReader is used with a cache, this setting is ignored and reading always begins from the beginning of the file.When you create a a FileReader using Source Preview, this is set to False.Rollover StyleStringDefaultSet to log4j if reading Log4J files created using RollingFileAppender.Skip BOMBooleanTrueIf set to True, when the wildcard value specifies multiple files, Striim will read the Byte Order Mark (BOM) in the first file and skip the BOM in all other files. If set to False, it will read the BOM in every file.WildcardStringSpecify the name of the file, or a wildcard pattern to match multiple files (for example, *.xml).When reading multiple files, Striim will read them in the default order for the operating system.While File Reader is reading a file, it will ignore any changes to the portion of the file that has already been read.If a file is modified after File Reader has read it, it will be read again, resulting in it sending duplicate events.The output type is WAevent except when using\u00a0Avro Parser\u00a0 or\u00a0JSONParser.File Reader sample codeWhen used with DSV Parser, the type for the output stream can be created automatically from the file header (see Creating the FileReader output stream type automatically).Striim also provides templates for creating applications that read from files and write to various targets. 
See\u00a0Creating an application using a template for details.An example from the PosApp sample application:CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'posdata.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n)\nOUTPUT TO CsvStream;See PosApp for a detailed explanation and MultiLogApp for additional examples.Creating the output stream type automaticallyWhen FileReader is used with DSV Parser, the type for the output stream can be created automatically from the file header using OUTPUT TO MAP(filename:'') . For example:CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'posdata*.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n)\nOUTPUT TO CsvStream MAP(filename:\u2019posdata*.csv\u2019);Notes:The specified source file must exist when the source is created.The header must be the first line of the file (the HeaderLineNo setting is ignored by MAP).If multiple files are specified by the wildcard property, the header will be taken from the first one read.All files must be like the first one read, with headers in the first line and the same number of fields.Creating the FileReader output stream type automaticallyWhen FileReader is used with DSV Parser, the type for the output stream can be created automatically from the file header using OUTPUT TO MAP(filename:''). A regular, unmapped output stream must also be specified. For example:CREATE SOURCE PosSource USING FileReader (\n wildcard: 'PosDataPreview*.csv',\n directory: 'Samples/PosApp/appData',\n positionByEOF:false )\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n)\nOUTPUT TO PosSource_Stream,\nOUTPUT TO PosSource_Mapped_Stream MAP(filename:'PosDataPreview.csv');\nNotes:When you use a MAP clause, you may not specify the Column Delimit Till. Header Line No, Line Number, or No Column Delimiter properties.The file specified in the MAP clause must be in the directory specified by FileWriter's directory property when the source is created.The header must be the first line of that file.The column names in the header can contain only alphanumeric, _ (underscore) and $ (dollar sign) characters, and may not begin with numbers.All files to be read must be similar to the one specified in the MAP clause, with headers (which will be ignored) in the first line and the same number of fields.All fields in the output stream type will be of type String.In this release, this feature is available only in the console, the MAP clause cannot be edited in the web UI, and changing the Wildcard property value in the web UI will break the source.In this section: File ReaderFile Reader propertiesFile Reader sample codeCreating the FileReader output stream type automaticallySearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
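To illustrate the Compression Type and Include Subdirectories properties described above, here is a minimal sketch that reads gzipped CSV files from a directory tree. The directory, wildcard, and stream names are placeholders, and compressiontype and includesubdirectories are assumed to be the TQL names of those properties.
CREATE SOURCE GzipCsvSource USING FileReader (
  directory: 'Samples/logs',
  wildcard: '*.csv.gz',
  compressiontype: 'gzip',
  includesubdirectories: true,
  positionByEOF: false
)
PARSE USING DSVParser (
  header: Yes
)
OUTPUT TO GzipCsvStream;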
Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/file-reader.html", "title": "File Reader", "language": "en"}} {"page_content": "\n\nGCS ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesGCS ReaderPrevNextGCS ReaderReads from a configured Google Cloud Storage bucket.Google Cloud Storage is a RESTful online file storage web service for storing and accessing data of the Google Cloud Platform infrastructure. It provides unified object storage for live or archived data. Objects stored in Google Cloud Storage are grouped into buckets. Buckets are containers within the cloud that you can individually assign to storage classesStreaming supportThe GCS Reader is capable of fetching objects from a bucket in the following modes.Download mode: Download the objects to the local folder and start processing. Once a file is processed successfully, Striim will delete that downloaded file from local storage.Streaming mode: Open a remote InputStream and stream the bytes from the remote object and start processing. Streaming mode is enabled when you turn on the \"Use Streaming\" property. With the streaming approach performance is expected to be faster as the bytes are streamed directly instead of requiring additional download steps. Local testing shows for sample data of 411 DSV files of varying size with 1M events in total, the download approach took 162 seconds vs 55 seconds by the streaming approach.SecurityThe following security considerations apply to accessing Google Cloud Storage:You can make the connection to Google Cloud Storage only by using the Service Account key JSON file.You can access the Google Cloud Audit log service via a GoogleCredentials JSON file.RecoveryRecovery is supported. Upon a restart, the GCS Reader uses the details of the last successfully processed file and positions it appropriately in the GCS bucket.GCS Reader propertiespropertytypedefault valuenotesBucket NameStringThe Google Cloud Storage bucket to read from.For example: BucketNameCompression TypeStringSet to gzip when the files to be read are in gzip format. Otherwise, leave blank.Connection Retry PolicyStringretryInterval=30, maxRetries=3This policy determines how the connection should be retried on a failure. The retryInterval is the wait time after which the next try to establish connection is attempted.On exhausting the maxRetries count, the reader will halt the application.Download PolicyStringDiskLimit=2048, FileLimit=10DiskLimit=; FileLimit=The download of objects is throttled based on the configured limits. When one of the limits is hit, the adapter waits for already downloaded objects to be processed and deleted before attempting to download the next object.Configure DiskLimit at least twice the size of the object with maximum size. Set 0 to disable the limit.Note: This property is disregarded when UseStreaming is set to true.Folder NameStringThe folder path under the bucket where objects are picked and processed. 
When the property is left empty, adapter picks objects from the bucket's root.Include Sub FoldersBooleanWhen you enable sub folder processing, the adapter processes objects under the configured path and recursively processes the folders under the configured path.Object Detection ModeEnumGCSDirectoryListingChoose one of these modes:GCSDirectoryListing: The adapter fetches the metadata of all objects from the specified path (bucket, folder) and identifies the delta based on the last object fetched in the previous fetch.GCSAuditLogNotification: The adapter fetches the \"object create\" audit log entries starting from the timestamp of the last object fetched in the previous fetch.Object FilterStringA wildcard of the object name.For example:*obj.csv*obj*First-Object**We currently support only the \"*\" character.ParserStringDefines how to parse data from the adapter.Polling IntervalInteger5000This property controls how often the adapter reads from the source. By default, it checks the source for new data every five seconds (5000 milliseconds). If there is new data, the adapter reads it and sends it to the adapter's output stream.Private Service Connect EndpointStringName of the Private Service Connect endpoint created in the target VPC.This endpoint name will be used to generate the private hostname internally and will be used for all connections.See Private Service Connect support in Google cloud adapters.Project IdStringThe Google Cloud Platform project for the bucket.Service Account KeyFileThe path (from root or the Striim program directory) and file name to the JSON credentials file downloaded from Google (see the information about Service Accounts in Prerequisites). You must copy this file to the same location on each Striim server that will run this adapter, or to a network location accessible by all servers.If you do not specify a value for this property, Striim will use the $GOOGLE_APPLICATION_CREDENTIALS environment variable.Start TimestampStringIn Incremental mode, this property lets you start from a particular point in time.This property is not honored in case of resuming from a recovery and the position supplied by the recovery will take precedence.Supported formats and examples:YYYY-MM-DD'T'hh:mm:ss.sssTZDFor example:2022-01-21T13:38:00.42022-01-21T13:38:00.811File processing starts from the specified point of time, meaning that for any file modified at or after the specified StartTimeStamp, those files will be processed.Where:YYYY = four-digit yearMM = two-digit monthDD = two-digit dayhh = two digits of hourmm = two digits of minutess = two digits of secondsss = one to three digits representing a decimal fraction of a secondTZD = optional time zoneUse StreamingBooleanTrueDisable this mode of operation when you want to download files to local storage for processing instead of streaming directly from GCS.PrerequisitesThe following prerequisites are needed before configuring the GCS Reader:Service account: To access Google Cloud Storage you need valid user credentials with authorization.To use the GCSAuditLogNotification object detection mode, you must configure the audit log and users with necessary permission to access the logs.See Setting up Google Cloud Storage permissions.Setting up Google Cloud Storage permissionsYou must configure the following Google Cloud Storage permissions depending on which object detection modes you will use:To enable reading files from Google Cloud Storage, you must create a custom Google Cloud Storage role with get and list permissions and assign it to 
your Service Account.To enable reading the audit log on Google Cloud Storage, you must enable the audit log and grant audit log permissions to your custom role.If audit log access is needed, check the Data Write property to enable the audit log on GCS.Create a custom role with the following permissions:GCS permissions: storage.objects.get and storage.objects.listGCS audit log permissions (if Audit Log access needed): logging.logEntries.list and logging.privateLogEntries.listCreate a Service Account and assign this custom role.Generate the Service Account key in JSON format.In your Striim GCS Reader configuration, copy the downloaded JSON key path to the Service Account Key property.GCS Reader sample applicationSample TQL for a GCS Reader application:CREATE APPLICATION GCS_Json;\nCREATE SOURCE GCSReader_Json USING Global.GCSReader (\nConnectionRetryPolicy: 'retryInterval=30, maxRetries=3',\nPollingInterval: 5000,\nObjectFilter: '*.json',\nProjectId: 'striimdev',\nUseStreaming: true,\nBucketName: 'kart_bucket',\nDownloadPolicy: 'DiskLimit=2048,FileLimit=10',\nServiceAccountKey:\n'/user/striim/accesskey/GCSuser_access_Key.json',\nObjectDetectionMode: 'GCSDirectoryListing',\nIncludeSubfolders: true )\nPARSE USING Global.JSONParser (\n)\nOUTPUT TO GCSOutput;\nEND APPLICATION GCS_Json;Monitoring metricsThe following monitoring metrics are published by the GCS Reader:Count of cloud objects metadata fetched: The number of object metadata fetched in the last fetch.External I/O latency: The latency of the last metadata fetch call.Name of the last cloud objects metadata fetched.Cloud object statistics:Count of cloud objects metadata fetched: Total objects metadata fetched so far.Downloaded count: Number of files downloaded.Processed count: Number of files processed.Missing count: Number of files deleted in bucket after fetching metadata.Total object size in MB: Total size in MB of all objects metadata fetched so far.Total downloaded size in MB: Total size in MB of all downloaded objects. This metric is not published for the UseStreaming option.Disk utilization in MB: Current disk utilization of the download directory (.striim/componentname/).Current filename.Last file read.LimitationsThe following limitations apply to the GCS Reader:No support for reading encrypted GCS objects.The GCS Reader can read Avro files with an embedded schema, but not with a separate Avro schema file.The GCS Reader adapter's download mode is not supported on Windows OS.If the object name is bigger than what current OS filename length supports, then you should enable the Use Streaming option to avoid exceptions from downloading a filename larger than what the OS supports.If a bucket contains a huge number of objects, the reader may consume a high level of memory and CPU to fetch and process the metadata. This applies to both the GCSDirectoryListing and GCSAuditLogNotification modes.For the GCSDirectoryListing mode, a full metadata fetch happens when the adapter starts and for every subsequent polling fetch.For the GCSAuditLogNotification mode, a full metadata fetch happens when the adapter starts, and subsequent polling calls fetch only the incremental changes from the audit log.In GCSDirectoryListing mode, if the bucket contains a huge number (in the order of millions) of objects, app recovery after crash/stop will take a considerable time since the full metadata has to be fetched to locate the check-pointed object. 
You are recommended to use the GCSAuditLogNotification mode for better performance.In the GCSAuditLogNotification mode, the Google cloud provider has a set default limit (60) on the number of requests per min on reading the audit log. If you are running multiple apps then you should set the polling interval based on the number of apps you are running and the audit log read limit.A time offset of 5 minutes is applied to queries to avoid a conflict during high volume data loading. To modify the 5 minutes default, contact Striim support.In this section: GCS ReaderGCS Reader propertiesPrerequisitesSetting up Google Cloud Storage permissionsGCS Reader sample applicationMonitoring metricsLimitationsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-30\n", "metadata": {"source": "https://www.striim.com/docs/en/gcs-reader.html", "title": "GCS Reader", "language": "en"}} {"page_content": "\n\nGG Trail ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesGG Trail ReaderPrevNextGG Trail ReaderSee Oracle GoldenGate.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2020-06-18\n", "metadata": {"source": "https://www.striim.com/docs/en/gg-trail-reader.html", "title": "GG Trail Reader", "language": "en"}} {"page_content": "\n\nHDFS ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesHDFS ReaderPrevNextHDFS ReaderReads files from Hadoop Distributed File System (HDFS\u00a0) volumes. You can create HDFSReader sources in the web UI using\u00a0Source Preview.See\u00a0Supported reader-parser combinations) for parsing options.The output type is WAevent except when using\u00a0JSONParser.HDFS Reader propertiespropertytypedefault valuenotesAuthentication PolicyStringIf the HDFS cluster uses Kerberos authentication, provide credentials in the format Kerberos, Principal:, KeytabPath:. Otherwise, leave blank. For example: authenticationpolicy:'Kerberos, Principal:nn/ironman@EXAMPLE.COM, KeytabPath:/etc/security/keytabs/nn.service.keytab'Compression TypeStringSet to gzip when wildcard specifies a file or files in gzip format. 
Otherwise, leave blank.DirectoryStringoptional directory from which the files specified by the wildcard property will be read; otherwise files will be read relative to the Hadoop URLEOF DelayInteger100milliseconds to wait after reaching the end of a file before starting the next read operationHadoop Configuration PathStringIf using Kerberos authentication, specify the path to Hadoop configuration files such as core-site.xml and hdfs-site.xml. If this path is incorrect or the configuration changes, authentication may fail.Hadoop URLStringThe URI for the HDFS cluster NameNode. See below for an example. The default HDFS NameNode IPC port is 8020 or 9000 (depending on the distribution). Port 50070 is for the web UI and should not be specified here.For an HDFS cluster with high availability, use the value of the dfs.nameservices property from hdfs-site.xml with the syntax hadoopurl:'hdfs://', for example,\u00a0hdfs://'mycluster'.\u00a0 When the current NameNode fails, Striim will automatically connect to the next one.In MapRFSReader, you may start the URL with\u00a0hdfs:// or\u00a0maprfs:///\u00a0(there is no functional difference).Include SubdirectoriesBooleanFalseSet to True to read files in subdirectories.\u00a0Position by EOFBooleanTrueIf set to True, reading starts at the end of the file, so only new data is acquired. If set to False, reading starts at the the beginning of the file and then continues with new data.Rollover StyleStringDefaultDo not change.Skip BOMBooleanTrueIf set to True, when the wildcard value specifies multiple files, Striim will read the Byte Order Mark (BOM) in the first file and skip the BOM in all other files. If set to False, it will read the BOM in every file.WildcardStringname of the file, or a wildcard pattern to match multiple files (for example, *.xml)HDFS Reader exampleCREATE SOURCE CSVSource USING HDFSReader (\n hadoopurl:'hdfs://myserver:9000/',\n WildCard:'posdata.csv',\n positionByEOF:false\n)In this section: HDFS ReaderHDFS Reader propertiesHDFS Reader exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/hdfs-reader.html", "title": "HDFS Reader", "language": "en"}} {"page_content": "\n\nHP NonStop Enscribe, SQL/MP, and SQL/MX ReadersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesHP NonStop Enscribe, SQL/MP, and SQL/MX ReadersPrevNextHP NonStop Enscribe, SQL/MP, and SQL/MX ReadersSee HP NonStop.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
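For the HDFS Reader described above, the following sketch extends the HDFS Reader example with Kerberos authentication, using the Authentication Policy format shown in the properties table. The principal, keytab path, and cluster URL are placeholders, and hadoopconfigurationpath is assumed to be the TQL name of the Hadoop Configuration Path property.
CREATE SOURCE SecureHDFSSource USING HDFSReader (
  hadoopurl: 'hdfs://mycluster/',
  WildCard: 'posdata.csv',
  positionByEOF: false,
  authenticationpolicy: 'Kerberos, Principal:nn/ironman@EXAMPLE.COM, KeytabPath:/etc/security/keytabs/nn.service.keytab',
  hadoopconfigurationpath: '/etc/hadoop/conf'
)
PARSE USING DSVParser (
  header: Yes
)
OUTPUT TO SecureHDFSStream;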
Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/hp-nonstop-enscribe,-sql-mp,-and-sql-mx-readers.html", "title": "HP NonStop Enscribe, SQL/MP, and SQL/MX Readers", "language": "en"}} {"page_content": "\n\nHTTP ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesHTTP ReaderPrevNextHTTP ReaderListens for HTTP POST requests on the specified port.See\u00a0Supported reader-parser combinations) for parsing options.HTTP Reader propertiespropertytypedefault valuenotesAuthenticate ClientBooleanFalseSet to True to have the server authenticate the client.Compression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.Defer ResponseBooleanFalseWith the default setting of False, HTTP Reader returns code 200 (success) or 400 (failure) to the client. Set to True to return a custom response defined with HTTP Writer.Defer Response TimeoutString5sWhen Defer Response is True, the amount of time to wait for HTTP Writer to respond. See HTTP Writer for more details.IP AddressStringThe Striim server binding IP address for the TCP socket. Set to 0.0.0.0 to allow any available network interface.KeystoreStringLocation of the Java keystore file containing the Striim application\u2019s own certificate and private key. If this is blank and a value is specified for Keystore Type, an empty keystore is created. If Keystore Type is blank, leave blankKeystore Passwordencrypted passwordProvide a password if required to unlock the keystore or to check the integrity of the keystore data. Otherwise, leave blank. See Encrypted passwords.Keystore TypeStringSet to JKS, JCEKS, or PKCS12 to enable SSL. Otherwise, leave blank.Port NumberIntegerThe TCP socket listening port on the Striim server, typically 80 for HTTP or 443 for HTTPS. This port must not be used by another process. In TQL, the property is portno.Thread CountInteger10The number of threads to be initialized for handling multiple concurrent HTTP connections. Valid values are 1\u00a0to\u00a0500.The output type is WAevent.See HTTP Writer for a sample application.In this section: HTTP ReaderHTTP Reader propertiesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
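Since the HTTP Reader section above does not include TQL, here is a minimal sketch of an HTTP Reader source that accepts JSON POST requests on a listening port. The port, binding address, and stream name are placeholders; the adapter's TQL name is assumed to be HTTPReader and IPAddress is assumed to be the TQL name of the IP Address property (the port property is portno, as noted in the table).
CREATE SOURCE HttpJsonSource USING HTTPReader (
  IPAddress: '0.0.0.0',
  portno: 8080
)
PARSE USING JSONParser ()
OUTPUT TO HttpJsonStream;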
Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/http-reader.html", "title": "HTTP Reader", "language": "en"}} {"page_content": "\n\nIncremental Batch ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesIncremental Batch ReaderPrevNextIncremental Batch ReaderWorks like DatabaseReader but has two additional properties, Check Column and Start Position, which allow you to specify that\u00a0reading will begin at a user-selected position. To specify the starting point, the table(s) to be read must have a column containing either a timestamp or a sequential number. The most common use case is for populating data warehouses.See also Spanner Batch Reader.See Connection URL below for a list of supported databases. See\u00a0Database Reader\u00a0for supported data types.DatabaseReaderWarningFor all databases, when this adapter is deployed to a Forwarding Agent, the appropriate JDBC driver must be installed as described in Installing third-party drivers in the Forwarding Agent.Incremental Batch Reader propertiespropertytypedefault valuenotesCheck ColumnStringSpecify the name of the column containing the start position value. The column must have an integer or timestamp data type (such as the creation timestamp or an employee ID number) and the values must be unique and continuously increasing.MySQL and Oracle names are case-sensitive, SQL Server names are not. Use the syntax .
= for MySQL and Oracle and <database>.<schema>.<table>
= for SQL Server.If you specify multiple tables in the Tables property, you may specify different check columns for the tables separated by semicolons. In this case, you may specify the check column for the remaining tables using wildcards: for example,\u00a0MYSCHEMA.TABLE1=UUID; MYSCHEMA.%=LAST_UPDATED would use UUID as the start column for TABLE1 and LAST_UPDATED as the start column for the other tables.Connection URLStringSee Connection URL notes in Database Reader.Database ReaderDatabase Provider TypeStringDefaultControls which icon appears in the Flow Designer.Excluded TablesStringWhen a wildcard is specified for Tables, you may specify here any tables you wish to exclude from the query. Specify the value exactly as for Tables. For example, to include data from all tables whose names start with HR except HRMASTER:Tables='HR%',\nExcludedTables='HRMASTER'Fetch SizeInteger100maximum number of records to be fetched from the database in a single JDBC method execution (see the discussion of fetchsize in the documentation for the your JDBC driver)Passwordencrypted passwordThe password for the specified user. See Encrypted passwords.Polling IntervalString120secThis property controls how often the adapter reads from the source. By default, it checks the source for new data every two minutes (120 seconds). If there is new data, the adapter reads it and sends it to the adapter's output stream. The value may be specified in seconds (as in the default) or milliseconds (for example, 500ms).Return DateTime AsStringJodaSet to\u00a0String to return timestamp values as strings rather than Joda timestamps. The primary purpose of this option is to avoid losing precision when microsecond timestamps are converted to Joda milliseconds. The format of the string is yyyy-mm-dd hh:mm:ss.ffffff.SSL ConfigStringIf the source is Oracle and it uses SSL, specify the required SSL properties (see the notes on SSL Config in Oracle Reader properties).Start PositionStringThe value in the specified check column from which Striim will start reading. Striim will read rows in which the check column's value is the same as or greater or later than this value and skip the other rows. Since Check Column may specify multiple tables you must specify the corresponding table name or wildcard for each value. With the Check Column example above, the Start Position value could be\u00a0MYSCHEMA.TABLE1=1234; MYSCHEMA.%=2018-OCT-07 18:37:55.TablesStringThe table(s) or view(s) to be read. MySQL, Oracle, and PostgreSQL names are case-sensitive, SQL Server names are not. Specify names as .
for MySQL, <schema>.<table>
for Oracle and PostgreSQL, and <database>.<schema>.<table>
for SQL Server.You may specify multiple tables and views as a list separated by semicolons or with the % wildcard. For example, HR% would read all the tables whose names start with HR. You may use the % wildcard only for tables, not for schemas or databases. The wildcard is allowed only at the end of the string: for example, mydb.prefix% is valid, but mydb.%suffix is not.UsernameStringthe DBMS user name the adapter will use to log in to the server specified in ConnectionURLFor all databases, this user must have SELECT permission or privileges on the tables specified in the Tables property. For Oracle, this user must also have SELECT privileges on DBA_TAB_COLS and ALL_COLL_TYPES.Incremental Batch Reader sample codeThe following would read rows from TABLE1 with UUID column values equal to or greater than 1234:\u00a0CREATE SOURCE OraSource USING IncrementalBatchReader ( \n Username: 'striim',\n Password: '********',\n ConnectionURL: '192.0.2.:1521:orcl',\n Tables: 'MYSCHEMA.TABLE1',\n CheckColumn: 'MYSCHEMA.TABLE1=UUID',\n StartPosition: 'MYSCHEMA.TABLE1=1234'\n) \nOUTPUT TO OraSourceOutput;If IncrementalBatchReader sends duplicate records to a DatabaseWriter target, by default the application will terminate. This can happen, for example, when recovery is enabled (see Recovering applications), there are multiple rows with the same CheckColumn timestamp, and only some of them were written before a system failure,\u00a0 To avoid this, specify the appropriate IgnorableException in the target: for example, for CosmosDBWriter,\u00a0RESOURCE_ALREADY_EXISTS.Recovering applicationsIn this section: Incremental Batch ReaderIncremental Batch Reader propertiesIncremental Batch Reader sample codeSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-30\n", "metadata": {"source": "https://www.striim.com/docs/en/incremental-batch-reader.html", "title": "Incremental Batch Reader", "language": "en"}} {"page_content": "\n\nJMS ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesJMS ReaderPrevNextJMS ReaderReads data from the Java Message Service.See\u00a0Supported reader-parser combinations) for parsing options.JMS Reader propertiespropertytypedefault valuenotesCompression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.Connection Factory NameStringthe name of the ConnectionFactory containing the queue or topicCrash On Unsupported Message TypeBooleanTrueWith the default value of True, when JMSReader encounters a message of an unsupported type, the application will terminate. Set to False to ignore such messages.CtxStringthe JNDI initial context factory nameDurable Subscriber NameStringLeave blank to create a nondurable subscription. Specify a subscriber name to create a durable subscription.Enable TransactionBooleanFalseSet to True to use transaction mode. 
This will ensure that all messages are processed by JMSReader before they are removed from the queue. Transactions will commit based on the Transaction Policy. If Transaction Policy is blank, JMS Reader will use the policy MessageCount:1000, interval:1m.JMS Provider ConfigStringOptionally, specify any required path variables as =, separated by semicolons. For examplecom.tibco.tibjms.naming.security_protocol=ssl; \nssl_enable_verify_hostname=false; \ncom.tibco.tibjms.naming.ssl_identity=client_identity.p12; \ncom.tibco.tibjms.naming.ssl_password=password; \ncom.tibco.tibjms.naming.ssl_trusted_certs=server_root.cert.pem; \njava.property.https.protocols=SSLv3; \ncom.tibco.tibjms.naming.ssl_trace=true'Passwordencrypted passwordsee Encrypted passwordsProviderStringthe path to the JNDI bindingProvider NameStringIf reading from IBM MQ, set to ibmmq. Otherwise leave blank.Queue NameStringLeave blank if Topic is specified.TopicStringLeave blank if QueueName is specified.Transaction PolicyStringWhen Enable Transaction is True, specify a message count and/or interval (s / m / h / d) to control when transactions are committed.For example, with the settingTransactionPolicy='MessageCount:100, Interval:10s, JMSReader will send a commit message to the broker every ten seconds or sooner if it accumulates 100 messages. If JMSReader is stopped or terminates before sending a commit, the broker will resend the messages in the current transaction when JMSReader is restarted.When using a transaction policy:Messages must have a Message ID.The output stream of the JMSReader source must be persisted to Kafka (see Persisting a stream to Kafka)Persisting a stream to KafkaWe recommend that recovery be enabled (see Recovering applications.Recovering applicationsThis feature has been tested with ActiveMQ, IBM MQ, and WebLogic.User NameStringa messaging system user with the necessary permissionsNote that JMSReader's properties must accurately reflect your configuration. See Using JMSReader with IBM WebSphere MQ for a detailed discussion.The output type is WAevent except when using\u00a0JSONParser.JMS Reader exampleThe following example is for ActiveMQ:CREATE SOURCE AMQSource USING JMSReader (\n ConnectionFactoryName:'jms/TestConnectionFactory'\n Ctx:'org.apache.activemq.jndi.ActiveMQInitialContextFactory',\n Provider:'tcp://192.168.123.200:61616',\n QueueName:'jms/TestJMSQueue',\n UserName:'striim',\n Password:'******'\n) ...Message headers are included in the output. For example:SNMPNT: WAEvent{\n data: [\"abc\",\"def\"]\n metadata: {\"RecordEnd\":9,\"JMSType\":\"\",\"RecordOffset\":0,\"JMSExpiration\":0,\n \"JMSDestinationName\":\"TanuTopic\",\"JMSRedelivered\":false,\"AMQ_SCHEDULED_REPEAT\":3,\n \"JMSTimestamp\":1599633667256,\n \"messageid\":\"ID:Apples-MacBook-Pro-2.local-54631-1599632751529-4:1:1:1:1\",\n \"JMSDestinationType\":\"Topic\",\"JMSDeliveryMode\":1,\"JMSPriority\":0,\"JMSCorrelationID\":\"\",\n \"RecordStatus\":\"VALID_RECORD\"}\n userdata: null\n};TIBCO_EMS_SSL_sysout: JsonNodeEvent{\n data: {\"idx\":\"0\",\"Test\":\"Test0\"}\n metadata: {\"JMSPriority\":4,\"JMSType\":null,\"JMSXDeliveryCount\":1,\"JMSExpiration\":0,\n \"JMSDestinationName\":\"newqueue5\",\"JMSRedelivered\":false,\"JMSTimestamp\":1598523499812,\n \"JMSCorrelationID\":null,\"JMSDestinationType\":\"Queue\",\"JMSDeliveryMode\":2}\n userdata: null\n};In this section: JMS ReaderJMS Reader propertiesJMS Reader exampleSearch resultsNo results foundWould you like to provide feedback? 
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/jms-reader.html", "title": "JMS Reader", "language": "en"}} {"page_content": "\n\nUsing JMSReader with IBM WebSphere MQSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesJMS ReaderUsing JMSReader with IBM WebSphere MQPrevNextUsing JMSReader with IBM WebSphere MQThe following summary assumes that you are, or are working with, an experienced WebSphere system administrator. This has been tested on WMQ version 7.5.WMQ ConfigurationIf an appropriate user account for use by JMSReader does not already exist, create one. Specify this as the value for JMSReader's UserName property.Add the Striim user to the mqm group.Create a QueueManager.Start the QueueManager\u2019s listener.Create a server connection channel for the QueueManager.Create a queue.Copy JMSAdmin.config from /java/lib/ to the JNDI-Directory where you want to create the binding for use by Striim.Edit this file and set INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory and PROVIDER_URL=/JNDI-Directory.Using MQ Explorer, under JMS administered objects create a new initial context and connection factory as described in chapter 3 of IBM's white paper, \"Configuring and running simple JMS P2P and Pub/Sub applications in MQ 7.0, 7.1, 7.5 and 8.0.\" Specify this connection factory's name (for example, StriimCF) as the value for JMSReader's ConnectionFactoryName property.Create a JMS queue using the queue, initial context, and connection factory from the steps above. Specify this queue's name (for example, StriimJMSQueue) as the value for JMSReader's QueueName property.The following steps may also be necessary:Provide access to MCAUSER and configure CHLAUTH rules accordingly.Configure firewall to allow inbound connections if needed.Striim configurationCopy the following files from /java/ to Striim/lib and restart Striim:com.ibm.mq.jms.jarcom.ibm.mq.jmqi.jarcom.ibm.mq.headers.jarfscontext.jarproviderutil.jarjms.jarcom.ibm.mq.allclient.jarUsing the JMSAdmin tool, generate a .bindings file for the JMSReader. Details on the use of the JMSAdmin tool are available in IBM's documentation. The following example creates a sample .bindings file.Example\u00a01.\u00a0Using JMSAdmin to create a .bindings file$ cd /opt/mqm/java/bin\n-- sample JMSAdmin.config\n$ cat JMSAdmin.config\nINITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory\nPROVIDER_URL=file:///tmp/jndi\nSECURITY_AUTHENTICATION=none\n$ mkdir /tmp/jndi\n$ ./JMSAdmin\nInitCtx> DEFINE QCF(mqqcf) QMGR(QM1) tran(client) chan(DEV.APP.SVRCONN) host(localhost) port(1414)\nInitCtx> DEFINE Q(DEV.QUEUE.1) QUEUE(DEV.QUEUE.1) QMGR(QM1)\nInitCtx> ENDCopy the .bindings file from the JNDI-Directory you created above to a location accessible to Striim (for example, /users/striim/JNDI). This is the location to specify in JMSReader's Provider property. 
Since this is a file, set JMSReader's Ctx property to com.sun.jndi.fscontext.RefFSContextFactory.Edit the .bindings file and change all occurrences of localhost with the IP address of server hosting the queue. For example, if the IP address were 198.51.100.1, you would change localhost(1414) to 198.51.100.0(1414).JMSReader propertiesWith the above configuration, the JMSReader properties would be:CREATE SOURCE WMQSource using JMSReader (\n connectionfactoryname:'mqqcf',\n Ctx:'com.sun.jndi.fscontext.RefFSContextFactory',\n Provider:'file:/users/striim/JNDI/',\n Queuename:'StriimJMSQueue',\n username:'striim',\n password:'******'\n) ...In this section: Using JMSReader with IBM WebSphere MQWMQ ConfigurationStriim configurationJMSReader propertiesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/using-jmsreader-with-ibm-websphere-mq.html", "title": "Using JMSReader with IBM WebSphere MQ", "language": "en"}} {"page_content": "\n\nJMX ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesJMX ReaderPrevNextJMX ReaderReads data from Java virtual machines running Java applications that support Java Management Extensions (JMX).JMX Reader propertiespropertytypedefault valuenotesConnection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. 
Negative values are not supported.Poll IntervalDouble\u00a0how often JMXReader will poll the JVM(s), in seconds\u00a0Service URLStringa URL specifying how to connect to the JVM(s); you may specify multiple URLs separated by commasThe output type is JSONNodeEvent.If multiple JVMs are specified in serviceurl and the application is deployed on multiple servers, the JVMs will automatically be distributed among the servers.JMX Reader exampleThe following application will write Kafka broker operational metrics to a WActionStore:CREATE SOURCE JmxSrc USING JmxReader (\n serviceurl:\"service:jmx:rmi:///jndi/rmi://localhost:9998/jmxrmi\",\n pollinterval:\"1\"\n) \nOUTPUT TO jmxstream;\n\nCREATE TYPE MBeanType (\n time datetime,\n objectname string,\n attributes com.fasterxml.jackson.databind.JsonNode\n);\nCREATE STREAM MetricStream OF MBeanType;\n\nCREATE CQ getMetric\nINSERT INTO MetricStream\nSELECT DNOW(),\n data.get(\"ObjectName\").textValue(),\n data.get(\"Attributes\")\nFROM jmxstream;\n\nCREATE TYPE MetricType (\n objectname string,\n time datetime,\n metricname string,\n metrictype string,\n metricval string\n);\nCREATE WACTIONSTORE MetricStore CONTEXT OF MetricType;\n\nCREATE CQ getAttr\nINSERT INTO MetricStore\nSELECT ms.objectname,\n ms.time, \n attritr.get(0).textValue(),\n attritr.get(1).textValue(),\n attritr.get(2).textValue()\nFROM MetricStream ms, iterator(ms.attributes) attritr;The query\u00a0SELECT * FROM MetricStore; will return events similar to:[\n\u00a0\u00a0 objectname = kafka.network:type=RequestMetrics,name=LocalTimeMs,request=Offsets\n\u00a0\u00a0 time = 2018-06-05T09:01:00.087-07:00\n\u00a0\u00a0 metricname = 50thPercentile\n\u00a0\u00a0 metrictype = double\n\u00a0\u00a0 metricval = 0.0\n]\nIn this section: JMX ReaderJMX Reader propertiesJMX Reader exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/jmx-reader.html", "title": "JMX Reader", "language": "en"}} {"page_content": "\n\nKafka ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesKafka ReaderPrevNextKafka ReaderReads data from Apache Kafka 0.11, 2.1, or 3.3.2. Support for Kafka 0.8, 0.9, and 0.10 is deprecated.For information on configuring Kafka streams, see Reading a Kafka stream with KafkaReader and Persisting a stream to Kafka.See\u00a0Supported reader-parser combinations) for parsing options.Kafka Reader propertiespropertytypedefault valuenotesAuto Map PartitionBooleanTrueWhen reading from multiple partitions, if there are multiple servers in the Striim deployment group on which KafkaReader is deployed ON ALL, partitions will be distributed automatically among the servers. 
Partitions will be rebalanced automatically as Striim servers are added to or removed from the group.When deploying on a Forwarding Agent, set to False.Broker AddressStringIgnorable ExceptionStringAvailable only in Kafka 0.11 and later versions.You can specify a list of exceptions that Striim should ignore so as to avoid a system halt.Currently, the only supported exception is \"OFFSET_MISSING_EXCEPTION\" which allows you to avoid a system halt when there is a partition purge. A partition purge is Kafka's process of freeing up Kafka message logs after the retention period expires. Messages have a TTL (time to live) with retention period properties in place. Upon expiry, messages are marked for deletion according to their creation timestamp. Once a message/offset is marked for deletion, and the consumer tries to read from the purged offset, Kafka throws an \"OFFSET_MISSING_EXCEPTION\".Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Kafka ConfigStringSpecify any properties required by the authentication method used by the specified Kafka broker (see Configuring authentication in Kafka Config.Optionally, specify Kafka consumer properties, separated by semicolons.When reading from a topic in Confluent Cloud, specify the appropriate SASL properties.When messages are in Confluent wire format, specify value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer.When each message contains one Avro record and it is not length-delimited, set value.deserializer= com.striim.avro.deserializer.SingleRecordAvroRecordDeserializer.The default is:max.partition.fetch.bytes=10485760;\nfetch.min.bytes=1048576;\nfetch.max.wait.ms=1000;\nreceive.buffer.bytes=2000000;\npoll.timeout.ms=10000Kafka Config Property SeparatorString;Available only in Kafka 0.11 and later versions. Specify a different separator if one of the producer property values specified in KafkaConfig contains a semicolon.Kafka Config Value SeparatorString=Available only in Kafka 0.11 and later versions. Specify a different separator if one of the producer property values specified in KafkaConfig contains an equal symbol.Partition ID ListStringPartition numbers to read from, separated by semicolons (for example, 0;1), or leave blank to read from all partitionsStart OffsetLong-1With default value of -1, reads from the end of the partition. Change to 0 to read from the beginning of the partition.If you specify startOffset, leave startTimestamp at its default value.You can also specify multiple partition values for the Start Offset and Start Timestamp properties. Specify these values as a list of key-value pairs in the format \"key=value; key=value\". The keys and values are a list of partition IDs with an associated offset. For example:StartOffset: '0=1; 1=0; 2=1024' , where 0, 1, and 2 represent partition numbers, and the values represented the associated offsets.Start TimestampStringFor KafkaReader 0.10 and later only:If not specified, only new transactions (based on current Kafka host system time) are read. Specify a value in the format yyyy-MM-dd hh:mm:ss:SSS (for example, 2017-10-20 13:55:55.000) to start reading from an earlier point.If the Kafka host and Striim host are not in the same time zone, specify the start time using the Striim host's time zone.If you specify startTimestamp, leave startOffset at its default value.You can also specify multiple partition values for the Start Offset and Start Timestamp properties. 
Specify these values as a list of key-value pairs in the format \"key=value; key=value\". The keys and values are a list of partition IDs with an associated timestamp. For example:StartTimestamp: '0=;1=;2=\" , where 0, 1, and 2 represent partition numbers, and the values represented the associated start timestamps.TopicStringConfigure Kafka Consumer Properties for Kafka Reader and Kafka WriterSpecify the Kafka version for the broker being read using VERSION '0.#.0': for example, to read from a Kafka 2.1 or 3.3 cluster, the syntax is,\u00a0CREATE SOURCE USING KafkaReader VERSION '2.1.0'.The output type is WAevent except when using\u00a0Avro Parser\u00a0 or\u00a0JSONParser.Configure Kafka for a topic in Confluent CloudWhen reading or writing from a topic in Confluent Cloud, you specify the appropriate SASL properties. Kafka Writer uses Confluent\u2019s Avro serializer which registers the schema in the Confluent Cloud schema registry and adds the schema registry ID with the respective Avro records in Kafka messages. You specify the Confluent Cloud configuration properties (server URL, SASL config, schema registry URL, and schema registry authentication credentials) as a part of the KafkaConfig properties.For example:KafkaConfig: 'max.request.size==10485760:\n batch.size==10000120:\n sasl.mechanism==PLAIN:\n schema.registry.url==https://example.us-central1.gcp.confluent.cloud:\n security.protocol==SASL_SSL:sasl.jaas.config==org.apache.kafka.common.security.plain.PlainLoginModule \n required username=\\\"\\\" \n password=\\\"\\\";'When messages are in Confluent wire format, specify value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer.Sample applicationThe following sample application will read the data written to Kafka by the\u00a0Kafka Writer sample application and write it to striim/KR11Output.00:CREATE SOURCE KR11Sample USING KafkaReader VERSION '0.11.0'(\n brokerAddress:'localhost:9092',\n topic:'KafkaWriterSample',\n startOffset:'0'\n)\nPARSE USING DSVParser ()\nOUTPUT TO RawKafkaStream;\n\nCREATE TARGET KR11Out USING FileWriter (\n filename:'KR11Output'\n)\nFORMAT USING DSVFormatter ()\nINPUT FROM RawKafkaStream;Configuring authentication in Kafka ConfigFor information on configuring Kafka streams, see Reading a Kafka stream with KafkaReader and Persisting a stream to Kafka.Use Kafka SASL (Kerberos) authentication with SSL encryptionTo use SASL authentication with SSL encryption, include the following properties in your Kafka Reader or Kafka Writer KafkaConfig, adjusting the paths to match your environment and using the passwords provided by your Kafka administrator.KafkaConfigPropertySeparator: ':',\nKafkaConfigValueSeparator: '==',\nKafkaConfig:'security.protocol==SASL_SSL:\n sasl.mechanism==GSSAPI:\n sasl.jaas.config==com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true doNotPrompt=true serviceName=\"kafka\" client=true keyTab=\"/etc/krb5.keytab\" principal=\"striim@REALM.COM\";:\n sasl.kerberos.service.name==kafka:\n ssl.truststore.location==/opt/striim/kafka.truststore.jks:\n ssl.truststore.password==secret:\n ssl.keystore.location==/opt/striim/kafka.keystore.jks:\n ssl.keystore.password==secret:ssl.key.password==secret'Use Kafka SASL (Kerberos) authentication without SSL encryptionTo use SASL authentication without SSL encryption, include the following properties in your Kafka Reader or Kafka Writer KafkaConfigKafkaConfigPropertySeparator: ':',\nKafkaConfigValueSeparator: '==',\nKafkaConfig:'security.protocol==SASL_PLAINTEXT:\n 
sasl.mechanism==GSSAPI:\n sasl.jaas.config==com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true doNotPrompt=true serviceName=\"kafka\" client=true keyTab=\"/etc/krb5.keytab\" principal=\"striim@REALM.COM\";:\n sasl.kerberos.service.name==kafka'Using Kafka SSL encryption without SASL (Kerberos) authenticationTo use SSL encryption without SASL authentication, include the following properties in your Kafka stream property set or KafkaReader or KafkaWriter KafkaConfig, adjusting the paths to match your environment and using the passwords provided by your Kafka administrator.KafkaConfigPropertySeparator: ':',\nKafkaConfigValueSeparator: '==',\nKafkaConfig:'security.protocol==SASL_SSL: \n ssl.truststore.location==/opt/striim/kafka.truststore.jks:\n ssl.truststore.password==secret:\n ssl.keystore.location==/opt/striim/kafka.keystore.jks:\n ssl.keystore.password==secret:\n ssl.key.password==secret'\nUsing Kafka without SASL (Kerberos) authentication or SSL encryptionTo use neither SASL authentication nor SSL encryption, do not specify security.protocol in the KafkaReader or KafkaWriter KafkaConfig.Reading a Kafka stream with KafkaReaderFor an overview of this feature, see Introducing Kafka streams.Reading from a Kafka stream can be useful for development tasks such as doing A/B comparisons of variations on a TQL application. If you modified Samples.PosApp.tql to persist\u00a0PosDataStream to Kafka, the following source would read the persisted data from Kafka.CREATE SOURCE AccessLogSource USING KafkaReader VERSION 0.11.0(\n brokerAddress:'localhost:9998',\n Topic:'Samples_PosDataStream',\n PartitionIDList:'0',\n startOffset:0\n)\nPARSE USING StriimParser ()\nOUTPUT TO KafkaDSVStream;For more information, see Kafka Reader.Kafka ReaderIn this section: Kafka ReaderKafka Reader propertiesConfigure Kafka Consumer Properties for Kafka Reader and Kafka WriterConfiguring authentication in Kafka ConfigUse Kafka SASL (Kerberos) authentication with SSL encryptionUse Kafka SASL (Kerberos) authentication without SSL encryptionUsing Kafka SSL encryption without SASL (Kerberos) authenticationUsing Kafka without SASL (Kerberos) authentication or SSL encryptionReading a Kafka stream with KafkaReaderSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-05\n", "metadata": {"source": "https://www.striim.com/docs/en/kafka-reader.html", "title": "Kafka Reader", "language": "en"}} {"page_content": "\n\nMapR FS ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesMapR FS ReaderPrevNextMapR FS ReaderReads from a file in the MapR File System. Except for the name of the adapter, it is functionally identical to HDFSReader.\u00a0 See HDFS Reader for documentation of the properties.In this section: Search resultsNo results foundWould you like to provide feedback? 
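To illustrate the per-partition Start Offset and Start Timestamp format described in the Kafka Reader properties above (key=value pairs keyed by partition number and separated by semicolons), here is a minimal sketch; the broker address, topic, and offset values are placeholders, and if Start Offset is set this way, Start Timestamp should stay at its default (and vice versa):
CREATE SOURCE PartitionedKafkaSource USING KafkaReader VERSION '2.1.0' (
  brokerAddress: 'localhost:9092',
  topic: 'KafkaWriterSample',
  PartitionIDList: '0;1;2',
  startOffset: '0=1; 1=0; 2=1024'
)
PARSE USING DSVParser ()
OUTPUT TO PartitionedKafkaStream;
The same key=value list format applies to startTimestamp, with each partition's value given as a timestamp in the format documented for that property.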
MariaDB
See Database Reader and MariaDB / SkySQL.

MongoDB Reader
See MongoDB.

MQTT Reader
Reads messages from an MQTT broker. See the MQTT FAQ for information on firewall settings. See Supported reader-parser combinations for parsing options.

MQTT Reader properties
Broker URI (String): format is tcp://<host>:<port>.
Client ID (String): MQTT client ID (maximum 23 characters). Must be unique (not used by any other client) in order to identify this instance of MQTTReader.
The MQTT broker will use this ID to close the connection when MQTTReader goes offline and to resend events when it restarts.
Keystore (String): location of the Java keystore file containing the Striim application's own certificate and private key. If this is blank and a value is specified for Keystore Type, an empty keystore is created. If Keystore Type is blank, leave blank.
Keystore Password (encrypted password): provide a password if required to unlock the keystore or to check the integrity of the keystore data. Otherwise, leave blank. See Encrypted passwords.
Keystore Type (String): set to DKS, JKS, JCEKS, PKCS11, or PKCS12 to enable SSL. Otherwise, leave blank.
Password (encrypted password): the password for Username, if specified. See Encrypted passwords.
QoS (Integer, default 0): 0 = at most once, 1 = at least once, 2 = exactly once.
Topic (String): the MQTT topic to read from.
Username (String): if the MQTT broker is using application-level authentication, provide a username. Otherwise leave blank.
The output type is WAevent except when using Avro Parser or JSONParser.

MQTT Reader example
CREATE SOURCE tempsensor 
USING MQTTReader (
  brokerUri:'tcp://m2m.eclipse.org:1883',
  Topic:'/striim/room687/temperature',
  QoS:0,
  clientId:'Striim'
)
PARSE USING JSONParser ( eventType:'' ) 
OUTPUT TO tempstream;

MS SQL Reader
See Microsoft SQL Server.

MultiFile Reader
Reads files from disk.
This reader is similar to File Reader\u00a0except that it reads from multiple files at once.See\u00a0Supported reader-parser combinations) for parsing options.MultiFile Reader propertiespropertytypedefault valuenotesBlock SizeInteger64amount of data in KB for each read operationCompression TypeStringSet to gzip when wildcard specifies a file or files in gzip format. Otherwise, leave blank.DirectoryStringSpecify the path to the directory containing the file(s). The path may be relative to the Striim installation directory (for example, Samples/PosApp/appdata) or from the root.Group PatternStringa regular expression defining the rollover pattern for each set of files (see Using regular expressions (regex))Position by EOFBooleanTrueIf set to True, reading starts at the end of the file, so only new data is acquired. If set to False, reading starts at the the beginning of the file and then continues with new data.Rollover StyleStringDefaultSet to log4j if reading Log4J files created using RollingFileAppender.Skip BOMBooleanTrueIf set to True, when the wildcard value specifies multiple files, Striim will read the Byte Order Mark (BOM) in the first file and skip the BOM in all other files. If set to False, it will read the BOM in every file.Thread Pool SizeInteger20For best performance, set to the maximum number of files that will be read at once.WildcardStringname of the file, or a wildcard pattern to match multiple files (for example, *.xml)Yield AfterInteger20the number of events after which a thread will be handed off to the next read processThe output type is WAevent except when using\u00a0Avro Parser\u00a0 or\u00a0JSONParser.MultiFIle Reader exampleThis example would recognize log.proc1.0 and log.proc1.1 as parts of one log and log.proc2.0 and log.proc2.1 as parts of another, ensuring that all the events from each log will be read in the correct order.CREATE SOURCE MFRtest USING MultiFileReader (\n directory:'Samples',\n WildCard:'log.proc*',\n grouppattern:'(?:(?:(?:<[^>]+>)*[^<.]*)*.){2}'\n)Alternatively, you can use this statement to ensure the events from each log are read in the correct order:CREATE SOURCE MFRtest USING MultiFileReader (\n directory:'Samples',\n WildCard:'log.proc*',\n grouppattern:'log\\\\.proc[0-9]{1,3}'\n)In this section: MultiFile ReaderMultiFile Reader propertiesMultiFIle Reader exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/multifile-reader.html", "title": "MultiFile Reader", "language": "en"}} {"page_content": "\n\nMySQLSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesMySQLPrevNextMySQLSee Database Reader and MySQL.Database ReaderIn this section: Search resultsNo results foundWould you like to provide feedback? 
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-02-01\n", "metadata": {"source": "https://www.striim.com/docs/en/mysql---readers.html", "title": "MySQL", "language": "en"}} {"page_content": "\n\nOPCUA ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesOPCUA ReaderPrevNextOPCUA ReaderReads data from an OPC-UA server. See the OPC UA Unified Architecture Specification, Part 4: Services\u00a0for information on the MonitoredItem, Session, and Subscription service sets used by this reader.If PKCS 12 is required, generate a certificate for Striim and add its public key in DER format to the\u00a0the OPC-UA server's keystore. See\u00a0OPC UA Unified Architecture Specification, Part 6: Mappings, Annex E.See Firewall Settings for information on which firewall ports must be open.OPCUA Reader propertiespropertytypedefault valuenotesApp URIStringURI to connect to the OPC-UA server: for example,\u00a0rn:striim:opcua:sensor:connectorIf a PKCS 12 certificate has been provided, make sure that its AppUriName field matches this value.Client AliasStringPKCS 12 keystore certificates alias, required when keystore is specifiedKeep Alive CountLong10number of publish intervals with no data after which Striim will send an empty notification to let the server know the client is still runningKeystoreStringfully qualified name of PKCS 12 certificate: for example,\u00a0/path/to/certificate.pfxKeystore Passwordencrypted passwordPKCS 12 keystore password, required when keystore is specifiedLifetime CountLong20number of publish intervals with no requests from Striim after which the OPC-UA server will assume that the Striim client is no longer running and remove its subscription; must be higher than the keepAliveCountMax Notification per PublishInteger2number of events to receive per published notificationMessage Security ModeStringNonesupported values (case-sensitive) are None, Sign, SignAndEncrypt: see\u00a0 OPC UA Unified Architecture Specification, Part 4: Services, section 7.15, and Part 6: Mappings,\u00a0section 6.1Node ID ListStringcomma-separated list (case sensitive) of variable or object nodes to be monitored: for example, ns=1;s=Sensor/Temperature,ns=2;s=Sensor/Pressure; if an object node is specified, all its variables are returnedOPCUA Endpoint URLStringendpoint for the OPC-UA server: for example,\u00a0opc.tcp://localhost:12686/examplePasswordencrypted passwordthe password for the specified usernamePublish IntervalDouble2000how often (in milliseconds) the OPC-UA server checks for requests from StriimDo not set to -1 or 0. 
Instead of using a very small publishInterval, consider using a smaller samplingInterval.Queue SizeInteger10see\u00a0OPC UA Unified Architecture Specification, Part 4: Services, section 5.12.15Read TimeoutLong10000time (in milliseconds) to wait for the server to respond to a connection request before terminating the applicationSampling IntervalDouble2000how often (in milliseconds) the OPC-UA updates the values for the variables specified in nodeIdListSecurity PolicyStringNonesupported values (case-sensitive) are Basic128Rsa15, Basic256, Basic256Sha256, and None: see\u00a0OPC UA Unified Architecture Specification, Part 7: Profiles, sections 6.5.147-150SeverityInteger15see\u00a0OPC UA Unified Architecture Specification, Part 5: Information Model, section 6.4.2UsernameStringsee\u00a0OPC UA Unified Architecture Specification, Part 4: Services, section 7.36.3The output format is\u00a0OPCUADataChangeEvent:OPCUA Reader field type mappingStriim field / typeUA built-in type / field name / data typenotesdataValue /\u00a0ObjectDataValue / Value / Variantthe new value after the change\u00a0dataIsNull /\u00a0BooleandataIsNotNull /\u00a0BooleandataTypeNodeId /\u00a0StringNodeIdsee\u00a0OPC UA Unified Architecture Specification, Part 6: Mappings, section 5.2.2.9 and A.3sourceTime /\u00a0LongDataValue / SourceTimestamp / DateTimethe time the change was made in the source device or programsourcePicoSeconds /\u00a0LongDataValue / SourcePicoSeconds / UInt16see\u00a0OPC UA Unified Architecture Specification, Part 6: Mappings, section 5.2.2.17serverTime /\u00a0LongDataValue / ServerTimestamp / DateTimethe time the server recorded the changeserverPicoSeconds /\u00a0LongDataValue / ServerPicoSeconds / UInt16see\u00a0OPC UA Unified Architecture Specification, Part 6: Mappings, section 5.2.2.17statusCodeValue /\u00a0LongDataValue / Status / StatusCodesee\u00a0https://github.com/OPCFoundation/UA-.NET/blob/master/Stack/Core/Schema/Opc.Ua.StatusCodes.csv for a list of possible valuesstatusCodeIsGood /\u00a0Booleanderived from https://files.opcfoundation.org/schemas/UA/1.02/statusCodeIsBad /\u00a0Booleanderived from https://files.opcfoundation.org/schemas/UA/1.02/statusCodeIsUncertain /\u00a0Booleanderived fromderived from https://files.opcfoundation.org/schemas/UA/1.02/statusCodehasOverflowSet /\u00a0Booleansee OPC UA Unified Architecture Specification, Part 4: Services, section 5.12.1.5metadata /\u00a0java.util.Mapsee example in the sample event belowOPCUA Reader examplesSample\u00a0OPCUADataChangeEvent:{\n dataValue: 3.14\n dataIsNull: false\n dataIsNotNull: true\n dataTypeNodeId: \"ns=0;i=11\"\n sourceTime: 131517670918430000\n sourcePicoSeconds: null\n serverTime: 131517670918430000\n serverPicoSeconds: null\n statusCodeValue: 0\n statusCodeIsGood: true\n statusCodeIsBad: false\n statusCodeIsUncertain: false\n statusCodehasOverflowSet: false\n metadata: \n{\n \"Description\": {\n \"locale\": null,\n \"text\": null\n },\n \"monitoringMode\": \"Reporting\",\n \"requestedQueueSize\": 10,\n \"readValueId\": {\n \"typeId\": {\n \"namespaceIndex\": 0,\n \"identifier\": 626,\n \"type\": \"Numeric\",\n \"null\": false,\n \"notNull\": true\n },\n \"nodeId\": {\n \"namespaceIndex\": 2,\n \"identifier\": \"HelloWorld/ScalarTypes/Double\",\n \"type\": \"String\",\n \"null\": false,\n \"notNull\": true\n },\n \"binaryEncodingId\": {\n \"namespaceIndex\": 0,\n \"identifier\": 628,\n \"type\": \"Numeric\",\n \"null\": false,\n \"notNull\": true\n },\n \"xmlEncodingId\": {\n \"namespaceIndex\": 0,\n \"identifier\": 627,\n 
\"type\": \"Numeric\",\n \"null\": false,\n \"notNull\": true\n },\n \"attributeId\": 13,\n \"indexRange\": null,\n \"dataEncoding\": {\n \"namespaceIndex\": 0,\n \"name\": null,\n \"null\": true,\n \"notNull\": false\n }\n },\n \"ValueRank\": -1,\n \"requestedSamplingInterval\": 2000.0,\n \"revisedSamplingInterval\": 2000.0,\n \"filterResult\": {\n \"bodyType\": \"ByteString\",\n \"encoded\": null,\n \"encodingTypeId\": {\n \"namespaceIndex\": 0,\n \"identifier\": 0,\n \"type\": \"Numeric\",\n \"null\": true,\n \"notNull\": false\n }\n },\n \"BrowseName\": {\n \"namespaceIndex\": 2,\n \"name\": \"Double\",\n \"null\": false,\n \"notNull\": true\n },\n \"ArrayDimensions\": \"-1\",\n \"NodeId\": {\n \"namespaceIndex\": 2,\n \"identifier\": \"HelloWorld/ScalarTypes/Double\",\n \"type\": \"String\",\n \"null\": false,\n \"notNull\": true\n },\n \"DataType\": \"Double\",\n \"clientHandle\": 11,\n \"monitoredItemId\": 11,\n \"revisedQueueSize\": 10\n}\n}\nThe following sample application writes OPC-UA data to an HBase table. The table contains the most recently reported value for each node. The application is divided into two flows so that the source and CQ can run in a Forwarding Agent on the OPC-UA server.CREATE APPLICATION OPCUAapp;\n\nCREATE FLOW OPCUAFlow;\n\nCREATE SOURCE OPCUASource USING OPCUAReader (\n OPCUAEndpointURL:'opc.tcp://mfactorengineering.com:4840',\n nodeIdList:'ns=1;s=EVR2.state.Empty_Box_Timer'\n)\nOUTPUT TO NotificationStream;\n\nCREATE TYPE OPCUADataChange (\n data java.lang.Object,\n dataIsNull java.lang.Boolean,\n dataIsNotNull java.lang.Boolean,\n dataTypeNodeId java.lang.Object,\n sourceTime java.lang.Long,\n serverTime java.lang.Long,\n sourcePicoSeconds java.lang.Long,\n serverPicoSeconds java.lang.Long,\n statusCodeValue java.lang.Long,\n statusCodeGood java.lang.Boolean,\n statusCodeBad java.lang.Boolean,\n statusCodeUncertain java.lang.Boolean,\n statusCodeHasOverflowSet java.lang.Boolean,\n dataType java.lang.String,\n valueRank java.lang.Integer,\n arrayDimensions java.lang.Object,\n BrowseName java.lang.Object,\n Description java.lang.Object,\n monitoringMode java.lang.String,\n nodeIdNS java.lang.Integer,\n nodeIdIdentifier java.lang.Object,\n displayName java.lang.String KEY\n);\nCREATE STREAM OPCUAStream OF OPCUADataChange;\n\nCREATE CQ OPCUADataCQ\nINSERT INTO OPCUAStream\nSELECT dataValue,\n dataIsNull,\n dataIsNotNull,\n dataTypeNodeId,\n sourceTime,\n serverTime,\n sourcePicoSeconds,\n serverPicoSeconds,\n statusCodeValue,\n statusCodeIsGood,\n statusCodeIsBad,\n statusCodeIsUncertain,\n statusCodehasOverflowSet,\n META(x,\"DataType\"),\n META(x,\"ValueRank\"),\n TO_JSON_NODE(META(x,\"ArrayDimensions\")),\n TO_JSON_NODE(META(x,\"BrowseName\")),\n META(x,\"Description\"),\n META(x,\"monitoringMode\"),\n TO_JSON_NODE(META(x,\"NodeId\")).get(\"namespaceIndex\").asInt(),\n TO_JSON_NODE(META(x,\"NodeId\")).get(\"identifier\"),\n TO_JSON_NODE(META(x,\"displayName\")).get(\"text\").asText()\nFROM NotificationStream x;\n\nEND FLOW OPCUAFlow;\n\nCREATE FLOW HBASEFlow;\nCREATE TARGET HBaseTarget USING HBaseWriter (\n HBaseConfigurationPath:\"/path/to/hbase/conf/hbase-site.xml\",\n Tables: \"OPCUAEvents.data\",\n PKUpdateHandlingMode: \"DELETEANDINSERT\"\n)\nINPUT FROM OPCUAStream;\nEND FLOW HBASEFlow;\n\nEND APPLICATION OPCUAapp;In this section: OPCUA ReaderOPCUA Reader propertiesOPCUA Reader field type mappingOPCUA Reader examplesSearch resultsNo results foundWould you like to provide feedback? 
Oracle Database
See Database Reader and Oracle Database.

PostgreSQL
See Database Reader and PostgreSQL.
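Both of these pages point to the generic Database Reader for initial loads. As a rough sketch only, assuming commonly used Database Reader properties (ConnectionURL, Username, Password, Tables) and an illustrative JDBC URL; consult the Database Reader reference for the authoritative property list:
CREATE SOURCE PostgresInitialLoad USING DatabaseReader (
  ConnectionURL: 'jdbc:postgresql://localhost:5432/mydb',
  Username: 'striim',
  Password: '******',
  Tables: 'public.%'
)
OUTPUT TO PostgresLoadStream;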
Salesforce readers
Striim has four readers for various types of Salesforce data:
Salesforce Reader reads Salesforce sObjects using the Force.com REST API.
Salesforce Pardot Reader reads from the Salesforce Pardot marketing tool.
Salesforce Platform Event Reader reads Salesforce platform events (user-defined notifications) using a subscription model.
Salesforce Push Topic Reader reads Salesforce sObject data using a PushTopic subscription.

Salesforce Reader
Reads Salesforce sObject data.
See also Salesforce Pardot Reader,\u00a0Salesforce Platform Event Reader, and\u00a0Salesforce Push Topic Reader.See API Request Limits and Allocations for information on Salesforce-side limits that may require you to limit how much and how quickly Salesforce Reader ingests data.Configuring OAuth for Salesforce ReaderAuthenticating Striim to Salesforce requires an active Salesforce account and a Striim app connected to Salesforce.Configuring OAuth for automatic authentication token renewalFrom the connected app, get the values of the Consumer Key and Consumer Secret.In the Salesforce Reader, set the values of the Consumer Key and Consumer Secret.Generate a security token following the instructions in Salesforce documentation.In the Salesforce Reader, set the value of the Security token.Configuring OAuth for manual authentication token renewalGenerate an authentication token using the following command:curl https://login.salesforce.com/services/oauth2/token -d \"grant_type=password\"\\\n -d \"client_id=\"\\\n -d \"client_secret=\"\\\n\u00a0-d \"username=\"\\\n -d \"password=\"In the Salesforce Reader, set the value of the authentication token.Generate a security token following the instructions in Salesforce documentation.In the Salesforce Reader, set the value of the Security token.Salesforce Reader propertiespropertytypedefault valuenotesAPI End PointStringThe endpoint for your Force.com REST API.Auth Tokencom.webaction. security.PasswordSee Configuring OAuth for Salesforce Reader.When Auto Auth Token Renewal is True, this property is ignored and does not appear in Flow Designer.If autoAuthTokenRenewal is set to false , specify your Salesforce access token (see\u00a0Set Up Authorization on\u00a0developer.salesforce.com: the first section, \"Setting Up OAuth 2.0,\" explains how to create a \"connected app\"; the second section, \"Session ID Authorization,\" explains how to get the token using curl).Auto Auth Token RenewalStringfalseSee Configuring OAuth for Salesforce Reader.With the default value of False, when the specified Auth Token expires the application will halt and you will need to modify it to update the auth token before restarting. This setting is recommended only for development and testing, not in a production environment. When this property is False, you must specify Auth Token, Password, and Username.Set to True to renew the auth token automatically. In this case, leave Auth Token blank and set the Consumer Key, Consumer Secret, Password, Security Token, and Username properties.Connection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Consumer KeyStringWhen Auto Auth Token Renewal is False, this property is ignored and does not appear in Flow Designer.If autoAuthTokenRenewal is set to true, specify the Consumer Key (see\u00a0Set Up Authorization\u00a0on\u00a0developer.salesforce.com).Consumer Secretcom.webaction. security.PasswordWhen Auto Auth Token Renewal is False, this property is ignored and does not appear in Flow Designer.If autoAuthTokenRenewal is set to true, specify the Consumer Secret (see\u00a0Set Up Authorization\u00a0on\u00a0developer.salesforce.com).Custom Objects OnlyBooleanFalseBy default, both standard and custom objects are included. 
Set to true to include only custom objects and exclude standard objects.JWT Keystore PathStringWhen Auto Auth Token is False or OAuth Authorization Flows is PASSWORD, this property is ignored and not visible in the Flow Designer.See Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.JWT Keystore Passwordcom.webaction. security.PasswordWhen Auto Auth Token is False or OAuth Authorization Flows is PASSWORD, this property is ignored and not visible in the Flow Designer.See Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.JWT Certificate NameString\u00a0When Auto Auth Token is False or OAuth Authorization Flows is PASSWORD, this property is ignored and not visible in the Flow Designer.See Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.Migrate SchemaBooleanFalseDo not change this setting. It is reserved for use by applications created using Auto Schema Conversion wizards (see Using Auto Schema Conversion).ModeStringInitialLoadUse the default setting InitialLoad to load all existing data using\u00a0force-rest-api 0.28 and stop.Set to\u00a0BulkLoad\u00a0to load all existing data using the Salesforce\u00a0Bulk API and stop.Set to Incremental to read new data continuously using\u00a0force-rest-api 0.28.OAuth Authorization FlowsEnumPASSWORDWhen Auto Auth Token Renewal is False, this property is ignored and does not appear in Flow Designer.With the default value of PASSWORD, Salesforce Writer will authorize using OAuth 2.0 username and password (see Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 Username-Password Flow for Special Scenarios). In this case, you must specify values for the Consumer Key, Consumer Secret, Password, Security Token, and Username properties.Set to JWT_BEARER to authorize using OAuth 2.0 JWT bearer tokens instead (see Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration). In this case, you must specify the Consumer Key, JWT Certificate Name, JWT Keystore Password, JWT Keystore Path, and Username properties.ObjectsSee sObjects.Passwordcom.webaction. security.PasswordWhen Auto Auth Token Renewal is False or OAuth Authorization Flows is JWT_BEARER, this property is ignored and not visible in the Flow Designer.When Auto Auth Token Renewal is set to true, specify the password for the specified username.Polling IntervalString5 minThis property controls how often the adapter reads from the source. By default, it checks the source for new data every five minutes. If there is new data, the adapter reads it and sends it to the adapter's output stream. If you encounter Salesforce REQUEST_LIMIT_EXCEEDED errors, you may need to increase this value or contact Snowflake to raise your API limits (see Salesforce Developer Limits and Allocations Quick Reference). The maximum value is 120 min.Security TokenStringWhen Auto Auth Token Renewal is False or OAuth Authorization Flows is JWT_BEARER, this property is ignored and not visible in the Flow Designer.When Auto Auth Token Renewal is set to true, specify the security token for the specified username (see Reset Your Security Token on help.salesforce.com).SObjectsString%In the Flow Designer this property is shown as Objects.With the default wildcard value %, all available objects associated with the specified API End Point will be read. 
To exclude standard objects, set Custom Objects Only to true.Alternatively, specify one or more objects to read, separating multiple objects with semicolons, for example, Account; Business_C. Note that any child objects (see Object Relationships Overview) must be specified explicitly, they are not included automatically when you specify the parent.The objects must be queryable. The account associated with the specified Auth Token must have View All Data permission for the objects.Start TimestampStringBy default, Salesforce Reader reads only new events. Optionally, specify the time (based on LastModifiedDate) from which to start reading older events in the format YYYY-MM-DDTHH:MM:SS. If the Salesforce organization's time zone is not the same as Striim's, convert the Salesforce start time to UTC (GMT+00:00) and include a Z at the end of the string. See SimpleDateFormat for more information.Thread Pool SizeInteger5Set this to match your Salesforce Concurrent API Request Limit (see API Request Limits and Allocations).UsernameStringWhen Auto Auth Token Renewal is False, this property is ignored and does not appear in Flow Designer.When Auto Auth Token Renewal is set to true, specify an appropriate username (see Add a Single User on help.salesforce.com).The output type is WAEvent.Salesforce Reader WAEvent exampleWAEvent{\n data: [\n \"a025j000004He03AAC\",\n \"AA\",\n 100.0,\n \"WA\",\n \"USD\",\n null,\n null,\n null]\n metadata: {\n \"LastModifiedDate\":1646674721000,\n \"TableName\":\"Business__c\",\n \"IsDeleted\":false,\n \"CustomObject\":true,\n \"OwnerId\":\"0055j000004c2D2AAI\",\n \"CreatedById\":\"0055j000004c2D2AAI\",\n \"OperationName\":\"INSERT\",\n \"CreatedDate\":1646674721000,\n \"attributes\":\n \"{type=Business__c, \n url=\\/services\\/data\\/v51.0\\/sobjects\\/Business__c\\/a025j000004He03AAC}\",\n \"LastModifiedById\":\"0055j000004c2D2AAI\",\n \"SystemModstamp\":\"2022-03-07T17:38:41.000Z\"},\n }\n userdata: null\n before: null\n dataPresenceBitMap: \"HwA=\"\n beforePresenceBitMap: \"AAA=\"\n typeUUID: {\"uuidstring\":\"01ed7a69-eb41-3f71-8a71-8cae4cf129d6\"}\n};Salesforce Reader monitoringThe Salesforce reader monitors the following metrics:MetricRead timestamp (called as Last Event Modified Tiime)Number of deletesNumber of insertsNumber of updatesCPUCPU rateCPU rate per nodeNumber of serversNumber of events seen per monitor snapshot intervalSource inputSource rateInputInput rateRateLast event positionLatest activityRead lagI/O latency in msTable nameVerifying Salesforce Reader configurationUse the following cURL commands (see Using cURL in the REST Examples and curl.haxx.se) to verify your configuration and get necessary information about available resources and sObjects.Get an access token using the Salesforce login URL.curl https://login.salesforce.com/services/oauth2/token -d \"grant_type=password\" \\\n-d \"client_id=\" -d \"client_secret=\" \\\n-d \"username=\" -d \"password=\"\nUsing the access token returned by that command, test the REST API URL for your organization. The instance is typically the first part of the URL you see in your browser when logged into Salesforce, such as \"mycompany\" in mycompany.salesforce.com. Alternatively, ask your Salesforce technical administrator for access to a connected app. 
(For more information, see Understanding the Username-Password OAuth Authentication Flow.)If you do not have a proxy server:curl https://.salesforce.com/services/data/ \\\n-H 'Authorization: Bearer 'If you have a proxy server (change the proxy server URL to match yours):curl -x http://mycompany.proxy.server.com:8080/ \\\nhttps://.salesforce.com/services/data/ \\\n-H 'Authorization: Bearer 'List available REST resources and sObjects (see List Available REST Resources and Get a List of Objects).curl https://.salesforce.com/services/data/v41.0 \\\n-H 'Authorization: Bearer '\ncurl https://.salesforce.com/services/data/v41.0/sobjects \\\n-H 'Authorization: Bearer '\nFor additional information, see Salesforce's REST API Developer Guide .Salesforce Reader exampleThe following TQL will read from the Business__c sObject and create an appropriate typed stream:CREATE SOURCE SFPoller USING SalesforceReader (\n sObjects: 'Business__c',\n authToken: '********',\n apiEndPoint: '',\n mode: 'InitialLoad',\n autoAuthTokenRenewal: 'false'\n)\nOUTPUT TO DataStream;\n\nCREATE TYPE OpStream_Type (\n Id String KEY,\n Name String,\n POSDataCode__c String,\n Currency__c String\n);\nCREATE STREAM OpStream OF OpStream_Type;\n\nCREATE CQ CQ1\n INSERT INTO OpStream\n SELECT data[0],data[1],data[2],data[3]\n FROM DataStream;In this section: Salesforce ReaderConfiguring OAuth for Salesforce ReaderConfiguring OAuth for automatic authentication token renewalConfiguring OAuth for manual authentication token renewalSalesforce Reader propertiesSalesforce Reader WAEvent exampleSalesforce Reader monitoringVerifying Salesforce Reader configurationSalesforce Reader exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
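The example above performs an initial load with a manually supplied auth token. For contrast, here is a minimal sketch of continuous (Incremental) reading with automatic token renewal, using the properties described in this section; every credential value and the endpoint URL are placeholders, and the exact property spellings should be confirmed in the Flow Designer:
CREATE SOURCE SFIncremental USING SalesforceReader (
  apiEndPoint: 'https://mycompany.salesforce.com',
  mode: 'Incremental',
  autoAuthTokenRenewal: 'true',
  consumerKey: '********',
  consumerSecret: '********',
  username: 'integration.user@example.com',
  password: '********',
  securityToken: '********',
  sObjects: 'Account;Business__c',
  pollingInterval: '5 min'
)
OUTPUT TO SFIncrementalStream;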
Salesforce Pardot Reader
The Salesforce Pardot Reader reads data from an instance of the Salesforce Pardot marketing automation tool using the Account Engagement API (see Get Started with Account Engagement API).

Feature summary and supported objects
The Striim Salesforce Pardot Reader supports the following features:
Salesforce Pardot Object API versions 3 and 4.
Salesforce Pardot sObjects.
OAuth authentication.
Reading from multiple Pardot objects with column filtering and exclusion.
Preserving existing operations and operation metadata.
Starting data capture from a specified initial timestamp.
Recovering data after a pipeline or system failure.

The Striim Salesforce Pardot Reader supports the following objects (name, supported modes, supported operations):
Account: Initial Load; Insert
Campaign: Initial Load; Insert
CustomField: All; Insert
CustomRedirect: All; Insert, Update
DynamicContent: All; Insert, Update
Email: Initial Load; Insert
EmailClicks: All; Insert
EmailTemplate: Initial Load; Insert
Form: All; Insert, Update
LifecycleHistory: All; Insert
LifecycleStage: Initial Load; Insert
ListMembership: All; Insert
List: All; Insert, Update
Opportunity: All; Insert
Prospect: All; Insert, Update
ProspectAccount: Initial Load; Insert
Tag: All; Insert, Update
TagObject: All; Insert
User: All; Insert
Visit: Initial Load; Insert
Visitor: All; Insert, Update
VisitorActivity: All; Insert

Verifying the Salesforce Pardot Reader configuration
Use the following cURL commands (see Using cURL in the REST Examples and curl.haxx.se) to verify your configuration and get necessary information about available resources and sObjects.
Get an access token using the Salesforce login URL.
curl https://login.salesforce.com/services/oauth2/token -d "grant_type=password" \
 -d "client_id=<consumer key>" -d "client_secret=<consumer secret>" \
 -d "username=<username>" -d "password=<password>"
Using the access token returned by that command, test the REST API URL for your organization. The instance is typically the first part of the URL you see in your browser when logged into Salesforce, such as "mycompany" in mycompany.salesforce.com. Alternatively, ask your Salesforce technical administrator for access to a connected app.
(For more information, see Understanding the Username-Password OAuth Authentication Flow.)If you do not have a proxy server:curl https://.salesforce.com/services/data/ \\\n-H 'Authorization: Bearer 'If you have a proxy server (change the proxy server URL to match yours):curl -x http://mycompany.proxy.server.com:8080/ \\\nhttps://.salesforce.com/services/data/ \\\n-H 'Authorization: Bearer 'List available REST resources and sObjects (see List Available REST Resources and Get a List of Objects).curl https://.salesforce.com/services/data/v41.0 \\\n-H 'Authorization: Bearer '\ncurl https://.salesforce.com/services/data/v41.0/sobjects \\\n-H 'Authorization: Bearer '\nFor additional information, see Salesforce's REST API Developer Guide .Configuring OAuth for Salesforce Pardot ReaderAuthenticating Striim to Salesforce Pardot requires an active Salesforce account, a license for Salesforce Pardot, and a Striim app connected to Salesforce. Add the pardot_api OAuth scope to the connected app.Configuring OAuth for automatic authentication token renewalFrom the connected app, get the values of the Consumer Key and Consumer Secret.In the Salesforce Pardot Reader, set the values of the Consumer Key and Consumer Secret.Generate a security token following the instructions in Salesforce documentation.In the Salesforce Pardot Reader, set the value of the Security token.Configuring OAuth for manual authentication token renewalGenerate an authentication token using the following command:curl https://login.salesforce.com/services/oauth2/token -d \"grant_type=password\"\\\n -d \"client_id=\"\\\n -d \"client_secret=\"\\\n\u00a0-d \"username=\"\\\n -d \"password=\"In the Salesforce Pardot Reader, set the value of the authentication token.Generate a security token following the instructions in Salesforce documentation.In the Salesforce Pardot Reader, set the value of the Security token.Salesforce Pardot Reader propertiespropertytypedefault valuenotesAuth Tokencom.webaction. security.PasswordIf autoAuthTokenRenewal is set to false , specify your Salesforce access token (see\u00a0Set Up Authorization on\u00a0developer.salesforce.com: the first section, \"Setting Up OAuth 2.0,\" explains how to create a \"connected app\"; the second section, \"Session ID Authorization,\" explains how to get the token using curl).When Auto Auth Token Renewal is True, this property is ignored and does not appear in Flow Designer.Auto Auth Token RenewalBooleanFalseWith the default value of False, when the specified Auth Token expires the application will halt and you will need to modify it to update the auth token before restarting. This setting is recommended only for development and testing, not in a production environment. When this property is False, you must specify Auth Token, Password, and Username.Set to True to renew the auth token automatically. In this case, leave Auth Token blank and set the Consumer Key, Consumer Secret, Password, Security Token, and Username properties.Business Unit IDStringSpecify the Account Engagement instance from which the adapter will read (see Find my Account Engagement Account ID).Connection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. 
Negative values are not supported.Consumer KeyStringIf Auto Auth Token Renewal is set to true, specify the Consumer Key (see\u00a0Set Up Authorization\u00a0on\u00a0developer.salesforce.com).Consumer SecretStringIf Auto Auth Token Renewal is set to true, specify the Consumer Key (see\u00a0Set Up Authorization\u00a0on\u00a0developer.salesforce.com).When Auto Auth Token Renewal is False, this property is ignored and does not appear in Flow Designer.Custom ParamsStringThis property optionally enables overrides for individual source parameters while fetching records. Salesforce documentation provides a complete list of parameters for different objects.The parameter format is =:|:. Separate multiple objects with the ; character. For example, when Use Bulk Export is True,, Prospect=created_after:2021-01-01; VisitorActivity=created_before:2023-01-01. The date format must be YYYY-MM-DD.Exclude ObjectsStringOptionally, specify a list of objects, separated by semicolons, to be excluded from any wildcard selection specified in sObjects. This property does not support wildcards.Export Status Check intervalString120sThis property specifies how often, in seconds, Striim will check the status of a bulk export job. Higher values result in fewer API calls and directly affect the performance of an export job.When Use Bulk Export is False, this property is ignored and does not appear in Flow Designer.JWT Certificate NameStringSee Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.When Auto Auth Token is False or OAuth Authorization Flows is PASSWORD, this property is ignored and not visible in the Flow Designer.JWT Keystore Passwordcom.webaction. security.PasswordSee Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.When Auto Auth Token is False or OAuth Authorization Flows is PASSWORD, this property is ignored and not visible in the Flow Designer.JWT Keystore PathStringSee Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.When Auto Auth Token is False or OAuth Authorization Flows is PASSWORD, this property is ignored and not visible in the Flow Designer.Migrate SchemaBooleanFalseDo not change this setting. It is reserved for use by applications created using Auto Schema Conversion wizards (see Using Auto Schema Conversion).ModeEnumInitialLoadThis setting controls the basic behavior or the adapter.Use the default value of InitialLoad to read all existing data and stop.Set to IncrementalLoad to read all new data continuously.OAuth Authorization FlowsenumPASSWORDThis property selects the authorization method the adapter will use.With the default value of PASSWORD, Salesforce Writer will authorize using OAuth 2.0 username and password (see Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 Username-Password Flow for Special Scenarios). In this case, you must specify values for the Consumer Key, Consumer Secret, Password, Security Token, and Username properties.Set to JWT_BEARER to authorize using OAuth 2.0 JWT bearer tokens instead (see Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration). 
In this case, you must specify the Consumer Key, JWT Certificate Name, JWT Keystore Password, JWT Keystore Path, and Username properties.ObjectsSee sObjects.Pardot API VersionEnumV4With the default value of V4, the reader will use Salesforce Pardot API version 4.Set to V3 to use API version 3.Passwordcom.webaction. security.PasswordWhen Auto Auth Token Renewal is set to true, specify the password for the specified\u00a0Username (see Encrypted passwords).When Auto Auth Token Renewal is False or OAuth Authorization Flows is JWT_BEARER, this property is ignored and not visible in the Flow Designer.Polling IntervalString120sThis property controls how often the adapter reads from the source. By default, it checks the source for new data every two minutes (120 seconds). If there is new data, the adapter reads it and sends it to the adapter's output stream. If you encounter Salesforce REQUEST_LIMIT_EXCEEDED errors, you may need to increase this value or contact Snowflake to raise your API limits (see Salesforce Developer Limits and Allocations Quick Reference).When Mode is InitialLoad, this property is ignored and not displayed in the Flow Designer.Security Tokencom.webaction. security.PasswordWhen Auto Auth Token Renewal is set to true, specify the security token for the specified username (see Reset Your Security Token on help.salesforce.com).When Auto Auth Token Renewal is False or OAuth Authorization Flows is JWT_BEARER, this property is ignored and not visible in the Flow Designer.sObjectsStringSpecify which standard objects to be read from Salesforce Pardot. To read all objects, use the % wildcard. Alternatively, list multiple objects separated by semicolons.In the Flow Designer this property is shown as Objects.For more information, see Account Engagement API / Get Started / Object Field References.Start TimestampStringBy default, Salesforce Pardot Reader reads only new events. Optionally, specify the time (based on LastModifiedDate) from which to start reading older events in the format yyyy-MM-dd HH:mm:ss. If the Salesforce organization's time zone is not the same as Striim's, convert the Salesforce start time to UTC (GMT+00:00) and include a Z at the end of the string. See SimpleDateFormat for more information.Thread Pool CountInteger0With the default value of 0, the reader uses a single thread in the Striim JVM. Set this number to match the number of concurrent transactions for your Account Engagement API (see Get Started with Account Engagement API > Rate Limits).Use Bulk ExportBooleanFalseWhen Mode is InitialLoad, this controls which API the adapter will use. (When Mode is IncrementalLoad, this property is ignored and not displayed in the Flow Designer.)With the default value of False, the reader uses the Account Engagement API.Set to true to use the asynchronous bulk export API (see Version 4 Docs / Export / Export API Overview) during initial load.The following objects support bulk export:ExternalActivityListMembershipProspectProspectAccountVisitorVisitorActivityBulk export is limited to one year of historical data. 
Use the created_after and created_before parameters in the Custom Params property to specify a custom export window (see Version 4 Docs / Export / Export API Overview / Query).When Use Bulk Export is True, the Export Status Check Interval property is enabled.UsernameStringIf Auto Auth Token Renewal is set to true, specify an appropriate username (see Add a Single User on help.salesforce.com).When Auto Auth Token Renewal is False, this property is ignored and does not appear in Flow Designer.The output type is WAEvent.Sample TQL for Salesforce Pardot ReaderThe following TQL will perform an initial load (since the default Mode is InitialLoad):CREATE SOURCE PardotIL USING SalesforcePardotReader ( \n autoAuthTokenRenewal: true, \n OAuthAuthorizationFlows: 'PASSWORD', \n ThreadPoolCount: '5',\n securityToken: '',\n UserName: '',\n BusinessUnitId: '',\n consumerKey: '',\n consumerSecret: '',\n SObjects: '%',\n Password: ''\n) \nOUTPUT TO PardotIL_OutputStream;\nSalesforce Pardot Reader limitationsThe Account Engagement API supports up to five concurrent transactions (see Get Started with Account Engagement API > Rate Limits). Exceeding this limit may cause your application to terminate.When an object does not support incremental load, all records for that object are synced at each poll. To avoid duplicate records, enable Merge mode for the targets of such objects.When an object does not have a createdAt or updatedAt field and Striim cannot otherwise determine a timestamp for the object, it will be duplicated in the target at every polling interval.
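To poll continuously instead of stopping after the initial load, the same source can run in IncrementalLoad mode. The following is a minimal sketch, not confirmed syntax: it reuses only the properties shown in the sample above, and assumes the Mode and PollingInterval TQL property names follow the spelling of the corresponding Flow Designer labels.
CREATE SOURCE PardotCDC USING SalesforcePardotReader (
  -- Mode and PollingInterval property spellings are assumptions; verify in your Striim release
  autoAuthTokenRenewal: true,
  OAuthAuthorizationFlows: 'PASSWORD',
  Mode: 'IncrementalLoad',
  PollingInterval: '120s',
  securityToken: '',
  UserName: '',
  BusinessUnitId: '',
  consumerKey: '',
  consumerSecret: '',
  SObjects: 'Prospect;VisitorActivity',
  Password: ''
)
OUTPUT TO PardotCDC_OutputStream;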
Last modified: 2023-06-30\n", "metadata": {"source": "https://www.striim.com/docs/en/salesforce-pardot-reader.html", "title": "Salesforce Pardot Reader", "language": "en"}} {"page_content": "\n\nSaleforce Pardot Reader sample WAEventSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesSalesforce readersSalesforce Pardot ReaderSaleforce Pardot Reader sample WAEventPrevNextSaleforce Pardot Reader sample WAEventWAEvent{\n \u00a0data: [\"India\",\"New Prospect\",null,\"PSE\",\"0055j000004c0kjAAA\",\"\",100,true,1674191288000,16494441,\n \"Karnataka\",\"Never active.\",\"\",1675086644000,\"560087\",65391,null,[],false,\"Debargha\",[],\"\",\n \"00Q5j00000K8jgpEAB\",null,{\"id\":65391,\"name\":\"NewStriimCampaign\",\"cost\":null,\"folderId\":null},\n \"\",\"Unicca emporis bangalore\",\"100\",\"10000\",\"00Q5j00000K8jgpEAB\",\"Ganguly\",\"Bangalore\",{},\"\",\n null,0,{\"user\":{\"id\":56686231,\"account\":1003481,\"email\":\"debargha.ganguly@striim.com\",\n \"firstName\":\"Debargha\",\"lastName\":\"Ganguly\",\"jobTitle\":\"\",\"role\":\"Administrator\",\n \"createdAt\":[2022,10,31,4,39,11,0],\"updatedAt\":[2023,1,10,23,20,56,0],\"activation\":null}},false,\n \"\",false,\"https://striim2.my.salesforce.com/00Q5j00000K8jgpEAB\",[],\"Striim\",\"Engineering\",\n \"debargha.ganguly@striim.com\",\"\",\"\",true,1675086643000,\"\",null,null,\"\"]\n \u00a0metadata: {\"TableName\":\"Prospect\",\"OperationName\":\"Update\"}\n \u00a0userdata: null\n \u00a0before: [null,null,null,null,null,null,null,null,null,16494441,null,null,null,null,null,null,null,\n null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,\n null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null]\n \u00a0dataPresenceBitMap: \"f39/f39/fw8=\"\n \u00a0beforePresenceBitMap: \"AAQAAAAAAAA=\"\n \u00a0typeUUID: {\"uuidstring\":\"01edaba8-3d43-c3a1-b5d0-baea82be6d02\"}\n};In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
\n", "metadata": {"source": "https://www.striim.com/docs/en/saleforce-pardot-reader-sample-waevent.html", "title": "Saleforce Pardot Reader sample WAEvent", "language": "en"}} {"page_content": "\n\nSalesforce Platform Event ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesSalesforce readersSalesforce Platform Event ReaderPrevNextSalesforce Platform Event ReaderReads Salesforce platform events (user-defined notifications) using a subscription model (see\u00a0Delivering Custom Notifications with Platform Events).This adapter is based on\u00a0Salesforce Reader. The output type and data type support are the same.\u00a0The properties are the same, except for the following:Salesforce ReaderMode and Polling Interval: Since platform events use a subscription model, these properties are omitted. Platform events are received as they are published by Salesforce.Event Name: Specify the name of the platform event to subscribe to. The account associated with the\u00a0authToken must have View All Data permission for the platform event.\u00a0Any fields added to the platform event while the reader is running will be ignored.Replay From (default value Earliest): With the default value, will read all platform events currently in the retention window. To read only new events, set to\u00a0Tip. To start reading from a specific point in the retention window, set to a valid ReplayID value (see Platform Event Fields.sObjects: Omitted since this adapter reads platform events, not sObjects.When the output of this adapter is the input for\u00a0DatabaseWriter and other table-based targets, events are insert-only. (Since platform events are one-time notifications, it does not make sense to update or delete them.)Recovery (see Recovering applications) is supported provided that Striim is restarted within 24 hours, the length of time Salesforce holds events in the retention window.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-04-27\n", "metadata": {"source": "https://www.striim.com/docs/en/salesforce-platform-event-reader.html", "title": "Salesforce Platform Event Reader", "language": "en"}} {"page_content": "\n\nSalesforce Push Topic ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesSalesforce readersSalesforce Push Topic ReaderPrevNextSalesforce Push Topic ReaderReads Salesforce sObject data using a PushTopic subscription (see Salesforce's Streaming API Developer Guide).This gives you the latest data sooner than SalesforceReader. Note, however, that Salesforce restricts use of PushTopics in various ways, including limiting how many events you can read in a day and how many PushTopics you can create. See the PushTopic section of\u00a0Streaming API Allocations for details.This adapter is based on\u00a0Salesforce Reader. The output type and data type support are the same.\u00a0The properties are the same, except for the following:Salesforce ReaderMode and Polling Interval: Since this uses a subscription model, these properties are omitted. sObject data is received as it is published by Salesforce.PushTopic: The PushTopic to subscribe to. You must also specify the sObject to be read. (To read multiple sObjects from a PushTopic, create a source for each.)\u00a0The account associated with the\u00a0authToken\u00a0must have View All Data permissions for both the PushTopic and\u00a0the sObject.\u00a0Replay From (default value Earliest): With the default value, will read all events currently in the PushTopic retention window. To read only new events, set to\u00a0Tip. To start reading from a specific point in the retention window, set to a valid ReplayID value (see Message Durability).SObject: the standard or custom object to read, for example, Account or Business_C (the account associated with the authToken must have View All Data permission for the object)Recovery (see Recovering applications) is supported provided that Striim is restarted within 24 hours, the length of time Salesforce holds events in the retention window.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
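The following is a minimal, hypothetical TQL sketch of a PushTopic subscription. The adapter name SalesforcePushTopicReader and the pushTopic, sObject, and replayFrom property spellings are assumptions based on the property descriptions above, AccountUpdates is a placeholder PushTopic, and authentication follows the Salesforce Reader examples.
CREATE SOURCE PushTopicSource USING SalesforcePushTopicReader (
  -- adapter and property names here are illustrative assumptions, not confirmed syntax
  apiEndPoint: 'https://ap2.salesforce.com',
  authToken: '********',
  pushTopic: 'AccountUpdates',
  sObject: 'Account',
  replayFrom: 'Earliest'
)
OUTPUT TO PushTopicStream;

CREATE TARGET PushTopicOut USING SysOut (name: PushTopicEvents)
INPUT FROM PushTopicStream;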
Last modified: 2022-04-27\n", "metadata": {"source": "https://www.striim.com/docs/en/salesforce-push-topic-reader.html", "title": "Salesforce Push Topic Reader", "language": "en"}} {"page_content": "\n\nReplicating Salesforce data to OracleThe following TQL will load all data from the Salesforce Business__c object to the Oracle table BUSINESS_ORACLE_P, then stop writing (in other words, new data will be ignored). Replace ******** with your Salesforce access token (see Understanding the Web Server OAuth Authentication Flow on developer.salesforce.com).CREATE SOURCE SalesforceCloud USING SalesForceReader (\n sObjects: 'Business__c',\n authToken: '********',\n mode: 'BulkLoad',\n apiEndPoint: 'https://ap2.salesforce.com',\n autoAuthTokenRenewal: false\n)\nOUTPUT TO DataStream;\n\nCREATE TARGET OracleOnPremise USING DatabaseWriter (\n ConnectionURL:'jdbc:oracle:thin:@192.168.123.18:1521/XE',\n Username:'system',\n Password:'manager',\n Tables: 'Business__c,SYSTEM.BUSINESS_ORACLE_P'\n)\nINPUT FROM DataStream;The following TQL will replicate multiple objects:CREATE SOURCE SalesforceCloud USING SalesForceReader (\n sObjects: 'Business1__c;Business2__c',\n apiEndPoint: 'https://ap2.salesforce.com',\n mode: 'Incremental',\n authToken: '***' )\nOUTPUT TO DataStream;\n\nCREATE TARGET OracleOnPremise USING DatabaseWriter (\n ConnectionURL: 'jdbc:oracle:thin:@localhost:1521/XE',\n Username: 'qatest',\n Tables: 'Business1__c,qatest.BUSINESS_ORACLE_P1 COLUMNMAP(ID=ID);Business2__c,qatest.BUSINESS_ORACLE_P2 COLUMNMAP(ID=ID)',\n Password: '***'\n)\nINPUT FROM DataStream;The following TQL will replicate new data continuously:CREATE SOURCE SalesforceCloud USING SalesForceReader ( \n sObjects: 'Business__c',\n authToken: '********',\n pollingInterval: '5 min',\n apiEndPoint: 'https://ap2.salesforce.com',\n mode: 'Incremental'\n ) \nOUTPUT TO DataStream;\n\nCREATE TARGET OracleOnPremise USING DatabaseWriter ( \n DriverName:'oracle.jdbc.OracleDriver',\n ConnectionURL:'jdbc:oracle:thin:@192.168.123.18:1521/XE',\n Username:'system',\n Password:'manager',\n Tables: 'Business__c,SYSTEM.BUSINESS_ORACLE_P'\n ) \nINPUT FROM DataStream;© 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-12-05\n", "metadata": {"source": "https://www.striim.com/docs/en/replicating-salesforce-data-to-oracle.html", "title": "Replicating Salesforce data to Oracle", "language": "en"}} {"page_content": "\n\nSalesforce data type support and correspondence\nSalesforce data type: Striim type\nbase64: java.lang.Object\nboolean: String\nbyte: Byte\ndate: org.joda.time.LocalDate\ndateTime: org.joda.time.DateTime\ndouble: Double\nint: Long\nstring: String\ntime: String\nSalesforce sObject field to Striim type mapping\nsObject field: Striim type\naddress: java.lang.String (see discussion below)\nanyType: String\ncalculated: String\ncombobox: String\ncurrency: Double\nDataCategoryGroupReference: String\nemail: String\nencryptedstring: String\nID: String\nJunctionIdList: String\nlocation: String (see discussion below)\nmasterrecord: String\nmultipicklist: String\npercent: Double\nphone: String\npicklist: String\nreference: String\ntextarea: String\nurl: String\nThe address and location fields are compound types. The java.lang.String data field contains a JSON representation of the sObject values, and the individual compound components also appear in the WAEvent metadata map. The following is the WAEvent for a location:data: [\"a067F00000B52obQAB\",\"a067F00000B52ob\",\"{latitude=1.0, longitude=1.0}\"]\nmetadata: {\"LastModifiedDate\":\"2018-09-11T05:45:43.000+0000\",\"IsDeleted\":false,\n \"CustomObject\":true,\"OperationName\":\"INSERT\",\"SystemModstamp\":\"2018-09-11T05:45:43.000+0000\",\n \"TableName\":\"compoundobject__c\",\"OwnerId\":\"0057F000001oImoQAE\",\"CreatedById\":\"0057F000001oImoQAE\",\n \"location__Latitude__s\":1.0,\"CreatedDate\":\"2018-09-11T05:45:43.000+0000\",\n \"location__Longitude__s\":1.0,\"attributes\":{\"type\":\"compoundobject__c\",\n \"url\":\"/services/data/v34.0/sobjects/compoundobject__c/a067F00000B52obQAB\"},\n \"LastModifiedById\":\"0057F000001oImoQAE\"}\n© 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/salesforce-data-type-support-and-correspondence.html", "title": "Salesforce data type support and correspondence", "language": "en"}} {"page_content": "\n\nServiceNow ReaderReads ServiceNow tables. For full table access, the ServiceNow user account must have the admin and snc_read_only roles. For per-table access, the ServiceNow user account must have the sys_db_object and sys_glide_object roles at the row level and field level ACL as well as the personalize_dictionary role. See Access control list rules in ServiceNow's documentation for details on how to create access privileges for users.To generate the Client ID and Client Secret properties, set up an OAuth application endpoint as directed in the ServiceNow documentation.In this release, ServiceNow Reader sends only INSERT and UPDATE operations, not DELETE operations. Thus when replicating to a target, data deleted in the source will remain in the target.ServiceNow Reader propertiespropertytypedefault valuenotesBatch APIBooleanTrueWith the default value of True, multiple requests for different tables are combined into a single batch API request.Set to False to use a separate table API request for each fetch.Client IDencrypted passwordclient ID given by the ServiceNow account user to enable third-party access (see Encrypted passwords)Client Secretencrypted passwordclient secret given by the ServiceNow account user to enable third-party access (see Encrypted passwords)Connection RetriesInteger3The number of times Striim tries to connect before halting the application.Connection TimeoutInteger60Number of seconds Striim will wait for a connection before retrying (see Connection Retries) or halting the application. With the default setting of 60 and the default Connection Retries setting of 3, if a connection attempt does not succeed within 60 seconds, the adapter will try again. If the second attempt does not succeed within 60 seconds it will try a third time. If the third try is unsuccessful, the application will halt.Connection URLStringURL of the ServiceNow instanceExcluded TablesStringOptionally, specify tables to exclude from the query. Does not support wildcards.Fetch SizeInteger10000the number of records to fetch for a single paginated API call for a given tableMax ConnectionsInteger20number of connections for the HTTP client poolModeenumSupported values:InitialLoad: Captures all the data for specified tables from the given start timestamp to the current timestamp. Stops after synchronizing with the current timestamp.IncrementalLoad: Captures ongoing data changes for the specified tables from the given start timestamp. By default, the start timestamp is the current timestamp. Continues to run based on the given poll interval.InitialAndIncrementalLoad: Combines initial and incremental mode. 
Uses start timestamp and poll interval.Passwordencrypted passwordthe password of the ServiceNow account (see Encrypted passwords)Polling IntervalString120This property controls how often the adapter reads from the source. By default, it checks the source for new data every two minutes (120 seconds). If there is new data, the adapter reads it and sends it to the adapter's output stream.Start TimestampStringWhen this property is not specified, only new data is read. Optionally, specify the timestamp from which the adapter will read.The timestamp format is YYYY-MMM-DD HH:MM:SS. For example, to start at 5:00 pm on February 1, 2020, specify 2020-FEB-01 17:00:00.TablesStringThe table(s) or view(s) to be read. The SQL wild card character % may be used. For example, inc% will read all tables with names that start with \"inc\".Thread Pool CountString10The number of parallel threads used to read from the specified tables. For optimal performance, set to the number of tables to be read divided by 40.UsernameStringUser ID of a ServiceNow account.The output type is WAEvent.ServiceNow Reader monitoringThe ServiceNow Reader monitors the following metrics.Metric nameRead timestampNumber of insertsNumber of updatesCPUCPU rateCPU rate per nodeNumber of serversNumber of events seen per monitor snapshot intervalSource inputSource rateInputInput rateRateLast event positionLatest activityRead lagTable-level informationTable nameServiceNow Reader exampleUse the following cURL command (see Using cURL in the REST Examples and curl.haxx.se) to verify your configuration and get information about available resources.curl -X GET \\\n 'https://< your ServiceNow instance >.service-now.com/api/now/table/sys_db_object?sysparm_fields=name%2Csuper_class' \\\n -H 'accept: application/json' \\\n -H 'authorization: Bearer YWRtaW46RGJCcHY4ZzJFUWdK' \\\n -H 'content-type: application/json'\nThe following TQL will read from the snow2fw_source table:CREATE SOURCE asnow2fw_source USING ServiceNowReader \n(\n Mode:'InitialLoad',\n PollingInterval:'1',\n ClientSecret:'********',\n Password:'********',\n Tables:'u_empl',\n UserName:'myusername',\n ClientID:'********',\n ConnectionUrl:'https://myinstance.service-now.com/'\n)\nOUTPUT TO snow2fw_stream;In this section: ServiceNow ReaderServiceNow Reader propertiesServiceNow Reader monitoringServiceNow Reader exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
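For ongoing replication rather than a one-time load, the same source can run in IncrementalLoad mode with a start timestamp. The following is a minimal sketch, not confirmed syntax: it reuses the property names from the example above, and the StartTimestamp spelling is an assumption based on the property description.
CREATE SOURCE snow2fw_cdc USING ServiceNowReader (
  -- StartTimestamp spelling is an assumption; other properties follow the example above
  Mode: 'IncrementalLoad',
  PollingInterval: '120',
  StartTimestamp: '2020-FEB-01 17:00:00',
  ClientSecret: '********',
  Password: '********',
  Tables: 'u_empl',
  UserName: 'myusername',
  ClientID: '********',
  ConnectionUrl: 'https://myinstance.service-now.com/'
)
OUTPUT TO snow2fw_cdc_stream;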
Last modified: 2023-06-30\n", "metadata": {"source": "https://www.striim.com/docs/en/servicenow-reader.html", "title": "ServiceNow Reader", "language": "en"}} {"page_content": "\n\nSpanner Batch ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesSpanner Batch ReaderPrevNextSpanner Batch ReaderSpannerBatchReader is identical to Incremental Batch Reader, minus the Username and Password properties, plus the Augment Query Clause and Statement Timeout properties.Incremental Batch ReaderAugment Query ClauseOptionally, use this property to specify FORCE_INDEX directives (see Table hints) using the syntax:{\"
\": {\"tablehints\": {\"FORCE_INDEX\": \"\"}}You may specify multiple indexes (no more than one per table) separated by commas, for example:{\"order\": {\"tablehints\": {\"FORCE_INDEX\": \"order_amount_index\"}},\n{\"customer\": {\"tablehints\": {\"FORCE_INDEX\": \"customer_ID_index\"}}Connection URLSpecify this as jdbc:cloudspanner:/projects//instances//databases/?credentials=. See Spanner Writer for a detailed description of the service account key.Spanner WriterStatement TimeoutIf you encounter deadline exceeded errors (see Troubleshoot Cloud Spanner deadline exceeded errors) with tables that have a large number of rows, set this property to increase the amount of time allowed to read tables. For example, to set the timeout to five seconds, specify 5s. You may specify the timeout in seconds (s), milliseconds (ms), microseconds (us), or nanoseconds (ns). If this property is blank, the timeout will be controlled by Spanner.In this section: Spanner Batch ReaderAugment Query ClauseConnection URLStatement TimeoutSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-15\n", "metadata": {"source": "https://www.striim.com/docs/en/spanner-batch-reader.html", "title": "Spanner Batch Reader", "language": "en"}} {"page_content": "\n\nSQL ServerSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesSQL ServerPrevNextSQL ServerSee Database Reader and SQL Server.Database ReaderSQL ServerIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2020-09-14\n", "metadata": {"source": "https://www.striim.com/docs/en/sql-server.html", "title": "SQL Server", "language": "en"}} {"page_content": "\n\nS3 ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesS3 ReaderPrevNextS3 ReaderReads from Amazon S3.See\u00a0Supported reader-parser combinations) for parsing options.S3 Reader propertiespropertytypedefault valuenotesaccesskeyidStringSpecify an AWS access key ID (created on the AWS Security Credentials page) for a user with read permissions (ListBucket, GetObject) on the bucket.When Striim is running in Amazon EC2 and there is an IAM role with those permissions associated with the VM, leave accesskeyid and secretaccesskey blank to use the IAM role.blocksizeInteger64amount of data in KB for each read operationbucketnameStringS3 bucket to read fromclientconfigurationStringOptionally, specify one or more of the following property-value pairs, separated by commas.If you access S3 through a proxy server, specify it here using the syntax\u00a0ProxyHost=,ProxyPort=,ProxyUserName=,ProxyPassword=. Omit the user name and password if not required by your proxy server.Specify any of the following to override Amazon's defaults:ConnectionTimeout=: how long to wait to establish the HTTP connection, default is 50000MaxErrorRetry=: the number of times to retry failed requests (for example, 5xx errors), default is 3SocketErrorSizeHints=: TCP buffer size, default is 2000000See\u00a0http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/section-client-configuration.html for more information about these settings.compressiontypeStringSet to gzip when the files to be read are in gzip format. Otherwise, leave blank.foldernameStringSpecify a folder within the bucket, or leave blank to read from the root.objectnameprefixStringThe start of the names of the files to be read. For example,\u00a0myfile will read\u00a0myfile*.*. Specify * to read all files.secretaccesskeyencrypted passwordSpecify the AWS secret access key for the specified access key.The output type is WAevent except when using\u00a0Avro Parser\u00a0 or\u00a0JSONParser.S3 Reader exampleCREATE SOURCE S3Source USING S3Reader (\n bucketname:'MyBucket',\n objectnameprefix:'posdata',\n accesskeyid:'********************',\n secretaccesskey:'****************************************',\n foldername:'MyFolder'\n)\nPARSE USING DSVParser ()\nOUTPUT TO S3SourceStream;Create an IAM user for use with S3 Reader or S3 WriterNoteThe user interfaces described below are subject to change by Amazon at any time.Follow these steps to create an IAM user with the necessary permissions to read from or write to an S3 bucket and to get the access key and secret access key for that user. 
If appropriate in your environment you may use the same IAM user for both S3 Reader and S3 Writer.If the bucket does not already exist, create it.Select the bucket and click Copy ARN.Go to the AWS Policy Generator at https://awspolicygen.s3.amazonaws.com/policygen.htmlFor Select Type of Policy, select IAM Policy.For AWS Service, select Amazon S3.Select the individual actions you want to allow or select All Actions.In the Amazon Resource Name (ARN) field, paste the bucket's ARN that you copied.Click Add Statement. You should see something similar to this:Click Generate Policy.Copy the Policy JSON Document and close the dialog.Go to the IAM Policies page and click Create policy.Select the JSON tab, replace the existing text with the policy JSON document you copied, and click Next: Tags > Next: Review.Enter a descriptive name for the policy (make a note of this as you will need it later), optionally enter a description, and click Create Policy.Go to the IAM Users page and click Add users.Enter a name for the IAM user, select Access key, click Next: Permissions.Select Attach existing policies directly, in Filter policies enter the name of the policy you created, select the policy, and click Next: Tags > Next: Review > Create user.Click Download .csv. This file contains the access key and secret access key you must provide to S3 Reader and/or S3 Writer.In this section: S3 ReaderS3 Reader propertiesS3 Reader exampleCreate an IAM user for use with S3 Reader or S3 WriterSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-03\n", "metadata": {"source": "https://www.striim.com/docs/en/s3-reader.html", "title": "S3 Reader", "language": "en"}} {"page_content": "\n\nSybaseSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesSybasePrevNextSybaseSee Database Reader.Database ReaderIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-04-14\n", "metadata": {"source": "https://www.striim.com/docs/en/sybase.html", "title": "Sybase", "language": "en"}} {"page_content": "\n\nTCP ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesTCP ReaderPrevNextTCP ReaderReads data via TCP.See\u00a0Supported reader-parser combinations) for parsing options.TCP Reader propertiespropertytypedefault valuenotesBlock SizeInteger64amount of data in KB for each read operationCompression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.IP AddressStringlocalhostIf the server has more than one IP address, specify the one to use.Max Concurrent ClientsInteger5Port NoInteger10000port number where the TCPReader will listenThe output type is WAevent except when using\u00a0JSONParser.TCP Reader exampleThis example uses the DSV parser.create source CSVSource using TCPReader (\n IpAddress:'10.1.10.55',\n PortNo:'3546'\n)\nparse using DSVParser (\n header:'yes'\n)In this section: TCP ReaderTCP Reader propertiesTCP Reader exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/tcp-reader.html", "title": "TCP Reader", "language": "en"}} {"page_content": "\n\nTeradataSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesTeradataPrevNextTeradataSee Database Reader.Database ReaderIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2020-08-31\n", "metadata": {"source": "https://www.striim.com/docs/en/teradata.html", "title": "Teradata", "language": "en"}} {"page_content": "\n\nUDP ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesUDP ReaderPrevNextUDP ReaderReads data via UDP.See\u00a0Supported reader-parser combinations) for parsing options.UDP Reader propertiespropertytypedefault valuenotesBlock SizeInteger64amount of data in KB for each read operationCompression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.IP AddressStringlocalhostthe IP address of the Striim server that will receive the UDP data (do not use localhost unless testing with a source running on the same system as Striim)Port NoInteger10000port number where the UDPReader will listenThe output type is WAevent, except when using CollectdParser\u00a0or\u00a0JSONParser.UDP Reader exampleThis example uses the DSV parser.CREATE SOURCE CSVSource USING UDPReader (\n IpAddress:'192.0.2.0',\n PortNo:'3546'\n)\nPARSE USING DSVParser (\n header:'yes'\n)In this section: UDP ReaderUDP Reader propertiesUDP Reader exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/udp-reader.html", "title": "UDP Reader", "language": "en"}} {"page_content": "\n\nWindows Event Log ReaderSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesWindows Event Log ReaderPrevNextWindows Event Log ReaderUse with the Forwarding Agent (Using the Striim Forwarding Agent) to read Windows event logs.Windows Event Log Reader propertiespropertytypedefault valuenotesEOF DelayInteger1000milliseconds to wait after reaching the end of a file before starting the next read operationEvent Source NameStringSecuritythe log to readother supported values are\u00a0 Application and\u00a0Systemyou may also specify any custom log name\u00a0Include Event ID ListString*specify a comma-separated list of eventIDs to output only those events, or use default value to output all eventsStart Event Record NumberInteger-1the recordNumber from which to begin reading the logwith the default value, reads new events only\u00a0set to 1 to read the entire logThis adapter uses Microsoft's\u00a0OpenEventLog function so returns only data provided by that function. 
In some cases this may not include all the fields displayed in the Event Log UI.Windows Event Log Reader examplesThe following example reads only Security log events with EventID 4625 (logon failures):CREATE SOURCE WindowsLogSource USING WindowsEventLogReader(\n includeEventIDList:'4625'\n)\nOUTPUT TO SecurityLogStream;The data type for the output is WindowsLogEvent, which contains a single single field, data, an array containing the events' fields. The first nine fields are always the same and are selected using data. (as shown in the example below):field nametypesample valuesourceNamestringMicrosoft-Windows-Security-AuditingcomputerNamestringwsrv2012-00userSidstringrecordNumberlong1138timeGeneratedDateTime1400798337timeWrittenDateTime1400798337eventIDlong4625eventTypelong16eventCategorylong12544The remaining fields are selected using data.stringPayload[#] (as shown in the example below). How many fields there are and what they contain vary depending on the EventID. For example, for Windows 2012 Security Log EventID 4625:#field namesample value0SubjectUserSidS-1-5-181SubjectUserNameWSRV2012-00$2SubjectDomainNameWORKGROUP3SubjectLogonId0x3e74TargetUserSidS-1-0-05TargetUserNameAdministrator6TargetDomainNameWSRV2012-007Status0xc000006d8FailureReason%%23139SubStatus0xc000006a10LogonType711LogonProcessNameUser3212AuthenticationPackageNameNegotiate13WorkstationNameWSRV2012-0014TransmittedServices15LmPackageName16KeyLength017ProcessId0x73818ProcessNameC:\\Windows\\System32\\winlogon.exe19IpAddress10.1.10.18020IpPort0The following example creates a stream FailedLoginStream containing all the fields for Windows 2012 Security Log events with EventID 4625 (\"an account failed to log on\"). See Using the Striim Forwarding Agent for an explanation of the DEPLOY statement.CREATE APPLICATION EventId4625;\n\nCREATE FLOW agentFlow;\n\nCREATE SOURCE WindowsEventLogReaderSource USING WindowsEventLogReader ( \n includeEventIDList: '4625',\n eventSourceName: 'Security'\n ) \nOUTPUT TO rawLog;\n\nEND FLOW agentFlow;\n\nCREATE FLOW serverFlow;\n\nCREATE TYPE WindowsSecurityLogType(\t\n sourceName String,\n computerName String,\n userSid String,\n recordNumber long,\n timeGenerated DateTime,\n timeWritten DateTime,\n eventID long,\n eventType long,\n eventCategory long,\n SubjectUserSid String,\n SubjectUserName String,\n SubjectDomainName String,\n SubjectLogonId String,\n TargetUserSid String,\n TargetUserName String,\n TargetDomainName String,\n Status String,\n FailureReason String,\n SubStatus String,\n LogonType String,\n LogonProcessName String,\n AuthenticationPackageName String,\n WorkstationName String,\n TransmittedServices String,\n LmPackageName String,\n KeyLength String,\n ProcessId String,\n ProcessName String,\n IpAddress String,\n IpPort String\n);\nCREATE STREAM FailedLogonStream OF WindowsSecurityLogType;\n\nCREATE CQ MappingCQ \nINSERT INTO FailedLogonStream\nSELECT \n data.sourceName,\n data.computerName,\n data.userSid,\n data.recordNumber,\n data.timeGenerated,\n data.timeWritten,\n data.eventID,\n data.eventType,\n data.eventCategory,\n data.stringPayload[0],\n data.stringPayload[1],\n data.stringPayload[2],\n data.stringPayload[3],\n data.stringPayload[4],\n data.stringPayload[5],\n data.stringPayload[6],\n data.stringPayload[7],\n data.stringPayload[8],\n data.stringPayload[9],\n data.stringPayload[10],\n data.stringPayload[11],\n data.stringPayload[12],\n data.stringPayload[13],\n data.stringPayload[14],\n data.stringPayload[15],\n data.stringPayload[16],\n data.stringPayload[17],\n 
data.stringPayload[18],\n data.stringPayload[19], \n data.stringPayload[20] \nFROM rawLog;\n\nCREATE TARGET winlogLout USING SysOut ( \n name:winlog\n ) \nINPUT FROM FailedLogonStream;\n\nEND FLOW serverFlow;\n\nEND APPLICATION EventId4625;\n\nDEPLOY APPLICATION EventId4625 with agentFlow in agent, serverFlow in default;See Handling variable-length events with CQs for an example of handling multiple EventIDs.In this section: Windows Event Log ReaderWindows Event Log Reader propertiesWindows Event Log Reader examplesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/windows-event-log-reader.html", "title": "Windows Event Log Reader", "language": "en"}} {"page_content": "\n\nReading from other sourcesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesReading from other sourcesPrevNextReading from other sourcesWhen there is no compatible reader for your data source, consider Using Apache Flume\u00a0or Using the Java event publishing API.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2018-07-11\n", "metadata": {"source": "https://www.striim.com/docs/en/reading-from-other-sources.html", "title": "Reading from other sources", "language": "en"}} {"page_content": "\n\nParsersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersPrevNextParsersWhen a reader supports multiple input types, the type is selected by specifying the appropriate parser.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-04-27\n", "metadata": {"source": "https://www.striim.com/docs/en/parsers.html", "title": "Parsers", "language": "en"}} {"page_content": "\n\nAAL (Apache access log) ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersAAL (Apache access log) ParserPrevNextAAL (Apache access log) ParserParses Apache access logs. See Supported reader-parser combinations for compatible readers.AAL Parser propertiespropertytypedefault valuenotesArchive DirStringif specified, the adapter will also read the rotated log files from the archive directoryCharsetStringUTF-8Column Delimit TillInteger-1With the default value of -1, all delimiters are interpreted as columns. If a positive value is specified, that number of delimiters are interpreted as columns, and any additional delimiters are treated as if escaped. For example, if the columndelimiter value is a space, and columndelimittill is 4, this row:\u00a02012-12-10 10:30:30:256 10.1.10.12 jsmith User Login Error, invalid username or passwordwould be interpreted as five columns:2012-12-10\n10:30:30:256\n10.1.10.12\njsmith\nUser Login Error, invalid username or passwordColumn DelimiterStringdefault value is one space (UTF-8 0x20)Ignore Empty EolumnBooleanTrueQuote SetString[]~\\\"characters that mark the start and end of each fieldRow DelimiterString\\nsee Setting rowdelimiter valuesSeparatorString~The output type of a source using AALParser is WAEvent.AAL Parser exampleCREATE SOURCE AALSource USING FileReader (\n directory:'Samples/appData',\n wildcard:'access_log.log',\n positionByEOF:false\n)\nPARSE USING AALParser ()\nOUTPUT TO RawAccessStream;\n\t\nCREATE TYPE AccessLogEntry (\n srcIp String KEY,\n accessTime DateTime,\n timeStr String,\n request String);\n\nCREATE STREAM AccessStream OF AccessLogEntry;\n\nCREATE CQ ParseAccessLog\nINSERT INTO AccessStream\nSELECT data[0],\n TO_DATE(data[3],\"dd/MMM/yyyy:HH:mm:ss Z\"),\n data[3],\n data[4]\nFROM RawAccessStream;In this section: AAL (Apache access log) ParserAAL Parser propertiesAAL Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/aal--apache-access-log--parser.html", "title": "AAL (Apache access log) Parser", "language": "en"}} {"page_content": "\n\nAvro ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersAvro ParserPrevNextAvro ParserParses input in Avro format. 
See Supported reader-parser combinations for compatible readers.When reading .avro files, no properties are required. For non-file sources, specify one of the two properties.Avro Parser propertiespropertytypedefault valuenotesSchema File NameStringthe\u00a0path and name of the Avro schema fileSchema Registry ConfigurationStringWhen using Confluent Cloud's schema registry, specify the required authentication properties in the format basic.auth.user.info=,basic.auth.credentials.source=. Otherwise, leave blank.Schema Registry URIStringthe URI for a Confluent or Hortonworks schema registry, for example,\u00a0http://198.51.100.55:8081For detailed discussion of the schema registry, see\u00a0Using the Confluent or Hortonworks schema registry.The output type of a source using AvroParser is AvroEvent, which contains the elements of the metadata\u00a0map (see Using the META() function) and the\u00a0data\u00a0array, and is of the type org.apache.avro.generic.GenericRecord.Avro Parser examplesYou can download the following example TQL files as AvroParser.zip from https://github.com/striim/doc-downloads.The following application will generate an Avro schema file PosDataPreview.avsc and convert Samples/PosApp/appData/PosDataPreview.csv to an Avro data file PosDataPreview.avro.CREATE APPLICATION WritePosData2Avro;\nCREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO CsvStream;\n \nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT\n TO_STRING(data[0]) as businessName,\n TO_STRING(data[1]) as merchantId,\n TO_STRING(data[2]) as primaryAccountNumber,\n TO_STRING(data[3]) as posDataCode,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n TO_STRING(data[5]) as expDate,\n TO_STRING(data[6]) as currencyCode,\n TO_DOUBLE(data[7]) as authAmount,\n TO_STRING(data[8]) as terminalId,\n TO_STRING(data[9]) as zip,\n TO_STRING(data[10]) as city\nFROM CsvStream;\n \nCREATE TARGET AvroFileOut USING FileWriter(\n filename:'Samples.PosDataPreview.avro'\n)\nFORMAT USING AvroFormatter (\n schemaFileName:'Samples.PosDataPreview.avsc'\n)\nINPUT FROM PosDataStream;\nEND APPLICATION WritePosData2Avro;See AVROFormatter for more information.The following sample application uses the files created by WritePosData2Avro and writes a subset of the fields from PosDataPreview.avro to SysOut:CREATE APPLICATION AvroParserTest;\n\nCREATE SOURCE AvroSource USING FileReader (\n directory:'Samples',\n WildCard:'PosDataPreview.avro',\n positionByEOF:false\n)\nPARSE USING AvroParser (\n schemaFileName:\"Samples/PosDataPreview.avsc\"\n)\nOUTPUT TO AvroStream;\n\nCREATE CQ parseAvroStream \nINSERT INTO ParsedAvroStream\nSELECT \n-- conversion from org.apache.avro.util.Utf8 to String is required here\n data.get(\"merchantId\").toString() as merchantId,\n TO_DATE(data.get(\"dateTime\").toString()) as dateTime,\n TO_DOUBLE (data.get(\"authAmount\")) as amount,\n data.get(\"zip\").toString() as zip\nFROM AvroStream; \n \nCREATE TARGET AvroOut\nUSING SysOut (name:Avro)\nINPUT FROM ParsedAvroStream;\n\nEND APPLICATION AvroParserTest;The following sample application will read from the Kafka topic created by the\u00a0Oracle2Kafka sample application from Using the Confluent or Hortonworks schema registry.CREATE APPLICATION ReadFromKafka RECOVERY 1 SECOND INTERVAL;\n \nCREATE SOURCE Kafkasource USING KafkaReader VERSION 
'0.11.0'(\nbrokerAddress:'localhost:9092',\nTopic:'test',\nstartOffset:0\n)\nPARSE USING AvroParser()\nOUTPUT TO DataStream;\n \nCREATE TYPE CompleteRecord(\n completedata com.fasterxml.jackson.databind.JsonNode\n);\n \nCREATE STREAM CompleteRecordInJSONStream OF CompleteRecord;\n \nCreate CQ AvroTOJSONCQ\n INSERT INTO CompleteRecordInJSONStream\n SELECT AvroToJson(y.data) FROM DataStream y;\n \nCREATE TYPE ElementsOfNativeRecord(\n datarecord com.fasterxml.jackson.databind.JsonNode,\n before com.fasterxml.jackson.databind.JsonNode,\n metadata com.fasterxml.jackson.databind.JsonNode,\n userdata com.fasterxml.jackson.databind.JsonNode,\n datapresenceinfo com.fasterxml.jackson.databind.JsonNode,\n beforepresenceinfo com.fasterxml.jackson.databind.JsonNode\n);\n \nCREATE STREAM NativeRecordStream OF ElementsOfNativeRecord;\n \nCREATE CQ GetNativeRecordInJSONCQ\nINSERT INTO NativeRecordStream\nSELECT\n completedata.get(\"data\"),\n completedata.get(\"before\"),\n completedata.get(\"metadata\"),\n completedata.get(\"userdata\"),\n completedata.get(\"datapresenceinfo\"),\n completedata.get(\"beforepresenceinfo\")\nFROM CompleteRecordInJSONStream;\n \nCREATE TARGET bar using SysOut(name:'complete_record') input from CompleteRecordInJSONStream;\n \nEND APPLICATION ReadFromKafka;For additional Avro Parser examples, see Reading from and writing to Kafka using Avro.© 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/avro-parser.html", "title": "Avro Parser", "language": "en"}} {"page_content": "\n\nBinary ParserCan be configured to parse a wide variety of binary formats. See Supported reader-parser combinations for compatible readers.Binary Parser propertiespropertytypedefault valuenotesEndianBooleanFalseFalse means the format is little endian; set to True if the format is big endianMetadataStringpath from root or relative to the Striim program directory and the name of the JSON file defining how to parse the dataString Terminated By NullBooleanTrueWith the default value of True, strings must be terminated by a single null character (ASCII 00). 
If set to False, string length must be defined in the JSON file specified by the metadata property.The output type of a source using BinaryParser is WAEvent.Binary Parser data typesThe data types supported by the metadata JSON are:typelength in bytesBYTE1DOUBLE8FLOAT4INTEGER4LONG8SHORT2STRINGas set by stringColumnLengthBinary Parser examplesFor a sample application, download BinaryParser.zip, which contains insorders.c,\u00a0metadata.json, and bin.tql, from https://github.com/striim/doc-downloads.\u00a0To run this sample application:Run\u00a0insorders.c and copy its\u00a0test.bin output to the\u00a0Samples directory./***** insorders.c ******************************************\n** *\n** Generate order records in binary format *\n** Usage insorders *\n** Max recordcount 1000000 *\n** Note *\n** *\n************************************************************/\n#include \n#include \n#include \n#include \n/* Define some useful constants */\n#define HIGH 1\n#define MEDIUM 2\n#define LOW 3\n#define PRICE_TYPES 3\n#define FILENAME2 \"/Users/mahadevan/binary/test.bin\"\n#define MAX_LEN_CNAME 20\n#define MAX_DIG_CNAME 10\n#define NAMELEN 20\n#define MAX_ITEM_COUNT 200\n#define EXPENSIVE_ITEM_PRICE 745.99\n#define MEDIUM_ITEM_PRICE 215.99\n#define LOW_ITEM_PRICE 34.79\n \n/* Function prototypes */\nint testdata(int numRecords);\ndouble setPrice(int cid);\nint main(int argc, char *argv[])\n{\n int count = 1000;\n int ret = 1;\n /* just basic error checking for now is sufficient */\n \n \n printf(\" Inserting %d Records\\n\", count);\n \n \n if ((ret = testdata(count)) !=0)\n printf(\"TestData Failed\\n\");\n} /* End Main */\n/* Start Function for test data generation */\nint testdata(int numRecords)\n{\n int i,nm1,nm2 = 0;\n \n /* Declare variables for random record creation \\n\")*/ \n int order_id=10000;\n int district_id=0;\n int warehouse_id = 100;\n char cname[numRecords][MAX_LEN_CNAME]; \n char PrefixName[NAMELEN]=\"John\";\n char numRecStr[MAX_DIG_CNAME];\n int count_id=0;\n double price=0;\n /* time_t now; Just generate sysdate for now */\n bzero(numRecStr,MAX_DIG_CNAME);\n FILE *fptr_bin =fopen(FILENAME2, \"w\");\n if (fptr_bin==NULL) return -1;\n for (nm1=0; nm1 < numRecords; nm1++)\n {\n sprintf(numRecStr, \" %d\",nm1); \n strcat(PrefixName,numRecStr); \n strcpy(cname[nm1],PrefixName); \n //printf(\"Generated Name is %s\\n\", cname[nm1]);\n /* Re-init to base root for name */\n strcpy(PrefixName, \"John\");\n /* Generate a random count of items between 0 and 20 */\n count_id = rand()%MAX_ITEM_COUNT;\n price = setPrice(count_id);\n \n printf(\"Price is %f\\n\",price);\n short cnamelen = strlen(cname[nm1]);\n /* Generate record with the following fields */\n fwrite((const void *) (&order_id), sizeof(int), 1, fptr_bin); order_id++;\n fwrite((const void *) (&district_id), sizeof(int), 1, fptr_bin); district_id++;\n fwrite((const void *) (&warehouse_id), sizeof(int), 1, fptr_bin); warehouse_id++;\n fwrite((const void *) (&cnamelen), sizeof(short), 1, fptr_bin); \n fwrite((const void *) (cname[nm1]), sizeof(char), strlen(cname[nm1]), fptr_bin); \n fwrite((const void *) (&count_id), sizeof(int), 1, fptr_bin); count_id++;\n fwrite((const void *) (&price), sizeof(double), 1, fptr_bin); \n }\n fclose(fptr_bin);\n return 0;\n}\n/* Start setPrice */\n double setPrice(int cid)\n {\n short i;\n double price, total_price=0;\n \n i = rand()%PRICE_TYPES;\n switch (i) \n {\n case(HIGH):\n price = EXPENSIVE_ITEM_PRICE;\n case(MEDIUM):\n price = MEDIUM_ITEM_PRICE;\n case(LOW):\n price = LOW_ITEM_PRICE;\n }\n 
total_price = cid*price;\n return total_price;\n } /* End setPrice */Copy metadata.json to the\u00a0Samples directory.{\n \"namespace\": \"test\",\n \"name\": \"SensorFeed\",\n \"version\": \"1.0\",\n \"type\": \"record\",\n \"fields\": [\n {\n \"name\": \"order_id\",\n \"type\": \"INTEGER\",\n \"position\":\"0\"\n }, \n {\n \"name\": \"district_id\",\n \"type\": \"INTEGER\",\n \"position\":\"1\"\n }, \n {\n \"name\": \"warehouse_id\",\n \"type\": \"INTEGER\",\n \"position\":\"2\"\n }, \n {\n \"name\": \"customer_name\",\n \"type\": \"STRING\",\n \"position\":\"3\"\n }, \n {\n \"name\": \"count_id\",\n \"type\": \"INTEGER\",\n \"position\":\"4\"\n }, \n {\n \"name\": \"price\",\n \"type\": \"DOUBLE\",\n \"position\":\"5\"\n } \n ]\n }Load bin.tql\u00a0and run the bin application.CREATE APPLICATION bin;\n \nCREATE SOURCE BinarySource using FileReader (\n directory:'Samples',\n wildcard:'test.bin',\n positionByEOF:false\n)\nPARSE USING BinaryParser (\n metadata:'Samples/metadata.json',\t\n endian:true\n)\nOUTPUT TO BinaryStream;\n\nCREATE TARGET BinaryDump USING LogWriter(\n name:BinaryReader, \n filename:'out.log'\n) INPUT FROM BinaryStream;\n\nCREATE TYPE OrderType (\n OrderId integer,\n DistrictId integer,\n WarehouseId integer,\n Name String,\n CountId integer,\n Price double);\n \nCREATE STREAM OrderStream OF OrderType;\n\nCREATE CQ OrderCQ \nINSERT into OrderStream\nSELECT data[0],\n data[1],\n data[2],\n data[3],\n data[4],\n TO_DOUBLE(data[5])\nFROM BinaryStream;\n\nCREATE TARGET DSVOut\nUSING FileWriter(filename: binary_output)\nFORMAT USING DSVFormatter()\nINPUT FROM OrderStream;\n\nEND APPLICATION bin;The binary_output.csv output file will start like this:10000,0,100,John 0,7,243.53\n10001,1,101,John 1,73,2539.67\n10002,2,102,John 2,130,4522.7\n10003,3,103,John 3,144,5009.76\n10004,4,104,John 4,123,4279.17\n10005,5,105,John 5,40,1391.6In this section: Binary ParserBinary Parser propertiesBinary Parser data typesBinary Parser examplesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
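Building on the bin sample application above: once OrderCQ has populated OrderStream with typed OrderType events, those events can be queried by field name like any other typed stream. The following is a minimal illustrative sketch, not part of the downloadable sample; it assumes the bin application's OrderType and OrderStream are already deployed, and the 1000.00 threshold and the names HighValueStream, HighValueOrders, and HighValueOut are placeholders.
-- Filter the parsed binary orders down to high-value orders
-- and write them to the server console for inspection.
CREATE STREAM HighValueStream OF OrderType;

CREATE CQ HighValueOrders
INSERT INTO HighValueStream
SELECT o.OrderId,
       o.DistrictId,
       o.WarehouseId,
       o.Name,
       o.CountId,
       o.Price
FROM OrderStream o
WHERE o.Price > 1000.00;

CREATE TARGET HighValueOut
USING SysOut(name:HighValue)
INPUT FROM HighValueStream;
Because OrderStream is a typed stream, the WHERE clause can reference Price directly rather than going through the WAEvent data array.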
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/binary-parser.html", "title": "Binary Parser", "language": "en"}} {"page_content": "\n\nCobol Copybook ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersCobol Copybook ParserPrevNextCobol Copybook ParserThe Directory and Wildcard properties of the associated File Reader must specify the path to and names of the data files.\u00a0The output type is JSONNodeEvent.Cobol Copybook Parser propertiespropertytypedefault valuenotesCopybook DialectStringMainframeSelect the value that matches the system on which the files were created: BigEndian, Fujitsu, Intel, Mainframe, MicroFocus, or OpenCobolCopybook File FormatenumUSE_STANDARD_COLUMNSSupported values:FREE_FORMATUSE_COLS_6_TO_80USE_LONG_LINEUSE_STANDARD_COLUMNSUSE_SUPPLIED_COLUMNSCopybook File NameStringThe fully qualified name of the .cpy file that describes the contents of the data files.Copybook SplitenumLevel01How the records in data file are mapped to records in the copybook. Supported values:HighestRepeating: multiple records are defined not at the 01 level; the appropriate value must be specified for Record SelectorLevel01: Multiple records are defined at the 01 level;; the appropriate value must be specified for Record SelectorNone: a single record is defined, starting at 01 levelRedefine: Cobol redefines are used in the copybookTopLevel: a single record is defined, not at 01 levelData File FontStringUTF8The character set of the data files.Data File OrganizationenumUnicodeTextSupported values:UnicodeText: for newline-delmited records in UTF8Text: for newline-delimited records in ASCIIUnicodeFixedLengthFixedLength (ASCII)Variable, VariableDump, VariableOpen: the size of each record is defined in the copy book with no record delimitersData Handling SchemeenumProcessRecordAsEventWith the default value, ProcessRecordAsEvent, each record will be output as a separate event. 
Set to ProcessFileAsEvent to have all records in each file output as a single event.Group PolicyFor documentation of this feature, Contact Striim support.Process Copybook File AsSingleEventSet to MultipleEvent to enable Group Policy.Record SelectorStringWhen the Copybook Split value is Level01 or HighestRepeating options, specify a set of mappings of record and field value to record type.Skip IndentInteger0If the contents of the data files are indented, specify the number of characters to skip on each line before reading data.Cobol Copybook Parser exampleCREATE SOURCE ReadCobol USING FileReader\n(\n directory:'docs/CobolCopybookParser',\n WildCard:'ACCTSD',\n positionByEOF:false\n)\nPARSE USING CobolCopybookParser (\n copybookFileName : 'docs/CobolCopybookParser/ACCTS.cpy',\n dataFileOrganization: 'Text'\n)\nOUTPUT TO CobolParserStream;\nFor a complete example including TQL file, copybook, and data, download CobolCopybookParser.zip from https://github.com/striim/doc-downloads.If you encounter any errors or other issues using this parser, please Contact Striim support.In this section: Cobol Copybook ParserCobol Copybook Parser propertiesCobol Copybook Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/cobol-copybook-parser.html", "title": "Cobol Copybook Parser", "language": "en"}} {"page_content": "\n\nCollectd ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersCollectd ParserPrevNextCollectd ParserAfter configuring a remote host to collect and transmit system information as detailed in collectd configuration, create a UDPReader and select this parser to use the data in an application.This parser has a single property, authfilelocation. If the remote host has security enabled for collectd, specify the path (relative to the Striim program directory) and name of the authentication file, otherwise leave the property blank. See \"Server setup\" in https://collectd.org/wiki/index.php/Networking_introduction for more information.Collectd Parser output fieldsThe output type is CollectdEvent. Its fields are:field nametypedatavaries depending on the ds-type setting for the pluginName value in collectd's types.db (see http://collectd.org/documentation/manpages/types.db.5.shtml for more information): for ABSOLUTE, COUNTER, and DERIVE, the type is long or long[], for GAUGE, the type is double or double[]hostNamestringintervalHighResolutionlongmessagestringpluginInstanceNamestringpluginNamestringseveritylongtimeDateTimetimeHighResolutionDateTimetimeIntervallongtypeInstanceNamestringtypeNamestringThe fields correspond to collectd part types (see https://collectd.org/wiki/index.php/Binary_protocol#Part_types). 
data corresponds to the Values part type.Collectd Parser exampleThe following example application receives CPU metrics and computes min, max, last, and average values for each one-minute block of data:CREATE APPLICATION collectd;\nCREATE SOURCE CollectdSource USING UDPReader (\n IpAddress:'127.0.0.1',\n PortNo:'25826'\n)\nPARSE USING CollectdParser ()\nOUTPUT TO CollectdStream;\n\nCREATE TYPE CpuUsageType (\n hname String,\n tStamp DateTime,\n tInstanceName String,\n pInstanceName Integer,\n cUsage double \t\n);\nCREATE STREAM CpuUsageStream OF CpuUsageType;\n\nCREATE CQ CpuUsage \nINSERT INTO CpuUsageStream \nSELECT hostname, \n TimeHighResolution,\n TypeInstanceName,\n TO_INT(PluginInstanceName),\n TO_DOUBLE(data[0]) \nFROM CollectdStream\nWHERE PluginName = 'cpu';\n \nCREATE JUMPING WINDOW CpuUsageWindow OVER CpuUsageStream KEEP WITHIN 1 MINUTE ON tStamp;\n\nCREATE TYPE CpuStatisticsType (\n cpuName Integer,\n cpuType String,\n min double,\n max double,\n last double,\n avg double\n);\nCREATE STREAM CpuStatisticsStream OF CpuStatisticsType;\n\nCREATE CQ CpuStatisticsCQ \nINSERT INTO CpuStatisticsStream\nSELECT x.pInstanceName,\n x.tInstanceName,\n MIN(x.cUsage),\n MAX(x.cUsage),\n x.cUsage,\n AVG(x.cUsage)\nFROM CpuUsageWindow x\nGROUP BY x.pInstanceName,x.tInstanceName;\n\nCREATE TARGET CpuStatisticsDump\nUSING SysOut(name:Stat)\nINPUT FROM CpuStatisticsStream;\n\nEND APPLICATION collectd;The output would look like this:Stat: CpuStatisticsType_1_0{\n cpuName: \"6\"\n cpuType: \"system\"\n min: 252157.0\n max: 252312.0\n last: 252312.0\n avg: 252236.5\n};\nStat: CpuStatisticsType_1_0{\n cpuName: \"7\"\n cpuType: \"user\"\n min: 33387.0\n max: 33393.0\n last: 33393.0\n avg: 33390.166666666664\n};In this section: Collectd ParserCollectd Parser output fieldsCollectd Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/collectd-parser.html", "title": "Collectd Parser", "language": "en"}} {"page_content": "\n\nDSV ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersDSV ParserPrevNextDSV ParserParses delimited text. See Supported reader-parser combinations for compatible readers.DSV Parser propertiespropertytypedefault valuenotesBlock as Complete RecordBooleanFalseWith JMSReader or UDPReader, if set to True, the end of a block will be considered the end of the last record in the block, even if the rowdelimiter is missing. Do not change the default value with other readers.CharsetStringUTF-8Column DelimiterString,use \\t for tabColumn Delimit TillInteger-1With the default value of -1, all delimiters are interpreted as columns. If a positive value is specified, that number of delimiters are interpreted as columns, and any additional delimiters are treated as if escaped. 
For example, if the columndelimiter value is a space, and columndelimittill is 4, this row:2012-12-10 10:30:30:256 10.1.10.12 jsmith User Login Error, invalid username or passwordwould be interpreted as five columns:2012-12-10\n10:30:30:256\n10.1.10.12\njsmith\nUser Login Error,\n invalid username or passwordComment CharacterCharacterif specified, lines beginning with this character will be skippedEvent TypeStringreservedHeaderBooleanFalseSet to True if the first row (or the row specified by headerlineno) contains field names.When DSVParser is used with FileReader, the output stream type can be created automatically from the header (see Creating the FileReader output stream type automatically).Header Line NoInteger0if the header is not the first line of the file, set this to the line number of the header rowIgnore Empty ColumnBooleanFalseif set to True, empty columns will be skipped instead of output as null valuesIgnore Multiple Record BeginBooleanTruesee FreeFormTextParserIgnore Row Delimiter in QuoteBooleanFalseif set to True, when the rowdelimiter character appears between a pair of quoteset characters it is treated as if escapedLine NumberInteger-1With the default value of -1, reads all lines. Set to n to skip the first n-1 lines and begin with line number n.No Column DelimiterBooleanFalseif set to True, columndelimiter is ignored and the entire line is output as data[0]Quote SetString\"character or characters that mark the start and end of each field; you may specify different start and end characters, such as [] or {}Record BeginStringsee FreeFormTextParserRecord EndStringsee FreeFormTextParserRow DelimiterString\\nsee Setting rowdelimiter valuesSeparatorString:character used to separate multiple values for columndelimiter, quoteset, or rowdelimiter (for example, ,:\\t to recognize both comma and tab as delimiters)Trim QuoteBooleanTrueif set to False, the quoteset and quotecharacter characters are not removed from the outputTrim WhitespaceBooleanFalseset to True if the data has spaces between values and delimiters (for example, \"1\" , \"2\")The output type of a source using DSVParser is WAEvent.DSV Parser example... PARSE USING DSVParser (\n\theader:'yes'\n)...In this section: DSV ParserDSV Parser propertiesDSV Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
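To round out the one-line DSV Parser example above, here is a hedged end-to-end sketch. The file name orders.csv, its four columns, and all component names are illustrative assumptions rather than part of the product documentation; only properties from the table above (header, trimwhitespace) and the FileReader, WAEvent data[] access, and SysOut patterns shown elsewhere in this guide are relied on:
-- Read a comma-delimited file whose first row contains column names
-- and convert each WAEvent into a typed event.
CREATE SOURCE CsvSource USING FileReader (
  directory:'Samples',
  wildcard:'orders.csv',        -- assumed sample file
  positionByEOF:false
)
PARSE USING DSVParser (
  header:'yes',                 -- first row holds field names
  trimwhitespace:'true'         -- tolerate spaces around values
)
OUTPUT TO CsvRawStream;

CREATE TYPE CsvOrderType (
  orderId  integer,
  customer String,
  quantity integer,
  price    double
);
CREATE STREAM CsvOrderStream OF CsvOrderType;

-- data[] fields are numbered from 0, in the order they appear in the file
CREATE CQ ParseCsv
INSERT INTO CsvOrderStream
SELECT TO_INT(data[0]),
       data[1],
       TO_INT(data[2]),
       TO_DOUBLE(data[3])
FROM CsvRawStream;

CREATE TARGET CsvOut USING SysOut(name:csv) INPUT FROM CsvOrderStream;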
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/dsv-parser.html", "title": "DSV Parser", "language": "en"}} {"page_content": "\n\nFree Form Text ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersFree Form Text ParserPrevNextFree Form Text ParserUse with a compatible reader to use a regular expression to parse unstructured data, such as log files, with events that span multiple lines.Free Form Text Parser propertiespropertytypedefault valuenotesBlock as Complete RecordBooleanFalseWith JMSReader or UDPReader, if blockascompleterecord is set to True, the end of a block will be considered the end of the last record in the block, even if the row delimiter is missing. This does not change the default value with other readers.CharsetStringUTF-8Ignore Multiple Record BeginBooleanTrueWith the default setting of True, additional occurrences of the RecordBegin string before the next RecordEnd string will be ignored. Set to False to treat each occurrence of RecordBegin as the beginning of a new event.Record BeginStringSpecify a string that defines the beginning of each event. The string may include date expressions and/or %IP_ADDRESS% (which will match any IP address). If a RecordBegin pattern starts with the ^\u00a0character, the pattern will be excluded from the data.NOTE: RecordBegin does not support regex.Record EndStringAn optional string that defines the end of each event (see Using regular expressions (regex)). If a RecordEnd pattern starts with the ^\u00a0character, the pattern will be excluded from the data.NOTE: RecordEnd does not support regex.RegexStringA regular expression (regex) defining the beginning (RecordBegin) and end (RecordBegin) of each field to be included in the output. To apply more than one pattern, separate the patterns using the '|' character. For example:regex: '((^([\\\\w]+)) |\n((?<=Author: ).*(\\n)) |\n((?<=\\n\\n).*))'\nFor more information about regular expressions, refer to the following resources:java.util.regex.PatternOracle: The Java Tutorials. Lesson: Regular ExpressionsLars Vogel: Java Regex - TutorialNOTE: You cannot specify a regex pattern in a RecordBegin or RecordEnd string.SeparatorString~the separator between multiple values in other properties For example, if the end of the record could be specified by either \"millisec\" or \"processed,\" with the default separator ~ the RecordEnd value would be millisec~processed.TimestampStringDefines the format of the timestamp in the source data. The values are output to the originTimeStamp key in WAEvent's metadata map\u00a0 and as shown in the sample code below can be retrieved using SELECT META(stream_name,''). 
Supported pattern strings are:\"EEE, d MMM yyyy HH:mm:ss Z\" \n\"EEE, MMM d, ''yy\" \n\"h:mm a\" \n\"hh 'o''clock' a, zzzz\" \n\"K:mm a, z\" \n\"yyMMddHHmmssZ\" \n\"YYYY-'W'ww-u\"\n\"yyyy-MM-dd'T'HH:mm:ss.SSSXXX\" \n\"yyyy-MM-dd'T'HH:mm:ss.SSSZ\" \n\"yyyy.MM.dd G HH:mm:ss z\" \n\"yyyyy.MMMMM.dd GGG hh:mm aaa\"For more information, see the documentation for the Java class SimpleDateFormat.The output type of a source using FreeFormTextParser is WAEvent.Free Form Text Parser exampleCREATE SOURCE fftpSource USING FileReader (\n directory:'Samples/',\n WildCard:'catalina*.log',\n charset:'UTF-8',\n positionByEOF:false\n)\nPARSE USING FreeFormTextParser (\n -- Timestamp format in log is \"Aug 21, 2014 8:33:56 AM\"\n TimeStamp:'%mon %d, %yyyy %H:%M:%S %p',\n RecordBegin:'%mon %d, %yyyy %H:%M:%S %p',\n regex:'(SEVERE:.*|WARNING:.*)'\n)\nOUTPUT TO fftpInStream;\nCREATE TYPE fftpOutType (\n msg String,\n origTs long\n);\nCREATE STREAM fftpOutStream OF fftpOutType;\nCREATE CQ fftpOutCQ\nINSERT INTO fftpOutStream\nSELECT data[0],\n TO_LONG(META(x,'OriginTimestamp'))\nFROM fftpInStream x;\nCREATE TARGET fftpTarget\nUSING SysOut(name:fftpInfo)\nINPUT FROM fftpOutStream;The RecordBegin value %mon %d, %yyyy %H:%M:%S %p defines the beginning of an event as a timestamp like the one at the beginning of the sample shown below. This is also the event timestamp, as defined by the TimeStamp value.The regular expression (SEVERE:.*|WARNING:.*) looks in each event for a string starting with SEVERE or WARNING. If one is found, the parser returns everything until the next linefeed. If an event does not include SEVERE or WARNING, it is omitted from the output.The following is the beginning of one of the potentially very long log messages this application is designed to process:Aug 22, 2014 11:17:19 AM org.apache.solr.common.SolrException log\nSEVERE: org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: \nCannot parse '((suggest_title:(04) AND suggest_title:(gmc) AND suggest_title: \nAND suggest_title:(6.6l) AND suggest_title:(lb7) AND suggest_title:(p1094,p0234)) \nAND NOT (deleted:(true)))': Encountered \" \"AND \"\" at line 1, column 64.\nWas expecting one of:\n ...\n \"(\" ...\n \"*\" ...\n ...\n ...\n ...\n ...\n ...\n \"[\" ...\n \"{\" ...\n ...\n ...\n \n at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:147)\n at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:187)\n at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) ...The parser discards everything except the line beginning with SEVERE, and the TO_LONG function in the CQ converts the log entry's timestamp to the format required by Striim:fftpInfo: fftpOutType_1_0{\n msg: \"SEVERE: org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError:\nCannot parse '((suggest_title:(04) AND suggest_title:(gmc) AND suggest_title: AND \nsuggest_title:(6.6l) AND suggest_title:(lb7) AND suggest_title:(p1094,p0234)) AND NOT\n(deleted:(true)))': Encountered \\\" \\\"AND \\\"\\\" at line 1, column 64.\"\n origTs: 1408731439000\n};In this section: Free Form Text ParserFree Form Text Parser propertiesFree Form Text Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
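The Separator property described in the table above can be combined with RecordEnd to accept more than one end-of-record marker. The sketch below is a minimal illustration, assuming hypothetical log files whose events end with either "millisec" or "processed" and begin with the same timestamp format as the example on this page; the file pattern and the ERROR/WARN regex are assumptions:
-- Treat either "millisec" or "processed" as the end of an event;
-- the default separator ~ joins the two alternatives.
CREATE SOURCE MultiEndSource USING FileReader (
  directory:'Samples',
  wildcard:'app*.log',          -- assumed log files
  positionByEOF:false
)
PARSE USING FreeFormTextParser (
  RecordBegin:'%mon %d, %yyyy %H:%M:%S %p',   -- events start with a timestamp, as in the example above
  RecordEnd:'millisec~processed',             -- two alternative end markers, joined by the default separator
  regex:'(ERROR:.*|WARN:.*)'                  -- keep only error and warning lines (assumed patterns)
)
OUTPUT TO MultiEndStream;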
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/free-form-text-parser.html", "title": "Free Form Text Parser", "language": "en"}} {"page_content": "\n\nGG (GoldenGate) Trail ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersGG (GoldenGate) Trail ParserPrevNextGG (GoldenGate) Trail ParserSee Oracle GoldenGate.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2019-07-16\n", "metadata": {"source": "https://www.striim.com/docs/en/gg--goldengate--trail-parser.html", "title": "GG (GoldenGate) Trail Parser", "language": "en"}} {"page_content": "\n\nJSON ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersJSON ParserPrevNextJSON ParserParses JSON data. See Supported reader-parser combinations for compatible readers.JSON Parser propertiespropertytypedefault valuecommentsEvent TypeStringthe Striim data type to be used (leave blank to parse manually)Field NameStringif the JSON includes fields, specify the one containing the events defined by eventType (see example below)The output type of a source using JSONParser is JSONNodeEvent.JSON Parser exampleAssume that the JSON being parsed has the following format:{\n \"ConnectedToWifi\": false,\n \"IpAddress\": \"\",\n \"WifiSsid\": \"\",\n \"device\": \"LGE Nexus 5\",\n \"OsVersion\": \"21\",\n \"platfrom\": \"Android\",\n \"scanresult\": {\n \"timestamp1\": \"1424434411\",\n \"rssi\": -90\n },\n \"serviceUUID\": \"8AA10000-0A46-115F-D94E-5A966A3DDBB7\",\n \"majorId\": 15,\n \"minorId\": 562\n}The following code extracts the timestamp1 and rssi properties from the scanresult field:CREATE TYPE ScanResultType (\n timestamp1 String,\n rssi String\n);\nCREATE STREAM ScanResultStream OF ScanResultType;\n\nCREATE SOURCE JSONSource USING FileReader (\n directory: 'Samples',\n WildCard: 'sample.json',\n positionByEOF: false\n)\nPARSE USING JSONParser (\n eventType: 'ScanResultType',\n fieldName: 'scanresult'\n)\nOUTPUT TO ScanResultStream;In the UI, when creating a source that uses the JSON Parser, create the output stream first, specifying a data type corresponding to the fields in the JSON data, then enter the name of that data type as the value for the eventType property in the source.If the data you need is spread among two or more JSON fields, you can parse the file manually by leaving eventType blank. 
For example, assume your JSON had this format:{\n \"venue\": {\n \"lat\": 41.773228,\n \"lon\": -88.149109,\n \"venue_id\": 23419382,\n \"venue_name\": \"Naperville Theater\"\n },\n \"event\": {\n \"event_id\": \"418361985\",\n \"event_name\": \"Naperville Film Festival\"\n },\n \"group\": {\n \"group_city\": \"Naperville\",\n \"group_state\": \"IL\",\n \"group_country\": \"us\",\n \"group_id\": 8625752,\n \"group_name\": \"NFF\"\n }\n }The following code gets properties from all three fields:CREATE SOURCE RawMeetupJSON USING FileReader (\n directory:'./Samples/meetup',\n wildcard:'one_event_pretty.json',\n positionByEOF:false\n) \nPARSE USING JSONParser (\n eventType:''\n)\nOUTPUT TO RawJSONStream;\n \nCREATE TYPE MeetupJSONType (\n venue_id string KEY,\n group_name string,\n event_name string,\n venue_name string,\n group_city string,\n group_country string,\n lat double,\n lon double\n);\nCREATE STREAM ParsedJSONStream of MeetupJSONType;\n \nCREATE CQ ParseJSON\nINSERT INTO ParsedJSONStream\nSELECT\n data.get('venue').get('venue_id').textValue(),\n data.get('group').get('group_name').textValue(),\n data.get('event').get('event_name').textValue(),\n data.get('venue').get('venue_name').textValue(),\n data.get('group').get('group_city').textValue(),\n data.get('group').get('group_country').textValue(),\n data.get('venue').get('lat').doubleValue(),\n data.get('venue').get('lon').doubleValue()\nFROM RawJSONStream \nWHERE data IS NOT NULL;\nCREATE TARGET MeetupJSONOut USING SysOut(name:meetup) INPUT FROM ParsedJSONStream;The output for the JSON shown above would be as follows:meetup: MeetupJSONType_1_0{\n venue_id: null\n group_name: \"OpenHack Naperville\"\n event_name: \"February Hack Night!!\"\n venue_name: \"Twocanoes Software office (UPSTAIRS above Costellos Jewelry)\"\n group_city: \"Naperville\"\n group_country: \"us\"\n lat: 41.773228\n lon: -88.149109\n};In this section: JSON ParserJSON Parser propertiesJSON Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/json-parser.html", "title": "JSON Parser", "language": "en"}} {"page_content": "\n\nNetFlow ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersNetFlow ParserPrevNextNetFlow ParserThis adapter supports NetFlow v5 and v9 and requires UDP Reader. Its single property is version, which has a default value of all. With this setting, the adapter will automatically detect the packet type and parse it accordingly.The output type of a source using NetflowParser is WAEvent.NetFlow Parser exampleCREATE SOURCE NetflowV5Source USING UDPReader (\n IPAddress:'192.0.2.0',\n portno:'9915'\n)\nPARSE USING NetflowParser()\nOUTPUT TO NetflowV5Stream;The following application counts Type Of Service (TOS) in a NetFlow v9 export packet. 
The assumption is that a Cisco router has a NetFlow process running that is configured to monitor the type of service, which has three key fields, input interface, output interface, and TOS, plus two non-key fields, in_bytes and in_pkts. The collector address in the NetFlow process is the IP address and port of the Striim server running the application.CREATE SOURCE NetflowV9Source USING UDPReader (\n IPAddress:'192.0.2.0', \n portno:'9915'\n)\nPARSE USING NetflowParser ()\nOUTPUT TO NetflowV9Stream;\n\nCREATE TYPE NetflowTOS_Type (\n protocol string,\n source_ip string,\n dest_ip string,\n input_interface integer,\n output_interface integer,\n src_tos string,\n in_pkts integer,\n in_bytes integer\n);\n\nCREATE TYPE TOS_Type (\n source_ip string,\n dest_ip string,\n input_interface integer,\n src_tos string,\n type_of_service String,\n count integer\n);\n\nCREATE STREAM NetflowTOSMonitorStream of NetflowTOS_Type;\n\nCREATE JUMPING WINDOW NetflowTOSWindow\nOVER NetflowTOSMonitorStream KEEP 10 ROWS\nPARTITION BY src_tos;\n\nCREATE STREAM TOSCountStream of TOS_Type;\n\nCREATE CQ NetflowTOSMonitorCQ\nINSERT INTO NetflowTOSMonitorStream\nSELECT VALUE(x,'PROTOCOL').toString(),\n VALUE(x,'IPV4_SRC_ADDR'), \n VALUE(x,'IPV4_DST_ADDR'),\n VALUE(x,'INPUT_SNMP'),\n VALUE(x,'OUTPUT_SNMP'),\n VALUE(x,'SRC_TOS').toString(),\n VALUE(x,'IN_PKTS'),\n VALUE(x,'IN_BYTES')\nFROM NetflowV9Stream x\nWHERE META(x,\"RecordType\").toString() = \"Data\";\n\nCREATE CQ NetflowTOSCountCQ\nINSERT INTO TOSCountStream\nSELECT x.source_ip, x.dest_ip, x.input_interface, x.src_tos.toString(), \nCASE WHEN x.src_tos = '0' THEN \"Routine\"\n WHEN x.src_tos = '1' THEN \"Priority\"\n WHEN x.src_tos = '2' THEN \"Immediate\"\n WHEN x.src_tos = '3' THEN \"Flash\"\n WHEN x.src_tos = '4' THEN \"Flash Override\"\n WHEN x.src_tos = '5' THEN \"CRITIC/ECP\"\n WHEN x.src_tos = '6' THEN \"Internetwork Control\"\n WHEN x.src_tos = '7' THEN \"Network Control\"\n ELSE \"Unsupported Type\" END,\nCOUNT(x.src_tos.toString())\nFROM NetflowTOSWindow x\nGROUP BY src_tos.toString();\n\nCREATE TARGET NetflowV9StreamDump \nUSING SysOut(name:NetflowV9) \nINPUT FROM NetflowTOSMonitorStream;\n\nCREATE TARGET OperationLog USING LogWriter(\n name:NetflowTOSMonitor,\n filename:'NetflowTOSMonitor.log'\n)\nINPUT FROM TOSCountStream;In this section: NetFlow ParserNetFlow Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/netflow-parser.html", "title": "NetFlow Parser", "language": "en"}} {"page_content": "\n\nNVP (name-value pair) ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersNVP (name-value pair) ParserPrevNextNVP (name-value pair) ParserParses name-value pairs. 
See Supported reader-parser combinations for compatible readers.NVP Parser propertiespropertytypedefault valuenotesBlock as Complete RecordBooleanFalseCharsetStringUTF-8Pair DelimiterStringdefault value is one space (UTF-8 0x20)Quote SetString\"Row DelimiterString\\nTrim QuoteBooleanTrueValue DelimiterString=The output type of a source using NVPParser is WAEvent.NVP Parser exampleOutput from a source using this parser can be selected using VALUE(x,\"\"). For example, if given the following input event:2014-08-22T11:51:52.920281+03:00 10.184.2.46 date=2014-08-22 time=11:51:52 \ndevname=fw000a08 devid=FGT118 logid=0000000015 type=traffic subtype=forward level=notice \nvd=fbb-dmz srcip=10.46.227.81 srcport=29200 srcintf=\"Int-Channel1\" dstip=195.39.224.106 \ndstport=443 dstintf=\"Mango\" sessionid=102719642 status=start policyid=265 \ndstcountry=\"Japan\" srccountry=\"Japan\" trandisp=dnat tranip=10.1.1.1 tranport=443 \nservice=HTTPS proto=6 duration=0 sentbyte=0 rcvdbyte=0the following code:CREATE SOURCE NVPSource USING FileReader (\n directory:'Samples',\n WildCard:'NVPTestData.txt',\n positionByEOF:false)\nPARSE USING NVPParser ()\nOUTPUT TO NvpStream;\n\nCREATE TYPE nvptype (\n ipaddress String,\n deviceName String,\n status String,\n policyid int);\nCREATE STREAM nvptypedstream OF nvptype;\n\nCREATE CQ typeconversion\n INSERT INTO nvptypedstream\n SELECT VALUE(x,\"column1\"), VALUE(x,\"devid\"),VALUE(x,\"status\"),TO_INT(VALUE(x,\"policyid\")) \n FROM nvpStream x;\n\nCREATE TARGET t USING SysOut(name:NVPtest) INPUT FROM NvptypedStream;will produce the following output:NVPtest: nvptype_1_0{\n ipaddress: \"10.184.2.46\"\n deviceName: \"FGT118\"\n status: \"start\"\n policyid: 265\n}; Note that fields 0 and 1 in the input event are a timestamp and an IP address rather than key-value pairs. The IP address is selected using value(x,\"column1\"). This syntax can be used only for fields at the beginning of the event, before the first key-value pair.In this section: NVP (name-value pair) ParserNVP Parser propertiesNVP Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/nvp--name-value-pair--parser.html", "title": "NVP (name-value pair) Parser", "language": "en"}} {"page_content": "\n\nParquet ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersParquet ParserPrevNextParquet ParserApache Parquet is an open source columnar data file format designed for efficient data storage and retrieval. Parquet is built for complex nested data structures, and uses a record shredding and assembly algorithm. Parquet provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. 
The benefits of Parquet format include:Fast queries that can fetch specific column values without reading full row dataHighly efficient column-wise compressionHigh compatibility with online analytical processing (OLAP)Parquet is a popular format for data serialization in file systems that are often used with analytical data engines. Amazon S3 and many other cloud services support Parquet. It is good for queries that read particular columns from a wide (many column) table because only the needed columns are read, minimizing I/O operations.For more information on Parquet, see the Apache Parquet documentation.In this release, the Parquet Parser allows you to read Parquet-formatted files using FileReader, HDFSReader, and S3Reader. These sources correspond to files sources on a local filesystem, on a Hadoop Distributed File System (HDFS), or on Amazon S3. You can read both compressed and uncompressed files using the Parquet Parser without needing any configuration.The output stream type of a source using Parquet Parser is ParquetEvent. See Writers overview for writers that can accept such a stream as input. When a writer's input is a ParquetEvent stream, it must use Avro Formatter or Parquet Formatter.Writers overviewExample: Configuring a Striim application with HDFSReader and Parquet ParserSuppose that you have Parquet files in the following Hadoop directory path: \"/data/parquet/files/\".You can create a Striim application with the HDFSReader and configure its details along with a Parquet Parser. For example:CREATE SOURCE hadoopSource USING HDFSReader (\n hadoopconfigurationpath: 'IntegrationTests/TestData/hdfsconf/',\n hadoopurl: 'hdfs://dockerhost:9000/',\n directory: '/data/parquet/files/',\n wildcard: '*'\n)\nPARSE USING ParquetParser()\nOUTPUT TO parquetStream;Parquet Parser propertiespropertytypedefault valuenotesRetryWaitString1mA time interval that specifies the wait between two retry attempts. With the default value 1m, the wait is one minute. Acceptable units of intervals: s,m,h,d. For example: RetryWait:'30s'.Note that if the parser encounters a non-Parquet file or an incomplete Parquet file this means there may be a pause of three minutes before it skips the file and goes on to the next one. 
A list of skipped files is available in the Monitoring display and inMON output for the reader.ParquetEvent structureThe ParquetEvent for the example Striim application has the following structure:data of type org.apache.avro.GenericRecord\nmetadata of type Map\nuserdata of type MapThe data carries one row read from the Parquet file defined as an Avro GenericRecord.The additional metadata that is appended along with source metadata are as follows:BlockIndex: The block number which the record belongs to.RecordIndex: The record number in a block.You can use the userdata map to add user-defined information in a key-value format.User data sampleCREATE SOURCE fileSrc USING FileReader (\n directory:'/Users/Downloads/pems_sorted/',\n WildCard:'part-r-00282.snappy.parquet',\n positionByEOF:false\n)\nPARSE USING ParquetParser()\nOUTPUT TO ParquetStreams;\n \nCREATE STREAM CQStream1 of Global.parquetevent;\n \nCREATE CQ CQ1\nINSERT INTO CQStream1\nSELECT PUTUSERDATA(\n s,\n 'schemaName',\n s.data.getSchema().getName() \n)\nFROM ParquetStreams s;\n \nCREATE STREAM CQStream2 of Global.parquetevent;\n \nCREATE CQ CQ2\nINSERT INTO CQStream2\nSELECT PUTUSERDATA(\n s2,\n 'schemaNameExtended', \n Userdata(s2,\n 'schemaName').toString().concat(\".PQ\")\n)\nFROM CQStream1 s2;\n \nCREATE TARGET t2 USING FileWriter(\n filename:'fileParquetTest',\n directory:'striim/%@userdata(schemaNameExtended)%',\n rolloverpolicy:'eventcount:100'\n)\nFORMAT USING ParquetFormatter ( schemafilename:'schemaPQ')\nINPUT FROM CQStream2;Compatible targetsThe ParquetEvent can be handled directly in targets writing to a file or streaming into Apache Kafka. You can use file targets with ParquetFormatter or AvroFormatter with a dynamic directory configuration. When Apache Kafka is the target, the AvroFormatter is the only supported formatter and you must configure a schema registry.TQL sample: File targetcreate stream newParquetStream of Global.parquetevent;\nCREATE CQ CQ1 \nINSERT INTO newParquetStream \nSELECT PUTUSERDATA(\n s,\n 'schemaName',\n s.data.getSchema().getName())\nFROM parquetStream s;\n \nCREATE TARGET fileTgt USING Global.FileWriter (\n filename: 'avroFiles',\n directory: 'avrodir/%@userdata(schemaName)%',\n rolloverpolicy: 'EventCount:1000,Interval:30s'\n)\nFORMAT USING Global.AvroFormatter (\n schemaFileName: 'schemaAvro.avsc',\n formatAs: 'default' )\nINPUT FROM newParquetStream;TQL sample: Kafka targetCREATE OR REPLACE TARGET kafkaTgt USING Global.KafkaWriter VERSION '0.11.0'(\n brokerAddress: 'localhost:9092',\n Topic: 'PqTopic1',\n Mode: 'Async' )\nFORMAT USING Global.AvroFormatter (\n schemaregistryurl: 'http://localhost:8081/',\n FormatAs: 'Default'\n)\nINPUT FROM parquetStream;In this section: Parquet ParserExample: Configuring a Striim application with HDFSReader and Parquet ParserParquet Parser propertiesParquetEvent structureUser data sampleCompatible targetsTQL sample: File targetTQL sample: Kafka targetSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
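As a complement to the samples above, the following sketch shows the RetryWait property from the table used in context with a local file source. The directory, file pattern, and component names are assumptions; the FileReader, ParquetParser, FileWriter, and ParquetFormatter usage mirrors the user data sample earlier on this page:
-- Read local Parquet files, waiting 30 seconds between retry attempts,
-- and write the events back out in Parquet format.
CREATE SOURCE LocalParquetSrc USING FileReader (
  directory:'Samples/parquet',   -- assumed local directory
  wildcard:'*.parquet',
  positionByEOF:false
)
PARSE USING ParquetParser (
  RetryWait:'30s'                -- wait 30 seconds between retry attempts
)
OUTPUT TO LocalParquetStream;

CREATE TARGET LocalParquetTgt USING FileWriter (
  filename:'parquetCopy',
  directory:'parquetOut',
  rolloverpolicy:'eventcount:100'
)
FORMAT USING ParquetFormatter ( schemafilename:'schemaPQ' )
INPUT FROM LocalParquetStream;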
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/parquet-parser.html", "title": "Parquet Parser", "language": "en"}} {"page_content": "\n\nSNMP ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersSNMP ParserPrevNextSNMP ParserAfter configuring one or more remote hosts to collect and transmit system information as detailed in SNMP configuration, create a UDPReader and select this parser to use the data in an application.SNMP Parser propertyThis parser's single property is alias (string), which can be used to specify a file containing human-readable aliases for SNMP OIDs. For example:1.3.6.1.4.1.2021.10.1.5.2=laLoadInt.2\n1.3.6.1.2.1.88.2.1.4=mteHotOID\n1.3.6.1.4.1.2021.10.1.100.1=laErrorFlag.1\n1.3.6.1.4.1.2021.10.1.100.2=laErrorFlag.2\n1.3.6.1.2.1.1.5=sysName\n1.3.6.1.2.1.88.2.1.1=mteHotTrigger\n1.3.6.1.4.1.2021.10.1.100.3=laErrorFlag.3\n1.3.6.1.4.1.2021.10.1.5.1=laLoadInt.1\n1.3.6.1.4.1.2021.10.1.5.3=laLoadInt.3\n1.3.6.1.4.1.311.1.13.1.9999.25.0=HOST\n1.3.6.1.4.1.311.1.13.1.9999.12.0=MACHINENAME\n1.3.6.1.4.1.311.1.13.1.9999.11.0=USERID\n1.3.6.1.4.1.311.1.13.1.9999.5.0=EVENTCODE\n1.3.6.1.4.1.311.1.13.1.9999.15.0=STATUS\n1.3.6.1.4.1.311.1.13.1.9999.13.0=SUBSTATUS\n1.3.6.1.2.1.88.2.1.2.0=mteHotTargetName.0\n1.3.6.1.2.1.88.2.1.3.0=mteHotContextName.0\n1.3.6.1.2.1.88.2.1.5.0=mteHotValue.0 \n1.3.6.1.2.1.25.1.2=hrSystemDate\n1.3.6.1.2.1.2.2.1.5=ifSpeed\n1.3.6.1.2.1.31.1.1.1.15=ifHighSpeed\n1.3.6.1.2.1.2.2.1.6=ifPhysAddress\n1.3.6.1.2.1.2.2.1.10=ifInOctets \n1.3.6.1.2.1.2.2.1.11=ifInUcastPkts \n1.3.6.1.2.1.2.2.1.16=ifOutOctets\n1.3.6.1.2.1.2.2.1.17=ifOutUcastPkts \n1.3.6.1.2.1.2.2.1.19=ifOutDiscards \n1.3.6.1.2.1.2.2.1.20=ifOutErrors \n1.3.6.1.2.1.2.2.1.8=ifOperStatus\n1.3.6.1.2.1.2.2.1.6=ifPhysAddress\n1.3.6.1.2.1.1.3=sysUpTimeSNMP exampleThis sample application listens for the data sent using the sample settings from SNMP configuration.CREATE APPLICATION snmpnt;\nCREATE SOURCE SNMPSource USING UDPReader (\n ipaddress:'0.0.0.0',\n PortNo:'15021'\n)\nPARSE USING SNMPParser (\n alias: 'Samples/snmpalias.txt'\n)\nOUTPUT TO SNMPStream;\n\nCREATE TYPE networkType(\n mteHotTrigger String,\n mteHotValue Integer, \n mteHotOID String,\n Sysname String,\n MacAddr String,\n Uptime Integer,\n ifSpeed2 Integer, \n ifHighSpeed Integer,\n ifInOctets2 Integer, \n ifInUcastPkts Integer, \n ifOutOctets2 Integer, \n ifOutUcastPkts2 Integer,\n ifOutDiscards2 Integer,\n ifOutOfErrors2 Integer,\n ifOperStatus2 Integer);\nCREATE STREAM NetworkStream OF networkType;\nCREATE CQ ntcq\nINSERT INTO NetworkStream\nSELECT\n TO_STRING(VALUE(x,'mteHotTrigger')),\n TO_INT(VALUE(x,'mteHotValue')),\n TO_STRING(VALUE(x,'mteHotOID')),\n TO_STRING(VALUE(x,'sysName')),\n TO_MACID(VALUE(x, 'ifPhysAddress'),'-'),\n TO_INT(VALUE(x, 'sysUpTime')),\n TO_INT(VALUE(x,'ifSpeed')), \n TO_INT(VALUE(x,'ifHighSpeed')), \n TO_INT(VALUE(x,'ifInOctets')), \n TO_INT(VALUE(x,'ifInUcastPkts')), \n TO_INT(VALUE(x,'ifOutOctets')),\n TO_INT(VALUE(x,'ifOutUcastPkts')), \n TO_INT(VALUE(x,'ifOutDiscards')), \n TO_INT(VALUE(x,'ifOutErrors')), \n 
TO_INT(VALUE(x,'ifOperStatus'))\nFROM SNMPStream x;\n\nCREATE TARGET SnmpNetWorkInterface\nUSING SysOut(name:SNMPNT)\nINPUT FROM NetworkStream;\n\nEND APPLICATION snmpnt;The output looks like this:SNMPNT: networkType_1_0{\n mteHotTrigger: \"Interface Details\"\n mteHotValue: null\n mteHotOID: \"1.3.6.1.2.1.2.2.1.16.2\"\n Sysname: \"centos57\"\n MacAddr: \"08-00-27-AE-36-99\"\n Uptime: 18006\n ifSpeed2: 1000000000\n ifHighSpeed: 1000\n ifInOctets2: 613514937\n ifInUcastPkts: 658628\n ifOutOctets2: 35572407\n ifOutUcastPkts2: 272710\n ifOutDiscards2: 0\n ifOutOfErrors2: 0\n ifOperStatus2: 1\n};In this section: SNMP ParserSNMP Parser propertySNMP exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/snmp-parser.html", "title": "SNMP Parser", "language": "en"}} {"page_content": "\n\nStriim ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersStriim ParserPrevNextStriim ParserUse with KafkaReader when reading from a Kafka stream. See\u00a0Reading a Kafka stream with KafkaReader.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2019-07-16\n", "metadata": {"source": "https://www.striim.com/docs/en/striim-parser.html", "title": "Striim Parser", "language": "en"}} {"page_content": "\n\nXML ParserSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersXML ParserPrevNextXML ParserParses XML. See Supported reader-parser combinations for compatible readers. See also XML Parser V2.XML Parser propertiespropertytypedefault valuenotesColumn ListStringif left blank, all key-value pairs will be returned as part of the data arrayRoot NodeStringSeparatorStringoptionalThe output type of a source using XMLParser is WAEvent.XML Parser example... 
PARSE USING XMLParser(\n rootnode:'/log4j:event',\n columnlist:'log4j:event/@timestamp,\n log4j:event/@level,\n log4j:event/log4j:message,\n log4j:event/log4j:throwable,\n log4j:event/log4j:locationInfo/@class,\n log4j:event/log4j:locationInfo/@method,\n log4j:event/log4j:locationInfo/@file,\n log4j:event/log4j:locationInfo/@line'\n)...See the discussion of Log4JSource in MultiLogApp for a detailed explanation.In this section: XML ParserXML Parser propertiesXML Parser exampleSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-15\n", "metadata": {"source": "https://www.striim.com/docs/en/xml-parser.html", "title": "XML Parser", "language": "en"}} {"page_content": "\n\nXML Parser V2Skip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 SourcesParsersXML Parser V2PrevNextXML Parser V2Parses XML. See Supported reader-parser combinations for compatible readers. See also XML ParserXMLParserV2 has only a single property, Root Node. Its output format is XMLNodeEvent, which a java.util.Map containing event metadata followed by an org.dom4j.Element containing the data. For example, this is an event from the first sample application's XMLRawStream:\n\n\n\n\n\nJon snow\nPalo Alto\nPlease leave packages in shed by driveway.\n\n\nFor debugging purposes, you may write unparsed XMLNodeEvent to FileWriter and SysOut.Sample Application 1This example simply converts XML input into JSON.Save the following as Striim/Samples/PurchaseOrders1.xml:\n Jon snow\n Palo Alto\n Please leave packages in shed by driveway.\n\n\n Tyrion\n Seattle\n Preferred time : post 4 PM\n\nThen run the following application:CREATE APPLICATION XMLParserV2Test1;\n\nCREATE SOURCE XMLSource USING FileReader (\n directory:'Samples',\n wildcard:'PurchaseOrders1.xml',\n positionByEOF:false\n) \nPARSE USING XMLParserV2(\n rootnode:'PurchaseOrder')\nOUTPUT TO XMLRAwStream;\n\nCREATE CQ XmlCQ\nINSERT INTO XmlParsedStream\nSELECT\n data.attributeValue(\"PurchaseOrderNumber\") as PONumber,\n data.element(\"CustomerName\").getText() as CustomerName,\n data.element(\"CustomerAddress\").getText() as CustomerAddress,\n data.element(\"DeliveryNotes\").getText() as DeliveryNotes\nFROM XMLRawStream;\n\nCREATE TARGET XMLParsedOut USING FileWriter(\n filename:'parsed.json',\n directory: 'XMLParserV2Test1')\nFORMAT USING JSONFormatter ()\nINPUT FROM XmlParsedStream;\n\nEND APPLICATION XMLParserV2Test1;Striim/XMLParserV2Test1/parsed.00.txt should contain the following:[\n {\n \"PONumber\":\"1\",\n \"CustomerName\":\"Jon snow\",\n \"CustomerAddress\":\"Palo Alto\",\n \"DeliveryNotes\":\"Please leave packages in shed by driveway.\"\n },\n {\n \"PONumber\":\"2\",\n \"CustomerName\":\"Tyrion\",\n \"CustomerAddress\":\"Seattle\",\n \"DeliveryNotes\":\"Preferred time : post 4 PM\"\n }\n]Sample Application 2This example iterates through child elements (line items in a purchase order).Save the following 
as Striim/Samples/PurchaseOrders2.xml:<PurchaseOrders>\n <PurchaseOrder PurchaseOrderNumber=\"1\">\n  <Details>\n   <CustomerName>Jon snow</CustomerName>\n   <CustomerAddress>Palo Alto</CustomerAddress>\n   <DeliveryNotes>Please leave packages in shed by driveway.</DeliveryNotes>\n  </Details>\n  <Items>\n   <Item ItemNumber=\"1\">\n    <ProductName>EarPhones</ProductName>\n    <USPrice>148.95</USPrice>\n   </Item>\n   <Item ItemNumber=\"2\">\n    <ProductName>Mouse</ProductName>\n    <USPrice>39.98</USPrice>\n   </Item>\n  </Items>\n </PurchaseOrder>\n <PurchaseOrder PurchaseOrderNumber=\"2\">\n  <Details>\n   <CustomerName>Tyrion</CustomerName>\n   <CustomerAddress>Seattle</CustomerAddress>\n   <DeliveryNotes>Preffered time : post 4 PM</DeliveryNotes>\n  </Details>\n  <Items>\n   <Item ItemNumber=\"1\">\n    <ProductName>Monitor</ProductName>\n    <USPrice>148.95</USPrice>\n   </Item>\n   <Item ItemNumber=\"2\">\n    <ProductName>Keyboard</ProductName>\n    <USPrice>39.98</USPrice>\n   </Item>\n  </Items>\n </PurchaseOrder>\n</PurchaseOrders>
\nThen run the following application:CREATE APPLICATION XMLParserV2Test2;\n\nCREATE SOURCE XMLSource USING FileReader (\n directory:'Samples',\n wildcard:'PurchaseOrders2.xml',\n positionByEOF:false\n) \nPARSE USING XMLParserV2(\n rootnode:'PurchaseOrder' )\nOUTPUT TO XMLRAwStream;\n\nCREATE TARGET RawXMLFileOut USING FileWriter(\n filename:'raw.txt',\n directory: 'XMLParserV2Test2')\nFORMAT USING XMLFormatter(\n rootelement:'PurchaseOrder')\nINPUT FROM XmlRawStream;\n\nCREATE CQ IntermediateTransformation \nINSERT INTO IntermediateStream \nSELECT \n data.attributeValue(\"PurchaseOrderNumber\") as PONumber,\n data.element(\"Details\") PODetails,\n data.element(\"Items\").elements(\"Item\") itemlist\nFROM XMLRawStream;\n\n-- iterates over the items in PO and appends common PO details to each item\nCREATE CQ XmlCQ\nINSERT INTO XmlParsedStream\nSELECT \n PO.PONumber as PONumber, \n PO.PODetails.element(\"CustomerName\").getText() as CustomerName,\n PO.PODetails.element(\"CustomerAddress\").getText() as CustomerAddress,\n PO.PODetails.element(\"DeliveryNotes\").getText() as DeliveryNotes, \n item.attributeValue(\"ItemNumber\") as ItemNumber,\n item.element(\"ProductName\").getText() as ProductName,\n item.element(\"USPrice\").getText() as USPrice\nFROM IntermediateStream PO, iterator(PO.itemlist, org.dom4j.Element) item;\n\nCREATE TARGET XMLParsedOut using FileWriter(\n filename:'parsed.json',\n directory: 'XMLParserV2Test2')\nFORMAT USING JSONFormatter ()\nINPUT FROM XmlParsedStream;\n\nEND APPLICATION XMLParserV2Test2;Striim/XMLParserV2Test2/parsed.00.txt should contain the following:[\n {\n \"PONumber\":\"1\",\n \"CustomerName\":\"Jon snow\",\n \"CustomerAddress\":\"Palo Alto\",\n \"DeliveryNotes\":\"Please leave packages in shed by driveway.\",\n \"ItemNumber\":\"1\",\n \"ProductName\":\"EarPhones\",\n \"USPrice\":\"148.95\"\n },\n {\n \"PONumber\":\"1\",\n \"CustomerName\":\"Jon snow\",\n \"CustomerAddress\":\"Palo Alto\",\n \"DeliveryNotes\":\"Please leave packages in shed by driveway.\",\n \"ItemNumber\":\"2\",\n \"ProductName\":\"Mouse\",\n \"USPrice\":\"39.98\"\n },\n {\n \"PONumber\":\"2\",\n \"CustomerName\":\"Tyrion\",\n \"CustomerAddress\":\"Seattle\",\n \"DeliveryNotes\":\"Preffered time : post 4 PM\",\n \"ItemNumber\":\"1\",\n \"ProductName\":\"Monitor\",\n \"USPrice\":\"148.95\"\n },\n {\n \"PONumber\":\"2\",\n \"CustomerName\":\"Tyrion\",\n \"CustomerAddress\":\"Seattle\",\n \"DeliveryNotes\":\"Preffered time : post 4 PM\",\n \"ItemNumber\":\"2\",\n \"ProductName\":\"Keyboard\",\n \"USPrice\":\"39.98\"\n }\n]In this section: XML Parser V2Sample Application 1Sample Application 2Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
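As noted above, unparsed XMLNodeEvent output can be written to FileWriter or SysOut for debugging. The following minimal sketch assumes the XMLRawStream from Sample Application 1 is deployed; the target name RawXMLDebug and the SysOut name are placeholders:
-- Dump each raw XMLNodeEvent to the server console to verify
-- what XMLParserV2 is emitting before writing any CQs against it.
CREATE TARGET RawXMLDebug
USING SysOut(name:xmlraw)
INPUT FROM XMLRawStream;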
Last modified: 2023-03-15\n", "metadata": {"source": "https://www.striim.com/docs/en/xml-parser-v2.html", "title": "XML Parser V2", "language": "en"}} {"page_content": "\n\nChange Data Capture (CDC)The Change Data Capture Guide covers use of Striim's CDC readers. Last modified: 2023-03-07\n", "metadata": {"source": "https://www.striim.com/docs/en/change-data-capture--cdc-.html", "title": "Change Data Capture (CDC)", "language": "en"}} {"page_content": "\n\nWhat is change data capture?Change data capture retrieves changed data from a DBMS or other data store. See the Change data capture Wikipedia article for an overview.Change data capture using logsRelational database management systems use write-ahead logs, also called redo or transaction logs, that represent DML and DDL changes. Traditionally, RDBMS systems use these logs to guarantee ACID properties and support rollback and roll-forward recovery operations. As DBMS technology has evolved, these logs have been augmented to record additional types of changes. Today they may track virtually every redoable and undoable action in the system, including transaction start and commit boundaries, table and index changes, data definition changes, rollback operations, indicators of non-logged changes, and more.DBMS vendors and third parties have found additional uses for these logs. Striim, for example, can extract change data from logs in real time to make information available before the DBMS has finished processing it, while minimizing the performance load on the RDBMS by eliminating additional queries. 
There are many potential uses for this information, such raising alerts about error conditions sooner and double-checking DBMS operations in order to identify lost data.All of the readers discussed in this Change Data Capture Guide capture change data by reading logs.Change data capture using JDBCYou can use Striim's\u00a0Incremental Batch Reader to capture change data using JDBC based on timestamps or incrementing values.IncrementalBatchReaderIn this section: What is change data capture?Change data capture using logsChange data capture using JDBCSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/what-is-change-data-capture-.html", "title": "What is change data capture?", "language": "en"}} {"page_content": "\n\nWorking with SQL CDC readersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)Working with SQL CDC readersPrevNextWorking with SQL CDC readersThis section discusses the common characteristics of Striim's SQL-based change data capture readers. See also Using source and target adapters in applications.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/working-with-sql-cdc-readers.html", "title": "Working with SQL CDC readers", "language": "en"}} {"page_content": "\n\nWAEvent contents for change dataSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)Working with SQL CDC readersWAEvent contents for change dataPrevNextWAEvent contents for change dataThe output data type for sources that use change data capture readers is WAEvent. The fields and valid values vary among the readers, but they all include the following:metadata: a map including the elements:OperationName: INSERT, UPDATE, or DELETETxnID: transaction IDTimeStamp: timestamp from the CDC logTableName: fully qualified name of the table on which the operation was performedTo retrieve the values for these fields, use the META() function. 
See Parsing the fields of WAEvent for CDC readers.\n\ndata: an array of fields, numbered from 0, containing:\nfor an INSERT or DELETE operation, the values that were inserted or deleted\nfor an UPDATE, the values after the operation was completed\nTo retrieve the values for these fields, use DATA[field_number] in the SELECT clause. See Parsing the fields of WAEvent for CDC readers.\n\nbefore (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operation\n\ndataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.\n\nFor information on additional fields and detailed discussion of values, see:\nHP NonStop reader WAEvent fields\nMySQL Reader WAEvent fields\nOracle Reader and OJet WAEvent fields\nPostgreSQL Reader WAEvent fields\nSQL Server readers WAEvent fields\n\nLast modified: 2022-05-05\n", "metadata": {"source": "https://www.striim.com/docs/en/waevent-contents-for-change-data.html", "title": "WAEvent contents for change data", "language": "en"}} {"page_content": "\n\nParsing the fields of WAEvent for CDC readers\n\nUse the following functions to parse the output stream of a CDC reader.\n\nDATA[#] function\n\nDATA[field_number]\n\nFor each event, returns the value of the specified field from the data array. The first field in the array is 0. The order of the DATA functions in the SELECT clause determines the order of the fields in the output. These may be specified in any order: for example, data[1] could precede data[0].\n\nDATA(x) and DATAORDERED(x) functions\n\nFor each event, returns the values in the WAEvent data array as a java.util.HashMap, with column names as the keys. DATAORDERED(x) returns the column values in the same order as in the source table. When using DATA(x), the order is not guaranteed.\n\nThe following example shows how to use the DATA() function to include database column names from a CDC source in JSONFormatter output.
(You could do the same thing with AVROFormatter.)\n\nCREATE SOURCE DBROracleIn USING DatabaseReader (\n Username:'striidbr',\n Password:'passwd',\n ConnectionURL:'jdbc:oracle:thin:@192.0.2.49:1521/orcl',\n Tables:'ROBERT.POSAUTHORIZATIONS',\n FetchSize:1\n)\nOUTPUT TO OracleRawStream;\n\nCREATE TYPE OpTableDataType(\n TableName String,\n data java.util.HashMap\n);\nCREATE STREAM OracleTypedStream OF OpTableDataType;\nCREATE CQ ParseOracleRawStream\n INSERT INTO OracleTypedStream\n SELECT META(OracleRawStream, \"TableName\").toString(),\n DATA(OracleRawStream)\n FROM OracleRawStream;\n\nCREATE TARGET DBR2JSONOut USING FileWriter(\n filename:'DBR2JSON.json'\n)\nFORMAT USING JSONFormatter ()\nINPUT FROM OracleTypedStream;\n\nThe CQ will be easier to read if you use an alias for the stream name. For example:\n\nCREATE CQ ParseOracleRawStream\n INSERT INTO OracleTypedStream\n SELECT META(x, \"TableName\").toString(),\n DATA(x)\n FROM OracleRawStream x;\n\nAssuming the following Oracle table and data:\n\nCREATE TABLE POSAUTHORIZATIONS (\n BUSINESS_NAME varchar2(30),\n MERCHANT_ID varchar2(100),\n PRIMARY_ACCOUNT NUMBER,\n POS NUMBER,\n CODE varchar2(20),\n EXP char(4),\n CURRENCY_CODE char(3),\n AUTH_AMOUNT number(10,3),\n TERMINAL_ID NUMBER,\n ZIP number,\n CITY varchar2(20));\ncommit;\nINSERT INTO POSAUTHORIZATIONS VALUES(\n 'COMPANY 1',\n 'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu',\n 6705362103919221351,\n 0,\n '20130309113025',\n '0916',\n 'USD',\n 2.20,\n 5150279519809946,\n 41363,\n 'Quicksand');\ncommit;\n\nOutput for this application would be:\n\n{\n \"TableName\":\"ROBERT.POSAUTHORIZATIONS\",\n \"data\":{\"AUTH_AMOUNT\":\"2.2\", \"BUSINESS_NAME\":\"COMPANY 1\", \"ZIP\":\"41363\", \"EXP\":\"0916\", \n\"POS\":\"0\", \"CITY\":\"Quicksand\", \"CURRENCY_CODE\":\"USD\", \n\"PRIMARY_ACCOUNT\":\"6705362103919221351\", \n\"MERCHANT_ID\":\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\", \"TERMINAL_ID\":\"5150279519809946\",\n\"CODE\":\"20130309113025\"}\n}\n\nIS_PRESENT() function\n\nIS_PRESENT(stream_name, [before | data], field_number)\n\nFor each event, returns true or false depending on whether the before or data array has a value for the specified field. For example, if you performed the following update on an Oracle table:\n\nUPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 5A' where pos=0;\n\nThe WAEvent for that update would look something like this:\n\ndata: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",\"6705362103919221351\",\n \"0\",\"20130309113025\",\"0916\",\"USD\",\"2.2\",\"5150279519809946\",\"41363\",\"Quicksand\"]\nbefore: [\"COMPANY 1\",null,null,null,null,null,null,null,null,null,null]\n\nYou could use the following code to return values for the updated data fields and NOT_UPDATED for the other fields:\n\nSELECT\n CASE WHEN IS_PRESENT(OracleCDCStream,before,0)==true THEN data[0].toString()\n ELSE \"NOT_UPDATED\"\n END,\n CASE WHEN IS_PRESENT(OracleCDCStream,before,1)==true THEN data[1].toString()\n ELSE \"NOT_UPDATED\"\n END ...\n\nMETA() function\n\nMETA(stream_name, metadata_key)\n\nFor each event, returns the value for the specified metadata key. Metadata keys are specific to each adapter. For example, META(OracleStream, TableName) would return the fully qualified name of the Oracle table on which the operation was performed.
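To tie these functions together, here is a minimal sketch that keeps only INSERT events and extracts two columns by position. The stream name OracleCDCStream, the type and CQ names, and the assumption that data[0] and data[1] hold BUSINESS_NAME and MERCHANT_ID are illustrative only.

-- Minimal sketch: route INSERT events into a typed stream using META() and DATA[#].
CREATE TYPE InsertOnlyType(
  tableName String,
  businessName String,
  merchantId String
);
CREATE STREAM InsertOnlyStream OF InsertOnlyType;

CREATE CQ FilterInserts
INSERT INTO InsertOnlyStream
SELECT META(x, "TableName").toString(),
  -- positions follow the source table's column order (assumed here)
  data[0].toString(),
  data[1].toString()
FROM OracleCDCStream x
WHERE META(x, "OperationName").toString() = "INSERT";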
Last modified: 2023-02-28\n", "metadata": {"source": "https://www.striim.com/docs/en/parsing-the-fields-of-waevent-for-cdc-readers.html", "title": "Parsing the fields of WAEvent for CDC readers", "language": "en"}} {"page_content": "\n\nSample TQL application using change data\n\nThe following sample application uses OracleReader, but the approach is the same for all CDC readers.\n\nCREATE APPLICATION SampleCDCApp;\nCREATE SOURCE OracleCDCIn USING OracleReader (\n Username:'striim',\n Password:'passwd',\n ConnectionURL:'203.0.113.49:1521:orcl',\n Tables:'MYSCHEMA.POSAUTHORIZATIONS',\n FetchSize:1\n)\nOUTPUT TO OracleCDCStream;\n \nCREATE TYPE PosMeta(\n tableName String,\n operationName String,\n txnID String,\n timestamp String\n);\nCREATE STREAM PosMetaStream OF PosMeta; \nCREATE TYPE PosData(\n businessName String,\n accountName String,\n pos String,\n code String\n);\nCREATE STREAM PosDataStream OF PosData;\n-- extract the metadata values\nCREATE CQ OracleToPosMeta\nINSERT INTO PosMetaStream\nSELECT\n META(m,\"TableName\").toString(),\n META(m,\"OperationName\").toString(),\n META(m,\"TxnID\").toString(),\n META(m,\"TimeStamp\").toString()\n FROM OracleCDCStream m;\n-- write the metadata values to SysOut\nCREATE TARGET Metadump USING SysOut(name:meta) INPUT FROM PosMetaStream;\n \n-- extract the data values\nCREATE CQ OracleToPosData\nINSERT INTO PosDataStream\nSELECT\n CASE WHEN IS_PRESENT(x,data,0)==true THEN data[0].toString()\n ELSE \"NOT_PRESENT\"\n END,\n CASE WHEN IS_PRESENT(x,data,1)==true THEN data[1].toString()\n ELSE \"NOT_PRESENT\"\n END,\n CASE WHEN IS_PRESENT(x,data,2)==true THEN data[2].toString()\n ELSE \"NOT_PRESENT\"\n END,\n CASE WHEN IS_PRESENT(x,data,3)==true THEN data[3].toString()\n ELSE \"NOT_PRESENT\"\n END\nFROM OracleCDCStream x;\n-- write the data values to SysOut\nCREATE TARGET Datadump USING SysOut(name:data) INPUT FROM PosDataStream;\nEND APPLICATION SampleCDCApp;\n\nThe output for the three operations described in OracleReader example output would be similar to:\n\nmeta: PosMeta_1_0{\n tableName: \"SCOTT.POSAUTHORIZATIONS\"\n operationName: \"INSERT\"\n txnID: \"4.0.1742\"\n timestamp: \"2015-12-11T16:31:30.000-08:00\"\n};\ndata: PosData_1_0{\n businessName: \"COMPANY 1\"\n accountName: \"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\"\n pos: \"6705362103919221351\"\n code: \"0\"\n};\nmeta: PosMeta_1_0{\n tableName: \"SCOTT.POSAUTHORIZATIONS\"\n operationName: \"UPDATE\"\n txnID: \"4.0.1742\"\n timestamp: \"2015-12-11T16:31:30.000-08:00\"\n};\ndata: PosData_1_0{\n businessName: \"COMPANY 5A\"\n accountName: \"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\"\n pos: \"6705362103919221351\"\n code: \"0\"\n};\nmeta: PosMeta_1_0{\n tableName: \"SCOTT.POSAUTHORIZATIONS\"\n 
operationName: \"DELETE\"\n txnID: \"4.0.1742\"\n timestamp: \"2015-12-11T16:31:30.000-08:00\"\n};\ndata: PosData_1_0{\n businessName: \"COMPANY 5A\"\n accountName: \"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\"\n pos: \"6705362103919221351\"\n code: \"0\"\n};In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-18\n", "metadata": {"source": "https://www.striim.com/docs/en/sample-tql-application-using-change-data.html", "title": "Sample TQL application using change data", "language": "en"}} {"page_content": "\n\nValidating table mappingSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)Working with SQL CDC readersValidating table mappingPrevNextValidating table mappingIn this release, table mapping is validated only for applications with a single MySQL, Oracle, PostgreSQL, or SQL Server source (DatabaseReader, IncrementalBatchReader, MySQLReader, OracleReader, PostgreSQLReader, or MSSQLReader) and a single MySQL, Oracle, PostgreSQL, or SQL Server DatabaseWriter target.When an application is deployed, Striim will compare the source and target columns and pop up a Validation errors dialog if it finds any of the following:A target table does not exist.The number of columns in a source table exceeds the number of columns in its target.A source data type is incompatible with its target: for example, a VARCHAR2 column is mapped to an integer column.A target data type is not optimal for its source: for example, an INTEGER column is mapped to a text column.The size of a source column exceeds that of its target: for example, a VARCHAR(20) column is mapped to a varchar(10) column.The source column allows nulls but its target does not.A column mapping is incorrect.For example:When you see this dialog, you may:Click any of the source or target links to open the component.Click X\u00a0to close the dialog and fix problems in the source or target DBMS.Click Ignore to run the application as is. This may be appropriate if the issues in the dialog are non-fatal: for example, when you know that there are no nulls in a source column mapped to a target column that does not allow nulls, or you deliberately mapped an INTEGER source column to a text target column.After you have made corrections, choose Validation Errors from the Created menu and click Validate Again.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-07-15\n", "metadata": {"source": "https://www.striim.com/docs/en/validating-table-mapping.html", "title": "Validating table mapping", "language": "en"}} {"page_content": "\n\nReading from multiple tables\n\nHP NonStop readers, MSSQLReader, and OracleReader can all read data from multiple tables using a single source. The MAP function allows data from each table to be output to a different stream.\n\nNote: When you use wildcards to specify multiple tables, only the tables that exist when the application is started will be read. Any new tables added afterwards will be ignored.\n\nThe following reads all tables in the schema SCOTT:\n\nCREATE SOURCE OraSource USING OracleReader (\n ... Tables:'SCOTT.%' ...\n\nThe following reads from all tables with names that start with SCOTT.TEST, such as SCOTT.TEST1, SCOTT.TESTCASE, and so on:\n\nCREATE SOURCE OraSource USING OracleReader (\n ... Tables:'SCOTT.TEST%' ...\n\nThe following reads all tables with fully qualified names that start with S and end with .TEST. Again, any tables added after the application starts will be ignored.\n\nCREATE SOURCE OraSource USING OracleReader (\n ... Tables:'S%.TEST' ...\n\nThe following shows how to query data from one table when a stream contains data from multiple tables:\n\nCREATE SOURCE OraSource USING OracleReader (\n ... Tables:'SCOTT.POSDATA;SCOTT.STUDENT' ...\n)\nOUTPUT TO Orders;\n...\nCREATE CQ renderOracleControlLogEvent\nINSERT INTO oracleControlLogStream\nSELECT\n META(x,\"OperationName\"),\n META(x,\"TxnID\"),\n META(x,\"TimeStamp\").toString(),\n META(x,\"TxnUserID\"),\n data[0]\nFROM Orders x\nWHERE META(x,\"TableName\").toString() = \"SCOTT.POSDATA\";\n\nThe following takes input from two tables and sends output for each to a separate stream using the MAP function. Note that a regular output stream (in this case OrderStream) must also be specified, even if it is not used by the application.\n\nCREATE SOURCE OraSource USING OracleReader (\n ... Tables:'SCOTT.POSDATA;SCOTT.STUDENT' ...\n)\nOUTPUT TO OrderStream,\n PosDataStream MAP (table:'SCOTT.POSDATA'),\n StudentStream MAP (table:'SCOTT.STUDENT');\n\nIn some cases, creating a separate source for each table may improve performance.\n\nLast modified: 2018-06-01\n", "metadata": {"source": "https://www.striim.com/docs/en/reading-from-multiple-tables.html", "title": "Reading from multiple tables", "language": "en"}} {"page_content": "\n\nUsing OUTPUT TO ... 
MAP\n\nWhen a SQL CDC source reads from multiple tables, you may use OUTPUT TO <stream name> MAP (Table:'<table name>')
to route events from each table to a different output stream. Striim will create a type for each stream using the column names and data types of the source table. Date and time data types are mapped to DateTime. All other types are mapped to String.\n\nIn this release, OUTPUT TO ... MAP is not supported for PostgreSQLReader.\n\nThe following takes input from two tables and sends output for each to a separate stream using the MAP function. Note that a regular, unmapped output stream (in this case OrderStream) must also be specified, even if all tables are mapped.\n\nCREATE SOURCE OraSource USING OracleReader (\n ... Tables:'SCOTT.POSDATA;SCOTT.STUDENT' ...\n)\nOUTPUT TO OrderStream,\nOUTPUT TO PosDataStream MAP (Table:'SCOTT.POSDATA'),\nOUTPUT TO StudentStream MAP (Table:'SCOTT.STUDENT');\n\nWarning: MAP is case-sensitive and the names specified must exactly match those specified in the Tables property, except when the source uses an HP NonStop reader, in which case the names specified must match the fully qualified names of the tables in upper case, unless TrimGuardianNames is True, in which case they must match the full shortened names in upper case.\n\nMAP is not displayed or editable in the Flow Designer. Use DESCRIBE to view MAP settings and ALTER and RECOMPILE to modify them.\n\nIn some cases, creating a separate source for each table may improve performance over using OUTPUT TO ... MAP.\n\nLast modified: 2021-11-02\n", "metadata": {"source": "https://www.striim.com/docs/en/using-output-to-----map.html", "title": "Using OUTPUT TO ... MAP", "language": "en"}} {"page_content": "\n\nAdding user-defined data to WAEvent streams\n\nUse the PutUserData function in a CQ to add an element to the WAEvent USERDATA map.
Elements in USERDATA may, for example, be inserted into DatabaseWriter output as described in Modifying output using ColumnMap, or used to partition KafkaWriter output among multiple partitions (see discussion of the Partition Key property in Kafka Writer).\n\nThe following example would add the sixth element of the WAEvent data array (data[5], counting from zero) to USERDATA as the field \"city\":\n\nCREATE CQ AddUserData\nINSERT INTO OracleSourceWithPartitionKey\nSELECT putUserData(x, 'city', data[5])\nFROM OracleSource_ChangeDataStream x;\n\nYou may add multiple fields, separated by commas:\n\nSELECT putUserData(x, 'city', data[5], 'state', data[6])\n\nFor examples of how to use USERDATA elements in TQL, see Modifying output using ColumnMap and the discussions of PartitionKey in Kafka Writer and S3 Writer.\n\nTo remove an element from USERDATA, use the removeUserData function (you may specify multiple elements, separated by commas):\n\nCREATE CQ RemoveUserData\nINSERT INTO OracleSourceWithPartitionKey\nSELECT removeUserData(x, 'city')\nFROM OracleSource_ChangeDataStream x;\n\nTo remove all elements from the USERDATA map, use the clearUserData function:\n\nCREATE CQ ClearUserData\nINSERT INTO OracleSourceWithPartitionKey\nSELECT clearUserData(x)\nFROM OracleSource_ChangeDataStream x;\n\nLast modified: 2021-07-20\n", "metadata": {"source": "https://www.striim.com/docs/en/adding-user-defined-data-to-waevent-streams.html", "title": "Adding user-defined data to WAEvent streams", "language": "en"}} {"page_content": "\n\nModifying the WAEvent data array using replace functions\n\nWhen a CDC reader's output is the input of a writer, you may insert a CQ between the two to modify the WAEvent's data array using the following functions. This provides more flexibility when replicating data.\n\nNote: When you specify a table or column name that contains special characters, use double quotes instead of single quotes and escape special characters as detailed in Using non-default case and special characters in table identifiers.\n\nreplaceData()\n\nreplaceData(WAEvent s, String 'columnName', Object o)\n\nFor input stream s, replaces the data array value for a specified column with an object.
The object must be of the same type as the column. For example, the following would replace the value of the DESCRIPTION column with the string redacted:\n\nCREATE CQ replaceDataCQ\nINSERT INTO opStream\nSELECT replaceData(s,'DESCRIPTION','redacted')\nFROM OracleReaderOutput s;\n\nOptionally, you may restrict the replacement to a specific table:\n\nreplaceData(WAEvent s, String 'tableName', String 'columnName', Object o)\n\nreplaceString()\n\nreplaceString(WAEvent s, String 'findString', String 'newString')\n\nFor input stream s, replaces all occurrences of findString in the data array with newString. For example, the following would replace all occurrences of MyCompany with PartnerCompany:\n\nCREATE CQ replaceDataCQ\nINSERT INTO opStream\nSELECT replaceString(s,'MyCompany','PartnerCompany')\nFROM OracleReaderOutput s;\n\nreplaceStringRegex()\n\nreplaceStringRegex(WAEvent s, String 'regex', String 'newString')\n\nFor input stream s, replaces all strings in the data array that match the regex expression with newString. For example, the following would remove all whitespace:\n\nCREATE CQ replaceDataCQ\nINSERT INTO opStream\nSELECT replaceStringRegex(s,'\\\\\\\\s','')\nFROM OracleReaderOutput s;\n\nThe following would replace all numerals with x:\n\nCREATE CQ replaceDataCQ\nINSERT INTO opStream\nSELECT replaceStringRegex(s,'\\\\\\\\d','x')\nFROM OracleReaderOutput s;\n\nLast modified: 2021-10-05\n", "metadata": {"source": "https://www.striim.com/docs/en/modifying-the-waevent-data-array-using-replace-functions.html", "title": "Modifying the WAEvent data array using replace functions", "language": "en"}} {"page_content": "\n\nModifying and masking values in the WAEvent data array using MODIFY\n\nWhen a CDC reader's output is the input of a writer, you may insert a CQ between the two to modify the values in the WAEvent's data array. This provides more flexibility when replicating data.\n\nIn this context, the syntax for the SELECT statement is:\n\nSELECT * FROM <input stream> MODIFY (data[<field number>] = <expression>, ...)\n\nPrecede the CREATE CQ statement with a CREATE STREAM <stream name> OF Global.WAEvent statement that creates the output stream for the CQ.\n\nStart the SELECT statement with SELECT * FROM <input stream>.\n\ndata[<field number>] specifies the field of the array to be modified.
Fields are numbered starting with 0.\n\nThe expression can use the same operators and functions as SELECT. The MODIFY clause may include CASE statements.\n\nThe following simple example would convert a monetary amount in the data[4] field using an exchange rate of 1.09:\n\nCREATE CQ ConvertAmount \nINSERT INTO ConvertedStream\nSELECT * FROM RawStream\nMODIFY(data[4] = TO_FLOAT(data[4]) * 1.09);\n\nThe next example illustrates the use of masking functions and CASE statements. It uses the maskPhoneNumber function (see Masking functions) to mask individually identifiable information from US and India telephone numbers (as dialed from the US) while preserving the country and area codes. The US numbers have the format ###-###-####, where the first three digits are the area code. India numbers have the format 91-###-###-####, where 91 is the country code and the third through fifth digits are the subscriber trunk dialing (STD) code. The telephone numbers are in data[4] and the country codes are in data[5].\n\nCREATE STREAM MaskedStream OF Global.WAEvent;\nCREATE CQ maskData \nINSERT INTO MaskedStream\nSELECT * FROM RawStream\nMODIFY(\ndata[4] = CASE\n WHEN TO_STRING(data[5]) == \"US\" THEN maskPhoneNumber(TO_STRING(data[4]), \"###-xxx-xxx\")\n ELSE maskPhoneNumber(TO_STRING(data[4]), \"#####x#xxx#xxxx\")\n END\n);\n\nThis could be extended with additional WHEN statements to mask numbers from additional countries, or with additional masking functions to mask individually identifiable information such as credit card, Social Security, and national identification numbers.\n\nSee Masking functions for additional examples.\n\nLast modified: 2020-06-24\n", "metadata": {"source": "https://www.striim.com/docs/en/modifying-and-masking-values-in-the-waevent-data-array-using-modify.html", "title": "Modifying and masking values in the WAEvent data array using MODIFY", "language": "en"}} {"page_content": "\n\nUsing the DATA(), DATAORDERED(), BEFORE(), and BEFOREORDERED() functions\n\nThe DATA() and BEFORE() functions return the WAEvent data and before arrays. The following example shows how you could use these functions to write change data event details to a JSON file with the associated Oracle column names. (You could do the same thing with AVROFormatter.) This is supported only for 11g using LogMiner.\n\nDATAORDERED(x) and BEFOREORDERED(x) return the column values in the same order as in the source table.
When using DATA(x) and BEFORE(x), the order is not guaranteed.\n\nCREATE SOURCE OracleCDCIn USING OracleReader (\n Username:'walm',\n Password:'passwd',\n ConnectionURL:'192.168.1.49:1521:orcl',\n Tables:'myschema.%',\n FetchSize:1\n)\nOUTPUT TO OracleRawStream;\n\nCREATE TYPE OpTableDataType(\n OperationName String,\n TableName String,\n data java.util.HashMap,\n before java.util.HashMap\n);\n\nCREATE STREAM OracleTypedStream OF OpTableDataType;\nCREATE CQ ParseOracleRawStream\n INSERT INTO OracleTypedStream\n SELECT META(OracleRawStream, \"OperationName\").toString(),\n META(OracleRawStream, \"TableName\").toString(),\n DATA(OracleRawStream),\n BEFORE(OracleRawStream)\n FROM OracleRawStream;\n\nCREATE TARGET OracleCDCFFileOut USING FileWriter(\n filename:'Oracle2JSON_withFFW.json'\n)\nFORMAT USING JSONFormatter ()\nINPUT FROM OracleTypedStream;\n\nThe CQ will be easier to read if you use an alias for the stream name. For example:\n\nCREATE CQ ParseOracleRawStream\n INSERT INTO OracleTypedStream\n SELECT META(x, \"OperationName\").toString(),\n META(x, \"TableName\").toString(),\n DATA(x),\n BEFORE(x)\n FROM OracleRawStream x;\n\nUsing this application, the output for the UPDATE operation described in OracleReader example output would look like this:\n\n{\n \"OperationName\":\"UPDATE\",\n \"TableName\":\"ROBERT.POSAUTHORIZATIONS\",\n \"data\":{\"AUTH_AMOUNT\":\"2.2\", \"BUSINESS_NAME\":\"COMPANY 5A\", \"ZIP\":\"41363\", \"EXP\":\"0916\", \n\"POS\":\"0\", \"CITY\":\"Quicksand\", \"CURRENCY_CODE\":\"USD\", \"PRIMARY_ACCOUNT\":\"6705362103919221351\", \n\"MERCHANT_ID\":\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\", \"TERMINAL_ID\":\"5150279519809946\", \n\"CODE\":\"20130309113025\"},\n \"before\":{\"AUTH_AMOUNT\":\"2.2\", \"BUSINESS_NAME\":\"COMPANY 1\", \"ZIP\":\"41363\", \"EXP\":\"0916\", \n\"POS\":\"0\", \"CITY\":\"Quicksand\", \"CURRENCY_CODE\":\"USD\", \"PRIMARY_ACCOUNT\":\"6705362103919221351\", \n\"MERCHANT_ID\":\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\", \"TERMINAL_ID\":\"5150279519809946\", \n\"CODE\":\"20130309113025\"}\n }\n\nLast modified: 2021-01-28\n", "metadata": {"source": "https://www.striim.com/docs/en/using-the-data--,-dataordered--,-before--,-and-beforeordered---functions.html", "title": "Using the DATA(), DATAORDERED(), BEFORE(), and BEFOREORDERED() functions", "language": "en"}} {"page_content": "\n\nCollecting discarded events in an exception store\n\nWhen replicating CDC source data with Database Writer, attempted updates to and deletes from the target database may sometimes fail due to duplicate or missing primary keys or other issues.
See CREATE EXCEPTIONSTORE for discussion of how to ignore these errors and capture the unwritten events.\n\nLast modified: 2019-12-17\n", "metadata": {"source": "https://www.striim.com/docs/en/collecting-discarded-events-in-an-exception-store.html", "title": "Collecting discarded events in an exception store", "language": "en"}} {"page_content": "\n\nHandling DDL changes in CDC reader source tables\n\nSee Handling schema evolution.\n\nLast modified: 2021-08-24\n", "metadata": {"source": "https://www.striim.com/docs/en/handling-ddl-changes-in-cdc-reader-source-tables.html", "title": "Handling DDL changes in CDC reader source tables", "language": "en"}} {"page_content": "\n\nSQL CDC replication examples\n\nTo replicate CDC data, the CDC reader's output must be the input stream of the target. Optionally, you may insert a CQ between the source and the target, but that CQ must be limited to adding user-defined fields and modifying values (a sketch of such a CQ appears at the end of this page).\n\nStriim includes wizards for creating applications for SQL CDC sources and many targets (see Creating apps using templates).\n\nThese examples use OracleReader, but may be used as a guide for replicating data from other SQL CDC readers.
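For example, an intermediate CQ of the permitted kind might simply pass each WAEvent through unchanged while tagging it with a USERDATA element. This is only a sketch: the stream names and the 'origin' element are hypothetical, and putUserData is used as described in Adding user-defined data to WAEvent streams.

-- Pass-through CQ between a CDC source and its target: no fields are dropped,
-- only a USERDATA element is added.
CREATE CQ TagCDCEvents
INSERT INTO TaggedCDCStream
SELECT putUserData(x, 'origin', 'oracle_prod')
FROM OracleCDCStream x;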
Last modified: 2022-07-20\n", "metadata": {"source": "https://www.striim.com/docs/en/sql-cdc-replication-examples.html", "title": "SQL CDC replication examples", "language": "en"}} {"page_content": "\n\nReplicating Oracle data to another Oracle database\n\nThe first step in Oracle-to-Oracle replication is the initial load (a sketch of these steps appears at the end of this section):\n1. Use select min(start_scn) from gv$transaction to get the SCN of the oldest open or pending transaction.\n2. Use select current_scn from V$DATABASE; to get the SCN of the export.\n3. Use Oracle's exp or expdp utility, providing the SCN from step 2, to export the appropriate tables and data from the source database to a data file.\n4. Use Oracle's imp or impdp to import the exported data into the target database.\n\nOnce initial load is complete, the following sample application would continuously replicate changes to the tables SOURCE1 and SOURCE2 in database DB1 to tables TARGET1 and TARGET2 in database DB2 using Database Writer. The StartSCN value is the SCN from step 1. In the WHERE clause, replace ######### with the SCN from step 2. Start the application with recovery enabled (see Recovering applications) so that on restart it will resume from the latest transaction rather than the StartSCN point.\n\nCREATE SOURCE OracleCDC USING OracleReader (\n Username:'striim',\n Password:'******', \n ConnectionURL:'10.211.55.3:1521:orcl1',\n Tables:'DB1.SOURCE1;DB1.SOURCE2', \n Compression:true,\n StartSCN:'...'\n)\nOUTPUT TO OracleCDCStream;\n\nCREATE CQ FilterCDC\nINSERT INTO FilteredCDCStream\nSELECT x \nFROM OracleCDCStream x\nWHERE TO_LONG(META(x,'COMMITSCN')) > #########;\n\nCREATE TARGET WriteToOracle USING DatabaseWriter ( \n ConnectionURL:'jdbc:oracle:thin:@10.211.55.3:1521:orcl1', \n Username:'striim',\n Password:'******', \n Tables:'DB1.SOURCE1,DB2.TARGET1;DB1.SOURCE2,DB2.TARGET2'\n)\nINPUT FROM FilteredCDCStream;\n\nThe FilterCDC CQ filters out all transactions that were replicated during initial load.\n\nThe following Oracle column types are supported:\nBINARY DOUBLE\nBINARY FLOAT\nBLOB\nCHAR\nCLOB\nDATE\nFLOAT\nINTERVAL DAY TO SECOND\nINTERVAL YEAR TO MONTH\nLONG\nNCHAR\nNUMBER\nNVARCHAR\nRAW\nTIMESTAMP\nTIMESTAMP WITH TIME ZONE\nTIMESTAMP WITH LOCAL TIME ZONE\nVARCHAR2\n\nLimitations:\nThe primary key for a target table cannot be BLOB or CLOB.\nTRUNCATE TABLE is not supported for tables containing BLOB or CLOB columns.
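For reference, steps 1 through 4 above might look like the following. This is only a sketch: the schema name DB1, the Data Pump directory object, and the dump file name are assumptions, and expdp's FLASHBACK_SCN parameter is used here as one way to export a consistent snapshot as of the SCN captured in step 2.

-- Step 1: SCN of the oldest open transaction (use as the OracleReader StartSCN value)
select min(start_scn) from gv$transaction;

-- Step 2: SCN of the export (use in place of ######### in the FilterCDC WHERE clause)
select current_scn from V$DATABASE;

-- Steps 3 and 4: consistent export and import as of the step 2 SCN, for example with Data Pump:
--   expdp system schemas=DB1 flashback_scn=<SCN from step 2> directory=DATA_PUMP_DIR dumpfile=db1.dmp
--   impdp system schemas=DB1 directory=DATA_PUMP_DIR dumpfile=db1.dmp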
Last modified: 2021-10-08\n", "metadata": {"source": "https://www.striim.com/docs/en/replicating-oracle-data-to-another-oracle-database.html", "title": "Replicating Oracle data to another Oracle database", "language": "en"}} {"page_content": "\n\nReplicating Oracle data to Amazon Redshift\n\nStriim provides a template for creating applications that read from Oracle and write to Redshift. See Creating an application using a template for details.\n\nRedshiftWriter can continuously replicate one or many Oracle tables to an Amazon Redshift store. First, create a table in Redshift corresponding to each Oracle table to be replicated. Then load the existing data using DatabaseReader, for example:\n\nCREATE SOURCE OracleJDBCSource USING DatabaseReader (\n Username:'Striim',\n Password:'****',\n ConnectionURL:'jdbc:oracle:thin:@192.168.123.14:1521/XE',\n Tables:'TPCH.H_CUSTOMER;TPCH.H_PART;TPCH.H_SUPPLIER'\n)\nOUTPUT TO DataStream;\n\nCREATE TARGET TPCHInitialLoad USING RedshiftWriter (\n ConnectionURL: 'jdbc:redshift://mys3bucket.c1ffd5l3urjx.us-west-2.redshift.amazonaws.com:5439/dev',\n Username:'mys3user',\n Password:'******',\n bucketname:'mys3bucket',\n/* for striimuser */\n accesskeyid:'********************',\n secretaccesskey:'****************************************',\n Tables:'TPCH.H_CUSTOMER,customer;TPCH.H_PART,part;TPCH.H_SUPPLIER,supplier'\n)\nINPUT FROM DataStream;\n\nThe Tables property maps each specified Oracle table to a Redshift table, for example, TPCH.H_CUSTOMER to customer.\n\nOnce the initial load is complete, the following application will read new data using LogMiner and continuously replicate it to Redshift:\n\nCREATE SOURCE OracleCDCSource USING OracleReader (\n Username:'miner',\n Password:'miner',\n ConnectionURL:'192.168.123.26:1521:XE',\n Tables:'TPCH.H_CUSTOMER;TPCH.H_PART;TPCH.H_SUPPLIER'\n)\nOutput To LCRStream;\n\nCREATE TARGET RedshiftTarget USING RedshiftWriter (\n ConnectionURL: 'jdbc:redshift://mys3bucket.c1ffd5l3urjx.us-west-2.redshift.amazonaws.com:5439/dev',\n Username:'mys3user',\n Password:'******',\n bucketname:'mys3bucket',\n/* for striimuser */\n accesskeyid:'********************',\n secretaccesskey:'****************************************',\n Tables:'TPCH.H_CUSTOMER,customer;TPCH.H_PART,part;TPCH.H_SUPPLIER,supplier',\n Mode:'IncrementalLoad' )\nINPUT FROM LCRStream;\n\nNote that Redshift does not enforce unique primary key constraints. Use OracleReader's StartSCN or StartTimestamp property to ensure that you do not have duplicate or missing events in Redshift (see the sketch below).\n\nFor more information, see Redshift Writer.
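To follow the StartSCN suggestion above, you could pin the CDC source to the point where the initial load ended. This is a sketch only: the SCN value is a placeholder you would capture before running the initial load (for example, with select current_scn from V$DATABASE).

-- Same OracleReader source as above, with StartSCN added so that CDC resumes
-- where the initial load snapshot ended (placeholder value shown).
CREATE SOURCE OracleCDCSource USING OracleReader (
  Username:'miner',
  Password:'miner',
  ConnectionURL:'192.168.123.26:1521:XE',
  Tables:'TPCH.H_CUSTOMER;TPCH.H_PART;TPCH.H_SUPPLIER',
  StartSCN:'<SCN captured before the initial load>'
)
OUTPUT TO LCRStream;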
Last modified: 2019-07-24\n", "metadata": {"source": "https://www.striim.com/docs/en/replicating-oracle-data-to-amazon-redshift.html", "title": "Replicating Oracle data to Amazon Redshift", "language": "en"}} {"page_content": "\n\nReplicating Oracle data to Azure Cosmos DB\n\nCosmosDBWriter can continuously replicate one or many Oracle tables to Cosmos DB collections.\n\nYou must create the target collections in Cosmos DB manually. Each partition key name must match one of the column names in the Oracle source table.\n\nIf you wish to run the following examples, adjust the Oracle Reader properties and Cosmos DB Writer properties to reflect your own environment.\n\nIn Cosmos DB, create database MyDB containing the following collections (note that the collection and partition names are case-sensitive, so when replicating Oracle data they must be uppercase):\nSUPPLIERS with partition key /LOCATION\nCUSTOMERS with partition key /COUNTRY\n\nIn Oracle, create tables and populate them as follows:\n\nCREATE TABLE SUPPLIERS(ID INT, NAME VARCHAR2(40), LOCATION VARCHAR2(200), PRIMARY KEY(ID));\nCREATE TABLE CUSTOMERS(ID INT, NAME VARCHAR2(40), EMAIL VARCHAR2(55), COUNTRY VARCHAR2(75),\n PRIMARY KEY(ID));\nCOMMIT;\nINSERT INTO SUPPLIERS VALUES(100036492, 'Example Inc.', 'West Virginia');\nINSERT INTO CUSTOMERS VALUES(23004389, 'Manuel', 'manuel@example.com', 'Austria');\nINSERT INTO CUSTOMERS VALUES(23908876, 'Michelle', 'michelle@example.com', 'Austria');\nCOMMIT;\n\nIn Striim, run the following application to perform the initial load of the existing data using DatabaseReader:\n\nCREATE APPLICATION Oracle2CosmosInitialLoad;\n \nCREATE SOURCE OracleJDBCSource USING DatabaseReader (\n Username: '<username>',\n Password: '<password>',\n ConnectionURL: '<JDBC connection URL>',\n Tables: 'MYSCHEMA.%'\n)\nOUTPUT TO OracleStream;\n \nCREATE TARGET CosmosTarget USING CosmosDBWriter (\n ServiceEndpoint: '<service endpoint>',\n AccessKey: '<access key>',\n Collections: 'MYSCHEMA.%,MyDB.%',\n ConnectionPoolSize: 3\n )\nINPUT FROM OracleStream;\n\nAfter the application is finished, the Cosmos DB collections should contain documents similar to the following.\n\nMyDB.SUPPLIERS:\n\n{\n \"LOCATION\": \"West Virginia\",\n \"ID\": \"100036492\",\n \"NAME\": \"Example Inc.\",\n \"id\": \"100036492\",\n \"_rid\": \"CBcfAKX3xWACAAAAAAAACA==\",\n \"_self\": \"dbs/CBcfAA==/colls/CBcfAKX3xWA=/docs/CBcfAKX3xWACAAAAAAAACA==/\",\n \"_etag\": \"\\\"00008000-0000-0000-0000-5bacc99b0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538050459\n}\n\nMyDB.CUSTOMERS:\n\n{\n \"COUNTRY\": \"Austria\",\n \"ID\": \"23004389\",\n \"EMAIL\": \"manuel@example.com\",\n \"NAME\": \"Manuel\",\n \"id\": \"23004389\",\n \"_rid\": \"CBcfAJgI4eYEAAAAAAAACA==\",\n \"_self\": \"dbs/CBcfAA==/colls/CBcfAJgI4eY=/docs/CBcfAJgI4eYEAAAAAAAACA==/\",\n \"_etag\": \"\\\"d600b243-0000-0000-0000-5bacc99c0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538050460\n}\n{\n \"COUNTRY\": \"Austria\",\n \"ID\": \"23908876\",\n \"EMAIL\": 
\"michelle@example.com\",\n \"NAME\": \"Michelle\",\n \"id\": \"23908876\",\n \"_rid\": \"CBcfAJgI4eYFAAAAAAAACA==\",\n \"_self\": \"dbs/CBcfAA==/colls/CBcfAJgI4eY=/docs/CBcfAJgI4eYFAAAAAAAACA==/\",\n \"_etag\": \"\\\"d600b443-0000-0000-0000-5bacc99c0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538050460\n}\nIn Striim, run the following application to continuously replicate new data from Oracle to Cosmos DB using OracleReader:CREATE APPLICATION Oracle2CosmosIncremental;\n\nCREATE SOURCE OracleCDCSource USING OracleReader (\n Username: '',\n Password: '',\n ConnectionURL: '',\n Tables: 'DB.ORDERS;DB.SUPPLIERS;DB.CUSTOMERS'\n)\nOUTPUT TO OracleStream;\n\nCREATE TARGET CosmosTarget USING CosmosDBWriter (\n ServiceEndpoint: '',\n AccessKey: '',\n Collections: 'DB.%,MyDB.%',\n ConnectionPoolSize: 3\n )\nINPUT FROM OracleStream;\n \nEND APPLICATION Oracle2CosmosIncremental;In Oracle, enter the following:INSERT INTO SUPPLIERS VALUES(100099786, 'Example LLC', 'Ecuador');\nUPDATE CUSTOMERS SET EMAIL='msanchez@example.com' WHERE ID='23004389';\nDELETE FROM CUSTOMERS WHERE ID='23908876';\nCOMMIT;Within 30 seconds, those changes in Oracle should be replicated to the corresponding Cosmos DB collections with results similar to the following.MyDB.SUPPLIERS:{\n \"LOCATION\": \"West Virginia\",\n \"ID\": \"100036492\",\n \"NAME\": \"Example Inc.\",\n \"id\": \"100036492\",\n \"_rid\": \"CBcfAKX3xWACAAAAAAAACA==\",\n \"_self\": \"dbs/CBcfAA==/colls/CBcfAKX3xWA=/docs/CBcfAKX3xWACAAAAAAAACA==/\",\n \"_etag\": \"\\\"00008000-0000-0000-0000-5bacc99b0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538050459\n}\n{\n \"LOCATION\": \"Ecuador\",\n \"ID\": \"100099786\",\n \"NAME\": \"Example LLC\",\n \"id\": \"100099786\",\n \"_rid\": \"CBcfAKX3xWADAAAAAAAADA==\",\n \"_self\": \"dbs/CBcfAA==/colls/CBcfAKX3xWA=/docs/CBcfAKX3xWADAAAAAAAADA==/\",\n \"_etag\": \"\\\"0000e901-0000-0000-0000-5bacc99b0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538050559\n}\nMyDB.CUSTOMERS:{\n \"COUNTRY\": \"Austria\",\n \"ID\": \"23004389\",\n \"EMAIL\": \"msanchez@example.com\",\n \"NAME\": \"Manuel\",\n \"id\": \"23004389\"\n \"_rid\": \"CBcfAJgI4eYEAAAAAAAACA==\",\n \"_self\": \"dbs/CBcfAA==/colls/CBcfAJgI4eY=/docs/CBcfAJgI4eYEAAAAAAAACA==/\",\n \"_etag\": \"\\\"d600b243-0000-0000-0000-5bacc99c0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538050460\n}\nStriim provides a template for creating applications that read from Oracle and write to Cosmos DB. See\u00a0Creating an application using a template for details.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2019-07-24\n", "metadata": {"source": "https://www.striim.com/docs/en/replicating-oracle-data-to-azure-cosmos-db.html", "title": "Replicating Oracle data to Azure Cosmos DB", "language": "en"}} {"page_content": "\n\nReplicating Oracle data to Cassandra\n\nCassandra Writer can continuously replicate one or many Oracle tables to a Cassandra or DataStax keyspace. First, create a table in Cassandra corresponding to each Oracle table to be replicated. Then load the existing data using DatabaseReader, for example:\n\nCREATE SOURCE OracleJDBCSource USING DatabaseReader (\n Username:'Striim',\n Password:'****',\n ConnectionURL:'jdbc:oracle:thin:@192.168.123.14:1521/XE',\n Tables:'TPCH.H_CUSTOMER;TPCH.H_PART;TPCH.H_SUPPLIER'\n)\nOUTPUT TO DataStream;\n\nCREATE TARGET TPCHInitialLoad USING CassandraWriter (\n ConnectionURL:'jdbc:cassandra://203.0.113.50:9042/mykeyspace',\n Username:'striim',\n Password:'******',\n Tables:'TPCH.H_CUSTOMER,customer;TPCH.H_PART,part;TPCH.H_SUPPLIER,supplier'\n)\nINPUT FROM DataStream;\n\nCassandraWriter's Tables property maps each specified Oracle table to a Cassandra table, for example, TPCH.H_CUSTOMER to customer. Oracle table names must be uppercase and Cassandra table names must be lowercase. Since columns in Cassandra tables are not created in the same order they are specified in the CREATE TABLE statement, the ColumnMap option is required (see Mapping columns) and wildcards are not supported. See Database Reader and Cassandra Writer for more information about the properties.\n\nOnce the initial load is complete, the following application will read new data using LogMiner and continuously replicate it to Cassandra:\n\nCREATE SOURCE OracleCDCSource USING OracleReader ( \n Username: 'Striim',\n Password: '******',\n ConnectionURL: '203.0.113.49:1521:orcl',\n Compression:'True',\n Tables: 'TPCH.H_CUSTOMER;TPCH.H_PART;TPCH.H_SUPPLIER'\n ) \nOUTPUT TO DataStream;\n\nCREATE TARGET CassandraTarget USING CassandraWriter(\n ConnectionURL:'jdbc:cassandra://203.0.113.50:9042/mykeyspace',\n Username:'striim',\n Password:'******',\n Tables: 'TPCH.H_CUSTOMER,customer;TPCH.H_PART,part;TPCH.H_SUPPLIER,supplier'\n)\nINPUT FROM DataStream;\n\nOracleReader's Compression property must be True. Cassandra does not allow primary key updates.
See Oracle Reader properties and Cassandra Writer for more information about the properties.\n\nWhen the input stream of a Cassandra Writer target is the output of an Oracle source (DatabaseReader or OracleReader), the following types are supported (Oracle type: Cassandra CQL type):\nBINARY_DOUBLE: double\nBINARY_FLOAT: float\nBLOB: blob\nCHAR: text, varchar\nCHAR(1): bool\nCLOB: ascii, text\nDATE: timestamp\nDECIMAL: double, float\nFLOAT: float\nINT: int\nINTEGER: int\nNCHAR: text, varchar\nNUMBER: int\nNUMBER(1,0): int\nNUMBER(10): int\nNUMBER(19,0): int\nNUMERIC: int\nNVARCHAR2: varchar\nSMALLINT: int\nTIMESTAMP: timestamp\nTIMESTAMP WITH LOCAL TIME ZONE: timestamp\nTIMESTAMP WITH TIME ZONE: timestamp\nVARCHAR2: varchar\n\nLast modified: 2020-06-10\n", "metadata": {"source": "https://www.striim.com/docs/en/replicating-oracle-data-to-cassandra.html", "title": "Replicating Oracle data to Cassandra", "language": "en"}} {"page_content": "\n\nReplicating Oracle data to Google BigQuery\n\nStriim provides a template for creating applications that read from Oracle and write to BigQuery. See Creating an application using a template for details.\n\nThe following application will replicate data from all tables in MYSCHEMA in Oracle to the corresponding tables in mydataset in BigQuery. The tables in BigQuery must exist when the application is started. All source and target tables must have a UUID column (see the sketch at the end of this section). In the source tables, the UUID values must be unique identifiers. See the notes for the Mode property in BigQuery Writer for additional details.\n\nCREATE SOURCE OracleCDCSource USING OracleReader ( \n CommittedTransactions: false,\n Username: 'myuser',\n Password: 'mypass',\n ConnectionURL: '192.168.33.10:1521/XE',\n Tables: 'MYSCHEMA.%'\n ) \nOUTPUT TO DataStream;\n\nCREATE TARGET BigQueryTarget USING BigQueryWriter(\n ServiceAccountKey: '<path>/<service account key>.json',\n ProjectId: '<project ID>',\n Mode: 'MERGE',\n Tables: \"MYSCHEMA.%,mydataset.% keycolumns(UUID)\"\n)\nINPUT FROM DataStream;\n\nSee BigQuery Writer for details of the property values.
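If the source tables do not already have such a UUID column, one way to add and populate it on the Oracle side might be the following. This is a sketch only: MYSCHEMA.MYTABLE is a placeholder table name, and SYS_GUID() is used here merely as one possible way to generate unique identifier values.

-- Add a UUID column and fill it with globally unique values (Oracle SQL sketch).
ALTER TABLE MYSCHEMA.MYTABLE ADD (UUID VARCHAR2(32));
UPDATE MYSCHEMA.MYTABLE SET UUID = RAWTOHEX(SYS_GUID()) WHERE UUID IS NULL;
COMMIT;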
Last modified: 2019-07-24\n", "metadata": {"source": "https://www.striim.com/docs/en/replicating-oracle-data-to-google-bigquery.html", "title": "Replicating Oracle data to Google BigQuery", "language": "en"}} {"page_content": "\n\nReplicating Oracle data to Google Cloud PostgreSQL\n\nSee Migrating an Oracle database to Cloud SQL for PostgreSQL using Striim.\n\nLast modified: 2022-07-21\n", "metadata": {"source": "https://www.striim.com/docs/en/replicating-oracle-data-to-google-cloud-postgresql.html", "title": "Replicating Oracle data to Google Cloud PostgreSQL", "language": "en"}} {"page_content": "\n\nReplicating MySQL data to Google Cloud Spanner\n\nSee Continuous data replication to Cloud Spanner using Striim.\n
Replicating Oracle data to a Hazelcast "hot cache"

Striim provides a template for creating applications that read from Oracle and write to Hazelcast. See Creating an application using a template for details.

See Hazelcast Writer for information on the adapter properties.

To replicate Oracle data to Hazelcast:

1. Write a Java class defining the Plain Old Java Objects (POJOs) corresponding to the Oracle table(s) to be replicated (see http://stackoverflow.com/questions/3527264/how-to-create-a-pojo for more information on POJOs), compile the Java class to a .jar file, copy it to the Striim/lib directory of each Striim server that will run the HazelcastWriter target, and restart the server.

2. Write an XML file defining the object-relational mapping to be used to map Oracle table columns to Hazelcast maps (the "ORM file") and save it in a location accessible to the Striim cluster. Data types are converted as specified in the ORM file. Supported Java types on the Hazelcast side are: binary (byte[]), Character, char, Double, double, Float, float, int, Integer, java.util.Date, Long, long, Short, short, and String. Odd mappings may throw invalid data errors, for example, when an Oracle VARCHAR2 column mapped to a long contains a value that is not a number. Oracle BLOB and CLOB types are not supported.

3. Write a Striim application using DatabaseReader and HazelcastWriter to perform the initial load from Oracle to Hazelcast.

4. Write a second Striim application using OracleReader and HazelcastWriter to perform continuous replication.

This example assumes the following Oracle table definition:

CREATE TABLE INV ( 
  SKU INT PRIMARY KEY NOT NULL,
  STOCK NUMBER(*,4),
  NAME varchar2(20),
  LAST_UPDATED date 
);

The following Java class defines a POJO corresponding to the table:

package com.customer.vo;
import java.io.Serializable;
import java.util.Date;
public class ProductInvObject implements Serializable {

  public long sku = 0;
  public double stock = 0;
  public String name = null;
  public Date lastUpdated = null;

  public ProductInvObject ( ) { }

  @Override
  public String toString() {
    return "sku : " + sku + ", STOCK:" + stock + ", NAME:" + name + ", LAST_UPDATED:" + lastUpdated ;
  }
}

An ORM file (not reproduced here) maps the Oracle table columns to the Hazelcast map. Assuming that the ORM file has been saved to Striim/Samples/Ora2HCast/invObject_orm.xml, the following Striim application will perform the initial load:

CREATE APPLICATION InitialLoadOra2HC;

CREATE SOURCE OracleJDBCSource USING DatabaseReader (
  Username:'striim',
  Password:'passwd',
  ConnectionURL:'203.0.113.49:1521:orcl',
  Tables:'MYSCHEMA.INV'
)
OUTPUT TO DataStream;

CREATE TARGET HazelOut USING HazelcastWriter (
  ConnectionURL: '203.0.113.50:5702',
  ormFile:"Samples/Ora2HCast/invObject_orm.xml",
  mode: "initialLoad",
  maps: 'MYSCHEMA.INV,invCache'
)
INPUT FROM DataStream;

END APPLICATION InitialLoadOra2HC;

Once InitialLoadOra2HC has copied all the data, the following application will perform continuous replication of new data:

CREATE APPLICATION ReplicateOra2HC;

CREATE SOURCE OracleCDCSource USING OracleReader (
  Username:'striim',
  Password:'passwd',
  ConnectionURL:'203.0.113.49:1521:orcl',
  Tables:'MYSCHEMA.INV'
)
OUTPUT TO DataStream;

CREATE TARGET HazelOut USING HazelcastWriter (
  ConnectionURL: '203.0.113.50:5702',
  ormFile:"Samples/Ora2HCast/invObject_orm.xml",
  mode: "incremental",
  maps: 'MYSCHEMA.INV,invCache'
)
INPUT FROM DataStream;

END APPLICATION ReplicateOra2HC;

Note: If the Hazelcast cluster goes down, the data in the map will be lost. To restore it, stop the replication application, do the initial load again, then restart replication.

Replicating Oracle data to HBase

Striim provides a template for creating applications that read from Oracle and write to HBase.
See Creating an application using a template for details.

The following sample application will continuously replicate changes to MYSCHEMA.MYTABLE to the HBase Writer table mytable in the column family oracle_data:

CREATE SOURCE OracleCDCSource USING OracleReader ( 
  Username:'striim',
  Password:'passwd',
  ConnectionURL:'203.0.113.49:1521:orcl',
  Tables:'MYSCHEMA.MYTABLE',
  ReaderType: 'LogMiner',
  CommittedTransactions: false
)
OUTPUT TO DataStream;

CREATE TARGET HBaseTarget USING HBaseWriter(
  HBaseConfigurationPath:"/usr/local/HBase/conf/hbase-site.xml",
  Tables: 'MYSCHEMA.MYTABLE,mytable.oracle_data'
)
INPUT FROM DataStream;

Notes:

INSERT, UPDATE, and DELETE are supported.

UPDATE does not support changing a row's primary key.

If the Oracle table has one primary key, the value of that column is treated as the HBase rowkey. If the Oracle table has multiple primary keys, their values are concatenated and treated as the HBase rowkey.

Inserting a row with the same primary key as an existing row is treated as an update.

The Tables property values are case-sensitive.

The Tables value may map Oracle tables to HBase tables and column families in various ways:

one to one: Tables: "MYSCHEMA.MYTABLE,mytable.oracle_data"

many Oracle tables to one HBase table: "MYSCHEMA.MYTABLE1,mytable.oracle_data;MYSCHEMA.MYTABLE2,mytable.oracle_data"

many Oracle tables to one HBase table in different column families: "MYSCHEMA.MYTABLE1,mytable.family1;MYSCHEMA.MYTABLE2,mytable.family2"

many Oracle tables to many HBase tables: "MYSCHEMA.MYTABLE1,mytable1.oracle_data;MYSCHEMA.MYTABLE2,mytable2.oracle_data"

Writing raw CDC data to Hive

The following sample application uses data from OracleReader, but you can do the same thing with any of the other CDC readers.
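For example, the Oracle source used in the walkthrough that follows could be swapped for a MySQL CDC source along these lines, keeping the same HDFS/Avro target reading from DataStream. This is only a sketch: the table name is hypothetical, and the MySQL Reader properties shown are the ones that appear in the bidirectional replication example later in this section.

CREATE SOURCE MySQLCDCSource USING MySQLReader (
  Username: 'striim',
  Password: '******',
  ConnectionURL: 'mysql://192.0.2.0:3306',
  Tables: 'mydb.POSAUTHORIZATIONS'
)
OUTPUT TO DataStream;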
Oracle table

In Oracle, create the following table:

CREATE TABLE POSAUTHORIZATIONS (
  BUSINESS_NAME varchar2(30),
  MERCHANT_ID varchar2(100),
  PRIMARY_ACCOUNT NUMBER,
  POS NUMBER,
  CODE varchar2(20),
  EXP char(4),
  CURRENCY_CODE char(3),
  AUTH_AMOUNT number(10,3),
  TERMINAL_ID NUMBER,
  ZIP number,
  CITY varchar2(20),
  PRIMARY KEY (MERCHANT_ID));
COMMIT;

TQL application

Create the following TQL application, substituting the appropriate connection URL and table name:

CREATE SOURCE OracleCDCSource USING OracleReader (
  StartTimestamp:'07-OCT-2015 18:37:55',
  Username:'qatest',
  Password:'qatest',
  ConnectionURL:'192.0.2.0:1521:orcl',
  Tables:'QATEST.POSAUTHORIZATIONS',
  OnlineCatalog:true,
  FetchSize:1,
  Compression:true
) 
OUTPUT TO DataStream;

CREATE TARGET HiveTarget USING HDFSWriter(
  filename:'ora_hive_pos.bin',
  hadoopurl:'hdfs://localhost:9000/output/'
)
FORMAT USING AvroFormatter (
  schemaFileName: 'ora_hive_pos.avsc'
)
INPUT FROM DataStream;
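Once this TQL has been loaded, the application can be deployed and started from the Striim console before generating the sample data later in this walkthrough. The commands below are a minimal sketch; they assume the source and target above were saved in an application named OraHiveRaw (a hypothetical name) wrapped in CREATE APPLICATION OraHiveRaw; ... END APPLICATION OraHiveRaw; statements.

DEPLOY APPLICATION OraHiveRaw;
START APPLICATION OraHiveRaw;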
Avro schema file

See AVROFormatter for instructions on using the TQL application to generate ora_hive_pos.avsc, an Avro schema file based on WAEvent. The generated file's contents should be:

{
  "namespace": "waevent.avro",
  "type" : "record",
  "name": "WAEvent_Record",
  "fields": [
    {
      "name" : "data",
      "type" : { "type": "map","values":"string" }
    },
    {
      "name" : "before",
      "type" : ["null",{"type": "map","values":"string" }]
    },
    {
      "name" : "metadata",
      "type" : { "type": "map","values":"string" }
    }
  ]
}

WAEvent's data, before, and metadata fields are represented in Avro as Avro map types.

Hive table

Copy ora_hive_pos.avsc to HDFS (in this example, hdfs://localhost:9000/avro/). In Hive, create a table using the generated Avro schema file. Modify the TBLPROPERTIES string to point to the correct location.

CREATE TABLE OracleHive
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' 
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.url'='hdfs://localhost:9000/avro/ora_hive_pos.avsc');

The new table should look like this:

hive> describe formatted oraclehive;
OK
# col_name            data_type             comment

data                  map<string,string>
before                map<string,string>
metadata              map<string,string>

...
Time taken: 0.481 seconds, Fetched: 34 row(s)

Then load the generated Avro data into the table:

hive> LOAD DATA INPATH '/output/ora_hive_pos.bin' OVERWRITE INTO TABLE OracleHive;
Generate sample CDC data in Oracle

In Oracle, enter the following to generate CDC data:

INSERT INTO POSAUTHORIZATIONS VALUES('COMPANY 1',
'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu',6705362103919221351,0,'20130309113025','0916',
'USD',2.20,5150279519809946,41363,'Quicksand');
INSERT INTO POSAUTHORIZATIONS VALUES('COMPANY 2',
'OFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1',4710011837121304048,4,'20130309113025','0815',
'USD',22.78,5985180438915120,16950,'Westfield');
INSERT INTO POSAUTHORIZATIONS VALUES('COMPANY 3',
'ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx',2553303262790204445,6,'20130309113025','0316',
'USD',218.57,0663011190577329,18224,'Freeland');
INSERT INTO POSAUTHORIZATIONS VALUES('COMPANY 4',
'FZXC0wg0LvaJ6atJJx2a9vnfSFj4QhlOgbU',2345502971501633006,3,'20130309113025','0813',
'USD',18.31,4959093407575064,55470,'Minneapolis');
INSERT INTO POSAUTHORIZATIONS VALUES('COMPANY 5',
'ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx',6388500771470313223,2,'20130309113025','0415',
'USD',314.94,7116826188355220,39194,'Yazoo City');
UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 1A' where pos= 0;
UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 5A' where pos= 2;
UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 4A' where pos= 3;
UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 2A' where pos= 4;
UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 3A' where pos= 6;
DELETE from POSAUTHORIZATIONS where pos=6;
COMMIT;
Query the Hive table

Query the Hive table to verify that the CDC data is being captured:

hive> select * from oraclehive;
OK
{"3":"0","2":"6705362103919221351","1":"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",
"10":"Quicksand","0":"COMPANY 1","6":"USD","5":"0916","4":"20130309113025","9":"41363",
"8":"5150279519809946"} NULL {"TxnID":"10.23.1524","RbaSqn":"209",
"TableSpace":"USERS","CURRENTSCN":"1939875","OperationName":"INSERT",
"ParentTxnID":"10.23.1524","SegmentType":"TABLE","SessionInfo":"UNKNOWN",
"ParentTxn":"QATEST","Session":"143","BytesProcessed":"760",
"TransactionName":"","STARTSCN":"","SegmentName":"POSAUTHORIZATIONS","COMMITSCN":"",
"SEQUENCE":"1","RbaBlk":"57439","ThreadID":"1","SCN":"193987500000588282738968494240000",
"AuditSessionId":"73401","ROWID":"AAAXlEAAEAAAALGAAA",
"TimeStamp":"2015-10-08T14:58:55.000-07:00","Serial":"685",
"RecordSetID":" 0x0000d1.0000e05f.0010 ","TableName":"QATEST.POSAUTHORIZATIONS",
"SQLRedoLength":"325","Rollback":"0"}
{"3":"4","2":"4710011837121304048","1":"OFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1",
"10":"Westfield","0":"COMPANY 2","6":"USD","5":"0815","4":"20130309113025","9":"16950",
"8":"5985180438915120"} NULL {"TxnID":"10.23.1524","RbaSqn":"209",
"TableSpace":"USERS","CURRENTSCN":"1939876","OperationName":"INSERT",
"ParentTxnID":"10.23.1524","SegmentType":"TABLE","SessionInfo":"UNKNOWN",
"ParentTxn":"QATEST","Session":"143","BytesProcessed":"762",
"TransactionName":"","STARTSCN":"","SegmentName":"POSAUTHORIZATIONS","COMMITSCN":"",
"SEQUENCE":"1","RbaBlk":"57441","ThreadID":"1","SCN":"193987600000588282738969804960001",
"AuditSessionId":"73401","ROWID":"AAAXlEAAEAAAALGAAB",
"TimeStamp":"2015-10-08T14:58:56.000-07:00","Serial":"685",
"RecordSetID":" 0x0000d1.0000e061.0010 ","TableName":"QATEST.POSAUTHORIZATIONS",
"SQLRedoLength":"327","Rollback":"0"}
...
Time taken: 0.238 seconds, Fetched: 11 row(s)

To select a subset of the data, use syntax similar to the following:

hive> select metadata["TimeStamp"], metadata["TxnID"], metadata["TableName"],
data from oraclehive where metadata["OperationName"]="UPDATE";
2015-10-08T14:58:56.000-07:00  5.26.1740  QATEST.POSAUTHORIZATIONS
  {"0":"COMPANY 1A","1":"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu"}
2015-10-08T14:58:56.000-07:00  5.26.1740  QATEST.POSAUTHORIZATIONS
  {"0":"COMPANY 2A","1":"OFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1"}
...
Time taken: 0.088 seconds, Fetched: 5 row(s)

Replicating Oracle data to Hive

See Hive Writer for information on storage types supported and limitations.

This example assumes the following Oracle source table:

create table employee (Employee_name varchar2(30), 
Employee_id number, 
CONSTRAINT employee_pk PRIMARY KEY (Employee_id));

and the following Hive target table:

CREATE TABLE employee (emp_name string, emp_id int)
CLUSTERED BY (emp_id) into 2 buckets 
STORED AS ORC TBLPROPERTIES ('transactional'='true');

The following application will load existing data from Oracle to Hive:

CREATE SOURCE OracleJDBCSource USING DatabaseReader (
  Username:'oracleuser',
  Password:'********',
  ConnectionURL:'192.0.2.75:1521:orcl',
  Tables:'DEMO.EMPLOYEE',
  FetchSize:1
)
OUTPUT TO DataStream;

CREATE TARGET HiveTarget USING HiveWriter (
  ConnectionURL:'jdbc:hive2://localhost:10000',
  Username:'hiveuser', 
  Password:'********',
  hadoopurl:'hdfs://18.144.17.75:9000/',
  Mode:'initialload',
  Tables:'DEMO.EMPLOYEE,employee'
)
INPUT FROM DataStream;

Once initial load is complete, the following application will read new data and continuously replicate it to Hive:

CREATE SOURCE OracleCDCSource USING OracleReader (
  Username:'oracleuser',
  Password:'********',
  ConnectionURL:'192.0.2.75:1521:orcl',
  Tables:'DEMO.EMPLOYEE',
  FetchSize:1
)
OUTPUT TO DataStream;

CREATE TARGET HiveTarget USING HiveWriter (
  ConnectionURL:'jdbc:hive2://192.0.2.76:10000',
  Username:'hiveuser', 
  Password:'********',
  hadoopurl:'hdfs://192.0.2.76:9000/',
  Mode:'incrementalload',
  Tables:'DEMO.EMPLOYEE,employee keycolumns(emp_id)'
)
INPUT FROM DataStream;
Replicating Oracle data to Kafka

Before following these instructions:

Complete the tasks in Configuring Oracle to use Oracle Reader.
Create the target topic in Kafka.
Striim must be running.
If you are using a Forwarding Agent in Oracle, it must be connected to Striim.

You will need the following information to complete the wizard:

Striim: the VM user name, the VM user password, the Striim cluster name, the Striim admin password, and the DNS name (displayed on the Essentials tab for the Striim VM).

Oracle (source): the connection URL in the format <IP address>:<port>:<SID> (for example, 198.51.100.0:1521:orcl), the login name and password, and the source table names.

Kafka (target): the topic name, the broker address, and, optionally, any Kafka producer properties required by your environment (see "KafkaWriter" in the "Adapters reference" section of the Striim Programmer's Guide).

Log into the Striim web UI at <DNS name>:9080 using admin as the user name and the Striim admin password.

Select the Oracle CDC to Kafka template that matches your target Kafka broker version.

Enter names for your application (for example, Oracle2Kafka) and new namespace (do not create applications in the admin namespace) and click Save.

Enter the name for the Oracle source component in the Striim application (for example, OracleSource), the connection URL, user name, and password.

Select LogMiner as the log reader.

Optionally, specify a wildcard string to select the Oracle tables to be read (see the discussion of the Tables property in Oracle Reader properties).

Set Deploy source on Agent on (if the Forwarding Agent is not connected to Striim, this property does not appear) and click Next.

If Striim's checks show that all properties are valid (this may take a few minutes), click Next.

If you specified a wildcard in the Oracle properties, click Next. Otherwise, select the tables to be read and click Next.

Enter the name for the Kafka target component in the Striim application (for example, KafkaTarget). For Input From, select the only choice. (This is OracleReader's output stream, and its name is <source name>_ChangeDataStream.) Enter the topic name and the broker address.

Optionally, click Show optional properties and specify any Kafka producer properties required by your environment. Leave Mode set to sync.

Select AvroFormatter and specify its schema file name. This file will be created when the application is deployed (see Avro Formatter).

Click Save, then click Next. (Click Create Target only if you specified maps or filters and want to create more than one target.)

Striim will create your application and open it in the Flow Designer.
Select Configuration > App settings, set the recovery interval to 5 seconds, and click Save.

Select Configuration > Export to generate a TQL file. It should contain something like this (the password is encrypted):

CREATE APPLICATION Oracle2Kafka RECOVERY 5 SECOND INTERVAL;

CREATE SOURCE OracleSource USING OracleReader ( 
  FetchSize: 1,
  Compression: false,
  Username: 'myname',
  Password: '7ip2lhUSP0o=',
  ConnectionURL: '198.51.100.15:1521:orcl',
  DictionaryMode: 'OnlineCatalog',
  ReaderType: 'LogMiner',
  Tables: 'MYSCHEMA.%'
) 
OUTPUT TO OracleSource_ChangeDataStream;

CREATE TARGET KafkaTarget USING KafkaWriter VERSION '0.8.0' ( 
  Mode: 'Sync',
  Topic: 'MyTopic',
  brokerAddress: '198.51.100.55:9092'
) 
FORMAT USING AvroFormatter ( schemaFileName: 'MySchema.avro' ) 
INPUT FROM OracleSource_ChangeDataStream;

END APPLICATION Oracle2Kafka;

Note that FetchSize: 1 is appropriate for development, but should be increased in a production environment. See Oracle Reader properties for more information.

Replicating Oracle data to SAP HANA

DatabaseWriter can continuously replicate one or many Oracle tables to a SAP HANA database. First, create a table in SAP HANA corresponding to each Oracle table to be replicated.
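For instance, for an Oracle table MYORADB.INV with columns SKU INT PRIMARY KEY, STOCK NUMBER(*,4), NAME VARCHAR2(20), and LAST_UPDATED DATE (a hypothetical table used only for illustration), a corresponding SAP HANA table could be created along these lines, choosing target types from the mapping table at the end of this section:

CREATE COLUMN TABLE MYSAPDB.INV (
  SKU INTEGER PRIMARY KEY,
  STOCK DECIMAL(18,4),
  NAME VARCHAR(20),
  LAST_UPDATED DATE
);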
Then load the existing data using DatabaseReader. For example, to replicate all tables in MYORADB to MYSAPDB:

CREATE SOURCE OracleJDBCSource USING DatabaseReader (
  Username:'Striim',
  Password:'****',
  ConnectionURL:'jdbc:oracle:thin:@203.0.113.49:1521:orcl',
  Tables:'MYORADB.%'
)
OUTPUT TO DataStream;

CREATE TARGET SAPHANAInitialLoad USING DatabaseWriter (
  ConnectionURL:'jdbc:sap://203.0.113.50:39013/?databaseName=MYSAPDB&currentSchema=striim',
  Username:'striim',
  Password:'******',
  Tables:'MYORADB.%,MYSAPDB.%'
)
INPUT FROM DataStream;

See Database Reader and Database Writer for more information about the properties.

Once the initial load is complete, the following application will continuously replicate new data from Oracle to SAP HANA:

CREATE SOURCE OracleCDCSource USING OracleReader ( 
  Username: 'Striim',
  Password: '******',
  ConnectionURL: '203.0.113.49:1521:orcl',
  Compression:'True',
  Tables: 'MYORADB.%'
) 
OUTPUT TO DataStream;

CREATE TARGET SAPHANAContinuous USING DatabaseWriter(
  ConnectionURL:'jdbc:sap://203.0.113.50:39013/?databaseName=MYSAPDB&currentSchema=striim',
  Username:'striim',
  Password:'******',
  Tables: 'MYORADB.%,MYSAPDB.%'
)
INPUT FROM DataStream;

When the input stream of a SAP HANA DatabaseWriter target is the output of an Oracle source (Database Reader, Incremental Batch Reader, or Oracle Reader), the following types are supported:

Oracle type                        SAP HANA type
BINARY_DOUBLE                      DOUBLE
BINARY_FLOAT                       REAL
BLOB                               BLOB, VARBINARY
CHAR                               ALPHANUM, TEXT, VARCHAR
CHAR(1)                            BOOLEAN
CLOB                               CLOB, VARCHAR
DATE                               DATE
DECIMAL                            DECIMAL
FLOAT                              FLOAT
INT                                INTEGER
INTEGER                            INTEGER
NCHAR                              NVARCHAR
NUMBER                             INTEGER
NUMBER(1,0)                        INTEGER
NUMBER(10)                         INTEGER
NUMBER(19,0)                       INTEGER
NUMERIC                            BIGINTEGER, DECIMAL, DOUBLE, FLOAT, INTEGER
NVARCHAR2                          NVARCHAR
SMALLINT                           SMALLINT
TIMESTAMP                          TIMESTAMP
TIMESTAMP WITH LOCAL TIME ZONE     TIMESTAMP
TIMESTAMP WITH TIME ZONE           TIMESTAMP
VARCHAR2                           VARCHAR

Replicating Oracle data to Snowflake

Striim provides a template for creating applications that read from Oracle and write to Snowflake. See Creating an application using a template for details.

SnowflakeWriter can continuously replicate one or many Oracle tables to Snowflake. First, create a table in Snowflake corresponding to each Oracle table to be replicated.
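For example, if the Oracle source contains a table QATEST.EMP with columns EMP_ID NUMBER and EMP_NAME VARCHAR2(30) (a hypothetical table used only for illustration), a matching Snowflake table could be created like this; adjust the database, schema, and types to your own environment:

CREATE TABLE DEMO_DB.PUBLIC.EMP (
  EMP_ID NUMBER,
  EMP_NAME VARCHAR(30)
);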
Then load the existing data using DatabaseReader, for example:

CREATE SOURCE OracleJDBCSource USING DatabaseReader (
  Username: 'striim',
  Password: '******',
  ConnectionURL: 'jdbc:oracle:thin:@//127.0.0.1:1521/xe',
  Tables: 'QATEST.%'
)
OUTPUT TO DataStream;

CREATE TARGET SnowflakeInitialLoad USING SnowflakeWriter (
  ConnectionURL: 'jdbc:snowflake://hx75070.snowflakecomputing.com/?db=DEMO_DB&schema=public',
  username: 'striim',
  password: '******',
  Tables: 'QATEST.%,DEMO_DB.PUBLIC.%',
  appendOnly: true
)
INPUT FROM DataStream;

Once the initial load is complete, the following application will read new data using LogMiner and continuously replicate it to Snowflake:

CREATE SOURCE OracleCDCSource USING OracleReader (
  Username: 'striim',
  Password: '******',
  ConnectionURL: 'jdbc:oracle:thin:@//127.0.0.1:1521/xe',
  Tables: 'QATEST.%'
)
OUTPUT TO DataStream;

CREATE TARGET SnowflakeCDC USING SnowflakeWriter (
  ConnectionURL: 'jdbc:snowflake://hx75070.snowflakecomputing.com/?db=DEMO_DB&schema=public',
  username: 'striim',
  password: '******',
  Tables: 'QATEST.%,DEMO_DB.PUBLIC.%'
)
INPUT FROM DataStream;

For more information, see Snowflake Writer.

Bidirectional replication

Bidirectional replication allows synchronization of two databases, with inserts, updates, and deletes in each replicated in the other. The columns in the replicated tables must have compatible data types.

If your Striim cluster is licensed for bidirectional replication, this will be indicated on the user-name menu at the top right corner of the web UI.

In this release, bidirectional replication is supported for Oracle, MariaDB, MySQL, PostgreSQL, and SQL Server.
It uses two data flows, one from a source in database A to a target in database B, the other the reverse.

Note: When doing bidirectional replication:

Schema evolution is not supported.
MS SQL Reader's Support Transaction property must be True.
Oracle Reader's Committed Transaction property must be True.

Contact Striim support to determine whether your databases are compatible with bidirectional replication.

The following example application would perform bidirectional replication between MySQL and SQL Server:

CREATE APPLICATION BidirectionalDemo RECOVERY 1 SECOND INTERVAL;

CREATE SOURCE ReadFromMySQL USING MySQLReader (
  Username: 'striim',
  Password: '*******',
  ConnectionURL: 'mysql://192.0.2.0:3306',
  Tables: 'mydb.*',
  BidirectionalMarkerTable: 'mydb.mysqlmarker'
)
OUTPUT TO MySQLStream;

CREATE TARGET WriteToSQLServer USING DatabaseWriter (
  ConnectionURL:'jdbc:sqlserver://192.0.2.1:1433;databaseName=mydb',
  Username:'striim',
  Password:'********',
  Tables: 'mydb.*,dbo.*',
  CheckPointTable: 'mydb.CHKPOINT',
  BidirectionalMarkerTable: 'mydb.sqlservermarker'
)
INPUT FROM MySQLStream;

CREATE SOURCE ReadFromSQLServer USING MSSQLReader (
  ConnectionURL:'192.0.2.1:1433',
  DatabaseName: 'mydb',
  Username: 'striim',
  Password: '*******',
  Tables: 'dbo.*',
  BidirectionalMarkerTable: 'mydb.sqlservermarker'
)
OUTPUT TO SQLServerStream;

CREATE TARGET WriteToMySQL USING DatabaseWriter (
  Username:'striim',
  Password:'********',
  ConnectionURL: 'mysql://192.0.2.0:3306',
  Tables: 'dbo.*,mydb.*',
  CheckPointTable: 'mydb.CHKPOINT',
  BidirectionalMarkerTable: 'mydb.mysqlmarker'
)
INPUT FROM SQLServerStream;

END APPLICATION BidirectionalDemo;

Striim requires a "marker table" in each database. It uses the information recorded in this table to detect and discard events that would create an infinite loop. To create the table, use the following DDL:

for MariaDB, MySQL, or PostgreSQL:

CREATE TABLE <marker table name>
(componentId varchar(100) PRIMARY KEY, lastupdatedtime timestamp(6));

for Oracle (table name must be uppercase):

CREATE TABLE <MARKER TABLE NAME>
(componentId varchar2(100) PRIMARY KEY, lastupdatedtime timestamp(6));

for SQL Server:

CREATE TABLE <marker table name>
(componentId varchar(100) PRIMARY KEY, lastupdatedtime datetime2(6));
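For the example application above, which names its marker tables mydb.mysqlmarker and mydb.sqlservermarker, the statements would simply instantiate the templates with those names (qualify the names as appropriate for your database and schema layout):

In MySQL:

CREATE TABLE mydb.mysqlmarker
(componentId varchar(100) PRIMARY KEY, lastupdatedtime timestamp(6));

In SQL Server:

CREATE TABLE mydb.sqlservermarker
(componentId varchar(100) PRIMARY KEY, lastupdatedtime datetime2(6));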
Adapter property data types

Adapter properties use the same Supported data types as TQL, plus Encrypted passwords. Some property data types are enumerated: that is, only documented values are allowed. If setting properties in TQL, be careful not to use other values for these properties.

HP NonStop

Striim can read change data from HP NonStop Enscribe (all versions), SQL/MP (all versions), and SQL/MX (3.2.1 or later).

Setting up HP NonStop with the Striim agent

The Striim agent software for HP NonStop comprises 5 objects that need to be installed on the source HP NonStop system:

WAGENT - Agent process responsible for managing the change data capture (CDC) processes
SQMXCDCP - CDC process program file for NonStop SQL/MX
SQMPCDCP - CDC process program file for NonStop SQL/MP
ENSCDCP - CDC process program file for Enscribe
WEBACTIO - SQL/MX module used by SQMXCDCP

The 5 files are distributed in PAK files named E<version> (for NonStop) and X<version> (for NonStop X). For example:

Striim version 3.9.6: E3096 / X3096
3.9.6.1: E30961 / X30961
3.10.0: E3100 / X3100
3.10.0.1: E31001 / X31001
3.10.1: E3101 / X3101

1. Copy the PAK file to the HP NonStop system using binary FTP file transfer (or equivalent). From a TACL prompt, UNPAK this file into the subvolume in which you wish to install the HP NonStop components using the following command (where <$vol>.<subvol> identifies where the object files are to be installed):

UNPAK <PAK file>,$*.*.*,MYID,LISTALL,VOL <$vol>.<subvol>

2. Once the files are unpaked, the 3 CDC process objects need to be changed to be owned by SUPER.SUPER and be LICENSED and PROGID'd.
To do this, log on as SUPER.SUPER and execute the following commands:

FUP GIVE (SQMXCDCP,SQMPCDCP,ENSCDCP),SUPER.SUPER
FUP LICENSE (SQMXCDCP,SQMPCDCP,ENSCDCP)
FUP SECURE (SQMXCDCP,SQMPCDCP,ENSCDCP),<security string>, PROGID

The security string is user-determined; however, these files need to be secured so that the userid under which the agent process runs has execute permissions on these object files.

3. If Striim will be used to capture changes from SQL/MP tables, the SQMPCDCP program must be SQL-compiled, as described later in this section under the heading "System log messages about SQL/MP automatic recompilation". If minimizing system log messages about automatic recompilation is not important, any SQL/MP catalog on the system may be used in those directions. It may be one that already exists, or one you create to use solely for this program.

4. (Optional) Create an Edit file anywhere in the Guardian filespace containing the names of all files and tables you plan to reference from any Striim application. If you do not want to control which files and tables Striim applications can reference, you do not have to create this file. Change data will only be captured and sent for files and tables requested by Striim applications, not necessarily all the files and tables listed in this file.

The file consists of one line per table or file for which change data can be captured. Wildcard patterns are not currently supported. The volume and subvolume names are optional for SQL/MP and Enscribe names; the default volume and subvolume names of the agent process will be added for partially qualified names. SQL/MX names should be fully qualified 3-part names, for example:

for SQL/MX: <catalog>.<schema>.<table>
for SQL/MP: [<$VOL>.][<subvol>.]<table name>
for Enscribe: [<$VOL>.][<subvol>.]<file name>

When SQL/MP tables or Enscribe files on SMF Virtual Disks are to be allowed to be referenced, the logical name of the table or file is what should be specified in this file, not the physical name.

5. If necessary, change the security for WAGENT to match the userid you will use to start the process.

Starting the Striim Agent process on the HP NonStop platform

The agent process (WAGENT) must run at all times to allow the Striim server to make connections and request change data from the HP NonStop platform. The agent can be started from the TACL command prompt as follows:

[PARAM WA-LOGPRIORITY <level>]
[PARAM ALLOWED-TABLES <filename>]
[PARAM WA-ASSUMED-TIMEZONE <time zone>]
[PARAM WA-CHARSET-FOR-CHAR <character set>]
[PARAM WA-CHARSET-FOR-NCHAR <character set>]
[PARAM WA-CDC-CPU-LIST "<CPU list>"]
[PARAM WA-ENCRYPT-ALL {TRUE|FALSE}]
[ADD DEFINE =TCPIP^PROCESS^NAME, CLASS MAP, FILE <TCP/IP process name>]
RUN WAGENT [ / NAME <process name>, TERM $ZHOME/ ] --agent_port <port number> --logger_name <logger name>

Where:

<level> is a number that specifies the highest level of detail for the messages the Striim processes report. Messages at the specified value and below are reported. The permitted values are:

0 FATAL
1 ALERT
2 CRIT
3 ERROR
4 WARN
5 NOTICE
6 INFO
7 DEBUG
8 TRACE

The default value is 4, which is WARN, the recommended value for normal use.

<filename> is the name of the Edit file you created in step 4 of "Installing the agent." If you do not want to control which files can be accessed by Striim applications, you do not have to specify this parameter.

<time zone> is the code for the time zone used by date values in the files and tables. The valid values (which are not case-sensitive) are:

GMT for Greenwich Mean Time
LST for Local Standard Time as configured for the NonStop system, ignoring daylight savings time
LCT for Local Civil Time as configured for the NonStop system, including daylight savings time (default used if this PARAM is not specified)

<character set> is one of the character set names listed in Encoding of character fields and specifies the encoding used for character fields. See that section for a full description of the character set names and the PARAMs with which they are used.

<CPU list> is a list of CPU numbers in which CDC processes may be started.
The point of this is that when more than one CDC process is running, the CDC processes do not all have to run in the same CPU. There might be multiple CDC processes due to having multiple TQL applications running or a single TQL application that uses parallel audit trail reading. The CPU numbers may be separated by spaces or commas. The number of a CPU that is down is accepted, but a number that is larger than the highest-numbered CPU is not accepted and causes an error and termination of WAGENT. The CPU numbers in the list are used in round-robin fashion when choosing the CPU in which the next CDC process is started, skipping any CPU that currently is down. If the WA-CDC-CPU-LIST PARAM is not present, all CDC processes are started in the CPU in which WAGENT is running. The same CPU number may appear in the list more than once, which can be used to use some CPUs more often than others for CDC processes. See HP NonStop reader properties for a description of how to request parallel audit trail reading.

PARAM WA-ENCRYPT-ALL TRUE encrypts messages between Striim and WAGENT (and all processes started by WAGENT) for applications or flows created WITH ENCRYPTION (see CREATE APPLICATION ... END APPLICATION). The encryption key is set automatically by Striim. When WA-ENCRYPT-ALL is true, attempting to run an application created without encryption will fail with an error. Similarly, if WA-ENCRYPT-ALL is omitted or FALSE, attempting to run an application created with encryption will fail with an error. To run applications both with and without encryption, start two WAGENT processes, one with WA-ENCRYPT-ALL TRUE and the other without, and specify the port number for the appropriate WAGENT process in the HP NonStop reader properties in the application.

Messages related to this encryption may appear in either the NonStop system or Striim server logs, and do not always specifically mention encryption. When WAGENT expects an encrypted message and decryption fails, or when it expects an unencrypted message and fails to parse a GPB object, it writes a warning to its log. If Striim sends a start command that WAGENT does not recognize due to mismatched encryption settings, Striim will write an error to its server log and the application will not start.

<TCP/IP process name> is the name of the TCP/IP process that the WAGENT and CDC processes should use. The ADD DEFINE command may be omitted if the default process, $ZTC0, is the one that should be used. The ADD DEFINE commands for other DEFINE names that specify TCP/IP settings, such as =TCPIP^HOST^FILE, =TCPIP^NETWORK^FILE, etc., also may be included here if the Striim programs must use non-default settings.

<process name> is a Guardian process name to be used to identify the WAGENT process.

<port number> is the port number the agent process listens on for connections from the Striim server.

<logger name> is the name of an EMS collector process where the agent and CDC processes will write any informational and error messages.

We recommend that you name the agent process to aid process identification, though that is not required.

The Striim processes do not write any messages of their own to the process's home terminal, but if the C++ runtime library reports an error or if one of the processes abends and produces a saveabend file, the messages about those events will be sent to the home terminal. Further, when the agent starts one of the CDC processes, if the agent's home terminal no longer exists, starting the CDC process will fail. For both these reasons, it is best not to let the agent inherit a telnet session's home terminal, but specify a device or process that always exists, such as $ZHOME or a VHS process, as the TERM argument of the RUN command for WAGENT. Using a VHS process would allow you to control where the messages are logged, which probably would make it easier for you to find them, should some serious error occur in a Striim component.
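Putting this together, a TACL session that starts the agent might look something like the following. This is an illustrative sketch only: the port, the Edit file, the alternate TCP/IP process, and the collector name are all hypothetical values, and only the PARAMs you actually need should be included.

PARAM WA-LOGPRIORITY 4
PARAM ALLOWED-TABLES $DATA01.STRIIM.TABLES
PARAM WA-ASSUMED-TIMEZONE LCT
PARAM WA-CDC-CPU-LIST "0,1,2"
ADD DEFINE =TCPIP^PROCESS^NAME, CLASS MAP, FILE $ZTC1
RUN WAGENT / NAME $WAGNT, TERM $ZHOME / --agent_port 1088 --logger_name $0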
Running the Striim agent as an SCF Persistent Generic Process

To ensure the agent process is always running and available, it is recommended that the process is started and managed as an SCF Persistent Generic Process. (See chapter 3 of the HP NonStop SCF Reference Manual for the Kernel Subsystem for more information on creating and managing generic processes.)

Below is an example of the SCF commands that could be used to create and start the agent as a persistent generic process:

ASSUME PROCESS $ZZKRN
ADD #WAGNT, &
NAME $WAGNT, &
HOMETERM $ZHOME, &
CPU FIRSTOF (1,2,3), &
AUTORESTART 10, &
PROGRAM <$vol>.<subvol>.WAGENT , &
DEFAULTVOL <$vol>.<subvol> , &
USERID <group>.<user>, &
STARTMODE APPLICATION, &
STARTUPMSG "--agent_port <port number> --logger_name <logger name>"
[ ADD #WAGNT , &
 ( PARAM ALLOWED-TABLES <$vol>.<subvol>.<filename> ) ]
[ ADD #WAGNT , &
 ( DEFINE =TCPIP^PROCESS^NAME, CLASS MAP, FILE <TCP/IP process name> ) ]
START #WAGNT

You may use a process name (the NAME argument) other than $WAGNT, but the generic process must be named.

The DEFAULTVOL must be specified to be the volume and subvolume in which the HP NonStop components were installed.
If this is not set properly, the Agent will not be able to start the change data capture processes, since it expects their object files to be in the default volume.

If you choose to use the ALLOWED-TABLES PARAM, the filename given in the ADD #WAGNT command for PARAM ALLOWED-TABLES must be the name of the Edit file you created in step 4 of "Installing the agent."

This example shows only specifying the PARAM ALLOWED-TABLES and the DEFINE =TCPIP^PROCESS^NAME, but the other PARAMs and DEFINEs documented in "Starting the Striim Agent process on the HP NonStop platform" also may be specified by including additional ADD #WAGNT commands before the START #WAGNT command.

The home terminal for a Persistent Generic Process defaults to $YMIOP.#CLCI, the system console, if no HOMETERM argument is included. HP recommends using $ZHOME as the HOMETERM for most Persistent Generic Processes. You also could use a VHS process as the HOMETERM if you arrange to configure the VHS process as a Persistent Generic Process that starts before the agent process starts. Using a VHS process would allow you to control where the messages are logged so you could more easily find them if a serious error occurs in a Striim component.

Disabling TMF Audit Compression

The Striim CDC process for HP NonStop reads and forwards change data from the NonStop TMF audit trail to the Striim server. TMF has the ability to compress audit data records, which means only columns that have changed are audited, or in the case of Enscribe, only the bytes that have changed are audited. To ensure all the field and column data is available to the flows in Striim applications, we recommend that you disable such compression by creating the relevant files and tables with the NO AUDITCOMPRESS attribute.

If TMF audit compression is enabled for a SQL table, change records for UPDATE operations on the table might not contain the values of at least some of the columns that were not changed by the update. If TMF audit compression is enabled for an Enscribe file, no change records will be created for UPDATE operations on those files.
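As a rough sketch, the attribute can also be cleared on existing objects roughly as follows; the object names are hypothetical, and you should confirm the exact syntax in the FUP and SQL/MP reference manuals for your system.

For an Enscribe file:

FUP ALTER $DATA01.MYAPP.MYFILE, NO AUDITCOMPRESS

For a SQL/MP table:

ALTER TABLE $DATA01.MYAPP.MYTABLE ATTRIBUTE NO AUDITCOMPRESS;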
All rights reserved. Last modified: 2018-11-06\n", "metadata": {"source": "https://www.striim.com/docs/en/disabling-tmf-audit-compression.html", "title": "Disabling TMF Audit Compression", "language": "en"}} {"page_content": "\n\nSystem log messages about SQL/MP automatic recompilationSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopSetting up HP NonStop with the Striim agentSystem log messages about SQL/MP automatic recompilationPrevNextSystem log messages about SQL/MP automatic recompilationWhen Striim is used to work with change data from SQL/MP tables, embedded SQL code in SQMPCDCP is used to read the description of each of those tables from the SQL/MP catalog in which it is registered. Since similarity checking is not available for SQL/MP catalog tables, each of those embedded queries is automatically recompiled when it is run. There are two automatic recompilations done each time a different catalog is referenced. These automatic recompilations do not affect the performance of capturing the change data from the TMF audit trail and sending it to the Striim server. They occur only at the time the Striim application is started on the Striim server, when it sends the request to start capture of change data from particular tables on the NonStop system.The SQL/MP compiler reports each automatic recompilation in a message to the system log, and there is no way to turn off or redirect those messages, so at least two automatic recompilation messages will appear each time a Striim application is started. There could be more than two automatic recompilation messages if the Striim application requests change data from tables in more than one catalog. These messages can be ignored safely. They are expected during normal operation and do not indicate anything wrong in the SQL/MP change data capture process. However, if these messages interfere with your regular monitoring of the system log, you can reduce or eliminate them by SQL compiling the SQMPCDCP program to reference the tables in the SQL/MP catalog on your system used most frequently for tables that the Striim applications reference.If you decide to SQL compile the SQMPCDCP program, run the following commands while logged on as SUPER.SUPER:VOLUME \nADD DEFINE =WEBACT_COLUMNS, CLASS MAP, FILE .COLUMNS\nADD DEFINE =WEBACT_KEYS, CLASS MAP, FILE .KEYS\nADD DEFINE =WEBACT_PARTNS, CLASS MAP, FILE .PARTNS\nSQLCOMP / IN SQMPCDCP / CATALOG , &\n\u00a0 COMPILE PROGRAM STORE SIMILARITY INFO\nFUP LICENSE SQMPCDCPWhere: is the volume and subvolume in which you installed the Striim files is the volume and subvolume of the SQL/MP catalog most frequently used by the SQL/MP tables referenced by Striim applications. This is not necessarily the volume and subvolume in which the tables themselves reside. Use FUP INFO with the DETAIL option on a SQL/MP table to determine in which catalog it is registered.The system log messages that report the automatic recompilations do not give the name of the table referenced by the query that caused the recompilation. 
You will have to determine by other means which SQL/MP tables are being used by Striim applications, then check them to see which SQL/MP catalog is used most frequently. If you register all of your SQL/MP tables in the same catalog, you would not have to do any checking to see which catalog to use in the above commands.The above method will not eliminate all system log messages about automatic SQL compilations of the SQMPCDCP program unless all the SQL/MP tables used from your Striim applications are registered in the one SQL/MP catalog. However, if it is the case that every HpNonStopSQLMPReader Adapter in every Striim application references tables only from a single SQL/MP catalog, with some additional effort, you could eliminate all the system log messages.To do this, you would install and run the Striim Agent in multiple subvolumes on the NonStop system and use the above method to SQL compile SQMPCDCP in each of them to use a different SQL catalog on your system. Then, if you are careful to configure each HpNonStopSQLMPReader Adapter with the IP address of the Agent whose copy of SQMPCDCP was SQL compiled with the catalog used by the tables referenced in that HpNonStopSQLMPReader Adapter, this would eliminate all of the system log messages about automatic SQL compilations. This method would require that you have several Agents running rather than just one, but the number of processes running SQMPCDCP would be the same, since each HpNonStopSQLMPReader Adapter instance uses its own process running SQMPCDCP, whether they all use the same Agent or different Agents. It also requires extra effort when configuring Striim applications to use the correct IP address for the tables referenced in the HpNonStopSQLMPReader Adapters. You will have to decide whether eliminating the system log messages about automatic SQL compilation is worth the extra effort.Even if you install the Agent in multiple subvolumes and SQL compile the multiple copies of SQMPCDCP with different catalog tables, if any of the HpNonStopSQLMPReader Adapters contains a list of tables that are not all in the same SQL/MP catalog, you would still see some system log messages about automatic SQL compilation. This would not cause any malfunction of the Striim applications. It just would not eliminate all of the system log messages that you tried to eliminate.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
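If you adopt the multiple-Agent approach described above, the only change on the Striim side is to give each HpNonStopSQLMPReader the address and port of the Agent whose copy of SQMPCDCP was SQL compiled against the catalog that registers that reader's tables. The following TQL sketch illustrates the idea; the addresses, ports, process names, and table names are placeholders, not values taken from this documentation.

CREATE SOURCE CatalogOneSource USING HPNonStopSQLMPReader (
  AgentPortNo:4012,
  AgentIpAddress:'192.0.2.150',
  portno:4013,
  Name:'cl1',
  Tables:'$data06.appone.orders;$data06.appone.items'
) OUTPUT TO CatalogOneStream;

CREATE SOURCE CatalogTwoSource USING HPNonStopSQLMPReader (
  AgentPortNo:4022,
  AgentIpAddress:'192.0.2.151',
  portno:4014,
  Name:'cl2',
  Tables:'$data07.apptwo.customers'
) OUTPUT TO CatalogTwoStream;

Here 192.0.2.150 would be the Agent installation whose SQMPCDCP was compiled against the catalog used by the appone tables, and 192.0.2.151 the installation compiled against the catalog used by the apptwo tables.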
Last modified: 2018-03-28\n", "metadata": {"source": "https://www.striim.com/docs/en/system-log-messages-about-sql-mp-automatic-recompilation.html", "title": "System log messages about SQL/MP automatic recompilation", "language": "en"}} {"page_content": "\n\nHP NonStop reader propertiesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopHP NonStop reader propertiesPrevNextHP NonStop reader propertiesBefore you can use the HPNonStopEnscribeReader, HPNonStopSQLMPReader, or HPNonStopSQLMXReader, the Striim change data capture (CDC) processes must be installed on the host as detailed in Setting up HP NonStop with the Striim agent.To read from SQL/MP or SQL/MX using DatabaseReader deployed to a Forwarding Agent, the JDBC driver must be installed as described in Install the HP NonStop JDBC driver in a Forwarding Agent.propertytypedefault valuenotesAgent IP AddressStringIP address (or DNS hostname) of the HP NonStop system from which the adapter is to receive change data.Agent Port NoIntegerTCP port number to be used to communicate with the Agent process. This must match the port number given on the command line when starting the Agent (the program wagent).Audit TrailsStringmergedList of audit trail name abbreviations from which to read change records and transmit them in parallel TCP sessions to the Striim server. Audit trail name abbreviations are: \"MAT', 'AUX01', 'AUX02', ... , 'AUX15'. A value of 'parallel' means to read from all the configured audit trails in parallel. The default value of 'merged' means to merge the change records from all the audit trails into a single stream and send them in a single TCP session. The value is not case-sensitive.NOTE: If recovery is enabled (see\u00a0Recovering applications), either leave this blank or specify only a single audit trail.Block Sizeinteger64Amount of data in KB requested by each read operation when receiving change data from the CDC Process.CompressionBooleanFalseIf set to True, fields with unchanged values are omitted from output. See\u00a0HP NonStop reader WAEvent fields for details.Include SYSKEYBooleanfalseSQL/MP and SQL/MX only: set to true to treat the SYSKEY as if it were a user-defined primary key column. Its value is put into data[0] and before[0], and is NOT put in the ROWID of the metadata. This enables DatabaseWriter to replicate tables that contain no user-defined primary key columns. The target table must contain an extra column (that is not in the source table) to hold the SYSKEY values.IP AddressStringLeave blank unless instructed otherwise by Striim support.NameStringDistinguishes adapter instances in the Agent, and also used as the process name of the Guardian process that is started to collect the change data for this instance of the adapter (the CDC Process). Must be 1 to 3 letters or numbers, beginning with a letter. 
The Guardian process name is formed by adding \"$\" to the beginning of this name, and in some cases, one character to the end of this name.Port NoIntegerTCP port number on which the HP NonStop reader module running in the Striim server listens to get the change data from the HP NonStop system.Return DateTime AsStringJodaSet to\u00a0String to return timestamp values as strings rather than Joda timestamps. The primary purpose of this option is to avoid losing precision when microsecond timestamps are converted to Joda milliseconds. The format of the string is yyyy-mm-dd hh:mm:ss.ffffff.Start LSNStringTo start reading from a specific position, specify the value of the LSNValue field (including the final semicolon) from a WAEvent (see HP NonStop reader WAEvent fields).TablesStringList of file or table names, separated by semicolons, for which change data is requested. All the files or tables must exist at the time the application using this reader is started. See further description of the syntax below.Trim Guardian NamesBooleanFalseFor HPNonStopSQLMPReader and HPNonStopEnscribeReader only:If set to True, the table names in the Tables property may be specified as\u00a0..\u00a0instead of the usual\u00a0\u00a0\\.$...When\u00a0using the MAP function, the target tables must be specified in the shorter format.This property has no effect on the forms accepted for the part of an Enscribe name following the colon (:), which\u00a0specifies the location of the DDL dictionary and record name in that dictionary that describes the layout of the records in the Enscribe file. The dictionary location may not be shortened, even if the Enscribe file name is shortened.TrimGuardianNames has no effect on the forms accepted for the table or file names used in a file that is specified with the ALLOWED-TABLES param.The output type is WAEvent. See WAEvent contents for change data for more information.The format of the Tables property value depends on which type of database the adapter is accessing:Enscribe$volume.subvolume.file:$ddl-volume.ddl-subvolume.ddl-recordnamewildcard pattern allowed for $volume.subvolume.file; wildcards are * for any series of characters and ? for a single characterSQL/MP$volume.subvolume.tablewildcard pattern allowed;\u00a0wildcards are * for any series of characters and ? for a single characterSQL/MXcatalog.schema.tablewildcard pattern allowed;\u00a0wildcards are % for any series of characters and _ for a single character; % and _ are always wildcards, there is no way to escape them to represent literal % or\u00a0_.Note that when using DatabaseReader or DatabaseWriter, the wildcards for SQL/MP and SQL/MX are % and\u00a0_. (Enscribe files are not accessible using JDBC, so it is not supported as a DatabaseReader source or DatabaseWriter target.) Wildcards are not supported in ALLOWED-TABLES.For Enscribe files, the file name is followed by the DDL Dictionary name and record name. The file name is separated from the dictionary name by a colon (\":\"), and the dictionary name is separated from the record name by a period (\".\"). $ddl-volume.ddl-subvolume gives the location of an Enscribe DDL dictionary. The ddl-recordname may be a DDL RECORD or a DDL DEF in that dictionary that describes the layout of the records in the Enscribe file. The DDL record can be the same as is used to access the file with Enform or to create a record structure declaration for use when accessing the file in a programming language. 
The dictionary may be created by either DDL or DDL2.If the volume and/or subvolume part of a SQL/MP table name or Enscribe file name is omitted, the Guardian default volume and subvolume of the wagent process are used to fill in the missing parts, though it probably would be best not to rely on knowing the default volume and subvolume of the wagent process. The ddl-volume may be be omitted and the default taken from the wagent process, but, again, that probably is best avoided.For SQL/MP tables or Enscribe files on SMF Virtual Disks, the logical name of the table or file is what should be specified, not the physical name.These adapters use data from change data capture logs, so you will need to use the META() and IS_PRESENT() TQL functions in your queries. See WAEvent contents for change data for more information.The default value for AuditTrails should be used unless measurements show that there is a performance bottleneck in either reading the audit trails or in the TCP session from the HP NonStop system to the Striim server. Even then, the default value should be used unless the HP NonStop system actually has multiple audit trails configured, and the disks on which the tables or files listed in the Tables property reside send their change data to more than one of the audit trails.If parallel audit trail reading is specified with the AuditTrails property, the transmission of change records in the parallel TCP sessions is not synchronized, so a TQL application might receive changes for a given transaction after it receives the COMMIT or ROLLBACK record for that transaction. If the MAT is not included among the audit trail name abbreviations (MAT is included implicitly for 'merged' and 'parallel'), the TQL application will not receive any COMMIT or ROLLBACK records. The writer of the TQL application must keep those facts in mind when designing the application.For all three NonStop databases, numeric data with a nonzero scale is sent to Striim as a string consisting of the decimal digits with an explicit decimal point that expresses the value of the numeric item. If the numeric item is signed, negative values will also have a minus sign as the first character.For all three NonStop databases, data represented in TQL as DateTime is converted to GMT from the time zone in which it is stored in the database. Since the database does not include an indication of the time zone with the data, Striim assumes the data is stored in the local civil time of the time zone configured as the TIME_ZONE_OFFSET for the NonStop system. This assumption can be changed by using the WA-ASSUMED-TIMEZONE PARAM when starting the Striim Agent. You can make it use local standard time or GMT. This is described in Setting up HP NonStop with the Striim agent.For all three NonStop databases, the result of processing any of the unsupported data types is unpredictable.For all three NonStop databases, the contents of fields of single-byte characters, such as PIC X(n), CHAR(n), etc., are assumed to be encoded as UTF-8 unless the PARAM name WA-CHARSET-FOR-CHAR was specified when WAGENT was started to specify the encoding used for single-byte character fields. See\u00a0Encoding of character fields for more details.The contents of fields of double-byte characters, such as PIC N(n), NCHAR(n), etc., are interpreted according to the character set specified by the fields' declarations or the defaults configured for the database. 
If the PARAM named WA-CHARSET-FOR-NCHAR was specified when WAGENT was started, the encoding given by that PARAM is used for double-byte character fields, overriding any explicit or default specification in the declarations. See\u00a0Encoding of character fields more details.In all cases, when character fields are referenced in TQL, they are normal Java strings, encoded in Java's default character set.Examples:CREATE SOURCE SQLMPSource using HPNonStopEnscribeReader (\n AgentPortNo:4012,\n AgentIpAddress:'192.0.2.150',\n portno:4013,\n ipaddress:'192.0.2.151',\n Name:'ens',\n Tables:'test.es1:test.es1;test.es3:test.es3'\n) OUTPUT TO CDCStream;\n \nCREATE SOURCE SQLMPSource using HPNonStopSQLMPReader (\n AgentPortNo:4012,\n AgentIpAddress:'192.0.2.150',\n portno:4013,\n ipaddress:'192.0.2.151',\n Name:'lod',\n Tables:'$data06.test.esa;$data06.test.esb;$data06.test.esc'\n) OUTPUT TO CDCStream;\n\nCREATE SOURCE SQLMXSource using HPNonStopSQLMXReader (\n AgentPortNo:4012,\n AgentIpAddress:'192.0.2.150',\n portno:4013,\n ipaddress:'192.0.2.151',\n Name:'lod',\n Tables:'testcat.testsch.sqltest1'\n) OUTPUT TO CDCStream;In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-15\n", "metadata": {"source": "https://www.striim.com/docs/en/hp-nonstop-reader-properties.html", "title": "HP NonStop reader properties", "language": "en"}} {"page_content": "\n\nEncoding of character fieldsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopHP NonStop reader propertiesEncoding of character fieldsPrevNextEncoding of character fieldsThe encoding that is used for character fields in the database may be specified by using the PARAMs named WA-CHARSET-FOR-CHAR and WA-CHARSET-FOR-NCHAR. The value of WA-CHARSET-FOR-CHAR sets the encoding used to interpret values of single-byte character fields. The value of WA-CHARSET-FOR-NCHAR sets the encoding used to interpret values of double-byte character fields. These PARAMs are set before starting WAGENT. They are optional and may be be given separately or together. If these PARAMs are set, they control the interpretation of all character fields of all tables or files referenced by TQL applications that specify the WAGENT that received the PARAMs.The values specified for these PARAMs are not validated at the time WAGENT starts. If a value is not one of the ones given in the table below, this is detected at the time a TQL application is started that references a table or file that includes character fields of the type the PARAM controls.Any of the encoding names may be given as the value of either of the PARAMs. There is no attempt to restrict use of certain encodings to single-byte characters or double-byte characters.The values that are recognized for these PARAMs are given in the following table. 
The values are case-sensitive, so enter them exactly as they appear in this table.\n\neucJP\n\n\nextended unix code for Japanese\n\n\n\neucKR\n\n\nextended unix code for Korean\n\n\n\neucTW\n\n\nextended unix code for Taiwan\n\n\n\nISO8859-1\n\n\nLatin-1, Western European\n\n\n\nISO8859-2\n\n\nLatin-2, Central European\n\n\n\nISO8859-3\n\n\nLatin-3, South European\n\n\n\nISO8859-4\n\n\nLatin-4, North European\n\n\n\nISO8859-5\n\n\nLatin/Cyrillic\n\n\n\nISO8859-6\n\n\nLatin/Arabic\n\n\n\nISO8859-7\n\n\nLatin/Greek\n\n\n\nISO8859-8\n\n\nLatin/Hebrew\n\n\n\nISO8859-9\n\n\nLatin-5, Turkish\n\n\n\nSJIS\n\n\nShift JIS, a common encoding of Japanese Kanji characters\n\n\n\nUCS-2\n\n\nThe original 2-byte, big-endian Unicode encoding\n\n\n\nUTF-16\n\n\nThe current standard 2-byte, big-endian Unicode encoding\n\n\n\nUTF-8\n\n\nThe most common Unicode encoding\n\nIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-02-16\n", "metadata": {"source": "https://www.striim.com/docs/en/encoding-of-character-fields.html", "title": "Encoding of character fields", "language": "en"}} {"page_content": "\n\nHP NonStop reader WAEvent fieldsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopHP NonStop reader WAEvent fieldsPrevNextHP NonStop reader WAEvent fieldsThe output data type for HPNonStopEnscribeReader, HPNonStopSQLMPReader, and HPNonStopSQLMXReader is WAEvent. The fields are:metadata: To retrieve the values for these fields, use the META function.OperationName: ROLLBACK, COMMIT, INSERT, DELETE, or UPDATETimeStamp: date and time of the operation (Joda DateTime, milliseconds since 1970-1-1 GMT)TxnID: transaction IDTableName (returned only for INSERT, DELETE, and UPDATE operations): fully qualified name of the tableAuditTrailName: logical name of the TMF audit trail from which the record came (either 'MAT' or 'AUX01' through\u00a0 'AUX15') or, if the audit trails are not being read separately in parallel, 'MERGE'.LSNValue: The TMF audit trail position of the event. This may be used to start from a specific position by specifying it as the StartLSN value in the reader (see\u00a0HP NonStop reader properties).PK_UPDATE: Always false.Rollback: 1\u00a0if the record was generated as a result of the rollback of the associated transaction, otherwise 0ROWID (returned only for INSERT, DELETE, and UPDATE operations): the record address, record number, or SYSKEY, or null if the table or file has none of those items. This property is meaningful only for tables or files with a system-generated key, such as entry-sequenced files/tables, relative files/tables, and key-sequenced tables with no user-defined primary key. 
It is present but null for key-sequenced files and key-sequenced tables that have a user-defined primary key.\u00a0If the IncludeSYSKEY property is true, the SYSKEY gets put into data[0] (and before[0]) and is not put into ROWID.TxnSystemName: the name of the NonStop system on which the current transaction started. (Useful when converting TxnID to HP's normal human-readable format for transaction IDs.)To retrieve the values for these fields, use the META function. See Parsing the fields of WAEvent for CDC readers. For\u00a0TxnSystemName, you may use the\u00a0NSK_TXN_STRING function to convert its value to a human-readable format. For example:CREATE CQ tst54cq\nINSERT INTO tst54cqstream\nSELECT meta(s,\"OperationName\").toString(),\n meta(s,\"TxnID\").toString(),\n meta(s,\"TxnSystemName\").toString(),\n NSK_TXN_STRING(meta(s,\"TxnID\").toString(),meta(s,\"TxnSystemName\").toString()),\n CASE WHEN IS_PRESENT(s,data,0) = true\n THEN TO_STRING(data0)\n ELSE 'nothing' END,\n CASE WHEN IS_PRESENTis_present(s,data,1) = true\n THEN TO_STRING(data1)\n ELSE 'nothing' END\nFROM tst54stream s;data: an array of fields, numbered from 0, containing:for an INSERT operation, the values that were inserted.for an UPDATE, the values after the operation was completed;\u00a0if the HP NonStop reader's compression property is True (see\u00a0HP NonStop reader properties), only the modified values.If TMF audit compression is specified for a table and the HP NonStop reader's Compression property is false, then the values of columns that were changed will be included and the values of some of the columns that were not changed might also be included (whichever makes the TMF audit trail record shorter). If TMF audit compression is specified for an Enscribe file, no change records are created for UPDATE operations.for a DELETE operation, the values that were deleted; if the HP NonStop reader's compression property is True (see\u00a0HP NonStop reader properties),\u00a0contains only the value of the primary key column (unless there are no user-defined key columns, in which case all column values are included)To retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.For Enscribe files, the entries in the data[] array that correspond to DDL fields that are beyond the end of the current record are omitted. The IS_PRESENT() function can be used to determine whether a field's value is included or omitted if it is not possible to determine that from the value of a field that is always present that gives the record type of the current record.The value of fields of character type that start before the end of the current record, but whose declared length extends beyond the end of the current record, are included in the data[] array, but only the characters up to the end of the current record are used as the value of that field. IS_PRESENT() returns true for such fields. 
The value of fields of any other data type that start before the end of the current record, but whose declared length makes them extend beyond the end of the current record, are omitted from the data[] array, and IS_PRESENT() returns false for them.before (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operation.If the HP NonStop reader's compression property is True (see\u00a0HP NonStop reader properties), before contains only the value of the primary key columns.If the HP NonStop reader's compression property is False, and TMF audit compression is enabled for a file,\u00a0before\u00a0contains the values of the primary key columns and the values before the update of any column whose value was changed, but generally the values of columns whose values were not changed generally are omitted (though some of them might be included).If a table has no user-defined primary key columns,\u00a0before\u00a0contains the values of all columns before the update was done, except if TMF audit compression is enabled for the table, the values of columns that were not changed usually will be omitted.For Enscribe files, the entries in the before[] array that correspond to DDL fields that are beyond the end of the current record are handled the same as was described above for the data[] array. Note that for Enscribe UPDATE operations, a record may have a different length after the operation than it had before.dataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.For SQL/MP and SQL/MX tables that contain a SYSKEY column, the SYSKEY column value is not included in data or before. It is put only into the ROWID part of metadata, unless includeSYSKEY is specified to be true, in whcih case the SYSKEY is put into data[0] (and before[0]) and is not put into ROWID.For SQL/MP and SQL/MX tables that have the TMF auditcompress attribute set, the change records for updates are guaranteed to contain values only for the columns actually changed and for key columns. Other columns will have null in their spots in data and before. Sometimes the values for other columns will be included if it makes logging the change more efficient, but that cannot be relied upon. If you want update operations to show the values of all the columns, be sure the auditcompress attribute is not set for the tables in question. This affects only updates. Inserts and deletes always show the values of all the columns.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
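As a worked example of using these metadata fields, the following CQ sketch passes along only UPDATE records that were not generated by a transaction rollback. The stream names are hypothetical, and the META calls follow the pattern shown in the earlier example; using META in a WHERE clause this way is an assumption based on that pattern.

CREATE CQ FilterRollbackUpdates
INSERT INTO CleanUpdateStream
SELECT *
FROM CDCStream s
WHERE meta(s,"OperationName").toString() = 'UPDATE'
  AND meta(s,"Rollback").toString() = '0';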
Last modified: 2020-10-19\n", "metadata": {"source": "https://www.striim.com/docs/en/hp-nonstop-reader-waevent-fields.html", "title": "HP NonStop reader WAEvent fields", "language": "en"}} {"page_content": "\n\nFunctions for HP NonStop transaction IDsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopFunctions for HP NonStop transaction IDsPrevNextFunctions for HP NonStop transaction IDsfunctiondescriptionnotesNSK_CNVT_TXNID_TO_UNSIGNED(txnid) returns the unsigned representation of a NonStop system transaction ID While Striim always represents a NonStop system transaction ID as an unsigned value, other software, such as Golden Gate, can represent NonStop system transaction IDs as signed values. When the NonStop system number is larger than 126, representing it as signed yields a negative value, which will not be the same as the unsigned value of the same transaction ID. This function converts a negative transaction ID's value to the equivalent positive value, and leaves a positive transaction ID's value unchanged.It is not necessary to use NSK_CNVT_TXNID_TO_UNSIGNED to convert transaction IDs that will be passed to either NSK_TXNS_ARE_SAME or NSK_TXN_STRING, since they already do the conversion to unsigned. NSK_CNVT_TXNID_TO_UNSIGNED is provided in case you have some other reason to make sure a transaction ID is represented as unsigned. NSK_TXN_STRING(txnid,systemname) returns HPE's human-readable form for a transaction ID. txnid is the value of the TxnID field of a WAEvent, and systemname is the TxnSystemName field of the same WAEvent. This function is useful only if you want to display NonStop system transaction IDs in the same format that HPE's software displays them. That could be helpful if you have to investigate the history of the transaction on the NonStop system. See\u00a0HP NonStop reader WAEvent fields for an example.NSK_TXNS_ARE_SAME(txnid1,txnid2) returns true if the two string arguments identify the same transaction, regardless of differences in how the transaction IDs are represented Can be used for any comparison of transaction IDs from NonStop systems, but really only needed when comparing transaction IDs from NonStop systems that came to the TQL application via different software products. For example, one transaction ID came via an HPNonStopSQLMXReader adapter and the other came from Golden Gate's Extract, and then only if some of the NonStop systems have system numbers greater than 126. Striim Source adapters always interpret transaction IDs from NonStop systems as unsigned 64-bit values, while other software might interpret the transaction IDs as signed 64-bit values, giving different representations of the same transaction ID.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
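The two conversion-related functions can be combined in one CQ, for example to keep both a normalized transaction ID and HPE's display form alongside the change data. In this sketch the stream names are hypothetical, and it assumes NSK_CNVT_TXNID_TO_UNSIGNED accepts the same string form of TxnID that NSK_TXN_STRING does.

CREATE CQ TxnIdNormalize
INSERT INTO TxnIdStream
SELECT NSK_CNVT_TXNID_TO_UNSIGNED(meta(s,"TxnID").toString()),
  NSK_TXN_STRING(meta(s,"TxnID").toString(), meta(s,"TxnSystemName").toString())
FROM CDCStream s;

The first column is useful when the transaction ID will be stored or compared outside of NSK_TXNS_ARE_SAME; the second is the human-readable form for display or for correlating with activity on the NonStop system.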
Last modified: 2018-06-01\n", "metadata": {"source": "https://www.striim.com/docs/en/functions-for-hp-nonstop-transaction-ids.html", "title": "Functions for HP NonStop transaction IDs", "language": "en"}} {"page_content": "\n\nHPNonStopEnscribeReader data type support and correspondenceSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopHPNonStopEnscribeReader data type support and correspondencePrevNextHPNonStopEnscribeReader data type support and correspondenceEnscribe typeTQL typePIC X(n)stringPIC N(n)stringPIC 9(1-4) COMPintPIC S9(1-4) COMPshortPIC 9(5-9) COMPlongPIC S9(5-9) COMPintPIC 9(10-18) COMPunsupportedPIC S9(10-18) COMPlongPIC 9(n)V9(s) COMPstringPIC S9(n)V9(s) COMPstringPIC 9(1-3) COMP-3intPIC 9(4-9) COMP-3longPIC 9(10-18) COMP-3longPIC S9(1-3) COMP-3shortPIC S9(4-9) COMP-3intPIC S9(10-18) COMP-3longPIC 9(n)V9(s) COMP-3stringPIC S9(n)V9(s) COMP-3stringPIC 9(n)longPIC TlongPIC T9(n)longPIC 9(n)V9(s)stringPIC T9(n)V9(s)stringPIC 9(n)TlongPIC 9(n)V9(s)TstringPIC S9(n)longPIC S9(n)V9(s)stringPIC 9(n)SlongPIC 9(n)V9(s)SstringTYPE CHARACTER nstringTYPE BINARY 8 UNSIGNEDintTYPE BINARY 8shortTYPE BINARY 16,0 UNSIGNEDintTYPE BINARY 16,0shortTYPE BINARY 32,0 UNSIGNEDlongTYPE BINARY 32,0intTYPE BINARY 64,0 UNSIGNEDunsupportedTYPE BINARY 64,0longTYPE BINARY n,s UNSIGNED n = 16, 32 s > 0stringTYPE BINARY n,s n = 16, 32, 64 s > 0stringTYPE FLOAT 32doubleTYPE FLOAT 64doubleTYPE COMPLEXtwo doublesTYPE LOGICAL 1intTYPE LOGICAL 2shortTYPE LOGICAL 4intTYPE ENUMshortTYPE BIT nshortTYPE BIT n UNSIGNEDintTYPE SQL VARCHARstringTYPE SQL DATEstringTYPE SQL TIMEstringTYPE SQL TIMESTAMPstringTYPE SQL DATETIMEstringTYPE SQL INTERVALstringAny Enscribe field that has the SQLNULLABLE attribute in its DDL description is represented in the database as two fields: A two-byte null indicator field, followed by the actual data field. Striim interprets the null indicator field to determine whether the data field has a valid value or is null, so such fields appear as a single field in the WAEvent. Note that the SQLNULLABLE attribute is different than the NULL attribute that DDL also supports. Striim ignores the NULL attribute.An Enscribe field of type COMPLEX is represented as two 32-bit floating point values in the database. Striim puts two double values into the WAEvent for each type COMPLEX field \u2013 first the real part then the imaginary part. Striim creates field names for those two fields by appending \"_R\" and \"_I\" to the name declared for the COMPLEX field in the DDL.An Enscribe field of type SQL VARCHAR is represented in the database as two fields: A two-byte length field, followed by a character field of the maximum possible length, as given in the SQL VARCHAR declaration. Striim puts just a single string into the WAEvent for the SQL VARCHAR field. 
This field contains the number of characters from the second field that are indicated by the first field.Enscribe field types SQL DATE, SQL TIME, SQL TIMESTAMP, SQL DATETIME and SQL INTERVAL are synonyms for fixed-length character fields of the appropriate length for the specific date-time or interval type declared. For example TYPE DATETIME YEAR TO DAY declares a 10-character field whose values are expected to represent a date in the form such as \"2015-05-20\". Striim does not interpret the values of SQL DATETIME or SQL INTERVAL fields, but simply puts them into the WAEvent as strings. The exact length of each of SQL DATETIME and SQL INTERVAL type is documented in Table D-12 and Table D-13 in HP's \"Data Definition Language (DDL) Reference Manual\", in the chapter \"Dictionary Database Structure\", at the end of the description of the DICTOBL file. The types SQL DATE, SQL TIME, and SQL TIMESTAMP are missing from those tables. They are shorthand notation for:SQL DATE = SQL DATETIME YEAR TO DAYSQL TIME = SQL DATETIME HOUR TO SECONDSQL TIMESTAMP = SQL DATETIME YEAR TO FRACTIONIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2019-10-31\n", "metadata": {"source": "https://www.striim.com/docs/en/hpnonstopenscribereader-data-type-support-and-correspondence.html", "title": "HPNonStopEnscribeReader data type support and correspondence", "language": "en"}} {"page_content": "\n\nHPNonStopSQLMPReader data type support and correspondenceSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopHPNonStopSQLMPReader data type support and correspondencePrevNextHPNonStopSQLMPReader data type support and correspondenceSQL/MP typeTQL typeCHAR(n)stringCHAR VARYING(n)stringVARCHAR(n)stringNATIONAL CHARACTER(n)stringNCHAR(n)stringNCHAR VARYING(n)stringSMALLINT UNSIGNEDintSMALLINT SIGNEDshortINTEGER UNSIGNEDlongINTEGER SIGNEDintLARGEINTlongREALdoubleDOUBLE PRECISIONdoubleFLOAT(n)doubleDATEDateTimeTIMEstringTIMESTAMPDateTimeDATETIME YEAR TO xDateTimeDATETIME x TO y x \u2260 YEAR y = anystringINTERVALstringNUMERIC(1-4,0) UNSIGNEDintNUMERIC(1-4,0) SIGNEDshortNUMERIC(5-9,0) UNSIGNEDlongNUMERIC(5-9,0) SIGNEDintNUMERIC(10-18,0) SIGNEDlongNUMERIC(n,s) UNSIGNED n \u2264 9 s > 0stringNUMERIC(n,s) SIGNED s > 0stringDECIMAL(n,0) UNSIGNEDlongDECIMAL(n,0) SIGNEDlongDECIMAL(n,s) UNSIGNED s > 0stringDECIMAL(n,s) SIGNED s > 0stringPIC X(n)stringPIC 9(1-4) COMPintPIC S9(1-4) COMPshortPIC 9(5-9) COMPlongPIC S9(5-9) COMPintPIC S9(10-18) COMPlongPIC 9(n)V9(s) COMPstringPIC S9(n)V9(s) COMPstringPIC 9(n)longPIC S9(n)longPIC 9(n)V9(s)stringPIC S9(n)V9(s)stringIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
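One practical consequence of these correspondences is that scaled numerics (for example NUMERIC(18,2) or PIC S9(16)V9(2) COMP) reach TQL as strings containing an explicit decimal point, so a downstream CQ typically maps them into a typed stream as String fields and converts them only where arithmetic is actually needed. The following sketch assumes a two-column source table and reuses the CDCStream name from the earlier examples; the type, stream, and column names are hypothetical.

CREATE TYPE AcctRowType (
  ACCT_NO String,
  BALANCE String
);
CREATE STREAM AcctRows OF AcctRowType;

CREATE CQ MapAcctRows
INSERT INTO AcctRows
SELECT TO_STRING(data[0]),
  TO_STRING(data[1])
FROM CDCStream s;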
Last modified: 2019-10-31\n", "metadata": {"source": "https://www.striim.com/docs/en/hpnonstopsqlmpreader-data-type-support-and-correspondence.html", "title": "HPNonStopSQLMPReader data type support and correspondence", "language": "en"}} {"page_content": "\n\nHPNonStopSQLMXReader data type support and correspondenceSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)HP NonStopHPNonStopSQLMXReader data type support and correspondencePrevNextHPNonStopSQLMXReader data type support and correspondence SQL/MX type TQL typePIC X(n)stringCHAR(n)stringCHAR VARYING(n)stringVARCHAR(n)stringNATIONAL CHARACTER(n)stringNCHAR(n)stringNCHAR VARYING(n)stringSMALLINT UNSIGNEDintSMALLINT SIGNEDshortINTEGER UNSIGNEDlongINTEGER SIGNEDintLARGEINTlongNUMBER(1-4,0) UNSIGNEDintNUMBER(1-4,0) SIGNEDshortNUMBER(5-9,0) UNSIGNEDlongNUMBER(5-9,0) SIGNEDintNUMBER(10-128,0) UNSIGNEDstringNUMBER(10-18,0) SIGNEDlongNUMBER(19-128,0) SIGNEDstringNUMBER(1-9,s) UNSIGNED s > 0stringNUMBER(10-128,s) UNSIGNEDstringNUMBER(1-18,s) SIGNED s > 0stringNUMBER(19-128,s) SIGNEDstringNUMERIC(1-4,0) UNSIGNEDintNUMERIC(1-4,0) SIGNEDshortNUMERIC(5-9,0) UNSIGNEDlongNUMERIC(5-9,0) SIGNEDintNUMERIC(10-128,0) UNSIGNEDstringNUMERIC(10-18,0) SIGNEDlongNUMERIC(19-128,0) SIGNEDstringNUMERIC(1-9,s) UNSIGNED s > 0stringNUMERIC(10-128,s) UNSIGNEDstringNUMERIC(1-18,s) SIGNED s > 0stringNUMERIC(19-128,s) SIGNEDstringDECIMAL(n,0) UNSIGNEDlongDECIMAL(n,0) SIGNEDlongDECIMAL(n,s) UNSIGNED s > 0stringDECIMAL(n,s) SIGNED s > 0stringREALfloatDOUBLE PRECISIONdoubleFLOAT(n)doubleDATEDateTimeTIME(n)stringTIMESTAMP(n)DateTimeINTERVALstringPIC 9(1-4) COMPintPIC S9(1-4) COMPshortPIC 9(5-9) COMPlongPIC S9(5-9) COMPintPIC S9(10-18) COMPlongPIC 9(n)V9(s) COMPstringPIC S9(n)V9(s) COMPstringPIC 9(n)longPIC S9(n)longPIC 9(n)V9(s)stringPIC S9(n)V9(s)stringIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
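Recall from HP NonStop reader properties that SQL/MX names in the Tables property use % and _ as wildcards. A minimal sketch of reading every table with a given prefix, modeled on the earlier HPNonStopSQLMXReader example (catalog, schema, and other property values are placeholders):

CREATE SOURCE SQLMXWildcardSource USING HPNonStopSQLMXReader (
  AgentPortNo:4012,
  AgentIpAddress:'192.0.2.150',
  portno:4013,
  Name:'mxw',
  Tables:'testcat.testsch.ORD%'
) OUTPUT TO SQLMXCDCStream;

Because _ is always treated as a single-character wildcard and cannot be escaped, a name such as ORDER_LINES will also match other table names that differ only in that position; keep this in mind when tables with underscores share a common prefix.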
Last modified: 2019-10-31\n", "metadata": {"source": "https://www.striim.com/docs/en/hpnonstopsqlmxreader-data-type-support-and-correspondence.html", "title": "HPNonStopSQLMXReader data type support and correspondence", "language": "en"}} {"page_content": "\n\nMariaDB / SkySQLSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)MariaDB / SkySQLPrevNextMariaDB / SkySQLStriim supports:MariaDB and MariaDB Galera Cluster versions compatible with MySQL 5.5 and later (using MariaDB Reader)MariaDB Xpand and SkySQL versions 5.3.x and 6.0.x (using MariaDB Xpand Reader)MariaDB setupTo use MariaDBReader, an administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:CREATE USER 'striim' IDENTIFIED BY '******';\nGRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';\nGRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';\nGRANT SELECT ON *.* TO 'striim'@'%';The caching_sha2_password authentication plugin is not supported in this release. The mysql_native_password plugin is required.The REPLICATION privileges must be granted on *.*. This is a limitation of MariaDB.You may use any other valid name in place of striim. Note that by default MariaDB does not allow remote logins by root.Replace ****** with a secure password.You may narrow the SELECT statement to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.On-premise MariaDB setupSee Activating the Binary Log.On-premise MariaDB setup with multi-source replicationIf you are using multi-source replication, use MySQL Reader rather than MariaDB Reader. 
For more information, see Multi-Source Replication, Multi-source replication in MariaDB 10.0, Multisource Replication: How to resolve the schema name conflicts, and High Availability with Multi-Source Replication in MariaDB Server.MariaDB Galera Cluster setupThe following properties must be set on each server in the cluster:binlog_format=ROWlog_bin=ONlog_slave_updates=ONServer_id: see server_idwsrep_gtid_mode=ONAmazon RDS for MariaDB setupCreate a new parameter group for the database (see Creating a DB Parameter Group).Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).Reboot the database instance (see Rebooting a DB Instance).In a database client, enter the following command to set the binlog retention period to one week:call mysql.rds_set_configuration('binlog retention hours', 168);SkySQL setup in Striim CloudSet binlog_format to row both in the global configuration and for each individual binlog to be read.Download the SSL root certificate .pem file for your SkySQL instance (see Connection Parameters in the SkySQL documentation).Create a truststore using a Java keytool (see keytool), using the path and filename of your downloaded certificate:keytool -importcert -alias MySQLCACert -keystore truststore -file //.pem -storepass striimUpload the generated truststore to Striim Cloud as described in Manage Striim - Files.The connection URL in MariaDB Xpand Reader will look something like this (but all on one line without spaces):jdbc:mariadb://://?\n useSSL=true&\n requireSSL=true&\n sslMode=VERIFY_CA&\n verifyServerCertificate=true&\n trustCertificateKeyStoreUrl=file:///opt/striim/UploadedFiles/truststore&\n trustCertificateKeyStorePassword=&\n trustCertificateKeyStoreType=jksMariaDB Reader and MariaDB Xpand Reader propertiesThese two readers are identical except as noted below.When one of these readers is deployed to a Forwarding Agent, you must install the appropriate JDBC driver as described in Installing third-party drivers in the Forwarding Agent.Striim provides templates for creating applications that read from MariaDB and write to various targets. See\u00a0Creating an application using a template for details.The adapter properties are:propertytypedefault valuenotesBidirectional Marker TableStringWhen performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive.CDDL ActionenumProcessMariaDB Xpand Reader only: see Handling schema evolution.CDDL CaptureBooleanFalseMariaDB Xpand Reader only: see Handling schema evolution.Cluster SupportStringMariaDBReader only: set to Galera when reading from a MariaDB Galera Cluster.CompressionBooleanFalseSet to True when the output of this reader is the input of a\u00a0Cassandra Writer target.When replicating data from one MariaDB instance to another, when a table contains a column of type FLOAT, updates and deletes may fail with messages in the log including \"Could not find appropriate handler for SqlType.\" Setting Compression to True may resolve this issue. If the table's primary key is of type FLOAT, to resolve the issue you may need to change the primary key column type in MySQL.Connection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). 
If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Connection URLStringWhen reading from MariaDB, jdbc:mariadb:// followed by the MariaDB server's IP address or network name, optionally a colon and the port number (if not specified, port 3306 is used).Only one MariaDB Reader at a time can connect to a MariaDB database instance. Thus a single Striim application may not contain more than one MariaDB Reader. If a second Striim application attempts to connect to a MariaDB database instance that is already connected to a MariaDB Reader, the first application will halt.When reading from a MariaDB Galera Cluster, specify the IP address and port for each server in the cluster, separated by commas: jdbc:mariadb://:,:,....When reading from SkySQL using MariaDB Xpand reader, specify the following, all on one line:jdbc:mariadb://:/?\nuseSSL=true&requireSSL=true&\nsslMode=VERIFY_CA& \nverifyServerCertificate=true&\ntrustCertificateKeyStoreUrl=file:///opt/striim/UploadedFiles/truststore&\ntrustCertificateKeyStorePassword=&\ntrustCertificateKeyStoreType=jksExcluded TablesStringChange data for any tables specified here will not be returned. For example, if Tables uses a wildcard, data from any tables specified here will be omitted. Multiple table names and wildcards may be used as for Tables.Filter Transaction BoundariesBooleanTrueWith the default value of True, begin and commit transactions are filtered out. Set to False to include begin and commit transactions.Passwordencrypted passwordthe password specified for the username (see Encrypted passwords)Replace Invalid DateStringMariaDB Xpand Reader only: if the database contains \"zero\" dates (0000-00-00 00:00:00 or 0000-00-00), specify a replacement date in the format YYYY-MMM-dd HH:mm:ss (see Joda-Time > Pattern-based formatting).Send Before ImageBooleanTrueset to False to omit before data from outputStart PositionStringWith the default value of null, reading starts with transactions that are committed after the Striim application is started. To start from an earlier point, specify the name of the file and the offset for the start position, for example, FileName:clustrix-bin.000001;offset:720.To start from an earlier point, specify a Global Transaction ID (GTID) in the format GTID: #-#-#, replacing #-#-# with the last GTID before the point where you want to start. Reading will start with the next valid GTID.If you are using schema evolution (see Handling schema evolution, set a Start Position only if you are sure that there have been no DDL changes after that point.Handling schema evolutionIf your environment has multiple binlog files, specify the name of the one to use, for example, FileName:clustrix-bin.000001.When the application is recovered after a system failure, it will automatically resume from the point where it left off.See also Switching from initial load to continuous replication.Start TimestampStringnullMariaDB Xpand Reader only: With the default value of null, only new (based on current system time) transactions are read. If a timestamp is specified, transactions that began after that time are also read. The format is YYYY-MMM-DD HH:MM:SS. 
For example, to start at 5:00 pm on February 1, 2020, specify 2020-FEB-01 17:00:00.If you are using schema evolution (see Handling schema evolution, set a Start Timestamp only if you are sure that there have been no DDL changes after that point.Handling schema evolutionWhen the application is recovered after a system failure, it will automatically resume from the point where it left off.See also Switching from initial load to continuous replication.TablesStringThe table(s) for which to return change data in the format .
. Names are case-sensitive. You may specify multiple tables as a list separated by semicolons or with the following wildcards in the table name only (not in the database name):%: any series of characters_: any single characterFor example, my.% would include all tables in the my database.The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.If any specified tables are missing Striim will issue a warning. If none of the specified tables exists, start will fail with a \"found no tables\" error.UsernameStringthe login name for the user created as described in MariaDB setupMariaDB Reader WAEvent fieldsMariaDB Reader WAEvent fields are the same as as MySQL Reader WAEvent fields.MySQL Reader WAEvent fieldsMariaDB Reader simple applicationThe following application will write change data for the specified table to SysOut. Replace wauser and ****** with the user name and password for the MariaDB account you created for use by MariaDB Reader (see MariaDB setup) and mydb and mytable with the names of the database and table(s) to be read.CREATE APPLICATION MariaDBTest;\n\nCREATE SOURCE MariaDBCDCIn USING MariaDBReader (\n Username:'striim',\n Password:'******',\n ConnectionURL:'jdbc:mariadb://192.168.1.10:3306',\n Database:'mydb',\n Tables:'mytable'\n) \nOUTPUT TO MariaDBCDCStream;\n\nCREATE TARGET MariaDBCDCOut\nUSING SysOut(name:MariaDBCDC)\nINPUT FROM MariaDBCDCStream;\n\nEND APPLICATION MariaDBTest;For MariaDB Galera Cluster, the connection URL would specify all nodes in the cluster.CREATE SOURCE MySQLCDCIn USING MariaDBReader (\n Username:'striim',\n Password:'******',\n ClusterSupport: 'Galera'\n ConnectionURL:'mysql://192.168.1.10:3306,192.168.1.11:3306,192.168.1.12:3306',\n Database:'mydb',\n Tables:'mytable'\n) \nOUTPUT TO MySQLCDCStream;MariaDB Reader example outputOutput is identical to that from MySQL Reader (see MySQLReader example output).MariaDB Reader data type support and correspondenceData type support and correspondence are identical to those for MySQL Reader (see MySQL Reader data type support and correspondence).Runtime considerations when using MariaDB ReaderThe default value of MariaDB's wait_timeout is 28800 seconds (eight hours). Reducing this to 300 seconds (five minutes) can resolve a variety of errors such as \"connect timed out\" or \"unexpected end of stream.\" See wait_timeout for more information.Only one MariaDB Reader at a time can connect to a MariaDB database instance. Thus a single Striim application may not contain more than one MariaDB Reader. If a second Striim application attempts to connect to a MariaDB database instance that is already connected to a MariaDB Reader, the first application will halt.Runtime considerations when using MariaDB Xpand ReaderThe default value of MariaDB's wait_timeout is 28800 seconds (eight hours). 
Reducing this to 300 seconds (five minutes) can resolve a variety of errors such as \"connect timed out\" or \"unexpected end of stream.\" See wait_timeout for more information.In this section: MariaDB / SkySQLMariaDB setupOn-premise MariaDB setupOn-premise MariaDB setup with multi-source replicationMariaDB Galera Cluster setupAmazon RDS for MariaDB setupSkySQL setup in Striim CloudMariaDB Reader and MariaDB Xpand Reader propertiesMariaDB Reader WAEvent fieldsMariaDB Reader simple applicationMariaDB Reader example outputMariaDB Reader data type support and correspondenceRuntime considerations when using MariaDB ReaderRuntime considerations when using MariaDB Xpand ReaderSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-23\n", "metadata": {"source": "https://www.striim.com/docs/en/mariadb---skysql.html", "title": "MariaDB / SkySQL", "language": "en"}} {"page_content": "\n\nMySQLSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)MySQLPrevNextMySQLStriim supports MySQL versions 5.5 and later.Striim provides templates for creating applications that read from MySQL and write to various targets. See\u00a0Creating an application using a template for details.MySQL setupTo use MySQLReader or MariaDBReader, an administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:CREATE USER 'striim' IDENTIFIED BY '******';\nGRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';\nGRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';\nGRANT SELECT ON *.* TO 'striim'@'%';The caching_sha2_password authentication plugin is not supported in this release. The mysql_native_password plugin is required.The REPLICATION privileges must be granted on *.*. This is a limitation of MySQL.You may use any other valid name in place of striim. Note that by default MySQL does not allow remote logins by root.Replace ****** with a secure password.You may narrow the SELECT statement to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.On-premise MySQL setupMySQLReader reads from the MySQL binary log. 
If your MySQL server is using replication, the binary log is enabled, otherwise it may be disabled.For on-premise MySQL, the property name for enabling the binary log, whether it is one or off by default, and how and where you change that setting vary depending on the operating system and your MySQL configuration, so for instructions see the binary log documentation for the version of MySQL you are running.If the binary log is not enabled, Striim's attempts to read it will fail with errors such as the following:Summary of the problem : Invalid binlog related database configuration\nPotential root cause : The following global variables does not contain\n required configuration or it cannot be found:\n log_bin,server_id,binlog_format,binlog_row_image.\nSuggested Actions: 1.Add --log_bin to the mysqld command line or add\n log_bin to your my.cnf file..\n2.Add --server-id=n where n is a positive number to the mysqld command\n lineor add server-id=n to your my.cnf file..\n3. Add --binlog-format=ROW to the mysqld command line or add \n binlog-format=ROW to your my.cnf file..\n4.Add --binlog_row_image=FULL to the mysqld command line or add\n binlog_row_image=FULL to your my.cnf file..\nComponent Name: MySQLSource.\nComponent Type: SOURCE.\nCause: Problem with configuration of MySQL\nbinlog_format should be ROW.\nbinlog_row_image should be FULL.\nThe server_id must be specified.\nlog_bin is not enabled.On-premise MariaDB Xpand setupSee Configure MariaDB Xpand as a Replication Master. Set the global variables binlog_format to row and sql_log_bin to true.Amazon Aurora for MySQL setupSee How do I enable binary logging for my Amazon Aurora MySQL cluster?.Amazon RDS for MySQL setupCreate a new parameter group for the database (see Creating a DB Parameter Group).Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).Reboot the database instance (see Rebooting a DB Instance).In a database client, enter the following command to set the binlog retention period to one week:call mysql.rds_set_configuration('binlog retention hours', 168);Azure Database for MySQL setupYou must create a read replica to enable binary logging. See Read replicas in Azure Database for MySQL.Google Cloud SQL for MySQL setupYou must create a read replica to enable binary logging. See Cloud SQL> Documentation> MySQL> Guides > Create read replicas.MySQL Reader propertiesBefore using one of these readers, the tasks described in MySQL setup must be completed.MySQL / MariaDB setupWhen this reader is deployed to a Forwarding Agent, you must install the appropriate JDBC driver as described in Installing third-party drivers in the Forwarding Agent.Striim provides templates for creating applications that read from MySQL and write to various targets. See\u00a0Creating an application using a template for details.The adapter properties are:propertytypedefault valuenotesBidirectional Marker TableStringWhen performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). 
This setting is case-sensitive.CDDL ActionenumProcesssee Handling schema evolution.CDDL CaptureBooleanFalsesee Handling schema evolution.CompressionBooleanFalseSet to True when the output of a MySQLReader source is the input of a\u00a0Cassandra Writer target.When replicating data from one MySQL instance to another, when a table contains a column of type FLOAT, updates and deletes may fail with messages in the log including \"Could not find appropriate handler for SqlType.\" Setting Compression to True may resolve this issue. If the table's primary key is of type FLOAT, to resolve the issue you may need to change the primary key column type in MySQL.Connection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Connection URLStringWhen reading from MySQL, mysql:// followed by the MySQL server's IP address or network name, optionally a colon and the port number (if not specified, port 3306 is used).To use an Azure private endpoint to connect to Azure Database for MySQL, see Specifying Azure private endpoints in sources and targets.Excluded TablesStringChange data for any tables specified here will not be returned. For example, if Tables uses a wildcard, data from any tables specified here will be omitted. Multiple table names and wildcards may be used as for Tables.Filter Transaction BoundariesBooleanTrueWith the default value of True, begin and commit transactions are filtered out. Set to False to include begin and commit transactions.Passwordencrypted passwordthe password specified for the username (see Encrypted passwords)Replace Invalid DateStringMySQL Reader and MariaDB Xpand Reader only: if the database contains \"zero\" dates (0000-00-00 00:00:00 or 0000-00-00), specify a replacement date in the format YYYY-MMM-dd HH:mm:ss (see Joda-Time > Pattern-based formatting).Send Before ImageBooleanTrueset to False to omit before data from outputStart PositionStringWith the default value of null, reading starts with transactions that are committed after the Striim application is started. To start from an earlier point, specify the name of the file and the offset for the start position, for example, FileName:clustrix-bin.000001;offset:720.If your environment has multiple binlog files, specify the name of the one to use, for example, FileName:clustrix-bin.000001.If you are using schema evolution (see Handling schema evolution, set a Start Position only if you are sure that there have been no DDL changes after that point.Handling schema evolutionWhen the application is recovered after a system failure, it will automatically resume from the point where it left off.See also Switching from initial load to continuous replication.Start TimestampStringnullMySQL Reader and MariaDB Xpand Reader only: see: With the default value of null, only new (based on current system time) transactions are read. If a timestamp is specified, transactions that began after that time are also read. The format is YYYY-MMM-DD HH:MM:SS. 
For example, to start at 5:00 pm on February 1, 2020, specify 2020-FEB-01 17:00:00.If you are using schema evolution (see Handling schema evolution), set a Start Timestamp only if you are sure that there have been no DDL changes after that point.When the application is recovered after a system failure, it will automatically resume from the point where it left off.See also Switching from initial load to continuous replication.TablesStringThe table(s) for which to return change data in the format database.table
. Names are case-sensitive. You may specify multiple tables as a list separated by semicolons or with the following wildcards in the table name only (not in the database name):%: any series of characters_: any single characterFor example, my.% would include all tables in the my database.The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.If any specified tables are missing Striim will issue a warning. If none of the specified tables exists, start will fail with a \"found no tables\" error.UsernameStringthe login name for the user created as described in MariaDB setupMySQL Reader WAEvent fieldsThe output data type for MySQLReader is WAEvent. The fields are:metadata:BinlogFile: the binlog file from which MySQL Reader read the operationBinlogPosition: the operation's position in the binlog fileOperationName: BEGIN, INSERT, UPDATE, DELETE, COMMIT, STOPWhen schema evolution is enabled, OperationName for DDL events will be Alter, AlterColumns, Create, or Drop. This metadata is reserved for internal use by Striim and subject to change, so should not be used in CQs, open processors, or custom Java functions.PK_UPDATE: for UPDATE only, true if the primary key value was changed, otherwise falseTxnID: unique transaction ID generated by MySQLReader (the internal MySQL transaction ID is not written to the MySQL binary log until the COMMIT operation)TimeStamp: timestamp from the MySQL binary logTableName: fully qualified name of the table (for INSERT, UPDATE, and DELETE only).To retrieve the values for these fields, use the META function. See Parsing the fields of WAEvent for CDC readers.data: an array of fields, numbered from 0, containing:for a BEGIN operation, 0 is the current database name and 1 is BEGINfor an INSERT or DELETE, the values that were inserted or deletedfor an UPDATE, the values after the operation was completedfor a COMMIT, 0 is the ID number of the transactionfor a DDL CREATE or DDL DROP, 0 is the current database name and 1 is the CREATE or DROP statementTo retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.before (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operationdataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.MySQLReader simple applicationThe following application will write change data for the specified table to SysOut. Replace wauser and ****** with the user name and password for the MySQL account you created for use by MySQLReader (see MySQL setup) and mydb and mytable with the names of the database and table(s) to be read.CREATE APPLICATION MySQLTest;\n\nCREATE SOURCE MySQLCDCIn USING MySQLReader (\n Username:'striim',\n Password:'******',\n ConnectionURL:'mysql://192.168.1.10:3306',\n Database:'mydb',\n Tables:'mytable'\n) \nOUTPUT TO MySQLCDCStream;\n\nCREATE TARGET MySQLCDCOut\nUSING SysOut(name:MySQLCDC)\nINPUT FROM MySQLCDCStream;\n\nEND APPLICATION MySQLTest;MySQLReader example outputMySQLReader's output type is WAEvent. See WAEvent contents for change data for general information.The following are examples of WAEvents emitted by MySQLReader for various operation types. 
They all use the following table:CREATE TABLE POSAUTHORIZATIONS (BUSINESS_NAME varchar(30),\n MERCHANT_ID varchar(100),\n PRIMARY_ACCOUNT bigint,\n POS bigint,\n CODE varchar(20),\n EXP char(4),\n CURRENCY_CODE char(3),\n AUTH_AMOUNT decimal(10,3),\n TERMINAL_ID bigint,\n ZIP integer,\n CITY varchar(20));INSERTIf you performed the following INSERT on the table:INSERT INTO POSAUTHORIZATIONS VALUES(\n 'COMPANY 1',\n 'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu',\n 6705362103919221351,\n 0,\n '20130309113025',\n '0916',\n 'USD',\n 2.20,\n 5150279519809946,\n 41363,\n 'Quicksand');The WAEvent for that INSERT would be:data: [\"COMPANY 1\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\"20130309113025\",\n \"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]\n metadata: {\"BinlogFile\":\"ON.000004\",\"TableName\":\"mydb.POSAUTHORIZATIONS\",\n \"TxnID\":\"1:000004:3559:1685955321000\",\"OperationName\":\"INSERT\",\"TimeStamp\":1685955321000,\n \"OPERATION_TS\":1685955321000,\"BinlogPosition\":3727}\n userdata: null\n before: null\n dataPresenceBitMap: \"fw8=\"\n beforePresenceBitMap: \"AAA=\"\n typeUUID: {\"uuidstring\":\"01ee037e-a8e6-6c61-a752-c2cd07892059\"}UPDATEIf you performed the following UPDATE on the table:UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 5A' where pos=0;The WAEvent for that UPDATE for the row created by the INSERT above would be:data: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\n \"20130309113025\",\"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]\n metadata: {\"PK_UPDATE\":\"false\",\"BinlogFile\":\"ON.000004\",\n \"TableName\":\"mydb.POSAUTHORIZATIONS\",\"TxnID\":\"1:000004:3990:1685955341000\",\n \"OperationName\":\"UPDATE\",\"TimeStamp\":1685955341000,\n \"OPERATION_TS\":1685955341000,\"BinlogPosition\":4167}\n userdata: null\n before: [\"COMPANY 1\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,\n 0,\"20130309113025\",\"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]\n dataPresenceBitMap: \"fw8=\"\n beforePresenceBitMap: \"fw8=\"\n typeUUID: {\"uuidstring\":\"01ee037e-a8e6-6c61-a752-c2cd07892059\"}DELETEIf you performed the following DELETE on the table:DELETE from POSAUTHORIZATIONS where pos=0;The WAEvent for that DELETE for the row affected by the INSERT above would be:data: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\n \"20130309113025\",\"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]\n metadata: {\"BinlogFile\":\"ON.000004\",\"TableName\":\"mydb.POSAUTHORIZATIONS\",\n \"TxnID\":\"1:000004:4550:1685955350000\",\"OperationName\":\"DELETE\",\n \"TimeStamp\":1685955350000,\"OPERATION_TS\":1685955350000,\"BinlogPosition\":4718}\n userdata: null\n before: null\n dataPresenceBitMap: \"fw8=\"\n beforePresenceBitMap: \"AAA=\"\n typeUUID: {\"uuidstring\":\"01ee037e-a8e6-6c61-a752-c2cd07892059\"}Note that the contents of data and before are reversed from what you might expect for a DELETE operation. 
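For instance, a downstream continuous query can read column values by position from the data array and pull operation metadata with the META function. A minimal TQL sketch (the CQ and output stream names are hypothetical; MySQLCDCStream is the stream from the simple application above, and the positions follow the POSAUTHORIZATIONS table):
CREATE CQ ParsePosAuthorizations
INSERT INTO ParsedPosAuthStream
SELECT META(x,'TableName')     AS tableName,
       META(x,'OperationName') AS operationName,
       TO_STRING(data[0])      AS businessName,
       TO_STRING(data[1])      AS merchantId
FROM MySQLCDCStream x;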
This simplifies programming since you can get data for INSERT, UPDATE, and DELETE operations using only the data field.MySQL Reader data type support and correspondenceMySQL typeTQL typecommentsBIGINTlongBIGINT UNSIGNEDlongBINARYstringBITlongBLOBstringCHARstringDATEorg.joda.time.LocalDateIf the MySQL and Striim hosts are not in the same time zone, the value will be converted to Striim's time zone.Appending ?zeroDateTimeBehavior=convertToNull to the connection URL will convert \"zero\" values (0000-00-00) to nulls (see Configuration Properties for Connector/J).DATETIMEorg.joda.time.DateTimeFractional seconds, if used, are dropped. If the MySQL and Striim hosts are not in the same time zone, the value will be converted to Striim's time zone.Appending ?zeroDateTimeBehavior=convertToNull to the connection URL will convert \"zero\" values (0000-00-00\u00a000:00:00) to nulls (see Configuration Properties for Connector/J).DECIMALstringDECIMAL UNSIGNEDstringDOUBLEdoubleENUMintThe value is the integer that is MySQL's internal representation (enumeration literals are assigned numbers in the order the literals were written in the declaration).FLOATfloatIf replicating from one MySQL database to another, see the notes for the Compression property in MySQL Reader properties.geometry typesunsupportedINTintINT UNSIGNEDint\u00a0JSONJSONNodeLONGBLOBstringLONGTEXTstringMEDIUMBLOBstringMEDIUMINTintMEDIUMINT UNSIGNEDint\u00a0MEDIUMTEXTstringNUMERICstringNUMERIC UNSIGNEDstringSETlongThe value is the integer that is MySQL's internal representation (the integer represented by the bit string in which the nth bit is set, if the nth member of the SET's literals is present in the set).SMALLINTshortSMALLINT UNSIGNEDshort\u00a0spatial typesunsupportedTEXTstringTIMEorg.joda.time.LocalTimeFractional seconds, if used, are dropped. If the MySQL and Striim hosts are not in the same time zone, the value will be converted to Striim's time zone.TIMESTAMPorg.joda.time.DateTimeFractional seconds, if used, are dropped. If the MySQL and Striim hosts are not in the same time zone, the value will be converted to Striim's time zone.TINYBLOBstringTINYINTbyteTINYINT UNSIGNEDbyte\u00a0TINYTEXTstringVARBINARYstringVARCHARstringYEARintRuntime considerations when using MySQL ReaderIf when connecting to MySQL 5.7 or earlier you get errors including javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate), append useSSL=false to the connection URL, for example:ConnectionURL:'mysql://192.168.1.10:3306?useSSL=false'The default value of MySQL's wait_timeout is 28800 seconds (eight hours). Reducing this to 300 seconds (five minutes) can resolve a variety of errors such as \"connect timed out\" or \"unexpected end of stream.\" See wait_timeout for more information.In this section: MySQLMySQL setupOn-premise MySQL setupOn-premise MariaDB Xpand setupAmazon Aurora for MySQL setupAmazon RDS for MySQL setupAzure Database for MySQL setupGoogle Cloud SQL for MySQL setupMySQL Reader propertiesMySQL Reader WAEvent fieldsMySQLReader simple applicationMySQLReader example outputINSERTUPDATEDELETEMySQL Reader data type support and correspondenceRuntime considerations when using MySQL ReaderSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/en/mysql-162388.html", "title": "MySQL", "language": "en"}} {"page_content": "\n\nOracle DatabaseSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)Oracle DatabasePrevNextOracle DatabaseStriim offers two CDC readers to read data from your Oracle databases: Oracle Reader and OJet.Oracle Reader and OJet can both read from Oracle databases 11g and higher, RAC, and PDB / CDB. Both can read from a primary database, logical standby database, or Active Data Guard standby database.OJet's primary advantage over Oracle Reader is higher throughput. Striim recently published a white paper documenting OJet's ability to read 160 GB of CDC data per hour: see Real Time Data Streaming from Oracle to Google BigQuery and Real-time Data Integration from Oracle to Google BigQuery Using Striim.See the table below for a detailed feature comparison.Feature comparison: Oracle Reader and OJetOracle ReaderOJetsupported versions\u00a0\u00a0\u00a0\u00a0\u00a0Oracle Database 11g Release 2 version 11.2.0.4\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0Oracle Database 12c Release 1 version 12.1.0.2 and Release 2 version 12.2.0.1\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0Oracle Database 18c (all versions)\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0Oracle Database 19c (all versions)\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0Oracle Database 21c (all versions)\u2713Known issue DEV-36641: column map in Database Writer does not work with wildcard\u2713supported topologies\u00a0\u00a0\u00a0\u00a0\u00a0PDB / CDB\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0application PDB\u00a0\u2713\u00a0\u00a0\u00a0\u00a0\u00a0RAC (all versions)\u2713\u2713can read from\u00a0\u00a0\u00a0\u00a0\u00a0primary database\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0logical standby database\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0Active Data Guard standby databasevia archive login real time\u00a0\u00a0\u00a0\u00a0\u00a0Data Guard physical standbyvia archive log\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0downstream database\u00a0\u2713\u00a0\u00a0\u00a0\u00a0\u00a0reference-partitioned tables\u2713key features\u00a0\u00a0\u00a0\u00a0\u00a0DML operations replicable in targetINSERT, UPDATE, DELETEINSERT, UPDATE, DELETE, TRUNCATE\u00a0\u00a0\u00a0\u00a0\u00a0schema evolutionfor 11g to 18c only, not for PDB / CDBfor all supported versions\u00a0\u00a0\u00a0\u00a0\u00a0uncommitted transaction support\u2713\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Striim-side transaction caching\u2713\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0recovery\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0quiesce\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0bidirectional replication\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0SSL\u2713\u2713\u00a0\u00a0\u00a0\u00a0\u00a0Oracle Native Network Encryption\u00a0used automatically in Striim Cloudnot supported in Striim Platform in this release\u00a0\u00a0\u00a0\u00a0\u00a0supported when Striim Platform is running in Microsoft Windows\u2713\u00a0summary of supported data types (for full details, see Oracle Reader and OJet data type 
support and correspondence)Oracle Reader and OJet data type support and correspondenceBINARY_DOUBLE, BINARY_FLOAT, CHAR, DATE, FLOAT, INTERVALDAYTOSECOND, INTERVALYEARTOMONTH, JSON, NCHAR, NUMBER, NVARCHAR2, RAW, TIMESTAMP, TIMESTAMP WITH LOCAL TIME ZONE, TIMESTAMP WITH TIME ZONE, VARCHAR2\u2713\u2713ROWID\u2713VARRAYsee Oracle Reader and OJet data type support and correspondenceOracle Reader and OJet data type support and correspondenceBLOB, CLOB, LONG, LONG RAW, XMLTYPEsee Oracle Reader and OJet data type support and correspondenceOracle Reader and OJet data type support and correspondence\u2713BFILEsee Oracle Reader and OJet data type support and correspondenceOracle Reader and OJet data type support and correspondenceADT, NESTED TABLE, SD0_GEOMETRY, UDT, UROWIDStriim provides templates for creating applications that read from Oracle and write to various targets. See\u00a0Creating an application using a template for details.To learn more about these CDC readers or purchase them, Contact Striim support.Configuring Oracle to use Oracle ReaderUsing Oracle Reader requires the following configuration changes in Oracle:enable archivelog, if not already enabledenable supplemental log data, if not already enabledset up LogMinercreate a user account for OracleReader and grant it the privileges necessary to use LogMinerBasic Oracle configuration tasksThe following tasks must be performed regardless of which Oracle version or variation you are using.Enable archivelogLog in to SQL*Plus as the sys user.Enter the following command:select log_mode from v$database;If the command returns ARCHIVELOG, it is enabled. Skip ahead to\u00a0Enabling supplemental log data.If the command returns NOARCHIVELOG, enter: shutdown immediateWait for the message ORACLE instance shut down, then enter: startup mountWait for the message Database mounted, then enter:alter database archivelog;\nalter database open;To verify that archivelog has been enabled, enter select log_mode from v$database; again. This time it should return ARCHIVELOG.Enable supplemental log dataIf you are using Amazon RDS for Oracle, see the instructions below.Enter the following command:select supplemental_log_data_min, supplemental_log_data_pk from v$database;If the command returns YES or IMPLICIT, supplemental log data is already enabled. For example,\u00a0SUPPLEME SUP\n-------- ---\nYES NOindicates that supplemental log data is enabled, but primary key logging is not. If it returns anything else, enter:ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;To enable primary key logging for all tables in the database enter:\u00a0ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;Alternatively, to enable primary key logging only for selected tables (do not use this approach if you plan to use wildcards in the OracleReader Tables property to capture change data from new tables):ALTER TABLE .
ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;If replicating Oracle data to one of the followingAzure Synapse with Mode set to MERGEBigQuery with Optimized Merge disabledRedshiftSnowflake with Optimized Merge disabledEnable supplemental logging on all columns for all tables in the source database:ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;Alternatively, to enable only for selected tables:ALTER TABLE .
ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;To activate your changes, enter:alter system switch logfile;If using Amazon RDS for Oracle, use the following commands instead:exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');\nexec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');\nexec rdsadmin.rdsadmin_util.switch_logfile;\nselect supplemental_log_data_min, supplemental_log_data_pk from v$database;Create an Oracle user with LogMiner privilegesYou may use LogMiner with any supported Oracle version.Log in as sysdba and enter the following commands to create a role with the privileges required by the Striim OracleReader adapter and create a user with that privilege. You may give the role and user any names you like. Replace ******** with a strong password.If using Oracle 11g, or 12c, 18c, or 19c without CDBIf using Oracle 11g, or 12c, 18c, or 19c without CDBEnter the following commands:create role striim_privs;\ngrant create session,\n execute_catalog_role,\n select any transaction,\n select any dictionary\n to striim_privs;\ngrant select on SYSTEM.LOGMNR_COL$ to striim_privs;\ngrant select on SYSTEM.LOGMNR_OBJ$ to striim_privs;\ngrant select on SYSTEM.LOGMNR_USER$ to striim_privs;\ngrant select on SYSTEM.LOGMNR_UID$ to striim_privs;\ncreate user striim identified by ******** default tablespace users;\ngrant striim_privs to striim;\nalter user striim quota unlimited on users;\nFor Oracle 12c or later, also enter the following command:grant LOGMINING to striim_privs;\nIf using Database Vault, omit execute_catalog_role, and also enter the following commands:grant execute on SYS.DBMS_LOGMNR to striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_D to striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_SESSION to striim_privs;If using Oracle 12c, 18c, or 19c with PDBEnter the following commands. Replace\u00a0 with the name of your PDB.create role c##striim_privs;\ngrant create session,\nexecute_catalog_role,\nselect any transaction,\nselect any dictionary,\nlogmining\nto c##striim_privs;\ngrant select on SYSTEM.LOGMNR_COL$ to c##striim_privs;\ngrant select on SYSTEM.LOGMNR_OBJ$ to c##striim_privs;\ngrant select on SYSTEM.LOGMNR_USER$ to c##striim_privs;\ngrant select on SYSTEM.LOGMNR_UID$ to c##striim_privs;\ncreate user c##striim identified by ******* container=all;\ngrant c##striim_privs to c##striim container=all;\nalter user c##striim set container_data = (cdb$root, ) container=current;\nIf using Database Vault, omit execute_catalog_role, and also enter the following commands:grant execute on SYS.DBMS_LOGMNR to c##striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_D to c##striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to c##striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_SESSION to c##striim_privs;Creating the QUIESCEMARKER table for Oracle ReaderTo allow Striim to quiesce (see QUIESCE) an application that uses Oracle Reader, you must create a quiescemarker table in Oracle. 
(This is not necessary when Reading from a standby or using OJet.)The DDL for creating the table is:\u00a0CREATE TABLE QUIESCEMARKER (source varchar2(100), \n status varchar2(100),\n sequence NUMBER(10),\n inittime timestamp, \n updatetime timestamp default sysdate, \n approvedtime timestamp, \n reason varchar2(100), \n CONSTRAINT quiesce_marker_pk PRIMARY KEY (source, sequence));\nALTER TABLE QUIESCEMARKER ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;\nThe Oracle user specified in Oracle Reader's Username property must have SELECT, INSERT, and\u00a0 UPDATE privileges on this table.Reading from a standbyOracle Reader can read from a standby rather than a primary database.RequirementsCreate an Active Data Guard standby. (Striim cannot read from a regular Data Guard standby.)Open the standby in read-only mode.On the primary, perform all steps in Basic Oracle configuration tasks and Create an Oracle user with LogMiner privileges. No quiescemarker table is required when reading from a standby.Creating an Oracle user with LogMiner privilegesLimitationsOracle Reader will read only from the archive log, not from redo logs.Bidirectional replication is not supported.Oracle Reader will reject QUIESCE if there are any open transactions.Create the dictionary fileOn the primary, use SQL Plus or another client to create a dictionary file.For Oracle 11g or 12.1.0.2, enter the following commands, replacing in the second command with the path returned by the first command. If the first command does not return a path, you must set UTL_FILE_DIR.show parameter utl_file_dir;\nexecute dbms_logmnr_d.build('dict.ora', '');For Oracle 12.2.0.1.0 or later, enter the following commands.CREATE DIRECTORY \"dictionary_directory\" AS '/opt/oracle/dictionary';\nEXECUTE dbms_logmnr_d.build(dictionary_location=>'dictionary_directory', \ndictionary_filename=>'dict.ora',\noptions => dbms_logmnr_d.store_in_flat_file);Copy dict.ora to a directory on the standby.Configure Oracle Reader properties in your applicationSet Dictionary Mode to ExternalDictionaryFileCatalog.Set Database Role to PHYSICAL_STANDBY.Set External Dictionary File to the fully qualified name of the dictionary file you copied to the standby, for example, /home/oracle/dict.oraHandling DDL changesWhen DDL changes must be made to the tables being read by Oracle Reader, do the following:On the primary, stop DML activity and make sure there are no open transactions.On the primary, force a log switch.In Striim, quiesce the application (see QUIESCE). If Oracle Reader refuses the quiesce, wait a few minutes and try again.On the primary, perform the DDL changes.Repeat the procedure in \"Create the dictionary file,\" above, replacing the old file on the standby with the new one.Start the Striim application.Configuring Oracle to use OJetOJet requires a special license. For more information, Contact Striim support.OJet is supported only when Striim is running on Linux. 
glibc version 14 or later must be installed before deploying OJet.In an Oracle RAC environment, OJet must connect to a SCAN listener (see About Connecting to an Oracle RAC Database Using SCANs).OJet can read from:a single primary databasea downstream primary databasea logical standby database (see Creating a Logical Standby Database)an Active Data Guard downstream database using Archived-Log Downstream Capture an Active Data Guard downstream database using Real-Time Downstream CaptureOracle configuration varies depending on which of these topologies you use.For all topologies, Enable archivelog.For logical standby only, set DATABASE GUARD to standby to prevent users other than SYS from making changes to the standby's data:ALTER DATABASE GUARD standby;For a single primary database or logical standby, Running the OJet setup script on Oracle.For Active Data Guard, follow the instructions in Configuring Active Data Guard to use OJet.Execute the following command to extract the database dictionary to the redo log. Going forward, run this command once a week.EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);If there may be open transactions when you start an OJet application, run the following command to get the current SCN, and specify it as the Start Scn value in the application's OJet properties.SELECT CURRENT_SCN FROM V$DATABASE;Enable archivelogLog in to SQL*Plus as the sys user.Enter the following command:select log_mode from v$database;If the command returns ARCHIVELOG, it is enabled. Skip ahead to\u00a0Enabling supplemental log data.If the command returns NOARCHIVELOG, enter: shutdown immediateWait for the message ORACLE instance shut down, then enter: startup mountWait for the message Database mounted, then enter:alter database archivelog;\nalter database open;To verify that archivelog has been enabled, enter select log_mode from v$database; again. This time it should return ARCHIVELOG.Running the OJet setup script on OracleThe following instructions assume that OJet will be reading from a single primary database or a logical standby. If you are using Active Data Guard and reading from a downstream database, follow the instructions Configuring Active Data Guard to use OJet instead.Before running the setup script, create an Oracle user for use by OJet. In a CDB environment, this must be a common user. There is no need to assign privileges, those will be added by the setup script.Download and extract the setup utility as described in Enabling OJet on Striim Cloud. The setup script is setupOjet/striim/tools/bin/setupOJet.sh. The syntax is:setupOJet.sh [y [y]]\u00a0connection URL: either host:port:SID or host:port/servicesys user name: an Oracle user with DBA privilege that can connect as sysdba. 
You may need to configure REMOTE_LOGIN_PASSWORDFILE.password: the specified sys user's passwordojet_user: the name of the Oracle user you created before running setupOJet.shremote (for downstream setup only): ysource (for downstream setup only): yfile (for downstream setup only): file name with tables for instantiation at downstream sourceExamples (replace the IP address, SID, and password with those for your environment):for a single primary database or a logical standby: setupOJet.sh\u00a0203.0.113.49:1521:orcl\u00a0sys ******** OJET_USERfor the Active Data Guard standby server: setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER y yfor the Active Data Guard downstream server: setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER yIf the script reports that an Oracle fix is missing, install it and run the script again.The script's output should be similar to this:./setupOJet.sh localhost:1521:ORCL sys oracle OJET_USER\n Configuration for OJet using user OJET_USER started:\u00a0\n Granted resources to user OJET_USER\n Granted select any dictionary privilege to user OJET_USER\n Enabled replication\n Enabled streaming\n Enabled supplemental logging\n Building dictionary log \u2026\n DoneConfiguring Active Data Guard to use OJetTwo types of downstream configurations are supported, real-time and archivelog. The difference between them is how redo changes are shipped from the source database to the downstream database, and where the capture process will run and OJet will connect.In an Active Data Guard environment, the physical standby database is in read-only mode, so OJet cannot attach directly to it. Thus a cascaded setup is required,. A cascaded redo transport destination (also known as a terminal destination) receives the primary database redo indirectly from a standby database, rather than directly from a primary database. Oracle documentation for setting up a cascaded set up needs to be followed. For more information, see Cascaded Redo Transport Destinations.For Active Data Guard, the standby database needs to be in recovery mode so that metadata is in sync with the primary database. The required steps for setup are outlined below, with differences between the two setups. 
Refer to Oracle documentation for the steps to perform these changes.The passwords for the sys and OJet users must be the same on the standby and downstream databases.Primary database setupPrimary database setupRun the DBMS_CAPTURE_ADM.BUILD procedure on the primary database to extract the data dictionary to the redo log when a capture process is created:DBMS_CAPTURE_ADM.BUILD();Primary or standby database setupConfigure the following settings on both the primary or standby and the source databases:Enable archivelog.Add the connection details for the downstream database to: $TNS_ADMIN/tnsnames.oraTo configure the standby to replicate to the downstream database, add a new log_archive_dest in the standby, depending on the type of downstream configuration (change the identifiers log_archive_dest_3, ORCL, and INST1 to reflect your environment).For real-time capture:ALTER SYSTEM set log_archive_dest_3='service=INST1 ASYNC NOREGISTER\n VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL' scope=both;\nSHOW PARAMETER log_archive_dest_3;\nALTER SYSTEM set log_archive_dest_state_3=enable scope=both;\nFor archivelog capture:ALTER SYSTEM set log_archive_dest_3='SERVICE=inst1 ASYNC NOREGISTER\n VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)\n TEMPLATE=/opt/oracle/product/19c/dbhome_1/srl_dbs/dbs1_arch_%t_%s_%r.log\n DB_UNIQUE_NAME=ORCL' scope=both;\nSHOW PARAMETER log_archive_dest_state_3;\nALTER SYSTEM set log_archive_dest_state_3=enable scope=both;\nBefore running the setup script, create an Oracle user for use by OJet. In a CDB environment, this must be a common user. There is no need to assign privileges, those will be added by the setup script.Run the setup script (see Running the OJet setup script on Oracle), appending a single y parameter. For example:setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER y yDownstream database setupConfigure the following settings on the downstream database:Add the source database connection details to: $TNS_ADMIN/tnsnames.ora.Update the log_archive_config with the source database. For example (change oradb.orcl to reflect your environment):ALTER SYSTEM set log_archive_config='DG_CONFIG=(oradb,orcl)' scope=both;Ensure that the local generated redo is at a different location than the source database redo:ALTER SYSTEM set\n LOG_ARCHIVE_DEST_1='LOCATION=/opt/oracle/product/19c/dbhome_1/dbs/\n VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)' scope=both;For real-time capture only,\u00a0 add the standby log file:\u00a0ALTER SYSTEM set\n LOG_ARCHIVE_DEST_2='LOCATION=/opt/oracle/product/19c/dbhome_1/srl_dbs/\n VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' scope=both; There must be one more standby log file than in the source (see Create the Standby Redo Log Files). Use(SELECT COUNT(GROUP#) FROM GV$LOG to verify this:ALTER SYSTEM add standby logfile group 4 'slog4a.rdo' SIZE 200M;\nALTER SYSTEM add standby logfile group 5 'slog5a.rdo' SIZE 200M; \nALTER SYSTEM add standby logfile group 6 'slog6a.rdo' SIZE 200M; \nALTER SYSTEM add standby logfile group 7 'slog7a.rdo' SIZE 200M;Before running the setup script, create an Oracle user for use by OJet. In a CDB environment, this must be a common user. There is no need to assign privileges, those will be added by the setup script.Run the setup script (see Running the OJet setup script on Oracle), appending two y parameters. 
For example:setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER yOracle Reader propertiesBefore you can use this adapter, Oracle must be configured as described in the parts of Configuring Oracle to use Oracle Reader that are relevant to your environment.NoteIf Oracle Reader will be deployed to a Forwarding Agent, install the required JDBC driver as described in Install the Oracle JDBC driver in a Forwarding Agent.Before deploying an Oracle Reader application, see Runtime considerations when using Oracle Reader.Striim provides templates for creating applications that read from Oracle and write to various targets. See\u00a0Creating an application using a template for details.The adapter properties are:propertytypedefault valuenotesBidirectional Marker TableStringWhen performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive.CDDL ActionenumProcess18c and earlier only: see Handling schema evolution.CDDL CaptureBooleanFalse18c and earlier only: enables schema evolution (see Handling schema evolution). When set to True, Dictionary Mode must be set to Offline Catalog and Support PDB and CDB must be False.Committed TransactionsBooleanTrueLogMiner only: by default, only committed transactions are read. Set to False to read both committed and uncommitted transactions.CompressionBooleanFalseIf set to True, update operations for tables that have primary keys include only the primary key and modified columns, and delete operations include only the primary key. With the default value of False, all columns are included. See\u00a0Oracle Reader example output for examples.Set to True when Oracle Reader's output stream is the input stream of Cassandra Writer.Connection Retry PolicyStringtimeOut=30, retryInterval=30, maxRetries=3With the default setting:Striim will wait for the database to respond to a connection request for 30 seconds (timeOut=30).If the request times out, Striim will try again in 30 seconds (retryInterval=30).If the request times out on the third retry (maxRetries=3), a ConnectionException will be logged and the application will stop.Negative values are not supported.Connection URLString:: or :/If using\u00a0Oracle 12c or later with PDB, use the SID for the CDB service. (Note that with DatabaseReader and DatabaseWriter, you must use the SID for the PDB service instead.)If using Amazon RDS for Oracle, the connection URL is ::. The required values are displayed at\u00a0Instance Actions > see details.\u00a0Database RoleStringPRIMARYLeave set to the default value of PRIMARY except when you Reading from a standby.Dictionary ModeStringOnlineCatalogLeave set to the default of OnlineCatalog except when CDDL Capture is True or you are Reading from a standby.Excluded TablesStringWhen a wildcard is specified for Tables, you may specify here any tables you wish to exclude from the query. Specify the value exactly as for Tables. For example, to include data from all tables whose names start with HR except HRMASTER:Tables='HR%',\nExcludedTables='HRMASTER'Exclude UsersStringOptionally, specify one or more Oracle user names, separated by semicolons, whose transactions will be omitted from OracleReader output. 
Possible uses include:omitting transactions that would cause an endless endless loop when data previously read by OracleReader is eventually written back to the same table\u00a0by DatabaseWriter, for example, in the context of high-availability \"active/active\" replicationomitting transactions involving multiple gigabytes of data, thus reducing Striim's memory requirementsomitting long-running transactions, ensuring that OracleReader will restart from a recent SCN after Striim is restartedExternal Dictionary FileStringLeave blank except when you Reading from a standby.Fetch SizeInteger1000LogMiner only: the number of records the JDBC driver will return at a time. For example, if Oracle Reader queries LogMiner and there are 2300 records available, the JDBC driver will return two batches of 1000 records and one batch of 300.Filter Transaction BoundariesBooleanTrueWith the default value of True, BEGIN and COMMIT operations are filtered out. Set to False to include BEGIN and COMMIT operations.Ignorable ExceptionStringDo not change unless instructed to by Striim support.Passwordencrypted passwordthe password specified for the username (see Encrypted passwords)Queue SizeInteger2048Quiesce Marker TableStringQUIESCEMARKERSee\u00a0Creating the QUIESCEMARKER table for Oracle Reader. Modify the default value if the quiesce marker table is not in the schema associated with the user specified in the Username. Three-part CDB / PDB names are not supported in this release.Send Before ImageBooleanTrueset to False to omit before data from outputSet Conservative RangeBooleanFalseIf reading from Oracle 19c, you have long-running transactions, and parallel DML mode is enabled (see Enable Parallel DML Mode), set this to True.SSL ConfigStringIf using SSL with the Oracle JDBC driver, specify the required properties. Examples:If using SSL for encryption only:oracle.net.ssl_cipher_suites=\n (SSL_DH_anon_WITH_3DES_EDE_CBC_SHA,\n SSL_DH_anon_WITH_RC4_128_MD5, \n SSL_DH_anon_WITH_DES_CBC_SHA)If using SSL for encryption and server authentication:javax.net.ssl.trustStore=\n/etc/oracle/wallets/ewallet.p12;\njavax.net.ssl.trustStoreType=PKCS12;\njavax.net.ssl.trustStorePassword=********If using SSL for encryption and both server and client authentication:javax.net.ssl.trustStore=\n/etc/oracle/wallets/ewallet.p12;\njavax.net.ssl.trustStoreType=PKCS12;\njavax.net.ssl.trustStorePassword=********;\njavax.net.ssl.keyStore=/opt/Striim/certs;\njavax.net.ssl.keyStoreType=JKS;\njavax.net.ssl.keyStorePassword=********Start SCNStringOptionally specify an SCN from which to start reading (See Replicating Oracle data to another Oracle database for an example).If you are using schema evolution (see Handling schema evolution, set a Start SCN only if you are sure that there have been no DDL changes after that point.Handling schema evolutionSee also Switching from initial load to continuous replication.Start TimestampStringnullWith the default value of null, only new (based on current system time) transactions are read. If a timestamp is specified, transactions that began after that time are also read. The format is DD-MON-YYYY HH:MI:SS. 
For example, to start at 5:00 pm on July 15, 2017, specify 15-JUL-2017 17:00:00.If you are using schema evolution (see Handling schema evolution, set a Start Timestamp only if you are sure that there have been no DDL changes after that point.Handling schema evolutionSupport PDB and CDBBooleanFalseSet to True if reading from CDB or PDB.TablesStringThe table or materialized view to be read (supplemental logging must be enabled as described in Configuring Oracle to use Oracle Reader) in the format .
schema.table. (If using Oracle 12c with PDB, use three-part names: pdb.schema.table
.) Names are case-sensitive.You may specify multiple tables and materialized views as a list separated by semicolons or with the % wildcard. For example, HR.% would read all tables in the HR schema. You may not specify a wildcard for the schema (that is, %.% is not supported). The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.Unused columns are supported. Values in virtual columns will be set to null. If a table contains an invisible column, the application will terminate.Table and column identifiers (names) may not exceed 30 bytes. When using one-byte character sets, the limit is 30 characters. When using two-byte character sets, the limit is 15 characters.Oracle character set AL32UTF8 (UTF-8) and character sets that are subsets of UTF-8, such as US7ASCII, are supported. Other character sets may work so long as their characters can be converted to UTF-8 by Striim.See also\u00a0Specifying key columns for tables without a primary key.Transaction Buffer Disk LocationStringstriim/LargeBufferSee Transaction Buffer Type.Transaction Buffer Spillover SizeString100MBWhen Transaction Buffer Type is Disk, the amount of memory that Striim will use to hold each in-process transactions before buffering it to disk. You may specify the size in MB or GB.When Transaction Buffer Type is Memory, this setting has no effect.Transaction Buffer TypeStringDiskWhen Striim runs out of available Java heap space, the application will terminate. With Oracle Reader, typically this will happen when a transaction includes millions of INSERT, UPDATE, or DELETE events with a single COMMIT, at which point the application will terminate with an error message such as \"increase the block size of large buffer\" or \"exceeded heap usage threshold.\"To avoid this problem, with the default setting of Disk, when a transaction exceeds the Transaction Buffer Spillover Size, Striim will buffer it to disk at the location specified by the Transaction Buffer Disk Location property, then process it when memory is available.When the\u00a0setting is Disk and recovery is enabled (see Recovering applications), after the application halts, terminates, or is stopped the buffer will be reset, and during recovery any previously buffered transactions will restart from the beginning.Recovering applicationsTo disable transaction buffering, set Transaction Buffer Type to Memory.UsernameStringthe username created as described in Configuring Oracle to use Oracle Reader; if using Oracle 12c or later with PDB, specify the CDB user (c##striim)\u00a0Specifying key columns for tables without a primary keyIf a primary key is not defined for a table, the values for all columns are included in UPDATE and DELETE records, which can significantly reduce performance. You can work around this by setting the Compression property to True and including the KeyColumns option in the Tables property value. The syntax is:Tables:'
KeyColumns(,,...)'The column names must be uppercase. Specify as many columns as necessary to define a unique key for each row. The columns must be supported (see Oracle Reader and OJet data type support and correspondence) and specified as NOT NULL.If the table has a primary key, or the Compression property is set to False, KeyColumns will be ignored.OJet propertiesBefore you can use this adapter, Oracle must be configured as described in Configuring Oracle to use OJet.NoteBefore deploying OJet on a Forwarding Agent, install the Oracle Instant Client as described in Install the Oracle Instant Client in a Forwarding Agent.Before deploying an OJet application, see Runtime considerations when using OJet.Striim provides templates for creating applications that read from Oracle and write to various targets. See\u00a0Creating an application using a template for details.The adapter properties are:propertytypedefault valuenotesBidirectional Marker TableStringWhen performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive. This property appears only if your Striim cluster has been licensed for bidirectional support.CDDL ActionenumProcessSee Handling schema evolution.CDDL CaptureBooleanFalseSee Handling schema evolution.CompressionBooleanFalseIf set to True, update operations for tables that have primary keys include only the primary key and modified columns, and delete operations include only the primary key. With the default value of False, all columns are included. See\u00a0Oracle Reader example output for examples.Set to True when OJet's output stream is the input stream of Cassandra Writer.Connection Retry PolicyStringtimeOut=30, retryInterval=30, maxRetries=3With the default setting:Striim will wait for the database to respond to a connection request for 30 seconds (timeOut=30).If the request times out, Striim will try again in 30 seconds (retryInterval=30).If the request times out on the third retry (maxRetries=3), a ConnectionException will be logged and the application will stop.Negative values are not supported.Connection URLStringjdbc:oracle:oci:@:: or jdbc:oracle:oci:@:/If using\u00a0Oracle 12c or later with PDB, use the SID for the CDB service. (Note that with DatabaseReader and DatabaseWriter, you must use the SID for the PDB service instead.)If Downstream Capture is enabled, specify the connection URL for the downstream database. Otherwise, specify the connection URL for the primary database.If the specified connection URL is invalid, deployment will fail with an \"ORA-12170: TNS:Connect timeout occurred\" error. 
Note that this error will also occur if Striim is unable to connect to Oracle for any other reason, such as a network outage or the database being offline.Downstream CaptureBooleanFalseIf set to True, downstream capture is enabled.Downstream Capture ModeStringNoneREAL_TIME: real time downstream capture mode.ARCHIVED_LOG: archived log downstream capture mode.Primary Database Connection URLStringIf Downstream Capture is enabled , specify the connection URL for the primary database.Primary Database Passwordencrypted passwordIf Downstream Capture is enabled , specify the password for the user specified in Primary Database Username.Primary Database UsernameStringIf Downstream Capture is enabled, specify the Oracle user you created as described in Configuring Active Data Guard to use OJet.Excluded TablesStringIf a wildcard is specified for Tables, any tables specified here will be excluded from the query. Specify the value as for Tables.Filter Transaction BoundariesBooleanTrueOJet ConfigStringnullA JSON string that specifies the configuration of OJet reader components. All configuration values are disabled by default. It uses the following format:{\n \"\" : [\n \"\"\n ]\n ,...\n}\nThe components are OJET and CAPTURE. They have following configuration parameters.OJET: queuesize: The maximum queue of events in memoryOJET: open_txn_delay_time: By default, this value is 0, which means that OJet will ignore any open transactions. If you prefer to halt if there are open transactions, set this to a positive value in milliseconds. If OJet detects open transactions, it will wait that number of milliseconds to retry. After three retries, if there are still open transactions, the application will halt.To list open transactions, while OJet is running, in the Striim console enter SHOW OPENTRANSACTIONS.CAPTURE: fetch_lcr_attributes: The default value is False. Set to True to include the additional attributes (TxnId, Thread#, Username, rowid and transactionName in WAEvent.For example:{\n \"OJET\":[\n \u201cqueuesize:20000\u201d\n ],\n \"OJET\":[\n \u201copen_txn_delay_time:60000\u201d\n ],\n \"CAPTURE\":[\n \u201cfetch_lcr_attributes:true\u201d\n ]\n}\nPasswordencrypted passwordThe password for the Oracle user specified in Username.Send Before ImageBooleanTrueSet to False to omit before data from outputSSL ConfigStringIf using SSL with the Oracle JDBC driver, specify the required properties using the syntax oracle.net.ssl_server_cert_dn=;oracle.net.wallet_location=\". The wallet location must be accessible by Striim.Start SCNStringOptionally specify an SCN from which to start reading (See Replicating Oracle data to another Oracle database for an example).When you set a Start SCN, before running the application trigger a dictionary build by running this command:EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=>\n DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);\nSELECT CURRENT_SCN FROM V$DATABASE;See also Switching from initial load to continuous replication.Start TimestampStringnullWith the default value of null, only new (based on current system time) transactions are read. If a timestamp is specified, transactions that began after that time are also read. The format is DD-MON-YYYY HH:MI:SS. For example, to start at 5:00 pm on July 15, 2017, specify 15-JUL-2017 17:00:00.TablesStringThe table or materialized view to be read (supplemental logging must be enabled as described in Configuring Oracle to use Oracle Reader) in the format .
schema.table. (If using Oracle 12c or later with PDB, use three-part names: pdb.schema.table
.) Names are case-sensitive.You may specify multiple tables and materialized views as a list separated by semicolons or with the % wildcard. For example, HR.% would read all tables in the HR schema. You may not specify a wildcard for the schema (that is, %.% is not supported). The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.Unused columns are supported. Values in virtual columns will be set to null. If a table contains an invisible column, the application will terminate.When reading from Oracle 11g or 12c Release 1 version 12.1, table and column identifiers (names) may not exceed 30 bytes. When using one-byte character sets, the limit is 30 characters. When using two-byte character sets, the limit is 15 characters.When reading from Oracle 12c Release 2 version 12.2 or later, table and column identifiers (names) may not exceed 128 bytes. When using one-byte character sets, the limit is 128 characters. When using two-byte character sets, the limit is 64 characters.Oracle character set AL32UTF8 (UTF-8) and character sets that are subsets of UTF-8, such as US7ASCII, are supported. Other character sets may work so long as their characters can be converted to UTF-8 by Striim.See also\u00a0Specifying key columns for tables without a primary key.Transaction Age Spillover LimitInteger1000OJet begins to spill messages from the Oracle server's memory to its hard disk for a particular transaction when the amount of time that any message in the transaction has been in memory exceeds the specified number of seconds.Transaction Buffer Spillover CountInteger10000OJet begins to spill messages from the Oracle server's memory to its hard disk for a particular transaction when the number of messages in memory for the transaction exceeds the specified number.UsernameStringThe name of the OJet user created as described in Running the OJet setup script on Oracle or Configuring Active Data Guard to use OJet. if using Oracle 12c or later with PDB, specify the CDB user (c##striim).Oracle Reader and OJet WAEvent fieldsThe output data type for both Oracle Reader and OJet is WAEvent.metadata: for DML operations, the most commonly used elements are:DatabaseName (OJet only): the name of the databaseOperationName: COMMIT, BEGIN, INSERT, DELETE, UPDATE, or (when using Oracle Reader only) ROLLBACKTxnID: transaction IDTimeStamp: timestamp from the CDC logTableName (returned only for INSERT, DELETE, and UPDATE operations): fully qualified name of the tableROWID (returned only for INSERT, DELETE, and UPDATE operations): the Oracle ID for the inserted, deleted, or updated rowTo retrieve the values for these elements, use the META function. See Parsing the fields of WAEvent for CDC readers.data: for DML operations, an array of fields, numbered from 0, containing:for an INSERT or DELETE operation, the values that were inserted or deletedfor an UPDATE, the values after the operation was completedTo retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.before (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operationdataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.The following is a complete list of fields that may appear in metadata. 
The actual fields will vary depending on the operation type and other factors.metadata propertypresent when using Oracle Readerpresent when using OJetcommentsAuditSessionID\u2713Audit session ID associated with the user session making the changeBytesProcessed\u2713COMMIT_TIMESTAMP\u2713the UNIX epoch time the transaction was committed, based on the Striim server's time zoneCOMMITSCNx\u2713system change number (SCN) when the transaction committedCURRENTSCN\u2713system change number (SCN) of the operationDBCommitTimestamp\u2713the UNIX epoch time the transaction was committed, based on the Oracle server's time zoneDBTimestamp\u2713the UNIX epoch time of the operation, based on the Oracle server's time zoneOperationName\u2713\u2713user-level SQL operation that made the change (INSERT, UPDATE, etc.)OperationType\u2713\u2713the Oracle operation typefor OJet: DDL or DMLfor Oracle Reader: COMMIT, DDL, DELETE, INSERT, INTERNAL, LOB_ERASE, LOB_TRIM, LOB_WRITE, MISSING_SCN, ROLLBACK, SELECT_FOR_UPDATE, SELECT_LOB_LOCATOR, START, UNSUPPORTED, or UPDATEParentTxnID\u2713raw representation of the parent transaction identifierPK_UPDATE\u2713\u2713true if an UPDATE operation changed the primary key, otherwise falseRbaBlk\u2713RBA block number within the log fileRbaSqn\u2713sequence# associated with the Redo Block Address (RBA) of the redo record associated with the changeRecordSetID\u2713Uniquely identifies the redo record that generated the row. The tuple (RS_ID, SSN) together uniquely identifies a logical row change.RollBack\u27131 if the record was generated because of a partial or a full rollback of the associated transaction, otherwise 0ROWID\u2713see commentRow ID of the row modified by the change (only meaningful if the change pertains to a DML). This will be NULL if the redo record is not associated with a DML.OJet: will be included only if fetch_lcr_attributes is specified in OJet ConfigSCN\u2713system change number (SCN) when the database change was madeSegmentName\u2713name of the modified data segmentSegmentType\u2713type of the modified data segment (INDEX, TABLE, ...)Serial\u2713serial number of the session that made the changeSerial#see commentserial number of the session that made the change; will be included only if fetch_lcr_attributes is specified in OJet ConfigSession\u2713session number of the session that made the changeSession#see commentsession number of the session that made the change; will be included only if fetch_lcr_attributes is specified in OJet ConfigSessionInfo\u2713Information about the database session that executed the transaction. 
Contains process information, machine name from which the user logged in, client info, and so on.SQLRedoLength\u2713length of reconstructed SQL statement that is equivalent to the original SQL statement that made the changeTableName\u2713\u2713name of the modified table (in case the redo pertains to a table modification)TableSpace\u2713name of the tablespace containing the modified data segment.ThreadID\u2713ID of the thread that made the change to the databaseThead#see commentID of the thread that made the change to the database; will be included only if fetch_lcr_attributes is specified in OJet ConfigTimeStamp\u2713\u2713the UNIX epoch time of the operation, based on the Striim server's time zoneTransactionName\u2713\u2713name of the transaction that made the change (only meaningful if the transaction is a named transaction)TxnID\u2713\u2713raw representation of the transaction identifierTxnUserID\u2713UserName\u2713name of the user associated with the operationOracleReader simple applicationThe following application will write change data for all tables in myschema to SysOut. Replace the Username and Password values with the credentials for the account you created for Striim for use with LogMiner (see Configuring Oracle LogMiner) and myschema with the name of the schema containing the databases to be read.CREATE APPLICATION OracleLMTest;\nCREATE SOURCE OracleCDCIn USING OracleReader (\n Username:'striim',\n Password:'passwd',\n ConnectionURL:'203.0.113.49:1521:orcl',\n Tables:'myschema.%',\n FetchSize:1\n) \nOUTPUT TO OracleCDCStream;\n\nCREATE TARGET OracleCDCOut\n USING SysOut(name:OracleCDCLM)\n INPUT FROM OracleCDCStream;\nEND APPLICATION OracleLMTest;Alternatively, you may specify a single table, such as myschema.mytable. See the discussion of Tables in Oracle Reader properties for additional examples of using wildcards to select a set of tables.When troubleshooting problems, you can get the current LogMiner SCN and timestamp by entering\u00a0mon .; in the Striim console.Oracle Reader example outputOracleReader's output type is WAEvent. See WAEvent contents for change data for general information.The following are examples of WAEvents emitted by OracleReader for various operation types. 
Note that many of the metadata values (see Oracle Reader and OJet WAEvent fields) are dependent on the Oracle environment and thus will vary from the examples below.The examples all use the following table:CREATE TABLE POSAUTHORIZATIONS (\n BUSINESS_NAME varchar2(30),\n MERCHANT_ID varchar2(100),\n PRIMARY_ACCOUNT NUMBER,\n POS NUMBER,CODE varchar2(20),\n EXP char(4),\n CURRENCY_CODE char(3),\n AUTH_AMOUNT number(10,3),\n TERMINAL_ID NUMBER,\n ZIP number,\n CITY varchar2(20),\n PRIMARY KEY (MERCHANT_ID));\nCOMMIT;INSERTIf you performed the following INSERT on the table:INSERT INTO POSAUTHORIZATIONS VALUES(\n 'COMPANY 1',\n 'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu',\n 6705362103919221351,\n 0,\n '20130309113025',\n '0916',\n 'USD',\n 2.20,\n 5150279519809946,\n 41363,\n 'Quicksand');\nCOMMIT;Using LogMiner, the WAEvent for that INSERT would be similar to:data: [\"COMPANY 1\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",\"6705362103919221351\",\"0\",\"20130309113025\",\n\"0916\",\"USD\",\"2.2\",\"5150279519809946\",\"41363\",\"Quicksand\"]\nmetadata: \"RbaSqn\":\"21\",\"AuditSessionId\":\"4294967295\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"726174\",\n\"SQLRedoLength\":\"325\",\"BytesProcessed\":\"782\",\"ParentTxnID\":\"8.16.463\",\"SessionInfo\":\"UNKNOWN\",\n\"RecordSetID\":\" 0x000015.00000310.0010 \",\"DBCommitTimestamp\":\"1553126439000\",\"COMMITSCN\":726175,\n\"SEQUENCE\":\"1\",\"Rollback\":\"0\",\"STARTSCN\":\"726174\",\"SegmentName\":\"POSAUTHORIZATIONS\",\n\"OperationName\":\"INSERT\",\"TimeStamp\":1553151639000,\"TxnUserID\":\"SYS\",\"RbaBlk\":\"784\",\n\"SegmentType\":\"TABLE\",\"TableName\":\"SCOTT.POSAUTHORIZATIONS\",\"TxnID\":\"8.16.463\",\"Serial\":\"201\",\n\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1553151639000,\"OperationType\":\"DML\",\"ROWID\":\"AAAE9mAAEAAAAHrAAB\",\n\"DBTimeStamp\":\"1553126439000\",\"TransactionName\":\"\",\"SCN\":\"72617400000059109745623040160001\",\n\"Session\":\"105\"}\nbefore: null\nUPDATEIf you performed the following UPDATE on the table:UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 5A' where pos=0;\nCOMMIT;Using LogMiner with the default setting Compression: false, the WAEvent for that UPDATE for the row created by the INSERT above would be similar to:data: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",null,null,null,\n null,null,null,null,null,null]\nmetadata: \"RbaSqn\":\"21\",\"AuditSessionId\":\"4294967295\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"726177\",\"\nSQLRedoLength\":\"164\",\"BytesProcessed\":\"729\",\"ParentTxnID\":\"2.5.451\",\"SessionInfo\":\"UNKNOWN\",\n\"RecordSetID\":\" 0x000015.00000313.0010 \",\"DBCommitTimestamp\":\"1553126439000\",\"COMMITSCN\":726178,\n\"SEQUENCE\":\"1\",\"Rollback\":\"0\",\"STARTSCN\":\"726177\",\"SegmentName\":\"POSAUTHORIZATIONS\",\n\"OperationName\":\"UPDATE\",\"TimeStamp\":1553151639000,\"TxnUserID\":\"SYS\",\"RbaBlk\":\"787\",\n\"SegmentType\":\"TABLE\",\"TableName\":\"SCOTT.POSAUTHORIZATIONS\",\"TxnID\":\"2.5.451\",\"Serial\":\"201\",\n\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1553151639000,\"OperationType\":\"DML\",\"ROWID\":\"AAAE9mAAEAAAAHrAAB\",\n\"DBTimeStamp\":\"1553126439000\",\"TransactionName\":\"\",\"SCN\":\"72617700000059109745625006240000\",\n\"Session\":\"105\"}\nbefore: [\"COMPANY 1\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",null,null,null,null,null,null,null,\nnull,null]Note that when using LogMiner the before section contains a value only for the modified column. 
You may use the IS_PRESENT() function to check whether a particular field value has a value (see Parsing the fields of WAEvent for CDC readers).With\u00a0Compression: true, only the primary key is included in the\u00a0before array:before: [null,\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",null,null,null,\n null,null,null,null,null,null]\nIn all cases, if OracleReader's SendBeforeImage property is set to False, the before value will be null.DELETEIf you performed the following DELETE on the table:DELETE from POSAUTHORIZATIONS where pos=0;\nCOMMIT;Using LogMiner with the default setting Compression: false, the WAEvent for a DELETE for the row affected by the UPDATE above would be:data: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",\"6705362103919221351\",\"0\",\"20130309113025\",\n\"0916\",\"USD\",\"2.2\",\"5150279519809946\",\"41363\",\"Quicksand\"]\nmetadata: \"RbaSqn\":\"21\",\"AuditSessionId\":\"4294967295\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"726180\",\n\"SQLRedoLength\":\"384\",\"BytesProcessed\":\"803\",\"ParentTxnID\":\"3.29.501\",\"SessionInfo\":\"UNKNOWN\",\n\"RecordSetID\":\" 0x000015.00000315.0010 \",\"DBCommitTimestamp\":\"1553126439000\",\"COMMITSCN\":726181,\n\"SEQUENCE\":\"1\",\"Rollback\":\"0\",\"STARTSCN\":\"726180\",\"SegmentName\":\"POSAUTHORIZATIONS\",\n\"OperationName\":\"DELETE\",\"TimeStamp\":1553151639000,\"TxnUserID\":\"SYS\",\"RbaBlk\":\"789\",\n\"SegmentType\":\"TABLE\",\"TableName\":\"SCOTT.POSAUTHORIZATIONS\",\"TxnID\":\"3.29.501\",\"Serial\":\"201\",\n\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1553151639000,\"OperationType\":\"DML\",\"ROWID\":\"AAAE9mAAEAAAAHrAAB\",\n\"DBTimeStamp\":\"1553126439000\",\"TransactionName\":\"\",\"SCN\":\"72618000000059109745626316960000\",\n\"Session\":\"105\"}\nbefore: nullWith Compression: true, the\u00a0data array would be:data: [null,\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",null,null,null,null,null,null,null,null,null]Note that the contents of data and before are reversed from what you might expect for a DELETE operation. This simplifies programming since you can get data for INSERT, UPDATE, and DELETE operations using only the data field.OJet simple applicationBefore deploying an OJet application, note the prerequisites discussed in Runtime considerations when using OJet.The following application will write change data for all tables in myschema to SysOut. Replace the Username and Password values with the credentials for the account you created for Striim for use with LogMiner (see Configuring Oracle LogMiner) and myschema with the name of the schema containing the databases to be read.CREATE APPLICATION OJetTest;\nCREATE SOURCE OracleCDCIn USING Ojet (\n Username:'striim',\n Password:'passwd',\n ConnectionURL:'203.0.113.49:1521:orcl',\n Tables:'myschema.%'\n) \nOUTPUT TO OracleCDCStream;\n\nCREATE TARGET OracleCDCOut\n USING SysOut(name:OracleCDCLM)\n INPUT FROM OracleCDCStream;\nEND APPLICATION OJetTest;Alternatively, you may specify a single table, such as myschema.mytable. 
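To show how the META function and the data array described above are used in practice, here is a minimal sketch of a continuous query that consumes the OracleCDCStream produced by either simple application. It assumes the POSAUTHORIZATIONS table from the earlier examples (so MERCHANT_ID is at position 1 and AUTH_AMOUNT at position 7 in the data array); the type, stream, and CQ names are illustrative, not part of the product.
CREATE TYPE PosAuthChangeType (
  merchantId String,
  authAmount String,
  tableName String,
  operation String
);
CREATE STREAM ParsedPosAuthStream OF PosAuthChangeType;

CREATE CQ ParsePosAuthCQ
INSERT INTO ParsedPosAuthStream
SELECT TO_STRING(data[1]) AS merchantId,
  TO_STRING(data[7]) AS authAmount,
  TO_STRING(META(x,'TableName')) AS tableName,
  TO_STRING(META(x,'OperationName')) AS operation
FROM OracleCDCStream x
WHERE TO_STRING(META(x,'OperationName')) = 'INSERT';
Because LogMiner delivers column values as strings (as in the example output above), both extracted columns are declared as String; broaden or remove the WHERE clause to also pass UPDATE and DELETE events, and use IS_PRESENT() as described above when working with partially populated before arrays.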
See the discussion of Tables in OJet properties for additional examples of using wildcards to select a set of tables.Oracle Reader and OJet data type support and correspondenceOracle typeTQL type when using Oracle ReaderTQL type when using OJetADTnot supported, values will be set to nullnot supported, application will halt if it reads a table containing a column of this typeBFILEnot supported, values will be set to nullvalues for a column of this type will contain the file names, not their contentsBINARY_DOUBLEDoubleDoubleBINARY_FLOATFloatFloatBLOBString (a primary or unique key must exist on the table)An insert or update containing a column of this type generates two CDC log entries: an insert or update in which the value for this column is null, followed by an update including the value.When reading from Oracle 19c, values for this type may be incorrect when (1) a table contains multiple columns of this type and operations are performed on more than one of those columns in the same transaction or (2) multiple tables containing columns of this type are being read and different user sessions are performing operations on them. If you encounter either of these issues, Contact Striim support for assistance.Byte[]CHARStringStringCLOBstring (a primary or unique key must exist on the table)An insert or update containing a column of this type generates two CDC log entries: an insert or update in which the value for this column is null, followed by an update including the value.When reading from Oracle 19c, values for this type may be incorrect when (1) a table contains multiple columns of this type and operations are performed on more than one of those columns in the same transaction or (2) multiple tables containing columns of this type are being read and different user sessions are performing operations on them. 
If you encounter either of these issues, Contact Striim support for assistance.StringDATEDateTimejava.time.LocalDateTimeFLOATStringStringINTERVALDAYTOSECONDstring (always has a sign)String (unsigned)INTERVALYEARTOMONTHstring (always has a sign)String (unsigned)JSONnot supported, values will be set to nullnot supported, application will halt if it reads a table containing a column of this typeLONGResults may be inconsistent.\u00a0Oracle recommends using CLOB instead.StringLONG RAWResults may be inconsistent.\u00a0Oracle recommends using CLOB instead.Byte[]NCHARStringStringNCLOBString (a primary or unique key must exist on the table)String (a primary or unique key must exist on the table)NESTED TABLEnot supported, application will halt if it reads a table containing a column of this typenot supported, application will halt if it reads a table containing a column of this typeNUMBERStringStringNVARCHAR2StringStringRAWStringByte[]REFnot supported, application will halt if it reads a table containing a column of this typenot supported, application will halt if it reads a table containing a column of this typeROWIDStringvalues for a column of this type will be set to nullSD0_GEOMETRYSD0_GEOMETRY values will be set to nullKnown issue DEV-20726: if a table contains a column of this type, the application will terminateTIMESTAMPDateTimejava.time.LocalDateTimeTIMESTAMP WITH LOCAL TIME ZONEDateTimejava.time.LocalDateTimeTIMESTAMP WITH TIME ZONEDateTimejava.time.ZonedDateTimeUDTnot supported, values will be set to nullnot supported, application will halt if it reads a table containing a column of this typeUROWIDnot supported, a table containing a column of this type will not be readnot supported due to Oracle bug 33147962, application will terminate if it reads a table containing a column of this typeVARCHAR2StringStringVARRAYSupported by LogMiner only in Oracle 12c and later. Required Oracle Reader settings:Committed Transactions: TrueDictionary Mode: OnlineCatalogUndo Retention: Set to an interval long enough that VARRAY values will be available when Oracle Reader attempts to read them. If the interval is too short and the data is no longer in the log, Oracle Reader will terminate with java.sql.SQLException \"ORA-30052: invalid lower limit snapshot expression.\"Limitations:Tables containing VARRAY values must have primary keys.The VARRAY must contain only elements that can be returned as Java primitive types.The VARRAY's type name must be unique to its schema. If the same VARRAY type name is used in another schema, Oracle Reader will terminate with java.sql.SQLException \"ORA-01427: single-row subquery returns more than one row.\"Oracle's UNDO_RETENTION policy must be set to an interval long enough that VARRAY values will be available when Oracle Reader attempts to retrieve them with a Flashback (SELECT AS OF) query. If the interval is too short and the data is no longer available, Oracle Reader will terminate with java.sql.SQLException \"ORA-30052: invalid lower limit snapshot expression.\" For more information, see the documentation for UNDO_RETENTION for your version of Oracle.When the output of an Oracle Reader source is the input of a target using XML Formatter, the formatter's Format Column Value As property must be set to xmlelement for VARRAY data to be formatted correctly.known issue DEV-29799: if a table contains a column of this type, the application will terminateXMLTYPESupported only for Oracle 12c and later. 
When DictionaryMode is OnlineCatalog, values in any XMLType columns will be set to null. When DictionaryMode is OfflineCatalog, reading from tables containing XMLType columns is not supported.StringRuntime considerations when using Oracle ReaderStarting an Oracle Reader source automatically opens an Oracle session for the user specified in the Username property.The session is closed when the source is stopped.If a running Oracle Reader source fails with an error, the session will be closed.Closing a PDB source while Oracle Reader is running will cause the application to terminate.Runtime considerations when using OJetWhen reading from Oracle 11g, the name of an OJet reader must not exceed 18 characters. When reading from Oracle 12c or higher, the name must not exceed 118 charactersSchema evolution does not support tables containing ROWID columns.You must execute the following command before you create or deploy an OJet application. You should run the command again once a week.EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);If there may be open transactions when you start an OJet application, run the following command to get the current SCN, and specify it as the Start Scn value in the application's OJet properties.SELECT CURRENT_SCN FROM V$DATABASE;If reading from a downstream server, any time you stop OJet or it has terminated or halted, you must enter the following command to reconnect the Remote File Services (RFS):SELECT THREAD#, SEQUENCE#, RESETLOG_ID FROM V$MANAGED_STANDBY WHERE process = 'RFS';That should return something similar to the following, indicating that the RFS connection is active:rfs (PID:18798): krsr_rfs_atc: Identified database type as 'PRIMARY': Client is ASYNC (PID:10829)Using the SHOW commandUse the SHOW command to view OJet status or memory usage.SHOW MEMORY [ DETAILS ]\nSHOW STATUS [ DETAILS ]The STATUS output includes:APPLIED_SCN - all changes below this SCN have beenCAPTURE_TIME - Elapsed time (in hundredths of a second) scanning for changes in the redo log since the capture process was last startedCAPTURED_SCN - SCN of the last redo log record scannedENQUEUE_TIME - Time when the last message was enqueuedFILTERED_SCN - SCN of the low watermark transaction processedFIRST_SCN indicates the lowest SCN to which the capture can be repositionedLCR_TIME - Elapsed time (in hundredths of a second) creating LCRs since the capture process was last startedMESSAGES_CAPTURED - Total number of redo entries passed by LogMiner to the capture process for rule evaluation since the capture process last startedMESSAGES_ENQUEUED - Total number of messages enqueued since the capture process was last startedOLDEST_SCN - Oldest SCN of the transactions currently being processedREDO_MINED - The total amount of redo data mined (in bytes) since the capture process last startedREDO_WAIT_TIME - Elapsed time (in hundredths of a second) spent by the capture process in the WAITING FOR REDO stateRESTART_SCN - The SCN from which the capture process started mining redo data when it was last startedRULE_TIME - Elapsed time (in hundredths of a second) evaluating rules since the capture process was last startedSTART_SCN from which the capture process starts to capture changes.Viewing open transactionsSHOW . OPENTRANSACTIONS\n [ -LIMIT ]\n [ -TRANSACTIONID ',...']\n [ DUMP | -DUMP '/' ];This console command returns information about currently open Oracle transactions. 
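For example, for an OJet source named OracleCDCIn deployed in the namespace ojetns (both names are illustrative, not defaults), the command and the options described in the following paragraphs might be invoked as:
SHOW ojetns.OracleCDCIn OPENTRANSACTIONS;
SHOW ojetns.OracleCDCIn OPENTRANSACTIONS -LIMIT ALL;
SHOW ojetns.OracleCDCIn OPENTRANSACTIONS -TRANSACTIONID '3.5.222991, 5.26.224745';
SHOW ojetns.OracleCDCIn OPENTRANSACTIONS DUMP;
The transaction IDs here are taken from the sample output below; substitute your own namespace, source name, and IDs.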
The namespace may be omitted when the console is using the source's namespace.With no optional parameters, SHOW OPENTRANSACTIONS; will display summary information for up to ten open transactions (the default LIMIT count is 10). Output for OJet will not include Rba block or Thread #.\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 Transaction ID \u2502 # of Ops \u2502 Sequence # \u2502 StartSCN \u2502 Rba block \u2502 Thread # \u2502 TimeStamp \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 3.5.222991 \u2502 5 \u2502 1 \u2502 588206203 \u2502 5189 \u2502 1 \u2502 2019-04-05T21:28:51.000-07:00 \u2502\n\u2502 5.26.224745 \u2502 1 \u2502 1 \u2502 588206395 \u2502 5189 \u2502 1 \u2502 2019-04-05T21:30:24.000-07:00 \u2502\n\u2502 8.20.223786 \u2502 16981 \u2502 1 \u2502 588213879 \u2502 5191 \u2502 1 \u2502 2019-04-05T21:31:17.000-07:00 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nTo show all open transactions, add -LIMIT ALL.Add -TRANSACTIONID with a comma-separated list of transaction IDs (for example, -TRANSACTIONID '3.4.222991, 5.26.224745') to return summary information about specific transactions in the console and write the details to OpenTransactions_ in the current directory.Add DUMP to show summary information in the console and write the details to OpenTransactions_ in the 
current directory. Add -DUMP '<path>/<filename>' to show summary information in the console and write the details to the specified file.
Oracle GoldenGate
GG Trail Reader can be used to write Striim applications for auditing and troubleshooting Oracle GoldenGate. If your Striim cluster is licensed for Oracle GoldenGate, GG Trail Reader will be available in the Flow Designer. Striim supports GoldenGate 11g (all versions), 12c (all versions), 18.1, and 19.1.
Running GG Trail Reader on the GoldenGate host
If the GoldenGate trail files are not directly readable over the network by the Striim host, the GG Trail Reader source must be run on the remote GoldenGate host using the Striim Forwarding Agent. See Striim Forwarding Agent installation and configuration.
GG Trail Reader properties
To rewrite an application using FileReader + GG Trail Parser to use GG Trail Reader, copy the values for the deprecated old properties to the corresponding new properties:
FileReader Compression Type > GG Trail Reader Trail Compression Type
FileReader Directory > GG Trail Reader Trail Directory
FileReader Wildcard > GG Trail Reader Trail File Pattern
GG Trail Parser Metadata > GG Trail Reader Definition File
property | type | default value | notes
CDDL Action | enum | Process | 12.2 or later for Oracle Database only: see Handling schema evolution.
CDDL Capture | Boolean | False | 12.2 or later for Oracle Database only: see Handling schema evolution.
When set to True, you must specify a Definition File.Charset MapStringOptionally, override the GoldenGate source character set mapping (see SOURCECHARSET using the syntax ,;,;....CompressionBooleanFalseIf set to True, update operations for tables that have primary keys include only the primary key and modified columns, and delete operations include only the primary key. With the default value of False, all columns are included.DB Charset IDStringWhen the database does not use the ASCII character set, specify the character set here, for example, Cp037. When this property is specified, the Support Column Charset setting is ignored.Definition FileStringWith GoldenGate version 12.2 or later, leave this property blank to read the metadata from the trail file.Otherwise, specify the path (from root or relative to the .../Striim directory) and name of a GoldenGate source definition file (generated by the GoldenGate defgen utility) containing the metadata description of all the tables for which trail data was captured. When using CDDL, this file is required, and it must contain metadata for all tables specified in Tables and not in Excluded Tables.Do not specify the same file for the Definition File and Trail File Pattern. They should have different prefixes.Exclude TablesStringIf a wildcard is specified for Tables, any tables specified here will be excluded from the query. Specify the value as for Tables.Filter Transaction BoundariesBooleanTrueWith the default value of True, begin and commit transactions are filtered out. Set to False to include begin and commit transactions.Start PositionStringOptionally, specify an offset (FileName:; offset:) or RBA value (FileName:; RBA:) from which to start reading the trail file(s). The value must be either 0 or a valid record position.Offset and RBA two names for the same thing. When you see RBA 1863 in a GoldenGate trail file, that will show up in the corresponding WAEvent metadata as \"FileOffset\":1863.If you are using schema evolution (see Handling schema evolution, set a Start Position only if you are sure that there have been no DDL changes after that point.Handling schema evolutionSupport Column CharsetBooleanFalseUse the default value of False when all columns use the ASCII character set.Set to True if the data contains a mix of ASCII and non-ASCII columns. The DEFGEN must include the database locale and character set and the character set for each column. When a character set is specified using the DB Charset ID property, this setting is ignored.TablesString%The table(s) to be read. With the default value, all tables will be read. Alternatively, specify one or more table names, separated by semicolons, or a string ending with the % wildcard, such as HR.%. The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.With GoldenGate 12.2 or later, when a DDL operation (CREATE, ALTER, DROP, or REPLACE) is performed on one of the specified tables, GG Trail Reader will terminate. With earlier versions, GG Trail Reader may terminate unpredictably at a later time.Trail Byte OrderStringBigEndianSet to LittleEndian if that is the TRAILBYTEORDER of the trail file.Trail Compression TypeStringSet to gzip when Trail File Pattern specifies a file or files in gzip format. Otherwise, leave blank.Trail DirectoryStringSpecify the path to the directory containing the trail files.Trail File PatternStringSpecify the name of the file, or a wildcard pattern to match multiple files. 
When reading multiple files, Striim will read them in the default order for the operating system. Once Striim has read a file, it will ignore any further updates to it.Do not specify the same file for the Definition File and Trail File Pattern. They should have different prefixes.Sample:CREATE SOURCE GGTrailSource USING GGTrailReader (\n TrailDirectory:'Samples/GG/data',\n TrailFilePattern:'rt*',\n DefinitionFile:'Samples/GG/PosAuthorizationsDef.def'\n)\nOUTPUT TO GGTrailStream;GG Trail Reader WAEvent fieldsThe output data type for GG Trail Reader is WAEvent. The elements are:metadata: a map including:CSN: the Commit Sequence Number for the transactionFileName: name of the trail file from which the operation was readOffset: the position of the operation record in the trail fileOperationName: INSERT, UPDATE, or DELETEWhen schema evolution is enabled, OperationName for DDL events will be Alter, AlterColumns, Create, or Drop. This metadata is reserved for internal use by Striim and subject to change, so should not be used in CQs, open processors, or custom Java functions.Oracle ROWID: the Oracle ID for the inserted, updated, or delete rowTxnID: transaction IDTimeStamp: timestamp from the CDC logTableName: fully qualified name of the tableTo retrieve the values for these elements, use the META function. See Parsing the fields of WAEvent for CDC readers.data: an array of fields, numbered from 0, containing:for an INSERT or DELETE operation, the values that were inserted or deletedfor an UPDATE, the values after the operation was completedTo retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.before (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operationdataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.GG Trail Reader sample code and outputSample code:CREATE SOURCE GgTrailReadSrc USING GGTrailReader (\n TrailDirectory:\u2019./Samples/AppData/gg\u2019,\n TrailFilePattern:\u2019b1*\u2019,\n DefinitionFile:\u2019./Samples/AppData/gg/GGReadMultipleTables.def\u2019,\n startPosition:\u2019Filename:b1000000;offset:1039\u2019,\n ExcludeTables :\u2019QATEST.MULTTBL2\u2019\n)\nOUTPUT TO GGTrailReadStream;Sample output, insert:data: [\"1\",\"CharInsert1 \",\"VCahrInsert1\",\"1\",\"1.1\",[2000,4,22,0,0,0,0],\n[2015,10,29,12,57,37,153],\"NCharInsert1 \",\"NVCharInsert1\",\"1\"]\n metadata: {\"TableID\":0,\"TableName\":\"QATEST1.MULTTBL3\",\"TxnID\":\"2.17.1883\",\n\"OperationName\":\"INSERT\",\"FileName\":\"b1000000\",\"FileOffset\":1701,\n\"TimeStamp\":1446103675037,\"Oracle ROWID\":\"AAAXOvAAEAAAAyEAAA\",\"CSN\":\"2222016\",\n\"RecordStatus\":\"VALID_RECORD\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"fwc=\"\n beforePresenceBitMap: \"AAA=\"\n typeUUID: {\"uuidstring\":\"01eaaa3d-0dc1-4b81-b8b4-acde48001122\"}\n};Sample output, update: data: [\"1\",\"Update1 \",\"Update1\",\"2\",\"2.2\",[1970,12,12,0,0,0,0],\n[2015,10,29,12,58,39,327],\"Update1 \",\"Update1\",null]\n metadata: {\"TableID\":0,\"TableName\":\"QATEST1.MULTTBL3\",\"TxnID\":\"2.25.1884\",\n\"OperationName\":\"UPDATE\",\"FileName\":\"b1000000\",\"FileOffset\":3268,\n\"TimeStamp\":1446103733953,\"Oracle ROWID\":\"AAAXOvAAEAAAAyEAAA\",\"CSN\":\"2222056\",\n\"RecordStatus\":\"VALID_RECORD\"}\n userdata: null\n before: [\"1\",null,null,null,null,null,null,null,null,null]\n dataPresenceBitMap: \"fwM=\"\n beforePresenceBitMap: \"AQA=\"\n typeUUID: 
{\"uuidstring\":\"01eaaa3d-0dc1-4b81-b8b4-acde48001122\"}\n};\nSample output, delete: data: [\"1\",null,null,null,null,null,null,null,null,null]\n metadata: {\"TableID\":0,\"TableName\":\"QATEST1.MULTTBL3\",\"TxnID\":\"6.30.2195\",\n\"OperationName\":\"DELETE\",\"FileName\":\"b1000000\",\"FileOffset\":4379,\n\"TimeStamp\":1446103861964,\"Oracle ROWID\":\"AAAXOvAAEAAAAyEAAA\",\"CSN\":\"2222189\",\n\"RecordStatus\":\"VALID_RECORD\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"AQA=\"\n beforePresenceBitMap: \"AAA=\"\n typeUUID: {\"uuidstring\":\"01eaaa3d-0dc1-4b81-b8b4-acde48001122\"}\n};GG Trail Reader data type support and correspondenceGolden Gate data typescaleStriim type0n/aString1n/aString2n/aDouble64n/aString65n/aString66n/aDouble130> 0Double130<=0Short131> 0Double131<=0Integer132> 0Double132<=0Integer133> 0Double133<=0Long134> 0Double134<=0Long135> 0Double135<=0Long140n/aDouble141n/aDouble142n/aDouble143n/aDouble150n/aDouble151n/aDouble152n/aDouble153n/aDouble154n/aDouble155n/aDouble191n/aorg.joda.time.DateTime192n/aorg.joda.time.DateTime195n/aInteger196n/aInteger197n/aInteger198n/aInteger199n/aInteger200n/aInteger201n/aInteger202n/aInteger203n/aInteger204n/aInteger205n/aInteger206n/aInteger207n/aInteger208n/aInteger209n/aInteger210n/aInteger211n/aInteger212n/aIntegerIn this section: Oracle GoldenGateRunning GG Trail Reader on the GoldenGate hostGG Trail Reader propertiesGG Trail Reader WAEvent fieldsGG Trail Reader sample code and outputGG Trail Reader data type support and correspondenceSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-15\n", "metadata": {"source": "https://www.striim.com/docs/en/oracle-goldengate.html", "title": "Oracle GoldenGate", "language": "en"}} {"page_content": "\n\nPostgreSQLSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)PostgreSQLPrevNextPostgreSQLStriim supports PostgreSQL 9.4.x and later versions, Amazon RDS for PostgreSQL, Amazon Aurora with PostgreSQL compatibility, Azure Database for PostgreSQL, Azure Database for PostgreSQL - Flexible Server, Google AlloyDB for PostgreSQL, and Google Cloud SQL for PostgreSQL.PostgreSQL Reader uses the wal2json plugin to read PostgreSQL change data. 1.x releases of wal2jon can not read transactions larger than 1\u00a0GB. We recommend using a 2.x release of wal2json, which does not have that limitation.Striim provides templates for creating applications that read from PostgreSQL and write to various targets. See\u00a0Creating an application using a template for details.PostgreSQL setupStriim reads change data from PostgreSQL.NotePostgreSQL Reader requires logical replication. 
For general information about PostgreSQL logical replication, see https://www.postgresql.org/docs/current/logical-replication.html and select your PostgreSQL version.Before\u00a0Striim applications can use the PostgreSQL Reader adapter, a PostgreSQL administrator with the necessary privileges must set up your database as described for your platform.PostgreSQL setup in Amazon Aurora with PostgreSQL compatibilityYou must set up replication at the cluster level. This will require a reboot, so it should probably be performed during a maintenance window.Amazon Aurora supports logical replication for PostgreSQL compatibility options 10.6 and later. Automated backups must be enabled. To set up logical replication, your AWS user account must have the rds_superuser role.For additional information, see Using PostgreSQL logical replication with Aurora, Replication with Amazon Aurora PostgreSQL, and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.Go to your RDS dashboard, select Parameter groups > Create parameter group.For the Parameter group family, select the aurora-postgresql item that matches your PostgreSQL compatibility option (for example, for PostgreSQL 11, select aurora-postgresql11).For Type, select DB Cluster Parameter Group.For Group Name and Description, enter aurora-logical-decoding, then click Create.Click aurora-logical-decoding.Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.In the left column, click Databases, then click the name of your Aurora cluster, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB cluster parameter group to aurora-logical-decoding, then scroll down to the bottom and click Continue.Select Apply immediately > Modify DB instance. Wait for the cluster's status to change from Modifying to Available, then stop it, wait for the status to change from Stopping to Stopped, then start it.In PSQL, enter the following command to create the replication slot:SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.CREATE ROLE striim WITH LOGIN PASSWORD '******';\nGRANT rds_replication TO striim;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;\nPostgreSQL setup in Amazon RDS for PostgreSQLYou must set up replication in the master instance. This will require a reboot, so it should probably be performed during a maintenance window.Amazon RDS supports logical replication only for PostgreSQL version 9.4.9, higher versions of 9.4, and versions 9.5.4 and higher. 
Thus PostgreSQLReader can not be used with PostgreSQL 9.4\u00a0- 9.4.8 or 9.5\u00a0- 9.5.3 on Amazon RDS.For additional information, see Best practices for Amazon RDS PostgreSQL replication and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.Go to your RDS dashboard, select Parameter groups > Create parameter group, enter posstgres-logical-decoding as the Group name and Description, then click Create.Click postgres-logical-decoding.Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.In the left column, click Databases, then click the name of your database, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB parameter group to postgres-logical-decoding, then scroll down to the bottom and click Continue.Select Apply immediately > Modify DB instance. Wait for the database's status to change from Modifying to Available, then reboot it and wait for the status to change from Rebooting to Available.In PSQL, enter the following command to create the replication slot:SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.CREATE ROLE striim WITH LOGIN PASSWORD '******';\nGRANT rds_replication TO striim;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;\nPostgreSQL setup in AzureAzure Database for PostgreSQL - Hyperscale is not supported because it does not support logical replication.Set up logical decoding using wal2json:for Azure Database for PostgreSQL, see Logical decodingfor Azure Database for PostgreSQL Flexible Server, see Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible ServerSpecify PostgreSQL Reader's properties as follows:Postgres Config: if using wal2json version 2, specify that as described in PostgreSQL Reader propertiesPostgreSQL Reader propertiesReplication slot name: see Logical decodingUsername: see Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portalPassword: the login password for that userPostgreSQL setup in Google Cloud SQL for PostgreSQLSet up logical replication as described in Setting up logical replication and decoding.Specify PostgreSQL Reader's properties as follows:Postgres Config: do not change default, Google Cloud SQL does not support wal2json version 2Replication slot name: the name of the slot created in the \"Create replication slot\" section of Receiving decoded WAL changes for change data capture (CDC)Username: the name of the user created in Create a replication usePassword: the login password for that userPostgreSQL setup in Linux or WindowsThis will require a reboot, so it should probably be performed during a maintenance window.Install the wal2json plugin for the operating system of your PostgreSQL host as described in https://github.com/eulerto/wal2json.Edit postgressql.conf, set the following options, and save the file. The values for max_replication_slots and max_wal_senders may be higher but there must be one of each available for each instance of PostgreSQL Reader. 
max_wal_senders cannot exceed the value of max_connections.wal_level = logical\nmax_replication_slots = 1\nmax_wal_senders = 1Edit pg_hba.conf and add the following records, replacing with the Striim server's IP address. If you have a multi-node cluster, add a record for each server that will run PostgreSQLReader. Then save the file and restart PostgreSQL.local replication striim /0 trust\nlocal replication striim trustRestart PostgreSQL.Enter the following command to create the replication slot (the location of the command may vary but typically is /usr/local/bin in Linux or C:\\Program Files\\PostgreSQL\\\\bin\\ in Windows.pg_recvlogical -d mydb --slot striim_slot --create-slot -P wal2jsonIf you plan to use multiple instances of PostgreSQL Reader, create a separate slot for each.Create a role with the REPLICATION attribute for use by Striim and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and myschema with the name of your schema.CREATE ROLE striim WITH LOGIN PASSWORD '******' REPLICATION;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;\nPostgreSQL setup for schema evolutionUsing Schema evolution with PostgreSQL Reader requires a tracking table in the source database. To create this table, run pg_ddl_setup.sql, which you can find in Striim/conf/DDLCaptureScripts or download from https://github.com/striim/doc-downloads.PostgreSQL Reader propertiesBefore you can use this adapter, PostgreSQL must be configured as described in PostgreSQL setup.If this reader will be deployed to a Forwarding Agent, install the driver as described in Install the PostgreSQL JDBC driver.Striim provides templates for creating applications that read from PostgreSQL and write to various targets. See\u00a0Creating an application using a template for details.propertytypedefault valuenotesBidirectional Marker TableStringWhen performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive.CDDL ActionenumProcessSee Handling schema evolution.CDDL CaptureBooleanFalseSee Handling schema evolution.CDDL Tracking TableStringSee PostgreSQL setup for schema evolution.Connection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Connection URLStringjdbc:postgresql:// followed by the primary server's IP address or network name, a colon, the port number, and a slash followed by the database name. If the database name is omitted, the Username value is used as the database name.PostgreSQL Reader cannot read from a replica (standby) server since the replication slot is in the primary server.Excluded TablesStringChange data for any tables specified here will not be returned. For example, if Tables uses a wildcard, data from any tables specified here will be omitted. Multiple table names and wildcards may be used as for Tables.Filter Transaction BoundariesBooleanTrueWith the default value of True, begin and commit transactions are filtered out. 
Set to False to include begin and commit transactions.Passwordencrypted passwordthe password specified for the username (see Encrypted passwords)Postgres ConfigString{\"ReplicationPluginConfig\": {\"Name\": \"WAL2JSON\", \"Format\": \"1\"}}Change 1 to 2 to use wal2json format 2 (see the wal2json readme for more information).If you are running an older version of Amazon RDS for PostgreSQL that supports only version 1, you may contact AWS technical support to have the wal2json plugin updated.Replication Slot NameStringstriim_slotThe name of the replication slot created as described in\u00a0PostgreSQL setup. If you have multiple instances of PostgreSQLReader, each must have its own slot.Start LSNStringBy default, only new transactions are read. Optionally, specify a\u00a0log sequence number to start reading from that point.If you are using schema evolution (see Handling schema evolution, set a Start LSN only if you are sure that there have been no DDL changes after that point.Handling schema evolutionTablesStringThe table(s) for which to return change data. Tables must have primary keys (required for logical replication).Names are case-sensitive. Specify\u00a0source table names as .
) (The database is specified in the connection URL.)You may specify multiple tables as a list separated by semicolons or using the following wildcards in the schema and/or table names only (not in the database name):%: any series of characters_: any single characterFor example, %.% would include all tables in all schemas in the database specified in the connection URL.The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.All tables specified must have primary keys. Tables without primary keys are not included in output.If any specified tables are missing PostgresReader will issue a warning. If none of the specified tables exists, start will fail with a \"found no tables\" error.If you have multiple instances of PostgreSQLReader, each should read a separate set of tables.UsernameStringthe login name for the user created as described in PostgreSQL setupPostgreSQL Reader WAEvent fieldsThe output data type for PostgreSQLReader is WAEvent. The elements are:metadata: a map including:LSN: log sequence number of the transaction's commitNEXT_LSN: next log sequence number (used for reconnecting to the replication slot after a non-fatal network interruption)OperationName: INSERT, UPDATE, or DELETEWhen schema evolution is enabled, OperationName for DDL events will be Alter, AlterColumns, Create, or Drop. This metadata is reserved for internal use by Striim and subject to change, so should not be used in CQs, open processors, or custom Java functions.PK_UPDATE: included only when an UPDATE changes the primary keySequence: incremented for each operation within a transactionTableName: the name of the table including its schemaTimeStamp: timestamp from the replication subscriptionTxnID: transaction identifierTo retrieve the values for these fields, use the META() function. See Parsing the fields of WAEvent for CDC readers.data: an array of fields, numbered from 0, containing:for an INSERT operation, the values that were insertedfor an UPDATE, the values after the operation was completedfor a DELETE, the value of the primary key and nulls for the other fieldsTo retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.before: for UPDATE operations, contains the primary key value from before the update. When an update changes the primary key value, you may retrieve the previous value using the BEFORE() function.dataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.PostgreSQL Reader simple applicationThe following application will write change data for all tables in all schemas in database mydb to SysOut. Replace striim and ****** with the user name and password for the PostgreSQL account you created for use by PostgreSQLReader (see PostgreSQL setup) and mydb and %.% with the names of the database and tables to be read. If the replication slot name is not striim_slot, specify it using the ReplicationSlotName property.CREATE APPLICATION PostgreSQLTest;\n\nCREATE SOURCE PostgreSQLCDCIn USING PostgreSQLReader (\n Username:'striim',\n Password:'******',\n ConnectionURL:'jdbc:postgresql://192.0.2.10:5432/mydb',\n ReplicationSlotName: 'striim_slot',\n Tables:'%.%'\n) \nOUTPUT TO PostgreSQLCDCStream;\n\nCREATE TARGET PostgreSQLCDCOut\nUSING SysOut(name:PostgreSQLCDC)\nINPUT FROM PostgreSQLCDCStream;\n\nEND APPLICATION PostgreSQLTest;PostgreSQL Reader example outputPostgreSQLReader's output type is WAEvent. 
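As with the other CDC readers, these metadata and data fields are typically consumed in a continuous query. The following is a minimal sketch that keeps only changes to a single table and extracts one column plus some metadata; it assumes the posauthorizations example table used in the output samples that follow, and the type, stream, and CQ names are illustrative.
CREATE TYPE PosAuthPgChangeType (
  merchantId String,
  operation String,
  lsn String
);
CREATE STREAM PosAuthPgChangeStream OF PosAuthPgChangeType;

CREATE CQ FilterPosAuthPgCQ
INSERT INTO PosAuthPgChangeStream
SELECT TO_STRING(data[1]) AS merchantId,
  TO_STRING(META(x,'OperationName')) AS operation,
  TO_STRING(META(x,'LSN')) AS lsn
FROM PostgreSQLCDCStream x
WHERE TO_STRING(META(x,'TableName')) = 'public.posauthorizations';
Here data[1] corresponds to merchant_id, the second column of the example table; adjust the positions, table name, and WHERE clause to match your own schema.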
See WAEvent contents for change data\u00a0and PostgreSQL Reader WAEvent fields for more information.The following are examples of WAEvents emitted by PostgreSQLReader for various operation types. They all use the following table:CREATE TABLE posauthorizations (\n business_name varchar(30),\n merchant_id character varying(35) PRIMARY KEY,\n primary_account bigint,\n pos bigint,\n code character varying(20),\n exp character(4),\n currency_code character(3),\n auth_amount numeric(10,3),\n terminal_id bigint,\n zip bigint,\n city character varying(20));INSERTIf you performed the following INSERT on the table:INSERT INTO posauthorizations VALUES(\n 'COMPANY 1',\n 'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu',\n 6705362103919221351,\n 0,\n '20130309113025',\n '0916',\n 'USD',\n 2.20,\n 5150279519809946,\n 41363,\n 'Quicksand');The WAEvent for that INSERT would be similar to:data: [\"COMPANY 1\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\"20130309113025\",\n\"0916\",\"USD\",2.200,5150279519809946,41363,\"Quicksand\"]\nmetadata: {\"TableName\":\"public.posauthorizations\",\"TxnID\":556,\"OperationName\":\"INSERT\",\n\"LSN\":\"0/152CD58\",\"NEXT_LSN\":\"0/152D1C8\",\"Sequence\":1,\"Timestamp\":\"2019-01-11 16:29:54.628403-08\"}\nUPDATEIf you performed the following UPDATE on the table:UPDATE posauthorizations SET BUSINESS_NAME = 'COMPANY 5A' where pos=0;The WAEvent for that UPDATE would be similar to:data: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\"20130309113025\",\n\"0916\",\"USD\",2.200,5150279519809946,41363,\"Quicksand\"]\nmetadata: {\"TableName\":\"public.posauthorizations\",\"TxnID\":557,\"OperationName\":\"UPDATE\",\n\"LSN\":\"0/152D2E0\",\"NEXT_LSN\":\"0/152D6F8\",\"Sequence\":1,\"Timestamp\":\"2019-01-11 16:31:54.271525-08\"}\nbefore: [null,\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",null,null,null,null,null,null,null,null,null]\nWhen an UPDATE changes the primary key, you may retrieve the old primary key value from the before array.DELETEIf you performed the following DELETE on the table:DELETE from posauthorizations where pos=0;The WAEvent for that DELETE would be similar to:data: [null,\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",null,null,null,null,null,null,null,null,null]\nmetadata: {\"TableName\":\"public.posauthorizations\",\"TxnID\":558,\"OperationName\":\"DELETE\",\n\"LSN\":\"0/152D730\",\"NEXT_LSN\":\"0/152D7C8\",\"Sequence\":1,\"Timestamp\":\"2019-01-11 16:33:09.065951-08\"}\nOnly the primary key value is included.PostgreSQL Reader data type support and correspondencePostgreSQL typeStriim typebigintlongbigseriallongbitstringbit varyingstringbooleanshortbyteastringcharacterstringcharacter varyingstringcidrstringcircleunsupportedcomposite typestringdateDateTimedaterangestringdouble precisiondoubleinetstringintegerintegerint2shortint4integerint4rangestringint8longint8rangestringintegerintegerintervalstringjsonstringjsonbstringlineunsupportedlsegunsupportedmacaddrstringmacaddr8stringmoneystringname (system identifier)stringnumericstring (Infinity, -Infinity, and NaN values will be converted to null)numrangestringpathunsupportedpg_lanstringpointunsupportedpolygonunsupportedrealfloatsmallintshortsmallserialshortserialintegertextstringtimestringtime with time zonestringtimestampdatetimetsrangestringtimestamp with time zonedatetimetstzrangestringtsqueryunsupportedtsvectorunsupportedtxid_snapshotstringuuidstringxmlstringIn this section: PostgreSQLPostgreSQL setupPostgreSQL setup in Amazon Aurora with PostgreSQL compatibilityPostgreSQL setup in 
Amazon RDS for PostgreSQLPostgreSQL setup in AzurePostgreSQL setup in Google Cloud SQL for PostgreSQLPostgreSQL setup in Linux or WindowsPostgreSQL setup for schema evolutionPostgreSQL Reader propertiesPostgreSQL Reader WAEvent fieldsPostgreSQL Reader simple applicationPostgreSQL Reader example outputPostgreSQL Reader data type support and correspondence
SQL Server
MS SQL Reader supports:
SQL Server Enterprise versions 2008, 2012, 2014, 2016, 2017, and 2019
SQL Server Standard versions 2016, 2017, and 2019
Azure SQL Database, S3 tier and above (Standard and Premium tiers; CDC is not supported for Basic tier)
Azure SQL Database Managed Instance
Striim provides templates for creating applications that read from SQL Server and write to various targets. See Creating an application using a template for details.
MSJet reads logical changes directly from SQL Server's transaction logs. Differences between MSJet and MS SQL Reader:
MSJet does not require SQL Server's CDC change tables.
MSJet automatically enables CDC on a per-table basis.
MSJet supports compressed tables (see Learn / SQL / SQL Server / Enable Compression on a Table or Index).
MSJet supports TLS.
MSJet supports reading from replication logs (enabling CDC is not required if replication publisher is enabled).
MS SQL Reader supports reading from a secondary database in an Always On availability group.
MSJet supports Microsoft SQL Server versions 2016 (SP2), 2017, and 2019 running on 64-bit Windows 10 or Windows Server 2012 or later. It is not compatible with SQL Server running on other operating systems or on Windows on ARM.
SQL Server setup
Server-side setup steps vary depending on whether you are using MS SQL Reader or MSJet.
SQL Server setup for MS SQL Reader
MS SQL Reader reads SQL Server change data using the native SQL Server Agent utility. For more information, see About Change Data Capture (SQL Server) on msdn.microsoft.com.
If a table uses a SQL Server feature that prevents change data capture, MS SQL Reader cannot read it. For examples, see the "SQL Server 2014 (12.x) specific limitations" section of CREATE COLUMNSTORE INDEX (Transact-SQL).
In Azure SQL Database managed instances, change data capture requires collation to be set to the default SQL_Latin1_General_CP1_CI_AS at the server, database, and table level.
If you need a different collation, it must be set at the column level.Before Striim applications can use the MS SQL Reader adapter, a SQL Server administrator with the necessary privileges must do the following:If SQL Server is running in a virtual machine in Azure, follow the instructions in\u00a0Configuring an Azure virtual machine running SQL Server.If it is not running already, start SQL Server Agent (see Start, Stop, or Pause the SQL Server Agent Service; if the agent is disabled, see Agent XPs Server Configuration Option). This service must be running for MS SQL Reader to work. If it is not running, you will see an error similar to the following in striim.server.log:2017-01-08 15:40:24,596 @ -ERROR cached5 \ncom.webaction.source.tm.MSSqlTransactionManager.getStartPosition \n(MSSqlTransactionManager.java:389) 2522 : \nCould not position at EOF, its equivalent LSN is NULL \nEnable change data capture on each database to be read using the following commands (for more information, see Learn / SQL / SQL Server / Enable and disable change data capture):for Amazon RDS for SQL Server:EXEC msdb.dbo.rds_cdc_enable_db '';for all others:USE \nEXEC sys.sp_cdc_enable_dbCreate a SQL Server user for use by Striim. This user must use the SQL Server authentication mode, which must be enabled in SQL Server. (If only Windows authentication mode is enabled, Striim will not be able to connect to SQL Server.)Grant the MS SQL Reader user the db_owner role for each database to be read using the following commands:USE \nEXEC sp_addrolemember @rolename=db_owner, @membername=For example, to enable change data capture on the database mydb, create a user striim, and give that user the db_owner role on mydb:USE mydb\nEXEC sys.sp_cdc_enable_db\nCREATE LOGIN striim WITH PASSWORD = 'passwd' \nCREATE USER striim FOR LOGIN striim\nEXEC sp_addrolemember @rolename=db_owner, @membername=striim\nTo confirm that change data capture is set up correctly, run the following command and verify that all tables to read are included in the output:EXEC sys.sp_cdc_help_change_data_captureStriim can capture change data from a secondary database in an Always On availability group. 
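For example, a minimal sketch of the Connection URL value for reading from a readable secondary (the host and port 192.168.1.20:1433 are hypothetical placeholders), as described under Connection URL in MS SQL Reader properties:
ConnectionURL: '192.168.1.20:1433;applicationIntent=ReadOnly'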
In that case, change data capture must be enabled on the primary database.Configuring an Azure virtual machine running SQL ServerWhen SQL Server is running in an Azure virtual machine as described in\u00a0How to provision a Windows SQL Server virtual machine in the Azure portal, do the following before following the steps in\u00a0SQL Server setup\u00a0.Go to the virtual machine's Overview tab.If there is no public IP address, enable it.If there is no DNS name, specify one, and make a note of the full name (..cloudapp.azure.com), as you will need it to configure MSSQLReader.Go to the virtual machine's SQL Server configuration tab.Set SQL connectivity to Public (Internet).Enable SQL Authentication and\u00a0and specify the login name and password MSSQLReader will use to connect to SQL Server.Make note of the Port setting, as you will need it to configure MSSQLReader.Go to the Overview tab and click Connect.When prompted, download the .rdb file, open it in Remote Desktop Connection, and connect to the virtual machine using the resource group's user name and password (not the user name and password you specified for SQL Server authentication).Open the SQL Server Configuration Manager and set the following as necessary:Protocols: Shared Memory enabled, Named Pipes disabled, TCP/IP enabledTCP/IP Properties IP Addresses tab: TCP Dynamic Ports empty, TCP Port matches the SQL Authentication settingLog out of Remote Desktop Connection and continue with the instructions in\u00a0SQL Server setup for MS SQL Reader.SQL Server setup for MSJetNoteIf transactional replication is already running, follow the instructions in SQL Server setup for MSJet when transactional replication is already running.Before Striim applications can use MSJet, a SQL Server administrator with local administrator privileges on the SQL Server host must do the following:Create a Windows user for use by Striim on the SQL Server host (the Windows system that hosts the SQL Server instance containing the databases to be read).Grant that user local Administrator privileges on the SQL Server host and make it a member of the sysadmin role in SQL Server. (The sysadmin role includes db_owner privileges on all databases.)Log in as that user and install a Forwarding Agent or Striim Server on the SQL Server host (see Striim Forwarding Agent installation and configuration). Your Striim Forwarding Agent or Striim Server must be installed and run as a user with local administrative privileges.If Microsoft Visual C++ 2015-2019 Redistributable (x64) version 14.28.29914 or later (see Visual Studio 2015, 2017, 2019, and 2022) and Microsoft OLE DB Driver for SQL Server version 18.3-18.5 (see Release notes for the Microsoft OLE DB Driver for SQL Server) are not already available on the SQL Server host, install or upgrade them.If Replication subscribers are enabled on each database to be read, skip this step.If Replication is not enabled, you must enable CDC logging. In SQL Server, enable change data capture on each database to be read using the following commands, which require the sysadmin role:USE \nEXEC sys.sp_cdc_enable_dbIf Replication subscribers are enabled on each database to be read, skip this step.If Replication is not enabled, stop the Capture and Cleanup jobs on each of those databases (see Administer and Monitor Change Data Capture (SQL Server)). 
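For example, a minimal sketch of stopping both jobs with sys.sp_cdc_stop_job (mydb is a hypothetical database name):
USE mydb
-- stop the capture job that populates the CDC change tables
EXEC sys.sp_cdc_stop_job @job_type = N'capture'
-- stop the cleanup job that purges the CDC change tables
EXEC sys.sp_cdc_stop_job @job_type = N'cleanup'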
This will stop SQL Server from writing to its CDC change tables, which MSJet does not require.If using Windows authentication, skip this step.If using SQL Server authentication, create a SQL Server user for use by MSJet.For more information, see Microsoft's Choose an Authentication Mode and the notes for MSJet's Integrated Security property in MSJet properties.MSJet propertiesGrant the SQL Server user (if using SQL Server authentication) or the Windows user (if using Windows authentication) the db_owner role for each database to be read using the following commands, which require the sysadmin role:USE \nEXEC sp_addrolemember @rolename=db_owner, @membername=If you have not previously performed a full backup on each of the databases to be read, do so now (Full Database Backups (SQL Server)).If Replication subscribers are enabled on each database to be read, skip this step.If Replication is not enabled, configure the following stored procedure to run every five minutes on each database that will be read. This will retain the logs read by this adapter for three days. If that is more than necessary or not enough, you may increase the retentionminutes variable. Note that the longer you retain the logs, the more disk space will be required by SQL Server.declare @retentionminutes int = (3 * 24 * 60) --3 days in minute granularity\n\ndeclare @trans table (begt binary(10), endt binary(10))\ninsert into @trans exec sp_repltrans\n\nselect dateadd(minute, -@retentionminutes, getdate())\n\ndeclare @firstlsn binary(10) = null\ndeclare @lastlsn binary(10) = null\ndeclare @firstTime datetime\ndeclare @lasttime datetime\n\nselect top (1) @lastTime = (select top(1) [begin time] \n from fn_dblog(stuff(stuff(convert(char(24), begt, 1), 19, 0, ':'), 11, 0, ':'), default)),\n @lastlsn = begt\n from @trans\norder by begt desc \n\n--All transactions are older than the retention, no further processing required,\n--everything can be discarded\nif (@lasttime < dateadd(minute,-@retentionminutes, getdate()))\nbegin\n EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1 \nend\nelse\nbegin\n --see if anything can be discarded\n select top (1) @firstTime = (select top(1) [begin time] \n from fn_dblog(stuff(stuff(convert(char(24), begt, 1), 19, 0, ':'), 11, 0, ':'), default)),\n @firstlsn = isnull(@firstlsn, begt)\n from @trans\n order by begt asc\n\n if (@firsttime < dateadd(minute, -@retentionminutes, getdate()))\n begin\n --Since only full VLogs can be truncated we really only need to check the earliest LSN \n --for every Vlog's date\n select @firstlsn = substring(max(t.lsns), 1, 10), \n @lastlsn = substring(max(t.lsns), 11, 10)\n from (select min(begt + endt) as lsns \n from @trans group by substring(begt, 1, 4)) as t\n where (select top(1) [begin time] \n from fn_dblog(stuff(stuff(convert(char(24), t.lsns, 1), 19, 0, ':'), 11, 0, ':'), default)\n where Operation = 'LOP_BEGIN_XACT') < dateadd(minute, -@retentionminutes, getdate())\n\n exec sp_repldone @xactid = @firstlsn, @xact_seqno = @lastlsn, @numtrans = 0, @time = 0,\n @reset = 0 \n end\nendSQL Server setup for MSJet when transactional replication is already runningIf transactional replication is already running, install MSJet on the publisher.Create a Windows user for use by Striim on the SQL Server host (the Windows system that hosts the SQL Server instance containing the databases to be read).Grant that user local Administrator privileges on the SQL Server host and make it a member of the sysadmin role in SQL Server. 
(The sysadmin role includes db_owner privileges on all databases.)Log in as that user and install a Forwarding Agent or Striim Server on the SQL Server host (see Striim Forwarding Agent installation and configuration). Your Striim Forwarding Agent or Striim Server must be installed and run as a user with local administrative privileges.If Microsoft Visual C++ 2015-2019 Redistributable (x64) version 14.28.29914 or later (see Visual Studio 2015, 2017, 2019, and 2022) and Microsoft OLE DB Driver for SQL Server version 18.3-18.5 (see Release notes for the Microsoft OLE DB Driver for SQL Server) are not already available on the SQL Server host, install or upgrade them.If using Windows authentication, skip this step.If using SQL Server authentication, create a SQL Server user for use by MSJet.For more information, see Microsoft's Choose an Authentication Mode and the notes for MSJet's Integrated Security property in MSJet properties.MSJet propertiesGrant the SQL Server user (if using SQL Server authentication) or the Windows user (if using Windows authentication) the db_owner role for each database to be read using the following commands, which require the sysadmin role:USE \nEXEC sp_addrolemember @rolename=db_owner, @membername=If you have not previously performed a full backup on each of the databases to be read, do so now (Full Database Backups (SQL Server)).Creating the QUIESCEMARKER table for MSJetTo allow Striim to quiesce (see QUIESCE) an application that uses MSJet, you must create a QUIESCEMARKER table in SQL Server.The DDL for creating the table is:\u00a0CREATE TABLE QUIESCEMARKER (\n source varchar(100), \n status varchar(100), \n sequence int, \n inittime datetime2, \n updatetime datetime2 default CURRENT_TIMESTAMP, \n approvedtime datetime2, \n reason varchar(100), \nconstraint quiesce_marker_pk primary key (source, sequence));\nThe user created as described in SQL Server setup for MSJet must have SELECT, INSERT, and\u00a0 UPDATE privileges on this table.MS SQL Reader propertiesNoteBefore using this adapter, you must complete the tasks described in SQL Server setup for MS SQL Reader.Before reading from SQL Server with an application deployed to a Forwarding Agent, you must install the required driver as described in Install the Microsoft JDBC Driver in a Forwarding Agent.By default, SQL Server retains three days of change capture data.Striim provides templates for creating applications that read from SQL Server and write to various targets. See\u00a0Creating an application using a template for details.The adapter properties are:propertytypedefault valuenotesAuto Disable Table CDCBooleanFalseSQL Server starts capturing change data when the Striim application is started. With the default setting of False, SQL Server will continue capturing change data after the application is undeployed. If set to True, when the application is undeployed, SQL Server will stop capturing change data and delete all previously captured data from its change tables.Bidirectional Marker TableStringWhen performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). 
This setting is case-sensitive.CompressionBooleanFalseSet to True when the output of the source is the input of a DatabaseWriter target that writes to Cassandra\u00a0{see Cassandra Writer).Connection Pool SizeInteger10typically should be set to the number of tables, with a large number of tables can set lower to reduce impact on MSSQL hostConnection Retry PolicyStringtimeOut=30, retryInterval=30, maxRetries=3With the default setting:Striim will wait for the database to respond to a connection request for 30 seconds (timeOut=30).If the request times out, Striim will try again in 30 seconds (retryInterval=30).If the request times out on the third retry (maxRetries=3), a ConnectionException will be logged and the application will stop.Negative values are not supported.Connection URLString: or \\\\:, for example, 92.168.1.10:1433. If reading from a secondary database in an Always On availability group, use :;applicationIntent=ReadOnly.If the connection requires SSL, see Set up connection to MSSQLReader with SSL in Striim's knowledge base.Database NameStringthe SQL Server database nameExcluded TablesStringIf the Tables string contains wildcards, any tables specified here will be excluded.Fetch SizeInteger0The fetch size is the number of rows that MSSQLReader will fetch at a time. With the default value of 0, this is controlled by SQL Server. You may set this manually: lower values will reduce memory usage, higher values will increase performance.Fetch Transaction MetadataBooleanFalseWith the default value of False, the metadata array will not include TimeStamp or TxnID fields. If set to True, the metadata array will include TimeStamp and TxnID values (note that this will reduce performance). This must be set to True for Monitoring end-to-end lag (LEE) to produce accurate results.Filter Transaction BoundariesBooleanTrueWith the default value of True, begin and commit transactions are filtered out. Set to False to include begin and commit transactions.Integrated SecurityBooleanFalseWhen set to the default value of False, the adapter will use SQL Server Authentication. Set to True to use Windows Authentication, in which case the adapter will authenticate as the user running the Forwarding Agent or Striim server on which it is deployed, and any settings in Username or Password will be ignored. See Choose an Authentication Mode for more information.Passwordencrypted passwordthe password specified for the username (see Encrypted passwords)Polling IntervalInteger5This property controls how often the adapter reads from the source. The value is in seconds. By default, it checks the source for new data every five seconds. If there is new data, the adapter reads it and sends it to the adapter's output stream.Start PositionStringEOFWith the default value EOF, reading starts at the end of the log file (that is, only new data is read). Alternatively, you may specify a specific time (in the Transact-SQL format TIME: YYYY-MM-DD hh:mm:ss:nnn, for example, TIME:2014-10-03 13:32:32.917) or SQL Server log sequence number (for example, LSN:0x00000A85000001B8002D) for the Begin operation of the transaction from which to start reading.See also Switching from initial load to continuous replication.TablesStringThe table(s) for which to return change data. Names must be specified as .
and are case-sensitive. (The server is specified by the IP address in connectionURL and the database by databaseName.)You may specify multiple tables as a list separated by semicolons or with the following wildcards:%: any series of characters_: any single characterFor example, my.% would read all tables in the my schema. The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.At least one table must match the wildcard or start will fail with a \"Could not find tables specified in the database\" error. Temporary tables (which start with #) are ignored.Transaction SupportBooleanFalseIf set to True, MSSQLReader will preserve the order of operations within a transaction. This is required for Bidirectional replication.Transaction support requires one of the cumulative SQL Server updates listed in FIX: The change table is ordered incorrectly for updated rows after you enable change data capture for a Microsoft SQL Server database. If you have not applied one of those updates, or are reading from SQL Server 2008, leave this at its default value of False.UsernameStringthe login name for the user created as described in Microsoft SQL Server setupMSJet propertiesNoteBefore you can use this adapter, the tasks described in SQL Server setup for MSJet must be completed.MSJet must be deployed to a Forwarding Agent (or Striim server) running on the same Windows system as the SQL Server instance that hosts the databases to be read.If the adapter is deployed to a Forwarding Agent, the Microsoft JDBC driver must be installed as described in Install the Microsoft JDBC Driver in a Forwarding Agent.This adapter has the following properties:propertytypedefault valuenotesBidirectional Marker TableStringWhen performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive.CDDL ActionenumProcessSee Handling schema evolution.CDDL CaptureBooleanFalseSee Handling schema evolution.Committed TransactionsStringTrueBy default, only committed transactions are read. Set to False to read both committed and uncommitted transactions.CompressionBooleanFalseSet to True when the output of the source is the input of a DatabaseWriter target that writes to Cassandra\u00a0{see Cassandra Writer).Connection Retry PolicyStringtimeOut=30, retryInterval=30, maxRetries=3With the default setting:Striim will wait for the database to respond to a connection request for 30 seconds (timeOut=30).If the request times out, Striim will try again in 30 seconds (retryInterval=30).If the request times out on the third retry (maxRetries=3), a ConnectionException will be logged and the application will stop.Negative values are not supported.Connection URLStringIP address and port of Microsoft SQL server, separated by a colon: for example, 192.168.1.10:1433. Reading from a secondary database is not supported.MSJet supports TLS 1.2 (see Transport Layer Security (TLS)). No configuration is required on Striim's side.If the connection requires SSL, see Set up connection to MSSQLReader with SSL in Striim's knowledge base.Database NameStringthe SQL Server database nameExcluded TablesStringIf the Tables string contains wildcards, any tables specified here will be excluded.Filter Transaction BoundariesBooleanTrueWith the default value of True, begin and commit transactions are filtered out. 
Set to False to include begin and commit transactions.Integrated SecurityBooleanFalseWhen set to the default value of False, the adapter will use SQL Server Authentication. Set to True to use Windows Authentication, in which case the adapter will authenticate as the user running the Forwarding Agent or Striim server on which it is deployed, and any settings in Username or Password will be ignored. See Choose an Authentication Mode for more information.Passwordencrypted passwordThe password specified for the username (see Encrypted passwords).Quiesce Marker TableStringQUIESCEMARKERSee\u00a0Creating the QUIESCEMARKER table for MSJet. Modify the default value if the quiesce marker table is not in the schema associated with the user specified in the Username.Send Before ImageBooleanTrueset to False to omit before data from outputStart PositionStringEOFWith the default value EOF, reading starts at the end of the log file (that is, only new data is read). Alternatively, you may specify a specific time (in the Transact-SQL format TIME: YYYY-MM-DD hh:mm:ss:nnn, for example, TIME:2014-10-03 13:32:32.917) or SQL Server log sequence number (for example, LSN:0x00000A85000001B8002D) for the Begin operation of the transaction from which to start reading.If you are using schema evolution (see Handling schema evolution, set a Start Position only if you are sure that there have been no DDL changes after that point.Handling schema evolutionSee also Switching from initial load to continuous replication.TablesStringThe table(s) for which to return change data. Names must be specified as .
and are case-sensitive. (The server is specified by the IP address in connectionURL and the database by databaseName.)You may specify multiple tables as a list separated by semicolons or with the following wildcards:%: any series of characters_: any single characterFor example, my.% would read all tables in the my schema. The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.At least one table must match the wildcard or start will fail with a \"Could not find tables specified in the database\" error. Temporary tables (which start with #) are ignored.MSJet supports compressed tables and indexes (see Learn / SQL / SQL Server / Enable Compression on a Table or Index).Transaction Buffer Spillover SizeString1MBWhen Transaction Buffer Type is Memory, this setting has no effect.When Transaction Buffer Type is Disk, the amount of memory that Striim will use to hold each in-process transactions before buffering it to disk. You may specify the size in MB or GB.Transaction Buffer TypeStringDiskWhen Striim runs out of available Java heap space, the application will terminate. Typically this will happen when a transaction includes millions of INSERT, UPDATE, or DELETE events with a single COMMIT.To avoid this problem, with the default setting of Disk, when a transaction exceeds the Transaction Buffer Spillover Size, Striim will buffer it to disk at the location specified by the Transaction Buffer Disk Location property, then process it when memory is available.When the\u00a0setting is Disk and recovery is enabled (see Recovering applications), after the application terminates or is stopped the buffer will be reset, and during recovery any previously buffered transactions will restart from the beginning.Recovering applicationsTo disable transaction buffering, set Transaction Buffer Type to Memory.UsernameStringIf Integrated Security is True, leave blank. If Integrated Security is False, specify the login name for the SQL Server user.SQL Server readers WAEvent fieldsThe output data type for MS SQL Reader and MSJet is WAEvent. The elements are:metadata: a map including:BeginLsn (MSJet only): LSN of Begin operation for the transactionBeginTimestamp (MSJet only): timestamp of Begin operation for the transactionCommitLsn (MSJet only): LSN of Commit operation for the transactionCommitTimestamp (MSJet only): timestamp of Commit operation for the transactionOperationName: INSERT, UPDATE, or DELETEMSJet only: When schema evolution is enabled, OperationName for DDL events will be Alter, AlterColumns, Create, or Drop. This metadata is reserved for internal use by Striim and subject to change, so should not be used in CQs, open processors, or custom Java functions.PartitionId (MSJet only): the partition from which the data was readPK_UPDATE:MS SQL Reader: for UPDATE only, true if the primary key value was changed, otherwise falseMSJet: field not included (see limitations in SQL Server)SQL ServerSEQUENCE: LSN of the operationTableName: fully qualified name of the table . It is present but null for key-sequenced files and key-sequenced tables that have a user-defined primary key.TimeStamp (MS SQL Reader only): timestamp from the CDC log. By default, values are included only for the first record of a new transaction (for more details, see FetchTransactionMetadata in MSSQLReader properties).TransactionName: name of the transactionTxnID: transaction ID. 
When using MS SQL Reader, by default, values are included only for the first record of a new transaction (for more details, see FetchTransactionMetadata in MSSQLReader properties).To retrieve the values for these fields, use the META() function. See Parsing the fields of WAEvent for CDC readers.data: an array of fields, numbered from 0, containing:for an INSERT or DELETE operation, the values that were inserted or deletedfor an UPDATE, the values after the operation was completedTo retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.before (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operationdataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.MS SQL Reader simple applicationThe following application will write change data for the specified table to SysOut. Replace the Username and Password values with the credentials for the account you created for Striim (see Microsoft SQL Server setup), dbo.mytable with the name of the table to be read, and watestdb with the name of the database containing the table.CREATE APPLICATION SQLServerTest;\nCREATE SOURCE SQLServerCDCIn USING MSSqlReader (\n Username:'wauser',\n Password:'password',\n DatabaseName:'watestdb',\n ConnectionURL:'192.168.1.10:1433',\n Tables:'dbo.mytable'\n) \nOUTPUT TO SQLServerCDCStream;\nCREATE TARGET SQLServerCDCOut\n USING SysOut(name:SQLServerCDC)\n INPUT FROM SQLServerCDCStream;\nEND APPLICATION SQLServerTest;MSSQLReader example outputMSSQLReader's output type is WAEvent. See WAEvent contents for change data and SQL Server readers WAEvent fields.The following are examples of WAEvents emitted by MSSQLReader for various operation types. 
They all use the following table:CREATE TABLE POSAUTHORIZATIONS (BUSINESS_NAME varchar(30),\n MERCHANT_ID varchar(100),\n PRIMARY_ACCOUNT bigint,\n POS bigint,\n CODE varchar(20),\n EXP char(4),\n CURRENCY_CODE char(3),\n AUTH_AMOUNT decimal(10,3),\n TERMINAL_ID bigint,\n ZIP integer,\n CITY varchar(20));\nGOINSERTIf you performed the following INSERT on the table:INSERT INTO POSAUTHORIZATIONS VALUES(\n 'COMPANY 1',\n 'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu',\n 6705362103919221351,\n 0,\n '20130309113025',\n '0916',\n 'USD',\n 2.20,\n 5150279519809946,\n 41363,\n 'Quicksand');\nGOThe WAEvent for that INSERT would be:data: [\"COMPANY 1\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\"20130309113025\",\n\"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]\nmetadata: {\"TimeStamp\":0,\"TxnID\":\"\",\"SEQUENCE\":\"0000002800000171001C\",\"PK_UPDATE\":\"false\",\n\"TableName\":\"dbo.POSAUTHORIZATIONS\",\"OperationName\":\"INSERT\"}\nbefore: nullUPDATEIf you performed the following UPDATE on the table:UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 5A' where pos=0;\nGOThe WAEvent for that UPDATE for the row created by the INSERT above would be:data: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\"20130309113025\",\n\"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]\nmetadata: {\"TimeStamp\":0,\"TxnID\":\"\",\"SEQUENCE\":\"00000028000001BC0002\",\"PK_UPDATE\":\"false\",\n\"TableName\":\"dbo.POSAUTHORIZATIONS\",\"OperationName\":\"UPDATE\"}\nbefore: [\"COMPANY 1\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\"20130309113025\",\n\"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]DELETEIf you performed the following DELETE on the table:DELETE from POSAUTHORIZATIONS where pos=0;\nGOThe WAEvent for that DELETE for the row affected by the INSERT above would be:data: [\"COMPANY 5A\",\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",6705362103919221351,0,\"20130309113025\",\n\"0916\",\"USD\",\"2.200\",5150279519809946,41363,\"Quicksand\"]\nmetadata: {\"TimeStamp\":0,\"TxnID\":\"\",\"SEQUENCE\":\"00000028000001DE0002\",\"PK_UPDATE\":\"false\",\n\"TableName\":\"dbo.POSAUTHORIZATIONS\",\"OperationName\":\"DELETE\"}\nbefore: nullNote that the contents of data and before are reversed from what you might expect for a DELETE operation. This simplifies programming since you can get data for INSERT, UPDATE, and DELETE operations using only the data field.MSJet example outputMSJet's output type is WAEvent. 
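As with MS SQL Reader, these fields can be parsed in a downstream CQ. The following is a minimal sketch (the stream names MSJetStream and EmployeeChanges are hypothetical, and it assumes the dbo.employee table created in the commands below); see Parsing the fields of WAEvent for CDC readers for the full syntax:
CREATE CQ ParseEmployeeChanges
INSERT INTO EmployeeChanges
SELECT META(x,'OperationName').toString() AS op,
  META(x,'TableName').toString() AS tableName,
  data[0] AS id,
  data[1] AS name,
  data[2] AS address
FROM MSJetStream x;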
See WAEvent contents for change data and SQL Server readers WAEvent fields.The following commands are examples of various common operation types.create table employee(id int,name char(40),address char(40));\n\nbegin transaction t1;\n\n--insert\ninsert into dbo.employee values('1','Maha 5','Chennai');\n\n--update\nupdate dbo.employee set name='Maha' where ID=1;\n\n--primary key update\nupdate dbo.employee set ID=10 where ID=1;\n\n--delete\ndelete from dbo.employee where id=10;\n\ncommit transaction t1;The WAEvent output resulting from those commands will be similar to:Data: WAEvent{\ndata: [ ] \n metadata: {\"CommitLsn\":\"0x00000028:00000450:0040\",\"TableName\":null,\"TxnID\":\"0000.000003ea\",\n \"OperationName\":\"BEGIN\",\"SEQUENCE\":\"0x00000028:00000450:0020\",\"CommitTimestamp\":1633414675857,\n \"BeginTimestamp\":1633414675823,\"BeginLsn\":\"0x00000028:00000450:0020\",\"TransactionName\":\"t1\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"AA==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: null\n};\nData: WAEvent{\n data: [\"1\",\"Maha 5 \",\"Chennai \"]\n metadata: {\"CommitLsn\":\"0x00000028:00000450:0040\",\"TableName\":\"dbo.employee\",\"TxnID\":\"0000.000003ea\",\n \"OperationName\":\"INSERT\",\"SEQUENCE\":\"0x00000028:00000450:003a\",\"CommitTimestamp\":1633414675857,\"\n PartitionId\":72057594043170816,\"BeginTimestamp\":1633414675823,\"BeginLsn\":\"0x00000028:00000450:0020\",\n \"TransactionName\":\"t1\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"Bw==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: {\"uuidstring\":\"01ec25a3-fb2f-70d1-901a-001c42ca1a64\"}\n};\nData: WAEvent{\n data: [\"1\",\"Maha \",\"Chennai \"]\n metadata: {\"CommitLsn\":\"0x00000028:00000450:0040\",\"TableName\":\"dbo.employee\",\"TxnID\":\"0000.000003ea\",\n \"OperationName\":\"UPDATE\",\"SEQUENCE\":\"0x00000028:00000450:003c\",\"CommitTimestamp\":1633414675857,\n \"PartitionId\":72057594043170816,\"BeginTimestamp\":1633414675823,\"BeginLsn\":\"0x00000028:00000450:0020\",\n \"TransactionName\":\"t1\"}\n userdata: null\n before: [1,\"Maha 5 \",\"Chennai \"]\n dataPresenceBitMap: \"Bw==\"\n beforePresenceBitMap: \"Bw==\"\n typeUUID: {\"uuidstring\":\"01ec25a3-fb2f-70d1-901a-001c42ca1a64\"}\n};\nData: WAEvent{\n data: [\"10\",\"Maha \",\"Chennai \"]\n metadata: {\"CommitLsn\":\"0x00000028:00000450:0040\",\"TableName\":\"dbo.employee\",\"TxnID\":\"0000.000003ea\",\n \"OperationName\":\"UPDATE\",\"SEQUENCE\":\"0x00000028:00000450:003d\",\"CommitTimestamp\":1633414675857,\n \"PartitionId\":72057594043170816,\"BeginTimestamp\":1633414675823,\"BeginLsn\":\"0x00000028:00000450:0020\",\n \"TransactionName\":\"t1\"}\n userdata: null\n before: [1,\"Maha \",\"Chennai \"]\n dataPresenceBitMap: \"Bw==\"\n beforePresenceBitMap: \"Bw==\"\n typeUUID: {\"uuidstring\":\"01ec25a3-fb2f-70d1-901a-001c42ca1a64\"}\n};\nData: WAEvent{\n data: [\"10\",\"Maha \",\"Chennai \"]\n metadata: {\"CommitLsn\":\"0x00000028:00000450:0040\",\"TableName\":\"dbo.employee\",\"TxnID\":\"0000.000003ea\",\n \"OperationName\":\"DELETE\",\"SEQUENCE\":\"0x00000028:00000450:003e\",\"CommitTimestamp\":1633414675857,\"\n PartitionId\":72057594043170816,\"BeginTimestamp\":1633414675823,\"BeginLsn\":\"0x00000028:00000450:0020\",\"\n TransactionName\":\"t1\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"Bw==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: {\"uuidstring\":\"01ec25a3-fb2f-70d1-901a-001c42ca1a64\"}\n};\nData: WAEvent{\ndata: [ ] \n metadata: 
{\"CommitLsn\":\"0x00000028:00000450:0040\",\"TableName\":null,\"TxnID\":\"0000.000003ea\",\n \"OperationName\":\"COMMIT\",\"SEQUENCE\":\"0x00000028:00000450:0040\",\"CommitTimestamp\":1633414675857,\n \"BeginTimestamp\":1633414675823,\"BeginLsn\":\"0x00000028:00000450:0020\",\"TransactionName\":\"t1\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"AA==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: null\n};SQL Server readers data type support and correspondenceSQL Server typeMS SQL Reader TQL typeMSJet TQL typenotesbigintlongintegerbinarybyte[]byte[]not included in\u00a0before array for UPDATE or data array for DELETE operations: see cautionary note belowwhen reading from Azure SQL Database, supported only when values are less than 64 kbbitstringbooleancharstringstringdatestringstringdatetimestringstringdatetime2stringstringdatetimeoffsetstringnot supported in this release (known issue DEV-35885)decimalstringstringfloatdoublestringgeometrynot supportednot supportedimagebyte[]byte[]not included in\u00a0before array for UPDATE or data array for DELETE operations: see cautionary note belowintintegerintegermoneystringstringncharstringstringntextstringstringnot included in\u00a0before array for UPDATE or data array for DELETE operations: see cautionary note belownumericstringstringnvarcharstringstringnvarchar(max)stringstringincluded in before array for UPDATE operations only if value is changed by the updaterealfloatstringrowversionbyte[]byte[]\u00a0smalldatetimestringstringsmallintshortshortsmallmoneystringstringsqlvariantnot supportednot supportedColumns of this type will have value null in WAEventtextstringstringnot included in\u00a0before array for UPDATE or data array for DELETE operations: see cautionary note belowtimestringstringtimestampbyte[]byte[]tinyintshortshortudtstringstringuniqueidentifierstringstringvarbinarybyte[]bytenot included in\u00a0before array for UPDATE or data array for DELETE operations: see cautionary note belowwhen reading from Azure SQL Database, supported only when values are less than 64 kbvarbinary(max)byte[]byte[]not included in\u00a0before array for UPDATE or data array for DELETE operations: see cautionary note belowwhen reading from Azure SQL Database, supported only when values are less than 64 kbvarcharstringstringvarchar(max)stringstringincluded in before array for UPDATE operations only if value is changed by the updatexmlstringnot supportedMS SQL Reader: included in before array for UPDATE operations only if value is changed by the updateMSJet: columns of this type type will have value null in in WAEventCautionWhen all tables being read have primary keys and none of those primary key columns is of type binary, image, ntext, text, varbinary, or varbinary(max), you will not encounter the following issue.When replicating MSSQLReader or MSJet output using DatabaseWriter, if one or more of a table's primary key columns is of type binary, image, ntext, text, varbinary, or varbinary(max), or if a table has no primary key and one more columns of those types,\u00a0UPDATE or DELETE operations may erroneously be replicated to more than one row. This may result in additional errors when subsequent operations try to update or delete the missing or incorrectly updated rows.MSJet limitationsThe Forwarding Agent (or Striim server) on which MSJet is deployed must be running on the same Windows system as the SQL Server instance that hosts the databases to be read.Each Striim server or Forwarding Agent can run only a single MSJet source. 
If you need multiple MSJet sources, deploy each on a different server or Forwarding Agent.
Tables with XML columns are not supported.
Reading from secondary databases is not supported.
Reading from AG listeners is not supported.
Reading from backups is supported only if they are accessible in the location where they were taken.
Reading from encrypted or compressed backups is not supported. Such backups must be uncompressed and unencrypted before Striim can read them.
Debug messages for the Windows-native portion of the adapter may appear in striim/logs/striim_mssqlnativereader.log rather than in striim.server.log.
If you are using both replication and CDC, you must keep the CDC jobs enabled.
Working with non-SQL CDC readers
This section discusses the common characteristics of Striim's non-SQL-based change data capture readers.
Parsing and manipulating the fields of JSONNodeEvent
See JSONNode functions.
Adding user-defined data to JSONNodeEvent streams
Use the PutUserData function in a CQ to add a field to the JSONNodeEvent USERDATA map.
For examples of how to use USERDATA elements in TQL, see\u00a0Modifying output using ColumnMap and the discussions of PartitionKey in\u00a0Kafka Writer and\u00a0S3 Writer.The following example would add the sixth element (counting from zero) in the JSONNodeEvent userdata array to USERDATA as the field \"city\":CREATE CQ AddUserData\nINSERT INTO EnrichedStream\nSELECT putUserData(x, 'city', data[5])\nFROM MongoDBSourceStream x;To remove an element from the USERDATA map, use the removeUserData function (you may specify multiple elements, separated by commas):CREATE CQ RemoveUserData\nINSERT INTO EnrichedStream \nSELECT removeUserData(x, 'city')\nFROM MongoDBSourceStream x;To remove all elements from the USERDATA map, use the clearUserData function:CREATE CQ ClearUserData\nINSERT INTO EnrichedStream \nSELECT clearUserData(x)\nFROM MongoDBSourceStream x;Converting JSONNodeEvent output to a user-defined typeUsing the application and data from\u00a0MongoDBReader example application and output, the following CQ would convert the\u00a0JSONNodeEvent\u00a0data\u00a0array values to a Striim type, which could then be consumed by any other Striim component.CREATE CQ JSON2StriimType\nINSERT INTO EmployeeStream\nSELECT data.get(\"_id\").toString() AS id,\n data.get(\"firstname\").toString() AS firstname,\n data.get(\"lastname\").toString() AS lastname,\n data.get(\"age\").toString() AS age\nFROM MongoDBStream;In this section: Working with non-SQL CDC readersParsing and manipulating the fields of JSONNodeEventAdding user-defined data to JSONNodeEvent streamsConverting JSONNodeEvent output to a user-defined typeSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-27\n", "metadata": {"source": "https://www.striim.com/docs/en/working-with-non-sql-cdc-readers.html", "title": "Working with non-SQL CDC readers", "language": "en"}} {"page_content": "\n\nAzure Cosmos DB using Core (SQL) APISkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)Azure Cosmos DB using Core (SQL) APIPrevNextAzure Cosmos DB using Core (SQL) APIAzure Cosmos DB is a web-based no-SQL database from Microsoft. For more information, see Azure Cosmos DB and Common Azure Cosmos DB use cases, and Striim for Azure Cosmos DB.Cosmos DB Reader reads documents from one or more Cosmos DB containers using Cosmos DB's native Core (SQL) API. Its output stream type is JSONNodeEvent, so it requires targets that can read JSONNodeEvents. Alternatively, you must convert the output to a user-defined type (see Converting JSONNodeEvent output to a user-defined type for an example).This reader sends both inserts and updates as inserts. This means that to support replicating Cosmos DB documents the writer must support upsert mode. 
In upsert mode, a new document (one whose id field does not match that of any existing document) is handled as an insert and an update to an existing documents (based on matching id fields) is handled as an update. For replication, this limits the choice of writers to Cosmos DB Writer and Mongo Cosmos DB Writer. Append-only targets such as files, blobs, and Kafka are also supported so long as they can handle a JSONNodeEvent input stream.Be sure to provision sufficient Request Units (see Request Units in Azure Cosmos DB) to handle the volume of data you expect to read. If you do not, the reader be unable to keep up with the source data.,Cosmos DB setup for Cosmos DB ReaderRequest UnitsProvision sufficient Request Units to handle the volume of data you expect to read. For more information, see Request Units in Azure Cosmos DB.Capturing deletesAzure Cosmos DB's change feed does not capture deletes. To work around this limitation:Set time-to-live (TTL) to -1 on the container(s) to be read. One way to do this is described in Create an Azure Cosmos DB container with unique key policy and TTL. The -1 value means Cosmos DB will not automatically delete any documents, but Cosmos DB Reader will be able to set the TTL for individual documents in order to delete them.Set Cosmos DB Reader's Cosmos DB Config property to:{\"Operations\": {\"SoftDelete\": {\"FieldName\" : \"IsDeleted\",\"FieldValue\" : \"true\"}}}This will add \"IsDeleted\":\"true\" to the output for deleted fields, as shown by the example in Cosmos DB Reader example output. That will cause Cosmos DB Writer to send such operations as DELETEs. If the event is received by a Cosmos DB target, the corresponding fields in the target document will be deleted according to the target\u2019s TTL setting.Cosmos DB Reader propertiesThe Azure Cosmos Java driver used by this reader is bundled with Striim.propertytypedefault valuenotesAccess Keyencrypted passwordThe Primary Key or Secondary Key from the Keys read-only tab of your Cosmos DB account.Connection Retry PolicyStringretryInterval=60, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.ContainersStringThe fully-qualified name(s) of the container(s) to read from, for example, mydb.mycollection. Separate multiple collections by commas. Container names are case-sensitive.You may use the % wildcard, for example, mydb.% The wildcard is allowed only at the end of the string: for example, mydb.prefix% is valid, but mydb.%suffix is not.Note that data will be read only from containers that exist when the Striim application starts, additional containers added later will be ignored until the application is restarted.Cosmos DB ConfigStringOptionally, specify a JSON-format string with additional Cosmos DB options. For an example, see \"Capturing deletes\" in Cosmos DB setup for Cosmos DB Reader.Exclude ContainersStringAny containers to be excluded from the set specified in the Containers property. Specify as for the Containers property.Fetch SizeInteger1000The number of documents the adapter will fetch in a single read operation.ModeStringInitialLoadWith the default InitialLoad setting, will load all existing data using Microsoft Azure Cosmos SDK for Azure Cosmos DB SQL API. 
After initial load is complete, you must stop the application manually. In this mode, Cosmos DB Reader is not a CDC reader. In this mode, recovery will restart from the beginning.Set to\u00a0Incremental to read CDC data continuously using the change feed API (see Change feed in Azure Cosmos DB). In this mode, insert, update, and replace operations are all sent to the target as inserts, since the change feed does not include the operation type. In this mode, recovery will restart from the timestamp of the last operation for each container.Overload Retry PolicyStringmaxRetryTimeInSecs=30, maxRetries=9This policy determines how the reader handles RequestRateTooLargeException errors received from Cosmos DB.\u00a0maxRetryTimeInSecs sets the total maximum time allowed for all retries, after which the application will halt. For more information, see RetryOptions.setMaxRetryAttemptsOnThrottledRequests(int maxRetryAttemptsOnThrottledRequests) Method and ThrottlingRetryOptions.setMaxRetryWaitTime(Duration maxRetryWaitTime) Method.Polling IntervalInteger10The time in milliseconds the adapter will wait before polling the change feed for new documents. With the default value of 10, when there are new documents, the adapter will fetch the number set by the Fetch Size property, then immediately fetch again, repeating until all new documents have been fetched. Then it will wait for 10 milliseconds before polling again.Quiesce on IL CompletionBooleanFalseWith the default value of False, you must stop the application manually after all data has been read.Set to True to automatically quiesce the application after all data has been read (see discussion of QUIESCE in Console commands). When you see on the Apps page that the application is in the Quiescing state, it means that all the data that existed when the query was submitted has been read and that the target(s) are writing it. When you see that the application is in the Quiesced state, you know that all the data has been written to the target(s). At that point, you can undeploy the initial load application and then start another application for continuous replication of new data.Console commandsNoteSet to True only if all targets in the application support auto-quiesce (see Writers overview).Writers overviewService EndpointStringThe URI from the Overview page of your Cosmos DB account.Start TimestampOptionally, in incremental mode, specify a _ts field value from which to start reading. Supported formats (see dateTimeParser for more information):YYYY\nYYYY-MM\nYYYY-MM-DD\nYYYY-MM-DD\"T\"hhTZD\nYYYY-MM-DD\"T\"hh:mmTZD\nYYYY-MM-DD\"T\"hh:mm:ssTZD\nYYYY-MM-DD\"T\"hh:mm:ss.sssTZDThreadPool SizeInteger10The number of threads Striim will use for reading containers. If this number is lower than the number of containers being read, threads will be read in round-robin fashion. If this number equals the number of containers, each thread will read from one container. If this number exceeds the number of containers, only this number of threads will be active.Cosmos DB Reader JSONNodeEvent fieldsThe output type for Cosmos DB Reader is JSONNodeEvent. 
The fields are:data: contains the field names and values of a document, for example:data:{\n \"id\":\"1d40842b-f28d-4b29-b5bf-7168712c9807eanOlpyItG\",\n \"brand\":\"Jerry's\",\n \"type\":\"plums\",\n \"quantity\":\"50\"\n}metadata: contains the following elements:CollectionName: the collection from which the document was readDatabaseName: the database of the collectionDocumentKey: the value of the id field of the documentFullDocumentReceived: value is True if data includes the entire image of the document, False if it does notNamespace:\u00a0.OperationName: in InitialLoad mode, SELECT; in Incremental mode, INSERT or DELETE (for \"soft deletes\")Partitioned: value is True if the operation was on a sharded collectionPartitionKeys: a JsonNode object containing the shard keys and their valuesTimestamp: the Unix epoch time at which the operation was performed in the sourceFor example:metadata:{\n \"CollectionName\":\"container2\",\n \"OperationName\":\"SELECT\",\n \"DatabaseName\":\"testDB\",\n \"DocumentKey\":\"1d40842b-f28d-4b29-b5bf-7168712c9807eanOlpyItG\",\n \"NameSpace\":\"testDB.container2\",\n \"ResumeToken\":\"\",\n \"TimeStamp\":1639999991\n}\nCosmos DB Reader example applicationThe following application will read CDC data from Cosmos DB and write it to MongoDB.CREATE APPLICATION CosmosToMongo recovery 5 second interval;\n\nCREATE SOURCE CosmosSrc USING CosmosDBReader ( \nCosmosDBConfig: '{\\\"Operations\\\": {\\\"SoftDelete\\\": {\\\"FieldName\\\" : \\\"IsDeleted\\\",\\\"FieldValue\\\" : \\\"true\\\"}}}',\n Mode: 'Incremental', \n AccessKey: '*******', \n Containers: 'src.emp', \n ServiceEndpoint: 'https://******.documents.azure.com:443/'\n}\nOUTPUT TO cout;\n\nCREATE TARGET MongoTarget USING MongoDBWriter ( \n collections: 'src.emp,targdb.emp', \n ConnectionURL: ******:27018', \n Password: '******', \n Username: 'myuser', \n AuthDB: 'targdb', \n upsertMode: 'true'\n) \nINPUT FROM cout;\n\nEND APPLICATION CosmosToMongo;Cosmos DB Reader example outputInitial loadWhen Mode is InitialLoad, the Operation Name is reported as SELECT, even though it is actually an insert.JsonNodeEvent{\n data:{\n \"id\":\"1d40842b-f28d-4b29-b5bf-7168712c9807eanOlpyItG\",\n \"brand\":\"Jerry's\",\n \"type\":\"plums\",\n \"quantity\":\"50\"\n } metadata:{\n \"CollectionName\":\"container2\",\n \"OperationName\":\"SELECT\",\n \"DatabaseName\":\"testDB\",\n \"DocumentKey\":\"1d40842b-f28d-4b29-b5bf-7168712c9807eanOlpyItG\",\n \"NameSpace\":\"testDB.container2\",\n \"TimeStamp\":1639999991\n } userdata:null\n } removedfields:null\n};Incremental - insertJsonNodeEvent{\n data:{\n \"id\":\"c6de96ef-d7f0-44a9-a5ab-e3d0298652afNvEDtXTPqO\",\n \"brand\":\"Kraft Heinz\",\n \"type\":\"kool-aid\",\n \"quantity\":\"50\"\n } metadata:{\n \"CollectionName\":\"container2\",\n \"OperationName\":\"INSERT\",\n \"DatabaseName\":\"testDB\",\n \"DocumentKey\":\"c6de96ef-d7f0-44a9-a5ab-e3d0298652afNvEDtXTPqO\",\n \"NameSpace\":\"testDB.container2\",\n \"TimeStamp\":1643876905\n } userdata:null\n } removedfields:null\n};Incremental - deleteNote that this includes the IsDeleted field discussed in Cosmos DB setup for Cosmos DB Reader.JsonNodeEvent{\n data:{\n \"id\":\"c6de96ef-d7f0-44a9-a5ab-e3d0298652afNvEDtXTPqO\",\n \"brand\":\"Kraft Heinz\",\n \"type\":\"kool-aid\",\n \"quantity\":\"50\",\n \"IsDeleted\":\"true\"\n } metadata:{\n \"CollectionName\":\"container2\",\n \"OperationName\":\"DELETE\",\n \"DatabaseName\":\"testDB\",\n \"DocumentKey\":\"c6de96ef-d7f0-44a9-a5ab-e3d0298652afNvEDtXTPqO\",\n 
\"NameSpace\":\"testDB.container2\",\n \"TimeStamp\":1643877020\n } userdata:null\n } removedfields:null\n};Cosmos DB Reader limitationsThe change feed captures field-level updates as document replace operations, so the entire document will be read.The change feed does not capture deletes. Use the \"soft delete\" approach to add an IsDeleted field with value True to target documents that have been deleted in the source (see Cosmos DB setup for Cosmos DB Reader and Cosmos DB Reader example output).If there are multiple replace operations on a document during the polling interval (see Cosmos DB Reader properties), only the last will be read.When a document's id field is changed, the change feed treats it as an insert rather than a replace, so the previous version of the document with the old id field will not be overwritten, and it will remain in the target.Document id fields must be unique across all partitions. Otherwise you may encounter errors or data corruption.Multi-region writes are not supported.The order of operations is guaranteed to be preserved only for events with the same partition key. The order of operations may not be preserved for events with different partition keys.Cosmos DB's change feed timestamp (_ts) resolution is in seconds. Consequently, to avoid events being missing from the target, recovery will start one second earlier than the time of the last recovery checkpoint, so there may be some duplicate events.The change feed does not capture delete operations. Consequently, recovery will not capture those operations, and the deleted documents will remain in the target.Cosmos DB's change feed does not capture changes to deleted documents. Consequently, if the Striim application is offline when a document is changed, and document is deleted before recovery starts, the.changes will not be written to the target during recovery.In this section: Azure Cosmos DB using Core (SQL) APICosmos DB setup for Cosmos DB ReaderCosmos DB Reader propertiesCosmos DB Reader JSONNodeEvent fieldsCosmos DB Reader example applicationCosmos DB Reader example outputInitial loadIncremental - insertIncremental - deleteCosmos DB Reader limitationsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-27\n", "metadata": {"source": "https://www.striim.com/docs/en/azure-cosmos-db-using-core--sql--api.html", "title": "Azure Cosmos DB using Core (SQL) API", "language": "en"}} {"page_content": "\n\nAzure Cosmos DB using Cosmos DB API for MongoDBSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Change Data Capture (CDC)Azure Cosmos DB using Cosmos DB API for MongoDBPrevNextAzure Cosmos DB using Cosmos DB API for MongoDBAzure Cosmos DB is a web-based no-SQL database from Microsoft. 
For more information, see Azure Cosmos DB and Common Azure Cosmos DB use cases, and Striim for Azure Cosmos DB.Striim 4.2.0 supports Cosmos DB API for Mongo DB versions 3.6, 4.0, and 4.2. Version 3.2 is also supported but in initial load mode only.Mongo Cosmos DB Reader reads documents from one or more Cosmos DB containers using the Mongo Java driver (bundled with Striim). Its output stream type is JSONNodeEvent, so it requires targets that can read JSONNodeEvents. Alternatively, you must convert the output to a user-defined type (see Converting JSONNodeEvent output to a user-defined type for an example).This reader sends both inserts and updates as inserts. This means that to support replicating Cosmos DB documents the writer must support upsert mode. In upsert mode, a new document (one whose _id field does not match that of any existing document) is handled as an insert and an update to an existing documents (based on matching _id fields) is handled as an update. For replication, this limits the choice of writers to Cosmos DB Writer and Mongo Cosmos DB Writer. Append-only targets such as files, blobs, and Kafka are also supported so long as they can handle a JSONNodeEvent input stream.Be sure to provision sufficient Request Units (see Request Units in Azure Cosmos DB) to handle the volume of data you expect to read. If you do not, the reader be unable to keep up with the source data.,Cosmos DB setup for Mongo Cosmos DB ReaderSSLBy default, SSL is enabled in Cosmos DB API for Mongo DB. Mongo Cosmos DB Reader uses SSL to connect to Cosmos DB. Other encryption methods are not supported.Server Side RetryServer Side Retry is enabled by default for Cosmos DB API for Mongo DB 3.6 and later. Disabling it may result in rate-limiting errors. For more information, see Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations.Capturing deletesAzure Cosmos DB API for MongoDB's change stream does not capture delete operations. To work around this limitation, do not delete documents directly. Instead, use the following process.Enable soft deletesUsing the MongoDB shell, create a _ts index on the collection with expireAfterSeconds set to -1. For example:mydb.mycollection.createIndex({\"_ts\":1}, {expireAfterSeconds: -1})The -1 value means Cosmos DB will not automatically delete any documents, but Mongo Cosmos DB Reader will be able to set the TTL for individual documents in order to delete them. For more information, see Expire data with Azure Cosmos DB's API for MongoDB.Set Mongo Cosmos DB Reader's Cosmos DB Config property to:{\"Operations\": {\"SoftDelete\": {\"FieldName\" : \"IsDeleted\",\"FieldValue\" : \"true\"}}}This will enable the IsDeleted field for soft-delete operations, as shown by the example in Mongo Cosmos DB Reader example output. When the IsDeleted field value is true, the OperationName value in the metadata of the output event is DELETE even though the operation is actually an UPDATE.Perform a soft deleteInstead of deleting a document, set IsDeleted to true and ttl to the number of seconds after which Cosmos DB will delete the source document. 
For a TTL of five seconds, the syntax is:..updateOne({_id:}, {$set : {\"IsDeleted\":\"true\", \"ttl\": 5}})The source document will be deleted in five seconds and the output will include \"OperationName\":\"DELETE\".For example, to soft-delete the following document from the mydb.employee collection:{\n \"_id\": 1001,\n \"name\": \"Kim\",\n \"lastname\": \"Taylor\",\n \"email\": \"ktaylor@example.com\"\n}You would use the command:mydb.employee.updateOne({_id:1001}, {$set : {\"IsDeleted\":\"true\", \"ttl\": 5}})Immediately after entering that command, the document would be:{\n \"_id\": 1001,\n \"name\": \"Kim\",\n \"lastname\": \"Taylor\",\n \"email\": \"ktaylor@example.com\",\n \"IsDeleted\":\"true\",\n \"ttl\":5\n}Five seconds after entering the command, the source document would be deleted. The output event would be similar to:JsonNodeEvent {\n data:{\n \"_id\":\"1001\",\n \"name\": \"Kim\",\n \"lastname\": \"Taylor\",\n \"email\": \"ktaylor@example.com\",\n \"IsDeleted\":\"true\",\n \"ttl\":5\n } metadata:{\n \"CollectionName\":\"employee\",\n \"OperationName\":\"DELETE\",\n \"DatabaseName\":\"mydb\",\n \"DocumentKey\":{\"id\":\"1001\"},\n \"NameSpace\":\"mydb.employee\",\n \"TimeStamp\":1646819488,\n \"Partitioned\":false,\n \"FullDocumentReceived\":true,\n \"PartitionKeys\":{}\n } userdata:null\n } removedfields:null\n};Mongo Cosmos DB Reader propertiesThe Mongo Java driver used by this reader is bundled with Striim.For best performance and lower Azure ingress and egress charges, Mongo CosmosDB Reader should be run in Striim in Azure.propertytypedefault valuenotesCollectionsStringThe fully-qualified name(s) of the collection(s) to read from, for example, mydb.mycollection. Separate multiple collections by commas.You may use the $ wildcard, for example, mydb.$ The wildcard is allowed only at the end of the string: for example, mydb.prefix$ is valid, but mydb.$suffix is not.Note that data will be read only from collections that exist when the Striim application starts, additional collections added later will be ignored until the application is restarted.Connection Retry PolicyStringretryInterval=60, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Connection URLStringEnter the Host and Port from the Connection String read-only tab of your Azure Cosmos DB API for MongoDB account, separated by a colon, for example, mydb.mongo.cosmos.azure.com:10255.Cosmos DB ConfigStringOptionally, specify a JSON-format string with additional Cosmos DB options. For an example, see \"Capturing deletes\" in Cosmos DB setup for Mongo Cosmos DB Reader.Exclude CollectionsStringAny collections to be excluded from the set specified in the Collections property. Specify as for the Collections property.Fetch SizeInteger1000The number of documents the adapter will fetch in a single read operation.ModeStringInitialLoadWith the default setting, will load all existing data using mongo-driver-sync and stop. In this mode, Mongo Cosmos DB Reader is not a CDC reader. In this mode, recovery will restart from the beginning.Set to\u00a0Incremental to read CDC data continuously using the change streams API (see Change streams in Azure Cosmos DB\u2019s API for MongoDB). 
In this mode, insert, update, and replace operations are all sent to the target as inserts, since the change stream does not include the operation type. See Mongo Cosmos DB Reader limitations for discussion of recovery in this mode.MongoDB ConfigStringOptionally specify a JSON string to define a subset of documents to be selected in InitialLoad mode and for inserts in Incremental mode. See Selecting documents using MongoDB Config.When Mode is Incremental, insert operations are sent only for the defined subset of documents, but updates and deletes are sent for all documents. If Cosmos DB Writer, Mongo Cosmos DB Writer, or MongoDB Writer receive an update or delete for a document not in the subset, the application will halt. To avoid this, set Ignorable Exception Code to RESOURCE_NOT_FOUND for Cosmos DB Writer or KEY_NOT_FOUND for Mongo Cosmos DB Writer or MongoDB Writer. Note that in this case you will have to check the exception store to see if there were any ignored exceptions for documents in the subset.Overload Retry PolicyStringretryInterval=30, maxRetries=10With the default setting, if reading is interrupted because the number of request units (RUs) per second exceeded the provisioned limit, the adapter will try again in 30 seconds (retryInterval). If this attempt is unsuccessful, every 30 seconds it will try again. If the tenth attempt (maxRetries) is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Passwordencrypted passwordThe Primary Password or Secondary Password from the Connection String read-only tab of your Azure Cosmos DB API for MongoDB account.Quiesce on IL CompletionBooleanFalseWith the default value of False, you must stop the application manually after all data has been read.Set to True to automatically quiesce the application after all data has been read (see discussion of QUIESCE in Console commands). When you see on the Apps page that the application is in the Quiescing state, it means that all the data that existed when the query was submitted has been read and that the target(s) are writing it. When you see that the application is in the Quiesced state, you know that all the data has been written to the target(s). At that point, you can undeploy the initial load application and then start another application for continuous replication of new data.Console commandsNoteSet to True only if all targets in the application support auto-quiesce (see Writers overview).Writers overviewThreadPool SizeInteger10The number of threads Striim will use for reading collections. If this number is lower than the number of collections being read, threads will read in round-robin fashion. If this number equals the number of collections, each thread will read from one collection. If this number exceeds the number of collections, only this number of threads will be active.UsernameStringThe Username from the Connection String read-only tab of your Azure Cosmos DB API for MongoDB account.Mongo Cosmos DB Reader JSONNodeEVent fieldsThe output type for Mongo Cosmos DB Reader is JSONNodeEvent. 
The fields are:data: contains the field names and values of a document, for example:data:{\n \"_id\":{\"$oid\":\"620365482b7622580d9e6e43\"},\n \"state\":4,\n \"name\":\"Willard\",\n \"last_name\":\"Valek\",\n \"email\":\"wvalek3@vk.com\",\n \"gender\":\"Male\",\n \"ip_address\":\"67.76.188.26\",\n \"ttl\":-1\n}metadata: contains the following elements:CollectionName: the collection from which the document was readDatabaseName: the database of the collectionDocumentKey: for an unsharded collection, the _id field and its value; for a sharded collection, also the shard key field and its valueFullDocumentReceived: value is True if data includes the entire image of the document, False if it does notNamespace:\u00a0.OperationName: in InitialLoad mode, SELECT; in Incremental mode, INSERT or DELETE (for \"soft deletes\")Partitioned: value is True if the operation was on a sharded collectionPartitionKeys: a JsonNode object containing the shard keys and their valuesTimestamp: In InitialLoad mode, the current time of the Striim server when the document was read. In Incremental mode, 0 (zero), because the change streams API (see Change streams in Azure Cosmos DB\u2019s API for MongoDB does not provide a timestamp.For example:metadata:{\n \"CollectionName\":\"collection1\",\n \"OperationName\":\"SELECT\",\n \"DatabaseName\":\"testDB\",\n \"NameSpace\":\"testDB.collection1\",\n \"id\":{\"$oid\":\"620365482b7622580d9e6e43\"},\n \"TimeStamp\":1644429967516\n}\nMongo Cosmos DB Reader example applicationThe following application will read CDC data from Cosmos DB and write it to MongoDB.CREATE APPLICATION MongoCosmosToMongo;\n\nCREATE SOURCE MongoCosmosSrc USING MongoCosmosDBReader ( \n CosmosDBConfig: '{\\\"Operations\\\": {\\\"SoftDelete\\\": {\\\"FieldName\\\" : \\\"IsDeleted\\\",\\\"FieldValue\\\" : \\\"true\\\"}}}', \n Mode: 'Incremental', \n Username: 'az-cosmos-mongodb', \n ConnectionURL: 'az-cosmos-mongodb.mongo.cosmos.azure.com:10255', \n Collections: 'testDB.collection$', \n Password: '********' \n) \nOUTPUT TO cout;\n\nCREATE TARGET MongoTarget USING MongoDBWriter ( \n collections: 'src.emp,targdb.emp', \n ConnectionURL: ******:27018', \n Password: '******', \n Username: 'myuser', \n AuthDB: 'targdb', \n upsertMode: 'true' \n) \nINPUT FROM cout;\n\nEND APPLICATION MongoCosmosToMongo;Mongo Cosmos DB Reader example outputInitial loadWhen Mode is InitialLoad, the Operation Name is reported as SELECT, even though it is actually an insert.JsonNodeEvent{\n data:{\n \"_id\":{\n \"$oid\":\"620365482b7622580d9e6e43\"\n },\n \"state\":4,\n \"name\":\"Willard\",\n \"last_name\":\"Valek\",\n \"email\":\"wvalek3@vk.com\",\n \"gender\":\"Male\",\n \"ip_address\":\"67.76.188.26\",\n \"ttl\":-1\n } metadata:{\n \"CollectionName\":\"collection1\",\n \"OperationName\":\"SELECT\",\n \"DatabaseName\":\"testDB\",\n \"NameSpace\":\"testDB.collection1\",\n \"id\":{\n \"$oid\":\"620365482b7622580d9e6e43\"\n },\n \"TimeStamp\":1644429967516\n } userdata:null\n } removedfields:null\n};Incremental - insertJsonNodeEvent {\n data:{\n \"id\":\"updated\",\n \"type\":2,\n \"name\":\"Alex\"\n } metadata:{\n \"CollectionName\":\"container1\",\n \"OperationName\":\"INSERT\",\n \"DatabaseName\":\"testDB\",\n \"DocumentKey\":{\n \"id\":\"updated\",\n \"type\":2\n },\n \"NameSpace\":\"testDB.container1\",\n \"TimeStamp\":1646819481\n } userdata:null\n } removedfields:null\n};Incremental - deleteNote that this includes the IsDeleted field discussed in Cosmos DB setup for Mongo Cosmos DB Reader.JsonNodeEvent {\n data:{\n 
\"id\":\"updated\",\n \"type\":2,\n \"name\":\"Alex\",\n \"IsDeleted\":\"Yes\"\n } metadata:{\n \"CollectionName\":\"container1\",\n \"OperationName\":\"DELETE\",\n \"DatabaseName\":\"testDB\",\n \"DocumentKey\":{\n \"id\":\"updated\",\n \"type\":2\n },\n \"NameSpace\":\"testDB.container1\",\n \"TimeStamp\":1646819488\n } userdata:null\n } removedfields:null\n};Mongo Cosmos DB Reader limitationsThe change stream (see Change streams in Azure Cosmos DB\u2019s API for MongoDB) does not capture timestamps for operations.The change stream does not capture deletes. Use the \"soft delete\" approach to add an IsDeleted field with value True to target documents that have been deleted in the source (see Cosmos DB setup for Mongo Cosmos DB Reader and Mongo Cosmos DB Reader example output).The change stream captures field-level updates as document replace operations, so the entire document will be read.If multiple updates are made to a document in a short period of time, the change stream may consolidate them all into a single document update.The order of operations is guaranteed to be preserved only for events with the same shard key in the change stream. The order of operations may not be preserved for events with different shard keys.Multi-region writes are not supported.Document _id fields must be unique across all shards. Otherwise you may encounter errors or data corruption.Recovery (see Recovering applications) from the point at which the application stopped is not possible until the change stream has two resume tokens for each collection. Prior to that point, after the application restarts, Mongo Cosmos DB Reader will start reading from the latest document, resulting in a gap in the target from the time the application stopped until it was restarted. In other words, at-least once processing (A1P) is not guaranteed until after Mongo Cosmos DB Reader has been running for a few hours or days.Recovering applicationsTo tell whether recovery would result in data loss, run the command.SHOW . CHECKPOINT HISTORY. If the output includes any occurrences of ResumeToken[null], when the application is restarted Mongo Cosmos DB Reader will resume reading from the latest document. To avoid this, you may start from scratch with a new initial load. If you need advice or assistance in this situation, Contact Striim support.In this section: Azure Cosmos DB using Cosmos DB API for MongoDBCosmos DB setup for Mongo Cosmos DB ReaderCapturing deletesMongo Cosmos DB Reader propertiesMongo Cosmos DB Reader JSONNodeEVent fieldsMongo Cosmos DB Reader example applicationMongo Cosmos DB Reader example outputInitial loadIncremental - insertIncremental - deleteMongo Cosmos DB Reader limitationsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-15\n", "metadata": {"source": "https://www.striim.com/docs/en/azure-cosmos-db-using-cosmos-db-api-for-mongodb.html", "title": "Azure Cosmos DB using Cosmos DB API for MongoDB", "language": "en"}} {"page_content": "\n\nMongoDBStriim supports MongoDB versions 3.6 through 6.3, and supports both MongoDB and MongoDB Atlas on AWS, Azure, and Google Cloud Platform.With MongoDB 4.2 or later, MongoDB Reader reads from MongoDB change streams rather than the oplog. See MongoDB Manual > Change Streams. When reading from change streams, MongoDB Reader also:can read from multiple shards. (With MongoDB 4.1 or earlier, a separate MongoDB Reader is needed for each shard.)supports transactions, providing transaction metadata and creating individual events for each operation in a transaction. See MongoDB Manual > Transactions for more information.MongoDB Reader applications created in releases of Striim prior to 4.2 will continue to read from the oplog after upgrading. To switch to change streams, see \"Changes that may require modification of your TQL code, workflow, or environment\" in the Release notes.Striim provides a template for creating applications that read from MongoDB and write to Cosmos DB. See Creating an application using a template for details.MongoDB setupMongoDBReader reads data from the Replica Set Oplog, so to use it you must be running a replica set (see Deploy a Replica Set or Convert a Standalone to a Replica Set, or your MongoDB cloud provider's documentation) on each shard to be read. For more information, see Replication and Replica Set Data Synchronization.For all versions of MongoDB, the user specified in MongoDBReader's Username property must have read access to the config database (see MongoDB Manual > Config Database).For MongoDB Atlas, for both InitialLoad and Incremental mode, the user specified in MongoDBReader's Username property must have the atlasAdmin role (see Built-in Roles), as this is required to read the oplog.For other versions of MongoDB, in InitialLoad mode, the user specified in MongoDBReader's Username property must have read access to all databases containing the specified collections.In Incremental mode, for MongoDB 4.2 and later, the user specified in MongoDBReader's Username property must have changeStream and find privileges on all the collections of the cluster. You may want to create a role with these two privileges (see MongoDB Manual > User-Defined Roles).In Incremental mode, for MongoDB 4.1 and earlier, the user specified in MongoDBReader's Username property must have read access to the local database and the oplog.rs collection. The oplog is a capped collection, which means that the oldest data is automatically removed to keep it within the specified size. To support recovery, the oplog must be large enough to retain all data that may need to be recovered.
See Oplog Size and Change Oplog Size for more information.To support recovery (see\u00a0Recovering applications), for all versions of MongoDB, the replica set's oplog must be large enough to retain all the events generated while Striim is offline.Using SSL or Kerberos or X.509 authentication with MongoDBIf you have an on-premise MongoDB deployment with your own certificate authority, set the Security Config properties as discussed below.To set up SSL in MongoDB, see MongoDB Manual > Configure mongod and mongos for TLS/SSL.To set up Kerberos in MongoDB, see MongoDB Manual > Kerberos Authentication.To set up X.509 in MongoDB, see MongoDB Manual > Use x.509 Certificates to Authenticate Clients.To secure your authentication parameters, store the entire Security Config string in a vault (see use Using vaults). For example, assuming your Kerberos realm is MYREALM.COM, its Key Distribution Center (KDC) is kerberos.realm.com, the path to the SSL trust store is /cacerts, the path to the SSL keystore file is /client.pkcs12, and the password for both stores is MyPassword, the Striim console commands to store the Security Config string with the key SSLKerberos in a vault named MongoDBVault would be:CREATE VAULT MongoDBVault;\nWRITE INTO MongoDBVault (\n vaultKey: \"SSLKerberos\",\n vaultValue: \"RealmName:MYREALM.COM;\n KDC:kerberos.myrealm.com;\n KeyStore:/keystore.pkcs12;\n TrustStore:/cacerts;\n trustStorePassword:MyPassword;\n KeyStorePassword:MyPassword\"\n);Enter READ ALL FROM MongoDBVault; to verify the contents.In TQL or the Flow Designer, you would then specify the Security Config as [[MongoDBVault.SSLKerberos]].The following are examples for each authentication option.Kerberos authenticationWithout SSL:CREATE VAULT MongoDBVault;\nWRITE INTO MongoDBVault (\n vaultKey: \"Kerberos\",\n vaultValue : \"RealmName:MYREALM.COM;\n KDC:kdc.myrealm.com\"\n);With SSL:CREATE VAULT MongoDBVault;\nWRITE INTO MongoDBVault (\n vaultKey: \"KerberosSSL\",\n vaultValue : \"RealmName:MYREALM.COM;\n KDC:kdc.myrealm.com;\n KeyStore:UploadedFiles/keystore.ks;\n KeyStorePassword:MyPassword;\n TrustStore:Platform/UploadedFiles/truststore.ks;\n TrustStorePassword:MyPassword\"\n);If required, also specify:JAAS_CONF: the path to and name of the uploaded JAAS configuration file, for example, UploadedFiles/jaas.confJAAS_ENTRY: the name of an entry in jaas.conf, for example, striim_userKeyStoreType:PKCS12: specify this if the KeyStore is PKCS12 rather than JKSany of the elements listed below for SSLSSLIf you are using MongoDB on premise with SSL, use Security Config to configure key store and trust stores.CREATE VAULT MongoDBVault;\nWRITE INTO MongoDBVault (\n vaultKey: \"SSL\",\n vaultValue : \"KeyStore:UploadedFiles/keystore.ks;\n TrustStore:UploadedFiles/truststore.ks;\n TrustStorePassword:MyPassword;\n KeyStorePassword:MyPassword\"\n);If required, also specify:SecureSocketProtocol: specify if a specific protocol is requiredTrustStoreType:PKCS12: specify this if the TrustStore is PKCS12 rather than JKSX.509 authenticationCREATE VAULT MongoDBVault;\nWRITE INTO MongoDBVault (\n vaultKey: \"X509\",\n vaultValue : \"KeyStore:UploadedFiles/keystore.ks;\n KeyStorePassword:MyPassword\"\n);If required, also specify:KeyStoreType:PKCS12: specify this if the KeyStore is PKCS12 rather than JKSConnecting to MongoDB Atlas clusters using private endpoints or network peeringStriim provides support for connecting to MongoDB Atlas clusters using private endpoints or network peering.Private endpoints: MongoDB Atlas\u00a0supports private 
endpoints on dedicated clusters. For example, you may configure a private endpoint connection to a Mongo cluster in Atlas from Striim Platform installed in a Google Cloud Platform VM using a private aware endpoint.Network peering: MongoDB Atlas supports network peering connections. Network peering establishes a private connection between your\u00a0Atlas\u00a0VPC\u00a0and your cloud provider's\u00a0VPC. The connection isolates traffic from public networks for added security.There are no Striim-specific configuration steps required; the configuration steps you need to perform are in MongoDB Atlas.See the following MongoDB Atlas doc topics:MongoDB Atlas > Configure Security Features for Database Deployments > Set Up a Private EndpointMongoDB Atlas > Configure Security Features for Database Deployments > Set Up a Network Peering ConnectionMongoDB Reader propertiesThe MongoDB driver is bundled with Striim, so no installation is necessary.The adapter properties are:propertytypedefault valuenotesauthDBStringadminSpecify the authentication database for the specified username. If not specified, uses the\u00a0admin database.authTypeenumDefaultSpecify the authentication mechanism used by your MongoDB instance (see MongoDB Manual > Authentication). The Default setting uses MongoDB's default authentication mechanism, SCRAM. Other supported choices are\u00a0GSSAPI, MONGODBCR, MONGODBX509, PLAIN,\u00a0SCRAMSHA1, and SCRAMSHA256.\u00a0Set to NoAuth if authentication is not enabled.\u00a0Set to GSSAPI if you are using Kerberos.CollectionsStringThe fully-qualified name(s) of the MongoDB collection(s) to read from, for example, mydb.mycollection. Separate multiple collections by commas.You may use the $ wildcard, for example, mydb.$ The wildcard is allowed only at the end of the string: for example, mydb.prefix$ is valid, but mydb.$suffix is not.Note that data will be read only from collections that exist when the Striim application starts, additional collections added later will be ignored until the application is restarted.Connection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Connection URLStringTo use an Azure private endpoint to connect to MongoDB Atlas, see Specifying Azure private endpoints in sources and targets.With MongoDB 4.2 or higher or with earlier versions when Mode is InitialLoadand you are connecting to MongoDB with DNS SRV, specify mongodb+srv:///, for example, mongodb+srv://abcdev3.gcp.mongodb.net/mydb. If you do not specify a database, the connection will use admin.and you are connecting to a sharded MongoDB instance with mongos, specify : of the mongos instance.and you are connecting to a sharded instance of MongoDB without mongos, specify : for all instances of the replica set, separated by commas. For example: 192.168.1.1:27107, 192.168.1.2:27107, 192.168.1.3:27017..With MongoDB 4.1 or earlier when Mode is Incrementaland you are connecting to an unsharded instance of MongoDB, specify : for all instances of the replica set, separated by commas. For example: 192.168.1.1:27107, 192.168.1.2:27107, 192.168.1.3:27017..and you are connecting to a sharded instance of MongoDB, create a separate source for each shard. 
For each reader, specify all the instances of the replica set, separated by commas. For example: 192.168.1.1:27107, 192.168.1.2:27107, 192.168.1.3:27017..Exclude CollectionsStringAny collections to be excluded from the set specified in the Collections property. Specify as for the Collections property.Full Document Update LookupBooleanFalseWhen Mode is InitialLoad, this setting is ignored and will not appear in the Flow Designer.With the default setting of False, for UPDATE events the JSONNodeEvent data field will contain only the _id and modified values.Set to True to include the entire document. Note that the document will be the current version, and depending on other write operations that may have occurred between the update and the lookup, the returned document may differ significantly from the document at the time of the update. Enabling this option setting may affect performance, since MongoDB Reader will have to call the database to fetch more data.ModeStringInitialLoadWith the default setting, will load all existing data using db.collection.find()and stop. In this mode, MongoDBReader is not a CDC reader.Set to\u00a0Incremental to read CDC data continuously. In this mode, when reading from MongoDB 4.2 and later, MongoDB Reader will read change streams (see MongoDB Manual > Change Streams). When reading from earlier versions of MongoDB, it will read the oplog (see MongoDB Manual > Replica Set Oplog).MongoDB ConfigStringOptionally specify a JSON string to define a subset of documents to be selected in InitialLoad mode and for inserts in Incremental mode. See Selecting documents using MongoDB Config.When Mode is Incremental, insert operations are sent only for the defined subset of documents, but updates and deletes are sent for all documents. If Cosmos DB Writer, Mongo Cosmos DB Writer, or MongoDB Writer receive an update or delete for a document not in the subset, the application will halt. To avoid this, set Ignorable Exception Code to RESOURCE_NOT_FOUND for Cosmos DB Writer or KEY_NOT_FOUND for Mongo Cosmos DB Writer or MongoDB Writer. Note that in this case you will have to check the exception store to see if there were any ignored exceptions for documents in the subset.Passwordencrypted passwordThe password for the specified Username.Quiesce on IL CompletionBooleanFalseWith the default value of False, you must stop the application manually after all data has been read.Set to True to automatically quiesce the application after all data has been read (see discussion of QUIESCE in Console commands). When you see on the Apps page that the application is in the Quiescing state, it means that all the data that existed when the query was submitted has been read and that the target(s) are writing it. When you see that the application is in the Quiesced state, you know that all the data has been written to the target(s). At that point, you can undeploy the initial load application and then start another application for continuous replication of new data.Console commandsNoteSet to True only if all targets in the application support auto-quiesce (see Writers overview).Writers overviewRead PreferenceStringprimaryPreferredSee\u00a0Read Preference Modes. 
Supported values are primary, primaryPreferred, secondary, secondaryPreferred, and nearest.Security ConfigStringSee Using SSL or Kerberos or X.509 authentication with MongoDB.SSL EnabledBooleanFalseIf MongoDB requires SSL or individual MongoDB Atlas nodes are specified in the Connection URL, set to True (see\u00a0Configure mongod and mongos for TLS/SSL). If you have an on-premise MongoDB deployment with your own certificate authority, see Using SSL or Kerberos or X.509 authentication with MongoDB.Start TimestampStringLeave blank to read only new data. Specify a UTC DateTime value (for example, 2018-07-18T04:56:10) to read all data from that time forward or to wait to start reading until a time in the future.\u00a0If the MongoDB and Striim servers are in different time zones, adjust the value to match the Striim time zone. If the oplog no longer contains data back to the specified time,\u00a0reading will start from the beginning of the oplog.If milliseconds are specified (for example,\u00a02017-07-18T04:56:10.999), they will be interpreted as the incrementing ordinal for the MongoDB timestamp (see\u00a0Timestamps).UsernameStringA MongoDB user with access as described in MongoDB setup.Selecting documents using MongoDB ConfigYou can use the MongoDB Config property to define a subset of documents to be selected.The subset is defined by a query in JSON format.Documents may be selected from multiple collections.Specify the wildcard $ at the end of a collection name to match multiple collections using a single QueryClause.All collections referenced in the query must be specified in the Collections property.Multiple queries may be specified.OperatorsThe logical operators AND and OR are supported for nested expressionsThe following comparison operators are supported:=: equals!=: does not equal<: is less than<=: is less than or equal to>: is greater than>=: is greater than or equal toThe data types supported for comparison using FieldValue are Boolean, String and Numeric.JSON fieldsThe filter criteria for all the collections should be provided as an object inside the QueryClause field.QueryClause can contain multiple JSON objects with the fully qualified name (or pattern) of the collection and its filter criteria as key and value respectively.If a collection matches more than one pattern, the filter criteria provided with the first pattern will be considered.The Filter object contains the filter criteria of the query clause. Simple expressions and nested expressions are supported for Filter.The leaf fields of the MongoDBConfig JSON object are FilterField and FilterValue. The field names in a document and their filter values can be provided here. These are combined by a comparison operator field called Operator.A simple expression involves directly providing the Operator, FieldName and FieldValue to the Filter field as an object.Multiple nested expressions are created by combining individual Filter JSON objects using logical operators.FieldName is the JSON path of the field. 
Dot notation can be used to provide the FieldName of a nested field.MongoDB Config syntax and examplesBasic syntax:{\n \"QueryClause\": {\n \".\": {\n \"Filter\": {\n \"\": \"\"\n }\n }\n }\n}For example, to select documents with the city value Bangalore from the collection MyCollection in the database MyDB:{\n \"QueryClause\": {\n \"MyDB.MyCollection\": {\n \"Filter\": {\n \"city\": \"Bangalore\"\n }\n }\n }\n}An example using the logical operator OR to select documents matching multiple cities:{\n \"QueryClause\": {\n \"MyDB.MyCollection\": {\n \"Filter\": {\n \"OR\": [\n {\n \"operator\": {\n \"city\": \"Bangalore\"\n }\n },\n {\n \"operator\": {\n \"city\": \"Bangalore\"\n }\n }\n ]\n }\n }\n }\n}Complex MongoDB Config exampleSpecifying the JSON query below in MogoDB Config will select the documents that match the following criteria:CollectionCriteriamongodb.employeeDocuments that have a field named City with Chennai as the value.Collections whose name start with mongodb.depDocuments that match both the following conditionsdo not have a field Name with value Accountshave a field named State with value Tamil Nadumongodb.payrollDocuments that have either of the following conditionsdo not have a field Source with value Fmatch both the following conditionshave a field named Age with a value greater than 30have a field named City with a value Bangalore as the value{\n \"QueryClause\": {\n \"mongodb.employee\": {\n \"Filter\": {\n \"City\": \"Chennai\"\n }\n },\n \"mongodb.dep$\": {\n \"Filter\": {\n \"and\": [\n {\n \"!=\": {\n \"Name\": \"Accounts\"\n }\n },\n {\n \"State\": \"Tamil Nadu\"\n }\n ]\n }\n },\n \"mongodb.payroll\": {\n \"Filter\": {\n \"or\": [\n {\n \"!=\": {\n \"Source\": \"F\"\n }\n },\n {\n \"and\": [\n {\n \">\": {\n \"Age\": 30\n }\n },\n {\n \"City\": \"Bangalore\"\n }\n ]\n }\n ]\n }\n }\n }\n}MongoDBReader JSONNodeEvent fieldsThe output type for MongoDBReader is JSONNodeEvent. The fields are:data: contains the field names and values of a document, for example:data: {\"_id\":2441,\"company\":\"Striim\",\"city\":\"Palo Alto\"}Updates include only the modified values. Deletes include only the document ID.removedfields: contains the names of any fields deleted by the $unset function. If no fields were deleted, the value of removedfields is null. 
For example:removedfields: {\"myField\":true}Or if no fields have been removed:removedfields: nullmetadata: contains the following elements:CollectionName: the collection from which the document was readDatabaseName: the database of the collectionDocumentKey: the document ID (same as the _id value in data)FullDocumentReceived: value is True if data includes the entire image of the document, False if it does notLsid (for MongoDB 4.2 or later only, for operations that are part of a multi-document transaction): the logical session identifier of the transaction sessionNamespace:\u00a0.OperationName: in InitialLoad mode, SELECT; in Incremental mode, INSERT, UPDATE, or DELETE (with MongoDB 4.1 or earlier, operations within a transaction are not included; see Oplog does not record operations within a transaction)Partitioned: value is True if the operation was on a sharded collectionPartitionKeys: a JsonNode object containing the shard keys and their valuesTimestamp: in InitialLoad mode, the current time of the Striim server when the document was read; in Incremental mode, the MongoDB timestamp when the operation was performedTxnNumber (for MongoDB 4.2 or later only, for operations that are part of a multi-document transaction): the transaction numberFor example:metadata: {\"CollectionName\":\"employee\",\"OperationName\":\"SELECT\",\"DatabaseName\":\"test\",\n \"DocumentKey\":1.0,\"NameSpace\":\"test.employee\",\"TimeStamp\":1537433999609}\nMongoDBReader example application and outputThe following Striim application will write change data for the specified collection to SysOut. To run this yourself, replace striim and ****** with the user name and password for the MongoDB user account discussed in MongoDB setup, specify the correct connection URL for your instance, and replace\u00a0mydb with the name of your database.CREATE APPLICATION MongoDBTest;\n\nCREATE SOURCE MongoDBIn USING MongoDBReader (\n Username:'striim',\n Password:'******',\n ConnectionURL:'192.168.1.10:27107',\n Collections:'mydb.employee'\n) \nOUTPUT TO MongoDBStream;\n\nCREATE TARGET MongoDBCOut\nUSING SysOut(name:MongoDB)\nINPUT FROM MongoDBStream;\n\nEND APPLICATION MongoDBTest;With the above application running, the following MongoDB shell commands:use mydb;\ndb.employee.insertOne({_id:1,\"firstname\":\"Larry\",\"lastname\":\"Talbot\",\"age\":10,\"comment\":\"new record\"});\ndb.employee.updateOne({_id:1},{$set:{ \"age\":40, \"comment\":\"partial update\"}});\ndb.employee.deleteOne({_id:1});would produce output similar to the following:data: {\"_id\":1,\"firstname\":\"Larry\",\"lastname\":\"Talbot\",\"age\":10,\"comment\":\"new record\"}\nmetadata: {\"CollectionName\":\"employee\",\"OperationName\":\"INSERT\",\"DatabaseName\":\"mydb\",\"DocumentKey\":1,\n\"NameSpace\":\"mydb.employee\",\"TimeStamp\":1537250474, \"Partitioned\":false,\"FullDocumentReceived\":true,\n\"PartitionKeys\":{}}\n...\ndata: {\"_id\":1.0,\"age\":40,\"comment\":\"partial update\"}\nmetadata: {\"CollectionName\":\"employee\",\"OperationName\":\"UPDATE\",\"DatabaseName\":\"mydb\",\"DocumentKey\":1,\"\nNameSpace\":\"mydb.employee\",\"TimeStamp\":1537250474, \"Partitioned\":false,\"FullDocumentReceived\":false,\n\"PartitionKeys\":{}}\n...\ndata: {\"_id\":1}\nmetadata: {\"CollectionName\":\"employee\",\"OperationName\":\"DELETE\",\"DatabaseName\":\"mydb\",\"DocumentKey\":1,\n\"NameSpace\":\"mydb.employee\",\"TimeStamp\":1537250477, \"Partitioned\":false,\"FullDocumentReceived\":false,\n\"PartitionKeys\":{}}\nNote that output for the \"partial\" update and 
delete operations includes only the fields specified in the shell commands. See\u00a0Replicating MongoDB data to Azure CosmosDB for discussion of the issues this can cause when writing to targets and how to work around those issues.Replicating MongoDB data to Azure CosmosDBTo replicate one or many MongoDB collections to Cosmos DB, specify multiple collections in the Collections properties of MongoDBReader and CosmosDBWriter. You may use wildcards ($ for MongoDB, % for Cosmos DB) to replicate all collections in a database, as in the example below, or specify multiple collections manually, as described in the notes for Cosmos DB Writer's Collections property.You must create the target collections in Cosmos DB manually. The partition key names must match one of the fields in the MongoDB documents.Data will be read only from collections that exist when the source starts. Additional collections added later will be ignored until the source is restarted. When the target collection is in a fixed container (see\u00a0Partition and scale in Azure Cosmos DB), inserts, updates, and deletes are handled automatically. When the target collection is in an unlimited container, updates require special handling and deletes must be done manually, as discussed below.If you wish to run the examples, adjust the\u00a0MongoDB Reader properties and\u00a0Cosmos DB Writer\u00a0properties to reflect your own environment.When the target collection is in a fixed containerNoteWriting to a target collection in a fixed container will not be possible until Microsoft fixes the bug discussed in this Azure forum discussion.In Cosmos DB, create database mydb containing the collection employee with partition key /name\u00a0 (note that the collection and partition names are case-sensitive).In MongoDB, create the collection employee and populate it as follows:use mydb;\ndb.employee.insertMany([\n{_id:1,\"name\":\"employee1\",\"company\":\"Striim\",\"city\":\"Madras\"},\n{_id:2,\"name\":\"employee2\",\"company\":\"Striim\",\"city\":\"Seattle\"},\n{_id:3,\"name\":\"employee3\",\"company\":\"Striim\",\"city\":\"California\"}\n]);In Striim, run the following application to perform the initial load of the existing data:CREATE APPLICATION Mongo2CosmosInitialLoad; \n \nCREATE MongoDBIn USING MongoDBReader (\n Username:'striim',\n Password:'******',\n ConnectionURL:'',\n Collections:'mydb.$'\n )\nOUTPUT TO MongoDBStream;\n \nCREATE TARGET WriteToCosmos USING CosmosDBWriter (\n ServiceEndpoint: '',\n AccessKey: '',\n Collections: 'mydb.$,mydb.%'\n)\nINPUT FROM MongoDBStream;\n \nEND APPLICATION Mongo2CosmosInitialLoad;After the application is finished, the Cosmos DB employee collection should contain the following:{\n \"_id\": 1,\n \"name\": \"employee1\",\n \"company\": \"striim\",\n \"city\": \"madras\",\n \"id\": \"1.0\",\n \"_rid\": \"HnpSALVXpu4BAAAAAAAACA==\",\n \"_self\": \"dbs/HnpSAA==/colls/HnpSALVXpu4=/docs/HnpSALVXpu4BAAAAAAAACA==/\",\n \"_etag\": \"\\\"0800b33d-0000-0000-0000-5bb5aafa0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538632442\n}\n{\n \"_id\": 2,\n \"name\": \"employee2\",\n \"company\": \"striim\",\n \"city\": \"seattle\",\n \"id\": \"2.0\",\n \"_rid\": \"HnpSALVXpu4BAAAAAAAADA==\",\n \"_self\": \"dbs/HnpSAA==/colls/HnpSALVXpu4=/docs/HnpSALVXpu4BAAAAAAAADA==/\",\n \"_etag\": \"\\\"2b00f87b-0000-0000-0000-5bb5aafb0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538632443\n}\n{\n \"_id\": 3,\n \"name\": \"employee3\",\n \"company\": \"striim\",\n \"city\": \"california\",\n \"id\": 
\"3.0\",\n \"_rid\": \"HnpSALVXpu4BAAAAAAAAAA==\",\n \"_self\": \"dbs/HnpSAA==/colls/HnpSALVXpu4=/docs/HnpSALVXpu4BAAAAAAAAAA==/\",\n \"_etag\": \"\\\"2700ad2a-0000-0000-0000-5bb5aafb0000\\\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538632443\n}\nIn Striim, run the following application to continuously replicate new data from MongoDB to Cosmos DB:CREATE APPLICATION Mongo2CosmosIncrementalFixedContainer; \n \nCREATE MongoDBIn USING MongoDBReader (\n Username:'striim',\n Password:'******',\n ConnectionURL:'',\n authType: 'NoAuth',\n Mode:'Incremental',\n FullDocumentUpdateLookup:'true',\u00a0\n startTimestamp: ''\n Collections:'mydb.$'\n )\nOUTPUT TO MongoDBStream;\n\nCREATE TARGET WriteToCosmos USING CosmosDBWriter (\n ServiceEndpoint:'',\n AccessKey:'',\n Collections:'mydb.$,mydb.%'\n)\nINPUT FROM MongoDBStream ;\n\nCREATE CQ SelectDeleteOperations\nINSERT INTO DeleteOpsStream\nSELECT META(MongoDBStream,\"DatabaseName\"),\n META(MongoDBStream,\"CollectionName\"),\n META(MongoDBStream,\"DocumentKey\")\nFROM MongoDBStream\nWHERE META(MongoDBStream,\"OperationName\").toString() = \"DELETE\";\n\nCREATE TARGET WriteIgnoredDeleteOps USING FileWriter (\n filename:'DeleteOperations.json'\n)\nFORMAT USING JSONFormatter()\nINPUT FROM DeleteOpsStream;\n \nEND APPLICATION Mongo2CosmosIncrementalFixedContainer;In MongoDB, modify the employees collection as follows to add employee4:use mydb;\ndb.employee.save({_id:4,\"name\":\"employee4\",\"company\":\"Striim\",\"city\":\"Palo Alto\"});\ndb.employee.save({_id:1,\"name\":\"employee1\",\"company\":\"Striim\",\"city\":\"Seattle\"});\ndb.employee.update({_id:2},{$set : {\"city\":\"Palo Alto\"}});\ndb.employee.remove({_id:3});Within 30 seconds, those\u00a0changes should be replicated to the corresponding Cosmos DB collection with results similar to the following:{\n \"_id\": 1,\n \"name\": \"employee1\",\n \"company\": \"striim\",\n \"city\": \u201cSeattle\u201d,\n \"id\": \"1.0\",\n \"_rid\": \"HnpSALVXpu4BAAAAAAAACA==\",\n \"_self\": \"dbs/HnpSAA==/colls/HnpSALVXpu4=/docs/HnpSALVXpu4BAAAAAAAACA==/\",\n \"_etag\": \"\"0800b33d-0000-0000-0000-5bb5aafa0000\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538632442\n}\n{\n \"_id\": 2,\n \"name\": \"employee2\",\n \"company\": \"striim\",\n \"city\": \u201cPalo Alto\u201d,\n \"id\": \"2.0\",\n \"_rid\": \"HnpSALVXpu4BAAAAAAAADA==\",\n \"_self\": \"dbs/HnpSAA==/colls/HnpSALVXpu4=/docs/HnpSALVXpu4BAAAAAAAADA==/\",\n \"_etag\": \"\"2b00f87b-0000-0000-0000-5bb5aafb0000\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538632443\n}\n{\n \"_id\": 3,\n \"name\": \"employee3\",\n \"company\": \"striim\",\n \"city\": \"california\",\n \"id\": \"3.0\",\n \"_rid\": \"HnpSALVXpu4BAAAAAAAAAA==\",\n \"_self\": \"dbs/HnpSAA==/colls/HnpSALVXpu4=/docs/HnpSALVXpu4BAAAAAAAAAA==/\",\n \"_etag\": \"\"2700ad2a-0000-0000-0000-5bb5aafb0000\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538632443\n}\n{\n \"_id\": 4,\n \"name\": \"employee4\u201d,\n \"company\": \"striim\",\n \"city\": \u201cPalo Alto\u201d,\n \"id\": \u201c4.0\u201d,\n \"_rid\": \"HnpSALVXpu4BAAAAAAAAAE==\",\n \"_self\": \"dbs/HnpSAA==/colls/HnpSALVXpu4=/docs/HnpSALVXpu4BAAAAAAAAAE==/\",\n \"_etag\": \"\"2700ad2a-0000-0000-0000-5bb5aafb0000\"\",\n \"_attachments\": \"attachments/\",\n \"_ts\": 1538632443\n}When the target collection is in an unlimited containerWhen a Cosmos DB collection is in an unlimited container, it must have a partition key, which must be specified when you create the collection.When MongoDB save operations create 
new documents, all fields are included in MongoDBReader's output, so CosmosDBWriter can write to the correct partition.When MongoDB save operations update existing documents, all fields are included in MongoDBReader's output, so CosmosDBWriter can use the partition key and document ID to update the correct target document.MongoDB update operations do not include all fields, so the partition key may be missing from MongoDBReader's output. In those cases, the PartialRecordPolicy open processor retrieves the missing fields from MongoDB and adds them before passing the data to CosmosDBWriter.MongoDB remove operations include only the document ID, so the partition key is missing from\u00a0MongoDBReader's output. Since CosmosDBWriter would be unable to determine the correct partition, the application writes the database name, collection name, and document key to a DeleteOps collection in CosmosDB.CREATE APPLICATION Mongo2CosmosIncrementalUnlimitedContainer; \n \nCREATE SOURCE MongoDBIn USING MongoDBReader (\n Username:'striim',\n Password:'******',\n ConnectionURL:'',\n authType: 'NoAuth',\n Mode:'Incremental',\n startTimestamp: '',\n Collections:'mydb.$'\n )\nOUTPUT TO MongoDBStream;\n\nCREATE STREAM FilteredMongoDBStream OF Global.JsonNodeEvent;\n\nCREATE CQ ExcludeDeleteOperations\nINSERT INTO FilteredMongoDBStream\nSELECT META(MongoDBStream,\"DatabaseName\"),\n META(MongoDBStream,\"CollectionName\"),\n META(MongoDBStream,\"DocumentKey\")\nFROM MongoDBStream\nWHERE META(MongoDBStream,\"OperationName\").toString() != \"DELETE\";\n\nCREATE STREAM FullDocstream OF Global.JsonNodeEvent;\n\nCREATE OPEN PROCESSOR CompletePartialDocs USING MongoPartialRecordPolicy ( \n ConnectionURL:'', \n authType:'NoAuth',\n OnMissingDocument: 'Process'\n)\nINSERT INTO FullDocstream\nFROM FilteredMongoDBStream;\n\nCREATE TARGET WriteToCosmos USING CosmosDBWriter (\n ServiceEndpoint:'',\n AccessKey:'',\n Collections:'mydb.$,mydb.%',\n IgnorableExceptionCode:'PARTITION_KEY_NOT_FOUND'\n)\nINPUT FROM FullDocstream;\n\nCREATE CQ SelectDeleteOperations\nINSERT INTO DeleteOpsStream\nSELECT TO_STRING(META(MongoDBStream,\"DatabaseName\")) AS DatabaseName,\n TO_STRING(META(MongoDBStream,\"CollectionName\")) AS CollectionName,\n TO_STRING(META(MongoDBStream,\"DocumentKey\")) AS DocumentKey\nFROM MongoDBStream\nWHERE META(MongoDBStream,\"OperationName\").toString() = \"DELETE\";\n\nCREATE TARGET WriteDeleteOpsToCosmos USING CosmosDBWriter (\n ServiceEndpoint:'',\n AccessKey:'',\n Collections:'mydb.DeleteOps'\n)\nINPUT FROM DeleteOpsStream;\n \nEND APPLICATION Mongo2CosmosIncrementalUnlimitedContainer;In this section: MongoDBMongoDB setupUsing SSL or Kerberos or X.509 authentication with MongoDBConnecting to MongoDB Atlas clusters using private endpoints or network peeringMongoDB Reader propertiesSelecting documents using MongoDB ConfigOperatorsJSON fieldsMongoDB Config syntax and examplesComplex MongoDB Config exampleMongoDBReader JSONNodeEvent fieldsMongoDBReader example application and outputReplicating MongoDB data to Azure CosmosDBWhen the target collection is in a fixed containerWhen the target collection is in an unlimited containerSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
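The Selecting documents using MongoDB Config section above shows the query JSON on its own; when the query is passed in TQL, it is embedded in the MongoDB Config property as a string with the double quotes escaped, in the same way the Cosmos DB Config example earlier escapes them. The following is only a sketch: the connection details, collection name, and filter value are placeholders, and the single-word property spelling MongoDBConfig follows the pattern of the other TQL examples in this document.

-- Reads only documents whose city field is Bangalore (inserts only in
-- Incremental mode; updates and deletes are still sent for all documents).
CREATE SOURCE FilteredMongoIn USING MongoDBReader (
  Username: 'striim',
  Password: '******',
  ConnectionURL: '192.168.1.1:27017,192.168.1.2:27017,192.168.1.3:27017',
  Mode: 'Incremental',
  Collections: 'mydb.employee',
  MongoDBConfig: '{\"QueryClause\": {\"mydb.employee\": {\"Filter\": {\"city\": \"Bangalore\"}}}}'
)
OUTPUT TO FilteredMongoStream;

As described in the notes for the MongoDB Config property, if a downstream writer receives an update or delete for a document outside the subset, the application will halt unless the appropriate Ignorable Exception Code is set.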
Last modified: 2023-06-08\n", "metadata": {"source": "https://www.striim.com/docs/en/mongodb.html", "title": "MongoDB", "language": "en"}} {"page_content": "\n\nTargetsIn a Striim application, writers send data to external targets such as Google BigQuery, Snowflake, Azure Synapse, SQL Server, MySQL, PostgreSQL, or Kafka. For a complete list of supported targets, see Writers overview.© 2023 Striim, Inc. All rights reserved. Last modified: 2023-07-07\n", "metadata": {"source": "https://www.striim.com/docs/en/targets.html", "title": "Targets", "language": "en"}} {"page_content": "\n\nWriters overviewThe following is a summary of writer capabilities.
For additional information, see:Using source and target adapters in applicationsType for discussion of user-defined typesSQL CDC replication examples and Replicating MongoDB data to Azure CosmosDBHow update and delete operations are handled in writers for discussion of \"insert only\"How update and delete operations are handled in writersFormatters for discussion of Avro, delimited text, JSON, and XML outputsUsing the Confluent or Hortonworks schema registryDDL support in writersRecovering applicationsRecovering applicationsWriter capabilitieswriterinput stream type(s)supports replication[a]supports Database Reader auto-quiesceoutput(s)DDL supportparallel threadsrecovery[b]ADLS Gen1 WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnonoAvro, delimited text, JSON, XMLoptional rollover on schema evolutionnoA1PADLS Gen2 WriterJSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XMLoptional rollover on schema evolutionnoA1PAzure Blob WriterJSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XML-noA1PAzure Event Hub WriterAzure Event Hub WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XML-noAIP (default) or E1P[d]Azure Synapse Writeruser-defined, WAEvent[e]yesyesAzure Synapse table(s)schema evolutionyesA1PBigQuery WriterBigQuery Writeruser-defined, WAEvent[e]yesyesBigQuery table(s)[f]schema evolutionyesA1PCassandra Cosmos DB Writeruser-defined, WAEvent[e]yesyesCosmos DB Cassandra API tables-yesE1P[d][g]Cloudera Hive Writeruser-defined, WAEvent1nonoHive table(s)[f]-noA1PCosmos DB WriterCosmos DB Writeruser-defined, JSONNodeEvent, WAEvent[e]yesyesCosmosDB documents-yesA1P[h]Database WriterDatabase Writeruser-defined, WAEvent[e]yesyesJDBC to table(s) in a supported DBMS[f]schema evolutionyesE1P[d]Databricks Writeruser-defined, WAEvent[e]yesyesDelta Lake tables in Databricksschema evolutionyesA1PFile WriterJSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XMLoptional rollover on schema evolutionnoA1PGalerasee Database Writer, aboveGCS WriterJSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XMLoptional rollover on schema evolutionyesA1PGoogle PubSub WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XML-noA1PHazelcast WriterHazelcast Writeruser-defined, WAEvent[e]yesnoHazelcast map(s)[f]-noA1PHBase WriterHBase Writeruser-defined, WAEvent[e]yesnoHBase table(s)**-yesA1PHDFS WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnonoAvro, delimited text, JSON, XMLoptional rollover on schema evolutionnoA1PHive WriterHive Writeruser-defined, WAEvent[e]yes (when using SQL MERGE)yesHive table(s)[f]-yesE1P (when using MERGE) or A1PHortonworks Hive Writeruser-defined, WAEvent[e]yes (when using SQL MERGE)noHive table(s)[f]-noE1P (when using MERGE) or A1PHP NonStop SQL/MP & SQL/MXsee Database Writer, aboveJMS WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnonodelimited text, JSON, XML-noA1PKafka WriterKafka Writeruser-defined, JSONNodeEvent, WAEvent[e]no, but see Using the Confluent or Hortonworks schema registryyesAvro, delimited text, JSON, XMLcan track schema evolution using schema registryyesE1P[d]Kinesis WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XML-noE1P[d]Kudu WriterKudu Writeruser-defined, WAEvent[e]yesyesKudu table(s)[f]-yesA1PMapR DB Writeruser-definednonoMapR DB 
table-yesA1PMapR FS WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnonoAvro, delimited text, JSON, XMLnoA1PMapR Stream WriterJSONNodeEvent, user-defined, WAEvent, XMLNodeEventnonoAvro, delimited text, JSON, XMLnoA1PMariaDBsee Database Writer, aboveMemSQLsee Database Writer, aboveMongoDB Cosmos DB WriterJSONNodeEvent, user-defined, WAEvent[e]yesyesCosmosDB documentsnoA1P[h]MongoDB WriterJSONNodeEvent, user-defined, WAEvent[e]yesyesMongoDB documents-yesA1P or E1P[i]MQTT Writeruser-definednonoAvro, delimited text, JSON, XMLnoA1PMySQLsee Database Writer, aboveOracle Databasesee Database Writer, abovePostgreSQLsee Database Writer, aboveRedshift WriterRedshift Writeruser-defined, WAEvent[e]yesyesRedshift table(s)[f]yesA1PS3 WriterS3 WriterJSONNodeEvent, ParquetEvent[c], user-defined, WAEvent, XMLNodeEventnoyesAvro, delimited text, JSON, XMLoptional rollover on schema evolutionyesA1PSalesforce Writeruser-defined (in APPENDONLY mode), WAEvent[e]yes (in MERGE mode)yesSalesforce objectsnoyesA1PSAP HANAsee Database Writer, aboveServiceNow Writeruser-defined, WAEvent[e]yes (in MERGE mode)ServiceNow table(s)A1PSnowflake WriterSnowflake Writeruser-defined, WAEvent[e]yesyesSnowflake table(s)[f]schema evolutionyesA1PSpanner Writeruser-defined, WAEvent[e]yesyesSpanner table(s)[f]schema evolutionyesE1P[d]SQL Serversee Database Writer, above[a] Supporting replication means that the target can replicate insert, update, and delete events from the source.[b] A1P (\"at-least once processing\") means that after recovery there may be some duplicate events written to the target. E1P (\"exactly once processing\") means there will be no duplicate events.[c] When the input stream is of type ParquetEvent, the writer must use Avro Formatter or Parquet Formatter.[d] If the source is WAEvent from Incremental Batch Reader, recovery is A1P.[e] WAEvent must be the output of a Database Reader, Incremental Batch Reader, or SQL CDC source.[f] With an input stream of a user-defined type, output is to a single table or map. Output to multiple tables or maps requires source database metadata included in WAEvent.[g] Primary key updates to source rows cannot be replicated.[h] Not supported when the writer's input stream is the output of Cosmos DB Reader or Mongo Cosmos DB Reader in incremental mode.[i] See notes for the Checkpoint Collection property.In this section: Writers overviewWriter capabilitiesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
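To relate the table to TQL, the MongoDB Writer row (JSONNodeEvent or WAEvent input, replication supported) corresponds to a target definition along the following lines. This is only a sketch reusing the property names from the MongoDB Writer example earlier in this document; the stream name, collection mapping, and connection values are placeholders.

-- MongoDBStream is assumed to carry JSONNodeEvents from a MongoDB or
-- Mongo Cosmos DB source; upsertMode lets insert, update, and delete
-- events be replicated rather than appended.
CREATE TARGET MongoReplicaOut USING MongoDBWriter (
  collections: 'mydb.employee,targetdb.employee',
  ConnectionURL: '192.168.1.20:27017',
  Username: 'myuser',
  Password: '******',
  AuthDB: 'targetdb',
  upsertMode: 'true'
)
INPUT FROM MongoDBStream;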
Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/en/writers-overview.html", "title": "Writers overview", "language": "en"}} {"page_content": "\n\nSupported writer-formatter combinationsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsSupported writer-formatter combinationsPrevNextSupported writer-formatter combinationsThe following writer-formatter combinations are supported:Avro FormatterDSV FormatterJSON FormatterParquet FormatterXML FormatterADLS Gen1 Writer\u2713\u2713\u2713\u2713ADLS Gen2 Writer\u2713\u2713\u2713\u2713\u2713Azure Blob Writer\u2713\u2713\u2713\u2713\u2713Azure Event Hub Writer\u2713\u2713\u2713\u2713File Writer\u2713\u2713\u2713\u2713\u2713GCS Writer\u2713\u2713\u2713\u2713\u2713Google PubSub Writer\u2713\u2713\u2713\u2713HDFS Writer\u2713\u2713\u2713\u2713JMS Writer\u2713\u2713\u2713Kafka Writer\u2713\u2713\u2713\u2713Kinesis WriterKinesisWriter\u2713\u2713\u2713\u2713MapR Stream Writer\u2713\u2713\u2713\u2713MQTT Writer\u2713\u2713\u2713\u2713S3 Writer\u2713\u2713\u2713\u2713\u2713In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-08-23\n", "metadata": {"source": "https://www.striim.com/docs/en/supported-writer-formatter-combinations.html", "title": "Supported writer-formatter combinations", "language": "en"}} {"page_content": "\n\nWorking with writersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersPrevNextWorking with writersThis section discusses the common characteristics of Striim's writers. See also Using source and target adapters in applications.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
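As a small illustration of how these combinations appear in TQL, a file-based writer is paired with a formatter using a FORMAT USING clause, as in the File Writer examples elsewhere in this document. The following sketch assumes an existing input stream named MongoDBStream and that DSV Formatter's default properties are acceptable; the file name is a placeholder.

-- File Writer supports DSV Formatter per the combinations table above.
CREATE TARGET WriteAsDSV USING FileWriter (
  filename: 'employee_changes.csv'
)
FORMAT USING DSVFormatter()
INPUT FROM MongoDBStream;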
Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/en/working-with-writers.html", "title": "Working with writers", "language": "en"}} {"page_content": "\n\nSetting output names and rollover / upload policiesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersSetting output names and rollover / upload policiesPrevNextSetting output names and rollover / upload policiesADLS Gen1 Writer,, ADLS Gen2 Writer Azure Blob Writer,\u00a0 FileWriter, GCS Writer, HDFS Writer, and\u00a0S3 Writer support the following options to define the paths and names for output files, when new files are created, and how many files are retained.GCS WriterDynamic output namesThe blobname string in Azure Blob Writer, the bucketname\u00a0and objectname strings in GCSWriter and S3Writer, the\u00a0directory and filename strings in ADLS Gen1 Writer, ADLS Gen2 Writer, File Writer, and HDFSWriter, and the\u00a0foldername\u00a0string in AzureBlobWriter, GCS Writer, and S3Writer may include field-name tokens that will be replaced with values from the input stream. For example, if the input stream included yearString, monthString, and dayString fields, directory:'%yearString%/%monthString%/%dayString%' would create a new directory for each date, grouped by month and year subdirectories. If desirable, you may filter these fields from the output using the\u00a0members property in DSVFormatter or JSONFormatter.Note: If a bucketname in GCSWriter or S3Writer, directoryname in ADLS Gen1 Writer or ADLS Gen2 Writer, or foldername in S3Writer contains two field-name tokens, such as %field1%%field2%, the second creates a subfolder of the first.When the target's input is the output of a CDC or DatabaseReader source, values from the WAEvent metadata or userdata map or JSONNodeEvent metadata map may be used in these names using the syntax\u00a0%@metadata()% or\u00a0%@userdata()%, for example,\u00a0%@metadata(TableName)%. You may combine multiple metadata and/or userdata values, for example,\u00a0%@metadata(name)%/%@userdata(TableName)%'. You may also mix field, metadata, and userdata values.For S3Writer bucket names, do not include punctuation between values. Hyphens, the only punctuation allowed, will be added automatically.\u00a0For more information, see Amazon's\u00a0Rules for Bucket Naming.Rollover and upload policiesrolloverpolicy\u00a0or uploadpolicy trigger output file or blob rollover different ways depending on which parameter you specify:parameterrollover triggerexampleeventcount (or size)specified number of events have been accumulated (in Kinesis Writer, this is specified as a size in bytes rather than a number of events)\u00a0eventcount:10000filesizespecified file size has been reached (value must be specified in megabytes, the maximum is 2048M)filesize:1024Mintervalspecified time has elapsed (use s for second or m for minute, h for hour)interval:1hYou may specify both eventcount and interval, in which case rollover is triggered whenever one of the limits is reached. 
For example,\u00a0eventcount:10000,interval:1h will start a new file after one hour has passed since the current file was created or after 10,000 events have been written to it, whichever happens first.CautionWhen the rollover policy includes an interval or eventcount, and the file or blob name includes the creation time, be sure that the writer will never receive events so quickly that it will create a second file with the same name, since that may result in lost data. You may work around this by using nanosecond precision in the creation time or by using a sequence number instead of the creation time.If you drop and\u00a0re-create the application and there are existing files in the output directory, start will fail with a \"file ... already exists in the current directory\" error. To retain the existing files in that directory, add a sequencestart parameter to the rollover policy, or add %
) and Snowflake Writer.
For example, to write the CDC log timestamp in the METADATA map to the target column CDCTIMESTAMP:

... ColumnMap(EMP_NAME=NAME,EMP_ID=ID,EMP_DOB=DOB,CDCTIMESTAMP=@METADATA(TimeStamp))'

To specify a field in the USERDATA map (see Adding user-defined data to WAEvent streams or Adding user-defined data to JSONNodeEvent streams), use @USERDATA():

... ColumnMap(EMP_NAME=NAME,EMP_ID=ID,EMP_DOB=DOB,EMP_CITY=@USERDATA(city))'

To write the Striim server's $HOSTNAME environment variable to the target column STRIIMSERVER:

... ColumnMap(EMP_NAME=NAME,EMP_ID=ID,EMP_DOB=DOB,STRIIMSERVER=$HOSTNAME)'

To write a static string to a target column, the syntax is ColumnMap(<target column>='<string value>').

You may modify multiple tables by using wildcards. For example:

SRC_SCHEMA.%,TGT_SCHEMA.% COLUMNMAP(CDCTIMESTAMP=@METADATA(TimeStamp))

Handling "table not found" errors

By default, when a writer's Tables property specifies a table that does not exist in the target database, the writer terminates with a TargetTableNotFoundException error when it receives an event for that table. Writers with the Ignorable Exception Code property may be configured to ignore such errors and continue by setting that property's value to TABLE_NOT_FOUND. All dropped events will be written to the exception store (see CREATE EXCEPTIONSTORE) and the first 100 will appear in the Flow Designer's exception list. Even if the missing table is added later, Striim will not write to it until the application is restarted.
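For example, the following is a minimal sketch of a target configured to continue past missing tables. It assumes a Database Writer target with placeholder connection details, table names, and stream name; the IgnorableExceptionCode spelling follows the TQL convention used in the other samples in this guide (check the writer's property reference for the exact name):

CREATE TARGET MySQLTarget USING DatabaseWriter (
  ConnectionURL: 'jdbc:mysql://192.0.2.25:3306/mydb',
  Username: 'striim',
  Password: '******',
  Tables: 'SRC_SCHEMA.%,mydb.%',
  IgnorableExceptionCode: 'TABLE_NOT_FOUND'
)
INPUT FROM OracleCDCStream;

With this setting, events for tables that do not exist in mydb go to the application's exception store instead of halting the application; the application must still be restarted before Striim will write to a table that is added later.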
Last modified: 2022-12-14\n", "metadata": {"source": "https://www.striim.com/docs/en/handling--table-not-found--errors.html", "title": "Handling \"table not found\" errors", "language": "en"}} {"page_content": "\n\nSetting encryption policiesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersSetting encryption policiesPrevNextSetting encryption policiesWriters with an Encryption Policy property can encrypt files using AES, PGP, or RSA before sending them to the target. The property includes the following sub-properties:sub-propertytypedefault valuenotesAlgorithmStringfor AES, supported values are:AES (no transformation)AES/ECB/NoPadding (128)AES/ECB/PKCS5Padding (128)for PGP, set to PGPfor RSA, supported values are:RSA (no transformation): required if key size is not 1024 or 2048RSA/ECB/OAEPWithSHA-256AndMGF1PaddingRSA/ECB/PKCS1PaddingCompressBooleanFalseIf using PGP, optionally set to True to compress the data as .zip using org.bouncycastle.openpgp.PGPCompressedData.ZIP from the Bouncy Castle OpenPGP API before encrypting it.Key File NameStringName of the key file (do not include the path). For PGP, this must be the public key. For RSA, you may use either the public key (default) or private key, in which case you must set Use Private Key to True.If the key file is encrypted, specify its passphrase in the adapter's Data Encryption Key Passphrase property.Key LocationStringpath to the specified key file (must be readable by Striim)Key SizeLongIf the Key Type is RSA, specify the key size, for example, 2048.Key TypeString\u00a0supported values are AES, PGP, and RSAUse Private KeyBooleanFalseFor RSA only: With the default value of False, the file specified in Key File Name must be the public key. Set to True if to use the private key file instead.For example:CREATE TARGET EncryptedFileOut using FileWriter(\n filename:'EncryptedOutput.txt',\n directory:'FileOut',\n encryptionpolicy:'\n KeyType=PGP, \n KeyLocation=/opt/striim/keys, \n KeyFileName=myPgp.pub, \n Algorithm=PGP'\n) ...\nIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
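The PGP example above carries over to the other key types. Here is a hedged sketch of a similar File Writer target using a 2048-bit RSA public key instead; the key file name and location are placeholders, and the algorithm and key-size values come from the sub-property table above:

CREATE TARGET RsaEncryptedFileOut USING FileWriter (
  filename: 'EncryptedOutput.txt',
  directory: 'FileOut',
  encryptionpolicy: '
    KeyType=RSA,
    KeyLocation=/opt/striim/keys,
    KeyFileName=myRsa.pub,
    KeySize=2048,
    Algorithm=RSA/ECB/PKCS1Padding'
) ...

To encrypt with the RSA private key instead, set UsePrivateKey=True in the policy and point Key File Name at the private key file.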
Last modified: 2020-05-20\n", "metadata": {"source": "https://www.striim.com/docs/en/setting-encryption-policies.html", "title": "Setting encryption policies", "language": "en"}} {"page_content": "\n\nViewing discarded eventsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersViewing discarded eventsPrevNextViewing discarded eventsWhen a writer can not find the target specified in the Tables or Collection property, Striim writes the event to its server log and increments the Discarded Event Count for the target, which is displayed at the top of the Flow Designer.In the example above, the database name was not specified correctly, so none of the target tables exist in that database, and all of the events were discarded. To avoid this problem, enable data validation before deploying the application (see Creating a data validation dashboard).To view the Discarded Event Count in the Striim console, enter mon ..\u00a0To view the Discarded Event Count in the Monitor page, click the application name, click Targets, and click More Details next to the target.See also Collecting discarded events in an exception store.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-05\n", "metadata": {"source": "https://www.striim.com/docs/en/viewing-discarded-events.html", "title": "Viewing discarded events", "language": "en"}} {"page_content": "\n\nDDL support in writersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersDDL support in writersPrevNextDDL support in writersWhen the source is a CDC reader, see Handling schema evolution.ADLSGen1Writer, ADLSGen2Writer, AzureBlobWriter, FileWriter, GCSWriter, HDFSWriter, and S3Writer can roll over to a new file when a DDL event is received from one of the sources listed in Handling schema evolution.KafkaWriter can track schema evolution Using the Confluent or Hortonworks schema registry.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
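For the file-based writers listed above, this behavior is controlled by the Rollover on DDL property. The following is a hedged sketch, assuming File Writer exposes that property in TQL as rolloveronddl (as the ADLS writers later in this section do) and that OracleCDCStream is the output of a CDC source:

CREATE TARGET DDLRolloverFiles USING FileWriter (
  filename: 'orders_cdc.json',
  directory: 'FileOut',
  rolloveronddl: 'true'
)
FORMAT USING JSONFormatter ()
INPUT FROM OracleCDCStream;

With this setting, each DDL change captured by the source starts a new output file, so downstream consumers can detect where the schema changed; setting it to false keeps appending to the current file.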
Last modified: 2021-10-08\n", "metadata": {"source": "https://www.striim.com/docs/en/ddl-support-in-writers.html", "title": "DDL support in writers", "language": "en"}} {"page_content": "\n\nHow update and delete operations are handled in writersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersHow update and delete operations are handled in writersPrevNextHow update and delete operations are handled in writersIn some cases, when the input of a target is the output of a CDC source, Striim can match primary keys or other unique identifiers to replicate source update and delete operations in the target, keeping the source and target in sync.In other cases, such matching is impossible, so updates and deletes are written as inserts. This\u00a0is often acceptable when the target is a data warehouse.When the target is HiveWriter or HortonworksHiveWriter, update and delete operations may be handled by the target as updates and deletes or as inserts depending on various factors (see Hive Writer for details).Hive WriterUpdate and delete operations in the source handled as updates and deletes in the targetWith the following source-target combinations, update and delete operations in the source are handled as updates and deletes in the target:sourcetargetGG Trail ReaderHPNonStop readersMSSQL ReaderMySQL ReaderOracle ReaderPostgreSQL ReaderSalesforce ReaderAzure Synapse WriterBigQuery Writer (in MERGE mode)CosmosDB WriterDatabase WriterHazelcast WriterHBase WriterKudu WriterMapRDB WriterMongoDB WriterRedshift WriterSnowflake WriterSpanner WriterMongoDB ReaderCosmosDB WriterUpdate and delete operations in the source handled as inserts in the targetWith the following source-target combinations, update and delete operations in the source are handled as inserts in the target:sourcetargetGG Trail ReaderHPNonStop readersMSSQL ReaderMySQL ReaderOracle ReaderPostgreSQL ReaderSalesforce ReaderBigQueryWriter (in APPENDONLY mode)Cloudera Hive WriterDatabaseR eaderIncremental Batch ReaderAzure Synapse WriterBigQuery WriterCosmosDB WriterDatabase WriterHazelcast WriterHBase WriterKudu WriterMapRDB WriterRedshift WriterSnowflake WriterSpanner WriterIn this section: How update and delete operations are handled in writersUpdate and delete operations in the source handled as updates and deletes in the targetUpdate and delete operations in the source handled as inserts in the targetSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/how-update-and-delete-operations-are-handled-in-writers.html", "title": "How update and delete operations are handled in writers", "language": "en"}} {"page_content": "\n\nUsing Private Service Connect with Google Cloud adaptersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersUsing Private Service Connect with Google Cloud adaptersPrevNextUsing Private Service Connect with Google Cloud adaptersGoogle's Private Service Connect allows private services to be securely accessed from Virtual Private Cloud (VPC) networks without exposing the services to the public internet (for more information, see Virtual Private Cloud > Documentation > Guides > Private Service Connect). You can use Private Service Connect to access managed services across VPCs or to access Google APIs and services.Typical use cases for Private Service Connect with Striim CloudIn this release, Striim supports Private Service Connect for the following purposes:For accessing Google APIs with the following adapters: BigQueryWriter, GCSReader, GCSWriter, and SpannerWriter.For accessing Google APIs through an internal HTTPs load balancer. Allows you to connect regional Google APIs and services using an internal HTTP(S) load balancer in your VPC network without any external IP address. Supports only selected regional Google APIs and services (such as cloud key management, cloud logging, cloud run, and Pub/Sub).For accessing published services (such as a VM instance serving as a database, cloud SQL, Snowflake, or Mongo DB Atlas). Supports connecting to published services in the same region but different VPCs without an external IP address.Connecting to services securely using Private Service ConnectIn a cloud-based infrastructure, services often communicate with each other over a public internet connection, making them vulnerable to various security threats. 
For example BigQuery Writer currently uses a publicly available API bigquery.googleapis.com to communicate and ingest data into BigQuery service.In the same way storage.googleapis.com and spanner.googleapis.com are used by GCS Writer and Spanner Writer respectively.These API calls are routed through the internet before reaching the actual BigQuery instance or GCS bucket in your VPC.Private Service Connect provides a secure way to connect services privately over the GCP network.Preparing Striim Cloud to use Private Service ConnectStriim Cloud supports the following Private Service Connect endpoints:Cloud SQL for MySQLCloud SQL for PostgresGoogle SpannerGoogle BigQueryGoogle Cloud StorageNoteMongoDB Atlas is not a currently supported endpoint.You can perform the following prerequisites to support using Private Service Connect for these use cases:Use Private Service Connect to access a published serviceUse Private Service Connect to access Google APIsUse Private Service Connect to access a published serviceTo access a published service:Publish your service and provide the service attachment URI while creating the private service connection.The service attachment URI will be in the following format:projects//regions//serviceAttachments/where,Service Project: Name of the project where the service is published.Region: Name of the region where the published service resides.Service Name: The published service name.Configure the approval method for the service attachment access though the private service connection.Approval may be automatic or manual. For the manual approval method, you can approve access whenever the private service connection is created for a project. While approving the connection, you can set the connection limit to that project. If the number of private service connections to that service attachment exceeds the limit, the next created private service connection to that service attachment goes into the pending state. You can accept the connection by increasing the connection limit.Create a Private Service Connect endpoint using a global internal IP address within the VPC. Note that Google Cloud Platform does not allow\u00a0the use of special characters\u00a0for the private endpoint with\u00a0Google managed services.Assign a meaningful DNS name to the internal IP address used above.NoteThese names and IP addresses are internal to the VPC network and on-premises networks that are connected to it using Cloud VPN tunnels or VLAN attachments.NoteDNS names will be automatically created for the Google managed services such as BigQuery, Storage, and Spanner. The DNS names are created with the following convention:-.p.googleapis.comFor example:storage-striimdev.p.googleapis.com, spanner-striimqa.p.googleapis.comProvide the Private Service Connect IP address or DNS name to access the published service when configuring your pipeline in the Striim application. Whether to provide an IP address or a DNS name depends on the type of the service that you have published. Specify a value in one of the following formats for the Private Service Connect Endpoint property in the Striim adapter, so that the API calls are made using the private connection:A PSC endpoint name as a string. For example,\u00a0 striimdevpsc . The adapter will construct the full domain name for the specific service. This format is recommended for most users.A full DNS name representing the PSC endpoint of specific service. This format is useful if you want to use a custom DNS name. 
For example, bigquery-striimdevpsc.p.googleapis.com, spanner-pscep2.p.googleapis.com, or mycustomdomainname.striimdns.com.For example, for Mongo DB Atlas, you need to provide the PSC name and IP address to configure the PSC in Mongo DB and access it. For a VM serving as a database, you can use the IP address or DNS name to access it. The DNS name format is pvtEpName.accShortName.installName.private-endpoint.For specific procedures, see Connecting to VMs or databases in Google Cloud.Use Private Service Connect to access Google APIsTo access Google APIs once you have created your Private Service Connect, you enter the PSC name in the template while creating your application. If your application is already running, you can undeploy the app, add the PSC parameter and restart the application.Striim Cloud will construct the Google APIs in the format -.p.googleapis.com. For example, storage-psctest.p.googleapis.com). Striim Cloud will use these API to access the service instead of the global Google APIs.Sample applicationThe following sample application configure a Private Service Connect endpoint for a BigQuery Writer target:CREATE APPLICATION OracleToBQ RECOVERY 10 SECOND INTERVAL;\n\nCREATE OR REPLACE SOURCE oracle_source_CDC Using OracleReader(\n Username:\u2019*****\u2019,\n Password:\u2019*****\u2019,\n ConnectionURL:'jdbc:oracle:thin:@//localhost:1521/xe',\n OnlineCatalog:true,\n FetchSize:'1',\n Tables: 'HR.EMPLOYEE'\n) Output To sourcestream1;\n\n\nCREATE OR REPLACE TARGET bq_target USING BigQueryWriter ( \n projectId: 'striimdev'\n ,ServiceAccountKey: '/path/to/serviceaccountkey.json'\n ,StandardSQL: 'true'\n ,Mode: 'MERGE'\n ,optimizedMerge: 'true'\n ,PrivateServiceConnectEndpoint: 'striimdevpsc'\n ,BatchPolicy: 'eventCount:1000'\n ,Tables: 'HR.EMPLOYEE, HR.EMPLOYEE KeyColumns(RONUM)'\n) \nINPUT FROM sourcestream1;Usage notesNote the following requirements and limitations for Private Service Connect support:The Private Service Connect endpoint details you provide to the adapter must already exist. The adapter will not create the endpoint.The Private Service Connect endpoint you provide to the adapter must be reachable or routable from the network where the Striim application is running. If the provided Private Service Connect endpoint becomes not reachable, the adapter will halt.The BigQuery Storage Write API is currently not supported with Private Service Connect endpoints.In this section: Using Private Service Connect with Google Cloud adaptersTypical use cases for Private Service Connect with Striim CloudConnecting to services securely using Private Service ConnectPreparing Striim Cloud to use Private Service ConnectSample applicationUsage notesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-07-07\n", "metadata": {"source": "https://www.striim.com/docs/en/using-private-service-connect-with-google-cloud-adapters.html", "title": "Using Private Service Connect with Google Cloud adapters", "language": "en"}} {"page_content": "\n\nConnecting to VMs or databases in Google Cloud using Private Service ConnectSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsWorking with writersUsing Private Service Connect with Google Cloud adaptersConnecting to VMs or databases in Google Cloud using Private Service ConnectPrevNextConnecting to VMs or databases in Google Cloud using Private Service ConnectThis topic describes how to connect to VMs or databases in Google Cloud using Private Service Connect:Connecting to a VM instance serving as a databaseConnecting to cloud SQL databases managed by GCPConnecting to Google SpannerConnecting to Google BigQueryConnecting to a VM instance serving as a databaseYou can publish the VM instance serving as the database and generate the service attachment to that VM instance.To connect to a VM instance serving as a database:Install a VM instance (vm-mysql) with MySQL Server.Make sure the VM is accessible as a database by adding firewall rules.Publish the VM by creating the backend service and service attachment.Create another VM (test-vm) in the same region where the VM serving as the database resides.Create a Private Service Connect endpoint in the VM VPC by consuming the VM serving as the database's service attachment.Use the IP of the private service connect from the VM (test-vm) to access the database.For example, create an application in Striim Cloud where you configure the private endpoint as the target and the cloud database as the source.Connecting to cloud SQL databases managed by Google Cloud PlatformYou can create a VM which has private service access to a cloud SQL database (Google Cloud Platform managed service). 
You can publish the VM instance and create a service attachment to that VM instance.To connect to cloud SQL databases including MySQL or Postgres managed by Google Cloud Platform:Create a cloud MySQL (cloud-sql) DB instance.Create private service access to the cloud MySQL instance.Create a VM (vm-cloud-sql) in same region and project of the cloud SQL instance.Add iptable rules in the VM to redirect TCP traffic to the database.Publish the VM by creating the backend service and service attachment.Create another VM (test-vm) in the same region where the VM serving as the database (vm-cloud-sql) resides.Create a Private Service Connect endpoint in the VM VPC by consuming the VM peered with the cloud MySQL database's service attachment.Use the IP of the private service connect from the VM (test-vm) to access the database.For example, create an application in Striim Cloud where you configure the private endpoint as the target and the cloud database as the source.Connecting to Google SpannerYou can create a private service connect which has access to all the Google Cloud Platform APIs.Create a Google spanner instance (gs-db) that you can accessed through Google APIs from the remote host.Create a Private Service Connect endpoint in a VM (test-vm) VPC to access all the Google APIs.Use the URL to test access to the Google Spanner instance.For example, create an application in Striim Cloud where you configure Google Spanner as the target and MySQL as the source.Connecting to Google BigQueryYou can create a private service connect which has access to all the Google Cloud Platform APIs.To connect to BigQuery:Create a Google BigQuery instance which you can access though Google-managed APIs from a remote host.Create private service connect in a VM (test-vm) VPC to access the Google APIs.Test access to the Google BigQuery instance.In this section: Connecting to VMs or databases in Google Cloud using Private Service ConnectConnecting to a VM instance serving as a databaseConnecting to cloud SQL databases managed by Google Cloud PlatformConnecting to Google SpannerConnecting to Google BigQuerySearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-07-07\n", "metadata": {"source": "https://www.striim.com/docs/en/connecting-to-vms-or-databases-in-google-cloud-using-private-service-connect.html", "title": "Connecting to VMs or databases in Google Cloud using Private Service Connect", "language": "en"}} {"page_content": "\n\nADLS Gen1 WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsADLS Gen1 WriterPrevNextADLS Gen1 WriterWrites to files in Azure Data Lake Storage Gen1. 
A common use case is to write data from on-premises sources to an ADLS staging area from which it can be consumed by Azure-based analytics tools.

ADLS Gen1 Writer properties

Auth Token Endpoint (String): the token endpoint URL for your web application (see "Generating the Service Principal" under Using Client Keys)
Client ID (String): the application ID for your web application (see "Generating the Service Principal" under Using Client Keys)
Client Key (encrypted password): the key for your web application (see "Generating the Service Principal" under Using Client Keys)
Compression Type (String): Set to gzip when the input is in gzip format. Otherwise, leave blank.
Data Lake Store Name (String): the name of your Data Lake Storage Gen1 account, for example, mydlsname.azuredatalakestore.net (do not include adl://)
Directory (String): The full path to the directory in which to write the files. See Setting output names and rollover / upload policies for advanced options.
File Name (String): The base name of the files to be written. See Setting output names and rollover / upload policies.
Rollover on DDL (Boolean, default True): Has effect only when the input stream is the output stream of a CDC reader source. With the default value of True, rolls over to a new file when a DDL event is received. Set to False to keep writing to the same file.
Rollover Policy (String, default eventcount:10000, interval:30s): See Setting output names and rollover / upload policies.

This adapter has a choice of formatters. See Supported writer-formatter combinations for more information. Data is written in 4 MB batches or whenever rollover occurs.

ADLS Gen1 Writer sample application

CREATE APPLICATION testADLSGen1;

CREATE SOURCE PosSource USING FileReader (
  wildcard: 'PosDataPreview.csv',
  directory: 'Samples/PosApp/appData',
  positionByEOF:false
)
PARSE USING DSVParser (
  header:Yes,
  trimquote:false
)
OUTPUT TO PosSource_Stream;

CREATE CQ PosSource_Stream_CQ
INSERT INTO PosSource_TransformedStream
SELECT TO_STRING(data[1]) AS MerchantId,
  TO_DATE(data[4]) AS DateTime,
  TO_DOUBLE(data[7]) AS AuthAmount,
  TO_STRING(data[9]) AS Zip
FROM PosSource_Stream;

CREATE TARGET testADLSGen1target USING ADLSGen1Writer (
  directory:'mydir',
  filename:'myfile.json',
  datalakestorename:'mydlsname.azuredatalakestore.net',
  clientid:'********-****-****-****-************',
  authtokenendpoint:'https://login.microsoftonline.com/********-****-****-****-************/oauth2/token',
  clientkey:'********************************************'
)
FORMAT USING JSONFormatter ()
INPUT FROM PosSource_TransformedStream;

END APPLICATION testADLSGen1;

Since the test data set contains fewer than 10,000 events and ADLSGen1Writer uses the default rollover policy, the data will be uploaded after 30 seconds.
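Because the Directory and File Name properties accept the tokens described in Setting output names and rollover / upload policies, ADLS Gen1 output can be partitioned dynamically. The following fragment is a hedged sketch that assumes the input is WAEvent output from a CDC source, so the %@metadata(TableName)% token resolves to the source table name; the credentials and stream name are placeholders:

CREATE TARGET ADLSByTable USING ADLSGen1Writer (
  directory: 'staging/%@metadata(TableName)%',
  filename: 'cdc_out.json',
  datalakestorename: 'mydlsname.azuredatalakestore.net',
  clientid: '********-****-****-****-************',
  authtokenendpoint: 'https://login.microsoftonline.com/********-****-****-****-************/oauth2/token',
  clientkey: '********************************************',
  rolloverpolicy: 'eventcount:10000,interval:1h'
)
FORMAT USING JSONFormatter ()
INPUT FROM OracleCDCStream;

Each source table gets its own subdirectory under staging, and a new file is started every hour or every 10,000 events, whichever comes first.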
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/adls-gen1-writer.html", "title": "ADLS Gen1 Writer", "language": "en"}} {"page_content": "\n\nADLS Gen2 WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsADLS Gen2 WriterPrevNextADLS Gen2 WriterWrites to files in an Azure Data Lake Storage Gen2 file system. A common use case is to write data from on-premise sources to an ADLS staging area from which it can be consumed by Azure-based analytics tools.When you create the Gen2 storage account, set Storage account kind to StorageV2 and enable Hierarchical namespace.ADLS Gen2 Writer propertiespropertytypedefault valuenotesAccount NameStringthe storage account nameCompression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.DirectoryStringThe full path to the directory in which to write the files. See Setting output names and rollover / upload policies for advanced options.File NameString\u00a0The base name of the files to be written. See Setting output names and rollover / upload policies.File System NameStringthe ADLS Gen2 file system where the files will be writtenRollover on DDLBooleanTrueHas effect only when the input stream is the output stream of a MySQLReader or OracleReader source. With the default value of True, rolls over to a new file when a DDL event is received. Set to False to keep writing to the same file.SAS Tokenencrypted passwordThe SAS token for a shared access signature for the storage account. Allowed services must include Blob, allowed resource types must include Object, and allowed permissions must include Write and Create. Remove the ? from the beginning of the SAS token.Note that SAS tokens have an expiration date. See Best practices when using SAS.If a running Striim Cloud private endpoint is associated with the same Azure service as the SAS token, Striim will use it automatically (see Using Azure private endpoints for more information).Upload PolicyStringeventcount:10000, interval:5mSee Setting output names and rollover / upload policies. Keep these settings low enough that individual uploads do not exceed the underlying Microsoft REST API's limit of 100 MB for a single operation.For best performance, Microsoft recommends uploads between 4 and 16 MB. Setting UploadPolicy to filesize:16M will accomplish that. However, if there is a long gap between events, this will mean some events will not be written to ADLS for some time. For example, if Striim receives events only during working hours, the last events received at the end of the day on Friday would not be written until Monday morning.When the app is stopped, any remaining data in the upload buffer is discarded.This adapter has a choice of formatters. 
See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsADLS Gen2 Writer sample applicationCREATE APPLICATION ADLSGen2Test;\n\nCREATE SOURCE PosSource USING FileReader (\n wildcard: 'PosDataPreview.csv',\n directory: 'Samples/PosApp/appData',\n positionByEOF:false )\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false )\nOUTPUT TO PosSource_Stream;\n\nCREATE CQ PosSource_Stream_CQ\nINSERT INTO PosSource_TransformedStream\nSELECT TO_STRING(data[1]) AS MerchantId,\n TO_DATE(data[4]) AS DateTime,\n TO_DOUBLE(data[7]) AS AuthAmount,\n TO_STRING(data[9]) AS Zip\nFROM PosSource_Stream;\n\nCREATE TARGET ADLSGen2Target USING ADLSGen2Writer (\n accountname:'mystorageaccount',\n sastoken:'********************************************',\n filesystemname:'myfilesystem',\n directory:'mydir',\n filename:'myfile.json',\n uploadpolicy: 'interval:15s'\n)\nFORMAT USING JSONFormatter ()\nINPUT FROM PosSource_TransformedStream;\n\nEND APPLICATION ADLSGen2Test;In this section: ADLS Gen2 WriterADLS Gen2 Writer propertiesADLS Gen2 Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-30\n", "metadata": {"source": "https://www.striim.com/docs/en/adls-gen2-writer.html", "title": "ADLS Gen2 Writer", "language": "en"}} {"page_content": "\n\nAzure Blob WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsAzure Blob WriterPrevNextAzure Blob WriterWrites to a blob in a Microsoft Azure Storage account (see Creating apps using templates).\u00a0propertytypedefault valuenotesAccount Access KeyStringthe account access key from Storage accounts > > Access keysAccount NameStringthe name of the Azure storage account for the blob containerBlob NameStringThe base name of the blobs to be written. See Setting output names and rollover / upload policies.Client ConfigurationStringIf using a proxy, specify ProxyHost=,ProxyPort=.Compression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.Container NameStringthe blob container name from Storage accounts > > ContainersFolder NameStringname of the directory to contain the blob (optional)See Setting output names and rollover / upload policies for instructions on defining dynamic directory names.Rollover on DDLBooleanTrueHas effect only when the input stream is the output stream of a CDC reader source. With the default value of True, rolls over to a new file when a DDL event is received. Set to False to keep writing to the same file.Upload PolicyStringeventcount:10000,interval:5mThe upload policy may include eventcount, interval, and/or filesize (see Setting output names and rollover / upload policies for syntax). Cached data is written to Azure every time any of the specified values is exceeded. With the default value, data will be written every five minutes or sooner if the cache contains 10,000 events. 
When the app is undeployed, all remaining data is written to Azure.This adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2020-05-20\n", "metadata": {"source": "https://www.striim.com/docs/en/azure-blob-writer.html", "title": "Azure Blob Writer", "language": "en"}} {"page_content": "\n\nAzure DatabricksSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsAzure DatabricksPrevNextAzure DatabricksSee Databricks Writer.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-06-16\n", "metadata": {"source": "https://www.striim.com/docs/en/azure-databricks.html", "title": "Azure Databricks", "language": "en"}} {"page_content": "\n\nAzure Event Hub WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsAzure Event Hub WriterPrevNextAzure Event Hub WriterWrites to an existing Azure event hub, which is equivalent to a Kafka topic.Azure Event Hubs is similar to Kafka, compatible with many Kafka tools, and uses some of the same architectural elements, such as consumer groups and partitions. AzureEventHubWriter is generally similar to\u00a0Kafka Writer in sync mode and its output formats are the same.When Striim is deployed on a network with both a firewall and a proxy, open port 443. If there is a firewall but no proxy, open port 5671 and perhaps also 5672. See Connections and sessions for information on firewall settings.Azure Event Hub Writer propertiespropertytypedefault valuenotesBatch PolicyStringSize:1000000, Interval:30sThe batch policy may include size or interval.\u00a0Cached data is written to the target every time either of the specified values is exceeded. With the default setting, data will be written every 30 seconds or sooner if the cache contains 1,000,000 bytes. When the application is stopped any remaining data in the buffer is discarded.Connection RetryStringRetries:0, RetryBackOff:1mWith the default Retries:0, retry is disabled. To enable retries, set a positive value for Retries and in RetryBackOff specify the interval between retries in minutes (#m) or seconds (#s) . 
For example, with the setting Retries:3, RetryBackOff:30s, if the first connection attempt is unsuccessful, in 30 seconds Striim will try again. If the second attempt is unsuccessful, in 30 seconds Striim will try again. If the third attempt is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Consumer GroupStringIf E1P is true, specify an Event Hub consumer group for Striim to use for tracking which events have been written.E1PBooleanfalseWith the default value, after recovery (see\u00a0Recovering applications) there may be some duplicate events. Set to true to ensure that there are no duplicates (\"exactly once processing\"). If recovery is not enabled for the application, this setting will have no effect.Recovering applicationsWhen this property is set to true, the target event hub must be empty the first time the application is started, and other applications must not write to the event hub.When set to true, AzureEventHubWriter will use approximately 42\u00a0MB of memory per partition, so if the hub has 32 partitions, it will use 1.3\u00a0GB.Event Hub ConfigStringIf Striim is connecting with Azure through a proxy server, provide the connection details, in the format ProxyIP=, ProxyPort=, ProxyUsername=, ProxyPassword:, for example, EventHubConfig='ProxyIP=192.0.2.100, ProxyPort=8080, ProxyUsername=myuser, ProxyPassword=passwd.Event Hub NameStringthe name of the event hub, which must exist when the application is started and have between two and 32 partitionsEvent Hub NamespaceStringthe namespace of the specified event hubOperation TimeoutInteger1mamount of time Striim will wait for Azure to respond to requests (reading, writing, or closing connections) before the application will failPartition KeyStringThe name of a field in the input stream whose values determine how events will be distributed among multiple partitions. Events with the same partition key field value will be written to the same partition.If the input stream is of any type except WAEvent, specify the name of one of its fields.If the input stream is of the WAEvent type, specify a field in the METADATA map (see WAEvent contents for change data) using the syntax\u00a0@METADATA(), or a field in the USERDATA map (see\u00a0Adding user-defined data to WAEvent streams), using the syntax\u00a0@USERDATA(). 
If appropriate, you may concatenate multiple METADATA and/or USERDATA fields.WAEvent contents for change dataSAS KeyStringthe primary key associated with the SAS policyIf a running Striim Cloud private endpoint is associated with the same Azure service as the SAS key, Striim will use it automatically (see Using Azure private endpoints for more information).SAS Policy NameStringan Azure SAS policy to authenticate connections (see Shared Access Authorization Policies)For samples of the output, see:KafkaWriter output with AvroFormatterKafkaWriter output with DSVFormatterKafkaWriter output with JSONFormatterKafkaWriter output with XMLFormatterIf E1P is set to true, the records will contain information Striim can use to ensure no duplicate records are written during recovery (see Recovering applications).Recovering applicationsAzure Event Hub Writer sample application that writes data to an event hubThe following sample application will write data from\u00a0PosDataPreview.csv to an event hub.CREATE SOURCE PosSource USING FileReader (\n directory:'Samples/PosApp/AppData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:yes\n)\nOUTPUT TO RawStream;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM RawStream;\n\nCREATE TARGET EventHubTarget USING AzureEventHubWriter (\n EventHubNamespace:'myeventhub-ns',\n EventHubName:\u2019PosAppData\u2019,\n SASTokenName:'RootManageSharedAccessKey',\n SASToken:'******',\n PartitionKey:'merchantId'\n)\nFORMAT USING DSVFormatter ()\nINPUT FROM PosDataStream;\nAzure Event Hub Writer sample application that replicates data to an event hubThe following sample application will replicate data from two Oracle tables to two partitions in an event hub.CREATE SOURCE OracleSource USING OracleReader (\n Username:'myname',\n Password:'******',\n ConnectionURL: 'localhost:1521:XE\u2019,\n Tables:'QATEST.EMP;QATEST.DEPT\u2019\n) \nOUTPUT TO sourceStream;\n\nCREATE TARGET EventHubTarget USING AzureEventHubWriter (\n EventHubNamespace:'myeventhub-ns',\n EventHubName:\u2019OracleData\u2019,\n SASTokenName:'RootManageSharedAccessKey',\n SASToken:'******',\n PartitionKey:'@metadata(TableName)',\n E1P:'True',\n ConsumerGroup:'testconsumergroup'\n)\nFORMAT USING DSVFormatter()\nINPUT FROM sourceStream;In this section: Azure Event Hub WriterAzure Event Hub Writer propertiesAzure Event Hub Writer sample application that writes data to an event hubAzure Event Hub Writer sample application that replicates data to an event hubSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-05-30\n", "metadata": {"source": "https://www.striim.com/docs/en/azure-event-hub-writer.html", "title": "Azure Event Hub Writer", "language": "en"}} {"page_content": "\n\nAzure SQL DWH (Data Warehouse) WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsAzure SQL DWH (Data Warehouse) WriterPrevNextAzure SQL DWH (Data Warehouse) WriterSee Azure Synapse Writer.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2020-06-06\n", "metadata": {"source": "https://www.striim.com/docs/en/azure-sql-dwh--data-warehouse--writer.html", "title": "Azure SQL DWH (Data Warehouse) Writer", "language": "en"}} {"page_content": "\n\nAzure Synapse WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsAzure Synapse WriterPrevNextAzure Synapse WriterWrites to Azure Synapse (formerly Azure SQL Data Warehouse).Prerequisites:Deploy an Azure Synapse instance.Deploy an Azure Blob Storage or Azure Data Lake Storage instance to be used for staging the data. See Best practices for loading data into a dedicated SQL pool in Azure Synapse Analytics.Optionally, connect the Azure Synapse instance and the Azure Blob Storage or Azure Data Lake Storage Gen2 instance with an Azure Virtual Network (VNet). See Impact of using VNet Service Endpoints with Azure storage, particularly the prerequisites and the instructions for creating a database master key.Create an Azure Synapse login for use by Striim.Create an Azure Synapse database scoped credential with the storage account name as the IDENTITY and the storage account access key as the SECRET. For example:CREATE MASTER KEY ENCRYPTION BY PASSWORD='';\nCREATE DATABASE SCOPED CREDENTIAL AppCred WITH IDENTITY = '',\nSECRET = ''; You can view scoped credentials with the command:SELECT * FROM sys.database_scoped_credentials;If using MERGE mode:All target tables must be hash-distributed. See Guidance for designing distributed tables using dedicated SQL pool in Azure Synapse Analytics. If all tables are not hash-distributed, use APPENDONLY mode to avoid possible data loss.For best performance, partition the tables (see Partitioning tables in dedicated SQL pool). 
If an Oracle source column is used as the partition column, the source column must have supplemental logging enabled.Source tables must not have more than approximately 510 columns if Optimized Merge is False and not more than approximately 340 columns if Optimized Merge is True. This is due to Synapse's limit of 1024 columns per join in staging (see Capacity limits for dedicated SQL pool in Azure Synapse Analytics; Striim requires multiple columns in staging for each column in the source.Azure Synapse Writer propertiespropertytypedefault valuenotesAccount Access KeyStringthe account access key for the storage account from Storage accounts > > Access keysAccount NameStringthe storage account nameCDDL ActionStringProcessSee Handling schema evolution.If TRUNCATE commands may be entered in the source and you do not want to delete events in the target, precede the writer with a CQ with the select statement ELECT * FROM WHERE META(x, OperationName).toString() != 'Truncate'; (replacing with the name of the writer's input stream). Note that there will be no record in the target that the affected events were deleted.Client ConfigurationStringIf using a proxy, specify ProxyHost=,ProxyPort=.Column DelimiterString|If the data to be written may contain the default column delimiter (ASCII / UTF-8 124), specify a different delimiter that will never appear in the data.Connection Retry PolicyStringinitialRetryDelay=10s, retryDelayMultiplier=2, maxRetryDelay=1m, maxAttempts=10, totalTimeout=10mWith the default setting, if a connection attempt is unsuccessful, the adapter will try again in 10 seconds (InitialRetryDelay=10s). If the second attempt is unsuccessful, in 20 seconds it will try a third time (InitialRetryDelay=10s multiplied by retryDelayMultiplier=2). If that fails, the adapter will try again in 40 seconds (the previous retry interval 20s multiplied by 2). If connection attempts continue to fail, the the adapter will try again every 60 seconds (maxRetryDelay=1m) until a total of 10 connection attempts have been made (maxAttempts=10), after which the adapter will halt and log an exception.The adapter will halt when either maxAttempts or totalTimeout is reached.InitialRetryDelay, maxRetryDelay, and totalTimeout may be specified in milliseconds (ms), seconds (s, the default), or minutes (m).If retryDelayMultiplier is set to 1, connection will be attempted on the fixed interval set by InitialRetryDelay.To disable connection retry, set maxAttempts=0.Negative values are not supported.Connection URLStringthe JDBC connection URL for Azure Synapse, in the format\u00a0jdbc:sqlserver://:;database=, for example,\u00a0jdbc:sqlserver://mysqldw.database.windows.net:1433;database=mydbExcluded TablesStringWhen a wildcard is specified for Tables, you may specify here any tables you wish to exclude from the query. Specify the value exactly as for Tables. For example, to include data from all tables whose names start with HR except HRMASTER:Tables='HR%',\nExcludedTables='HRMASTER'Ignorable Exception CodeStringSet to TABLE_NOT_FOUND to prevent the application from terminating when Striim tries to write to a table that does not exist in the target. 
See Handling \"table not found\" errors for more information.Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Merge APIStringSTRIIM_MERGEWith the default value STRIIM_MERGE, Striim will delete the existing row and insert the updated row.Change to AZURE_MERGE to use Azure Synapse's new Merge command (see Learn > SQL > Transact-SQL (T-SQL) Reference > Statements > General > MERGE).ModeStringMERGEWith the default value of MERGE, inserts and deletes in the source are handled as inserts and deletes in the target. With this setting:Since Synapse does not have primary keys, you may include the\u00a0keycolumns option in the Tables property to specify a column in the target table that will contain a unique identifier for each row: for example,\u00a0Tables:'SCOTT.EMP,mydb.mydataset.employee keycolumns(emp_num)'.You may use wildcards for the source table provided all the tables have the key columns: for example,\u00a0Tables:'DEMO.%,mydataset.% KeyColumns(...)'.If you do not specify keycolumns , Striim will concatenate all column values and use that as a unique identifier.Set to APPENDONLY to handle all operations as inserts. With this setting:Updates and deletes from DatabaseReader, IncrementalBatchReader, and SQL CDC sources are handled as inserts in the target.Primary key updates result in two records in the target, one with the previous value and one with the new value. If the Tables setting has a ColumnMap that includes @METADATA(OperationName), the operation name for the first event will be DELETE and for the second INSERT.Optimized MergeBoolanFalseNot supported when CDDL Action is Process.Set to True only when Mode is MERGE and the target's input stream is the output of an HP NonStop reader, MySQL Reader, or Oracle Reader source and the source events will include partial records. For example, with Oracle Reader, when supplemental logging has not been enabled for all columns, partial records are sent for updates. When the source events will always include full records, leave this set to false.Set to True also when the source is Oracle Reader and the source table includes BLOB or CLOB columnsParallel ThreadsIntegerSee Creating multiple writer instances. Not supported when Mode is MERGE.Passwordencrypted passwordThe password for the specified user. See Encrypted passwords.Storage Access Driver TypeStringWASBSSet to ABFS if you are using an Azure Data Lake Storage instance for staging the data, or if you are using a general-purpose blob storage instance connected to Synapse using VNet or across a firewall. (See The Azure Blob Filesystem driver (ABFS) for more information.)Leave at the default setting WASBS if using a general-purpose V1 or V2 blob storage account without VNet or a firewall.TablesStringThe name(s) of the table(s) to write to. The table(s) must exist in the DBMS and the user specified in Username must have insert permission.When the target's input stream is a user-defined event, specify a single table.If the source table has no primary key, you may use the KeyColumns option to define a unique identifier for each row in the target table: for example,\u00a0Tables:'sourcedb.emp,mydb.mySchema.emp KeyColumns(emp)'. The target table must be specified with a three-part name. If necessary to ensure uniqueness, specify multiple columns with the syntax\u00a0KeyColumns(,,...). You may use wildcards for the source table, provided all the tables have the key columns: for example,\u00a0Tables:'sourcedb.%,mydb.myschema.% KeyColumns(...)'. 
If the source has no primary key and KeyColumns is not specified, the concatenated value of all source fields is used as the primary key in the target.

When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch Reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet (database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:
source.emp,target.emp
source.db1,target.db1;source.db2,target.db2
source.%,target.%
source.mydatabase.emp%,target.mydb.%
source1.%,target1.%;source2.%,target2.%
MySQL and Oracle names are case-sensitive, SQL Server names are not. Specify names as schema.table for MySQL and Oracle and as database.schema.table for SQL Server.
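For example, here is a minimal sketch of a target that combines a wildcard Tables mapping with KeyColumns and ExcludedTables. The stream, schema, and table names are hypothetical, the credential values are placeholders, and the sketch assumes every HR source table has an emp_num column that can serve as the unique identifier:

CREATE TARGET SynapseWildcardTarget USING AzureSQLDWHWriter (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'jdbc:sqlserver://mydw.database.windows.net:1433;database=mydb',
  AccountName: 'mystorageaccount',
  AccountAccessKey: '********',
  Tables: 'HR.%,mydb.dbo.% KeyColumns(emp_num)',
  ExcludedTables: 'HR.HRMASTER'
)
INPUT FROM OracleCDCStream;

Because the target uses a wildcard, each target table is addressed by its three-part name and HRMASTER is skipped entirely rather than written with a concatenated key.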
See Mapping columns for additional options.

Upload Policy (String, default eventcount:10000, interval:5m): The upload policy may include eventcount, interval, and/or filesize (see Setting output names and rollover / upload policies for syntax). Cached data is written to the storage account every time any of the specified values is exceeded. With the default value, data will be written every five minutes or sooner if the cache contains 10,000 events. When the app is undeployed, all remaining data is written to the storage account.

Username (String): the user name Striim will use to log in to the Azure Synapse instance specified in ConnectionURL.

Azure Synapse Writer sample application

The following sample application would read from Oracle using IncrementalBatchReader and write to Azure Synapse.

CREATE SOURCE ibr2azdw_Source USING IncrementalBatchReader ( 
  Username: 'striim',
  Password: '********',
  ConnectionURL: '192.0.2.1:1521:orcl',
  Tables: 'MYSCHEMA.TABLE1',
  CheckColumn: 'MYSCHEMA.TABLE1=UUID',
  StartPosition: 'MYSCHEMA.TABLE1=1234'
) 
OUTPUT TO ibr2azdw_Source_Stream ;

CREATE TARGET ibr2azdw_AzureSynapseTarget1 USING AzureSQLDWHWriter ( 
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'jdbc:sqlserver://testserver.database.windows.net:1433;database=rlsdwdb',
  Tables: 'MYSCHEMA.TABLE1,dbo.TABLE1',
  AccountName: 'mystorageaccount',
  AccountAccessKey: '********'
) 
INPUT FROM ibr2azdw_Source_Stream;

Azure Synapse data type support and correspondence

TQL type → Azure Synapse type:
java.lang.Byte → tinyint
java.lang.Double → float
java.lang.Float → float
java.lang.Integer → int
java.lang.Long → bigint
java.lang.Short → smallint
java.lang.String → char, nchar, nvarchar, varchar
org.joda.time.DateTime → datetime, datetime2, datetimeoffset

When the input of an Azure Synapse target is the output of a MySQL source (DatabaseReader, IncrementalBatchReader, or MySQLReader):
bigint → bigint, numeric
bigint unsigned → bigint
binary → binary
char → nchar
date → date
datetime → datetime, datetime2, datetimeoffset
decimal → decimal
decimal unsigned → decimal
double → money, smallmoney
float → float, real
int → int
int unsigned → int
longblob → varbinary
longtext → varchar
mediumblob → binary
mediumint → int
mediumint unsigned → int
mediumtext → varchar
numeric unsigned → int
smallint → smallint
smallint unsigned → smallint
text → varchar
time → time
tinyblob → binary
tinyint → bit (if only one digit), tinyint
tinyint unsigned → tinyint
tinytext → varchar
varbinary → varbinary
varchar → nvarchar, varchar
year → varchar

When the input of an Azure Synapse target is the output of an Oracle source (DatabaseReader, IncrementalBatchReader, or OracleReader):
binary_double → float
binary_float → real
blob → binary, varbinary
char → char
clob → nvarchar
date → date
float → float
nchar → nchar
nclob → varchar
number(1) → bit
number(10,4) → smallmoney
number(10) → int
number(19,4) → money
number(19) → bigint
number(3) → tinyint
number(5) → char, smallint
timestamp → datetime, datetime2, datetimeoffset
timestamp with local timezone → datetimeoffset
timestamp with timezone → datetimeoffset
varchar2 → varchar
varchar2(30) → time
xmltype → varchar

When the input of an Azure Synapse target is the output of a SQL Server source (DatabaseReader, IncrementalBatchReader, or MSSQLReader):
bigint → bigint
binary → binary
bit → bit, char
date → date
datetime → datetime
datetime2 → datetime2
datetimeoffset → datetimeoffset
decimal → decimal
float → float
image → varbinary
int → int
money → money
nchar → nchar
ntext → varchar
numeric → numeric
nvarchar → nvarchar
real → real
smalldatetime → smalldatetime
smallint → smallint
smallmoney → smallmoney
text → varchar
time → time
tinyint → tinyint
varbinary → varbinary
varchar → varchar
xml → varchar
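As referenced in the CDDL Action property above, a CQ placed in front of the writer can filter out TRUNCATE operations so they are not applied to the target. A minimal sketch, assuming the writer's input currently comes from a hypothetical stream named SourceStream:

CREATE CQ FilterTruncateCQ
INSERT INTO FilteredStream
SELECT * FROM SourceStream x
WHERE META(x, OperationName).toString() != 'Truncate';

The writer's INPUT FROM would then reference FilteredStream instead of SourceStream.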
BigQuery Writer

BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse offered by Google. Striim's BigQueryWriter writes the data from various supported sources into Google's BigQuery data warehouse to support real-time data warehousing and reporting. BigQuery Writer can be used to move data from transactional databases such as Oracle, Amazon RDS, Azure SQL DB, PostgreSQL, Microsoft SQL Server, MySQL, Google Spanner, and other supported databases into BigQuery with low latency and in real time. Striim supports complete table loads, change feeds (CDC), and incremental batch feeds into BigQuery. BigQuery Writer properties can be configured to support authentication, object mappings, batching, performance, and failure handling.

BigQuery upload methods: Streaming vs. Load

You have a choice of three methods, each using a different API, for BigQuery Writer to write to its target tables. You cannot switch between these methods while an application is running.

Storage Write API (streaming): Incoming data is buffered locally as one memory buffer per target table. Once the upload condition is met, BigQuery Writer makes multiple AppendRows calls, each with a maximum of 10,000 rows or 5 MB of data, to stream the content of each memory buffer into its target table. This method provides higher performance at lower cost than the other two APIs.

Legacy streaming API: Incoming data is buffered locally as one memory buffer per target table. Once the upload condition is met, BigQuery Writer makes multiple InsertAllResponse calls, each with a maximum of 10,000 rows or 5 MB of data, to stream the content of each memory buffer into its target table. By default, up to five such requests are run in parallel; this can be adjusted via the Streaming Configuration property. After upgrading from Striim 4.1.0 or earlier, an application using this API will be switched to the Storage Write API, but you may alter the application to switch back to the legacy API.

Load: Incoming data is buffered locally as one CSV file per target table. Once the upload condition for a file is met, BigQuery Writer uses TableDataWriteChannel to upload the content of the file to BigQuery, which writes it to the target table. This method may be a good fit if your uploads are infrequent (for example, once every five minutes).

If you have BigQuery Writer applications using the Load method and are spending a lot of time tuning those applications' batch policies, or are running up against BigQuery's quotas, the Storage Write API method may work better for you.

Regardless of which method you choose, the client uses OAuth2 for authorization and TLS 1.2 to encrypt the connection.

Improving performance by partitioning BigQuery tables

When using BigQuery Writer's MERGE mode, partitioning the target tables can significantly improve performance by reducing the need for full-table scans. Partition columns must be specified when the tables are created; you cannot partition an existing table. See Introduction to partitioned tables for more information.

When BigQuery Writer's input stream is of type WAEvent and a database source column is mapped to a target table's partition column:
for INSERT events, the partition column value must be in the WAEvent data array for every INSERT event in the batch
for UPDATE events, the partition column value must be in the WAEvent before array for every UPDATE event in the batch
When this is not the case, a full-table scan will be required for the batch, and performance will be reduced accordingly.

If the partition column values will be updated, specify the source table primary key and partition column names in KeyColumns in the Tables property. For example, if id is the primary key in the source table and purchase_date is the partition column in BigQuery, the Tables property value would include KeyColumns(id, purchase_date). See the notes for Mode in BigQuery Writer properties for additional discussion of KeyColumns.

Limitations:
BigQuery Writer's Standard SQL property must be True.
The "Require partition filter" option (see Set partition filter requirements) must be disabled on all target tables. If it is not, if and when a full-table scan is required as described above, the application will halt.
See Google's Limitations and Partitioned table quotas and limits.

Typical BigQuery workflow

The most typical workflow when using BigQuery Writer is for streaming integration. Briefly, it works like this:
Create tables in the target system corresponding to those in the source. You may do this using an initial load wizard with Auto Schema Creation (see Creating apps using templates) or any other tool you prefer.
Create a Striim initial load application using Database Reader and BigQuery Writer. This application will use the Load method and, to avoid potential BigQuery quota limitations, will use large batches. Run it once to load existing data to BigQuery.
Create a Striim streaming integration application using the appropriate CDC reader and BigQuery Writer. This application will use the streaming method to minimize the time it takes for new data to arrive at the target. Streaming also avoids the quota limitation issues of the Load method. Typically you will want to enable recovery for this application (see Recovering applications). Run this application continuously to stream new data to BigQuery (in other words, to synchronize the source and target).

BigQuery architecture and terminology

Some of BigQuery's terminology may be confusing for users of relational databases that use different terms, or use the same terms to mean different things.
Project: contains one or more datasets, similar to the way an Oracle CDB contains one or more databases.
Dataset: contains one or more tables, similar to a MySQL, Oracle, PostgreSQL, or SQL Server database.
Schema: defines column names and data types for a table, similar to a CREATE TABLE DDL statement in SQL.
Table: equivalent to a table in SQL.

BigQuery setup

Before you can use BigQuery Writer, you must create a service account (see Service Accounts). The service account must have the BigQuery Data Editor, BigQuery Job User, and BigQuery Resource Admin roles for the target tables (see BigQuery predefined Cloud IAM roles). Alternatively, you may create a custom role with the following permissions for the target tables (see BigQuery custom roles):
bigquery.datasets.get
bigquery.jobs.create
bigquery.jobs.get
bigquery.jobs.list
bigquery.jobs.listAll
bigquery.tables.create
bigquery.tables.delete
bigquery.tables.get
bigquery.tables.getData
bigquery.tables.list
bigquery.tables.update
bigquery.tables.updateData
bigquery.tables.updateTag
After you have created the service account, download its key file (see Authenticating with a service account key file) and copy it to the same location on each Striim server that will run this adapter, or to a network location accessible by all servers. You will specify the path and file name in BigQuery Writer's Service Account Key property.
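With the service account key in place, the streaming half of the typical workflow described above might look roughly like the following sketch. The stream, project, dataset, and table names are hypothetical, the key file path is a placeholder, and the KeyColumns value follows the id / purchase_date example given earlier:

CREATE TARGET BigQueryMergeTarget USING BigQueryWriter (
  ServiceAccountKey: '/opt/striim/bq-service-account.json',
  projectId: 'myprojectid',
  Mode: 'MERGE',
  StreamingUpload: true,
  Tables: 'MYSCHEMA.ORDERS,mydataset.orders KeyColumns(id, purchase_date)'
)
INPUT FROM OrdersCDCStream;

The initial-load half of the workflow would instead pair Database Reader with a BigQuery Writer target left at the default Load method (Streaming Upload False) and a larger batch policy, as described in the workflow above.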
BigQuery writer simple application

This simple application reads data from a .csv file from the PosApp sample application and writes it to BigQuery. In this case, BigQuery Writer has an input stream of a user-defined type. The sample code assumes that you have created a corresponding table, mydataset.mytable, in BigQuery.

CREATE SOURCE PosSource USING FileReader (
  wildcard: 'PosDataPreview.csv',
  directory: 'Samples/PosApp/appData',
  positionByEOF:false )
PARSE USING DSVParser (
  header:Yes,
  trimquote:false )
OUTPUT TO PosSource_Stream;
 
CREATE CQ PosSource_Stream_CQ
INSERT INTO PosSource_TransformedStream
SELECT TO_STRING(data[1]) AS MerchantId,
  TO_DATE(data[4]) AS DateTime,
  TO_DOUBLE(data[7]) AS AuthAmount,
  TO_STRING(data[9]) AS Zip
FROM PosSource_Stream;

CREATE TARGET BigQueryTarget USING BigQueryWriter(
  ServiceAccountKey:"//.json",
  projectId:"myprojectid",
  Tables: 'mydataset.mytable'
)
INPUT FROM PosSource_TransformedStream;

After running this application, in BigQuery run select * from mydataset.mytable; and you will see the data from the file. Since the default timeout is 90 seconds, it may take that long after the application completes before you see all 1160 records in BigQuery.

BigQuery Writer properties

The adapter properties are:

Allow Quoted Newlines (Boolean, default False): Set to True to allow quoted newlines in the delimited text files in which BigQueryWriter accumulates batched data.

Batch Policy (String, default eventCount:1000000, Interval:90): The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded. To disable batching, set to EventCount:1,Interval:0. With the default setting, data will be written every 90 seconds or sooner if the buffer accumulates 1,000,000 events. When Streaming Upload is False, use Interval:60 so as not to exceed the quota of 1,500 batches per day. When Streaming Upload is True, use EventCount:10000, since that is the quota for one batch. (Quotas are subject to change by Google.) Do not exceed BigQuery's quotas or limits (see Load jobs for the Load method or Query jobs for the streaming method in the "Quotas and limits" section of Google's BigQuery documentation). For example, if you exceed the quota of batches per table per day, BigQueryWriter will throw an exception such as error code 500, "An internal error occurred and the request could not be completed," and stop the application. To avoid this, reduce the number of batches by increasing the event count and/or interval. Contact Striim support if you need assistance in keeping within Google's quotas. When Optimized Merge is true and an event includes a primary key update, the batch is sent to the target immediately, without waiting to reach the eventCount or interval. Monitoring reports and MON output for BigQuery Writer targets include Queued Batches Size Bytes, which reports the total current size of the buffer in bytes.

CDDL Action (String, default Process): See Handling schema evolution. If TRUNCATE commands may be entered in the source and you do not want to delete events in the target, precede the writer with a CQ whose select statement is SELECT * FROM <input stream> WHERE META(x, OperationName).toString() != 'Truncate'; (replacing <input stream> with the name of the writer's input stream).
Note that there will be no record in the target that the affected events were deleted. If you are using the legacy streaming API to write to template tables, the default setting of Process may cause the application to halt due to a limitation in BigQuery that does not allow writing for up to 90 minutes after a DDL change (see BigQuery > Documentation > Guides > Use the legacy streaming API > Creating tables automatically using template tables > Changing the template table schema). In this case, supporting schema evolution is impossible, so set CDDL Action to Ignore. This is not an issue if you are using partitioned tables.

Column Delimiter (String, default | (UTF-8 007C)): The character(s) used to delimit fields in the delimited text files in which the adapter accumulates batched data. If the data will contain the | character, change the default value to a sequence of characters that will not appear in the data.

Connection Retry Policy (String, default totalTimeout=600, initialRetryDelay=10, retryDelayMultiplier=2.0, maxRetryDelay=60, maxAttempts=5, jittered=True, initialRpcTimeout=10, rpcTimeoutMultiplier=2.0, maxRpcTimeout=30): Do not change unless instructed to by Striim support.

Data Location (String): Specify the dataset's Data location property value if necessary (see Dataset Locations).

Encoding (String, default UTF-8): Encoding for the delimited text files in which BigQueryWriter accumulates batched data. Currently the only supported encoding is UTF-8 (see Loading encoded data).

Excluded Tables (String): When a wildcard is specified for Tables, you may specify here any tables you wish to exclude from the query. Specify the value exactly as for Tables. For example, to include data from all tables in mydataset except the table named ignore:
Tables:'mydataset.%',
ExcludedTables:'mydataset.ignore'

Ignorable Exception Code (String): Set to TABLE_NOT_FOUND to prevent the application from terminating when Striim tries to write to a table that does not exist in the target. See Handling "table not found" errors for more information. Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).

Include Insert ID (Boolean, default True): When Streaming Upload is False, this setting is ignored and is not displayed in the Flow Designer. When Mode is APPENDONLY and Streaming Upload is True, with the default setting of True, BigQuery will add a unique ID to every row. Set to False if you prefer that BigQuery not add unique IDs. For more information, see Ensuring data consistency and Disabling best effort de-duplication. When Mode is MERGE, you may set this to False, as Striim will de-duplicate the events before writing them to the target.

Mode (String, default APPENDONLY): With the default value APPENDONLY:
Updates and deletes from DatabaseReader, IncrementalBatchReader, and SQL CDC sources are handled as inserts in the target.
Primary key updates result in two records in the target, one with the previous value and one with the new value. If the Tables setting has a ColumnMap that includes @METADATA(OperationName), the operation name for the first event will be DELETE and for the second INSERT.
Data should be available for querying immediately after it has been written, but copying and modification may not be possible for up to 90 minutes (see Checking for data availability).
Set to MERGE to handle updates and deletes as updates and deletes instead.
When using MERGE:
Data will not be written to any target tables that have streaming buffers.
Since BigQuery does not have primary keys, you may include the keycolumns option in the Tables property to specify a column in the target table that will contain a unique identifier for each row: for example, Tables:'SCOTT.EMP,mydataset.employee keycolumns(emp_num)'.
You may use wildcards for the source table provided all the tables have the key columns and the target table is specified with its three-part name: for example, Tables:'DEMO.%,mydb.mydataset.% KeyColumns(...)'.
If you do not specify keycolumns, Striim will use the source table's keycolumns as a unique identifier. If the source table has no keycolumns, Striim will concatenate all column values and use that as a unique identifier.

Null Marker (String, default NULL): When Streaming Upload is False, a string inserted into fields in the delimited text files in which BigQueryWriter accumulates batched data to indicate that a field has a null value. These are converted back to nulls in the target tables. If any field might contain the string NULL, change this to a sequence of characters that will not appear in the data. When Streaming Upload is True, this setting has no effect.

Optimized Merge (Boolean, default False): Set to True only when Mode is MERGE and the target's input stream is the output of an HP NonStop reader, MySQL Reader, or Oracle Reader source and the source events will include partial records. For example, with Oracle Reader, when supplemental logging has not been enabled for all columns, partial records are sent for updates. When the source events will always include full records, leave this set to False.

Parallel Threads (Integer): See Creating multiple writer instances.

Private Service Connect Endpoint (String): Name of the Private Service Connect endpoint created in the target VPC. This endpoint name will be used to generate the private hostname internally and will be used for all connections. See Private Service Connect support for Google cloud adapters.

Project Id (String): Specify the project ID of the dataset's project.

Quote Character (String, default " (UTF-8 0022)): The character(s) used to quote (escape) field values in the delimited text files in which the adapter accumulates batched data. If the data will contain ", change the default value to a sequence of characters that will not appear in the data.

Service Account Key (String): The path (from root or the Striim program directory) and file name of the .json credentials file downloaded from Google (see BigQuery setup).

Standard SQL (Boolean, default True): With the default setting of True, BigQueryWriter constrains timestamp values to standard SQL. Set to False to use legacy SQL. See Migrating to Standard SQL for more information. Note that setting this to False may significantly reduce performance. See "Improving performance by partitioning BigQuery tables" above.

Streaming Configuration (String, default MaxRequestSizeInMB=5, MaxParallelRequests=5, ApplicationCreatedStreamMode=None, UseLegacyStreamingApi=False): When Streaming Upload is False, this setting is ignored and is not displayed in the Flow Designer. For best performance, adjust the values of the sub-properties so as not to exceed Google's quotas (see BigQuery > Documentation > Resources > Quotas and limits). If you need assistance in keeping within Google's quotas, contact Striim support. (A short sketch using this property follows the Streaming Upload entry below.) When using the Storage Write API (that is, when UseLegacyStreamingApi=False):
ApplicationCreatedStreamMode is active only when using the Storage Write API.
See BigQuery > Documentation > Guides > Batch load and stream data with BigQuery Storage Write API > Overview of the Storage Write API > Application-created streams for more information on "Committed type" application-created streams (Striim does not support the other types). With the default value of None, the Storage Write API's default mode is used. In this mode, Striim guarantees at-least-once processing (A1P), which means that after connection retries or recovery the target may have duplicate records, but none will be missing. If you have frequent retries due to transient connectivity issues resulting in numerous duplicates, set ApplicationCreatedStreamMode=CommittedMode to reduce the number of duplicates.
MaxRequestSizeInMB: the maximum size in MB of each streaming request, with a maximum value of 10 MB. See BigQuery > Documentation > Resources > Quotas and limits > API quotas and limits > Storage Write API.
MaxParallelRequests (in default mode only; ignored in Committed Mode): sets the maximum number of concurrent connections Striim will create to write to BigQuery. Setting a higher number will decrease the time required to write each streaming request. When the input for BigQuery Writer is from a CDC source and the mode is Append Only, set MaxParallelRequests to 1 to preserve the sequence of events. This will degrade performance, so we do not recommend setting MaxParallelRequests=1 in other situations.
When using the legacy streaming API (that is, when UseLegacyStreamingApi=True):
MaxRequestSizeInMB: the maximum size in MB of each streaming request, with a maximum value of 10 MB. Changing to a higher value (even below 10 MB) might cause "Invalid Errors" since internal metadata will be added along with the payload. See BigQuery > Documentation > Resources > Quotas and limits > Streaming inserts.
MaxRecordsPerRequest: use only when UseLegacyStreamingApi is True. See BigQuery > Documentation > Resources > Quotas and limits > Streaming inserts. Recommended value: 10000.
MaxParallelRequests: sets the maximum number of concurrent connections Striim will create to write to BigQuery. Setting a higher number will decrease the time required to write each streaming request. When the input for BigQuery Writer is from a CDC source and the mode is Append Only, set MaxParallelRequests to 1 to preserve the sequence of events. This will degrade performance, so we do not recommend setting MaxParallelRequests=1 in other situations.

Streaming Upload (Boolean, default False): With the default value of False, the writer uses the Load method. Set to True to use a streaming API. See the discussion of upload methods earlier in BigQuery Writer.
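For example, here is a minimal sketch of a CDC-fed, append-only BigQuery Writer target that enables the Storage Write API and sets MaxParallelRequests to 1 to preserve event order, as recommended above. The stream, project, dataset, and table names are hypothetical, the key file path is a placeholder, and the StreamingUpload / StreamingConfiguration spellings assume the usual TQL convention of removing the spaces shown in the property names above:

CREATE TARGET BigQueryAppendTarget USING BigQueryWriter (
  ServiceAccountKey: '/opt/striim/bq-service-account.json',
  projectId: 'myprojectid',
  StreamingUpload: true,
  StreamingConfiguration: 'MaxRequestSizeInMB=5, MaxParallelRequests=1, ApplicationCreatedStreamMode=None, UseLegacyStreamingApi=False',
  Tables: 'MYSCHEMA.%,mydataset.%'
)
INPUT FROM OrdersCDCStream;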
Tables (String): The name(s) of the table(s) to write to, in the format dataset.table. Dataset and table names are case-sensitive. The table(s) must exist when the application is started. When the target's input stream is a user-defined event, specify a single table. When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch Reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet (database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:
source.emp,target.emp
source.db1,target.db1;source.db2,target.db2
source.%,target.%
source.mydatabase.emp%,target.mydb.%
source1.%,target1.%;source2.%,target2.%
MySQL and Oracle names are case-sensitive, SQL Server names are not. Specify names as schema.table for MySQL and Oracle and as database.schema.table for SQL Server.
If the columns in the target are not in the same order as the source, writing will fail, even if the column names are the same. To avoid this, use ColumnMap to map at least one column. See Mapping columns for additional options.

Transport Options (String, default connectTimeout=300, readTimeout=120): Sets HTTP transport timeout options in seconds (see Java > Documentation > Reference > Class HttpTransportOptions) when using the legacy streaming API or the Load method. With the default setting, the connect timeout is five minutes and the read timeout is two minutes. This property is ignored when using the Storage Write API.

BigQuery data type support and correspondence

Oracle type → BigQuery type:
BFILE → unsupported
BINARY_DOUBLE → NUMERIC
BINARY_FLOAT → FLOAT64
BLOB → BYTES
CHAR → STRING
CLOB → STRING (an insert or update containing a column of this type generates two CDC log entries: an insert or update in which the value for this column is null, followed by an update including the value)
DATE → DATE
FLOAT → FLOAT64
INTERVALDAYTOSECOND → STRING
INTERVALYEARTOMONTH → STRING
LONG → unsupported
LONG RAW → unsupported
NCHAR → STRING
NCLOB → STRING
NESTED TABLE → unsupported
NUMBER → NUMERIC
NVARCHAR2 → STRING
RAW → BYTES
ROWID → unsupported
TIMESTAMP → DATETIME
TIMESTAMP WITH LOCAL TIMEZONE → TIMESTAMP
TIMESTAMP WITH TIMEZONE → TIMESTAMP
UROWID → unsupported
VARCHAR2 → STRING
VARRAY → unsupported
XMLTYPE → unsupported

PostgreSQL type → BigQuery type:
bigint → INT64
bigserial → INT64
bit → unsupported
bit varying → unsupported
boolean → BOOLEAN
box → unsupported
bytea → BYTES
character → STRING
character varying → STRING
cidr → unsupported
circle → unsupported
date → STRING
double precision → FLOAT64
inet → INT64
integer → INT64
int2 → INT64
int4 → INT64
int4range → STRING
int8 → LONG
int8range → STRING
integer → INTEGER
interval → STRING
json → STRING
jsonb → STRING
line → unsupported
lseg → unsupported
macaddr → unsupported
money → unsupported
numeric → NUMERIC
path → unsupported
pg_lsn → unsupported
point → unsupported
polygon → unsupported
real → REAL
smallint → INT64
smallserial → INT64
serial → INT64
text → STRING
time → STRING
time with time zone → TIMESTAMP
timestamp → DATETIME
timestamp with time zone → TIMESTAMP
tsquery → unsupported
tsvector → unsupported
txid_snapshot → unsupported
uuid → unsupported
xml → STRING

Cassandra Cosmos DB Writer

Writes to Cosmos DB using the Azure Cosmos DB Cassandra API.
This allows you to write to Cosmos DB as if it were Cassandra.

Note: If the writer exceeds the number of Request Units per second provisioned for your Cosmos DB instance (see Request Units in Azure Cosmos DB), the application may halt. The Azure Cosmos DB Capacity Calculator can give you an estimate of the appropriate number of RUs to provision. You may need more RUs during initial load than for continuing replication. See Optimize your Azure Cosmos DB application using rate limiting for more information.

Notes:
Add a Baltimore root certificate to Striim's Java environment following the instructions in To add a root certificate to the cacerts store.
Target tables must have primary keys.
Primary keys cannot be updated.
During recovery (see Recovering applications), events with primary keys that already exist in the target will be updated with the new values.
When the input stream of a Cassandra Cosmos DB Writer target is the output of a SQL CDC source, Compression must be enabled in the source.
If the writer exceeds the number of Request Units per second provisioned for your Cosmos DB instance (see Request Units in Azure Cosmos DB), the application will halt. You may use the Azure Cosmos DB Capacity Calculator to determine the appropriate number of RUs to provision. You may need more RUs during initial load than for continuing replication.
Data type support and correspondence are the same as for Database Writer (see Database Writer data type support and correspondence).

Cassandra Cosmos DB Writer properties

Account Endpoint (String): Contact Point from the Azure Cosmos DB account's Connection String page.

Account Key (encrypted password): Primary Password from the Azure Cosmos DB account's Connection String page's Read-write Keys tab.

Checkpoint Table (String, default CHKPOINT): To support recovery (see Recovering applications), a checkpoint table must be created in the target keyspace using the following DDL:

CREATE TABLE chkpoint (
  id varchar PRIMARY KEY,
  sourceposition blob,
  pendingddl int,
  ddl ascii);

If necessary you may use a different table name, in which case change the value of this property.

Column Name Escape Sequence (String): When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source, you may use this property to specify which characters Striim will use to escape column names that contain special characters or are on the List of reserved keywords. You may specify two characters to be added at the start and end of the name (for example, []), or one character to be added at both the start and end (for example, ").

Connection Retry (String, default retryInterval=30, maxRetries=3): With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval). If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.

Consistency Level (String, default ONE): How many replicas need to respond to the coordinator in order to consider the operation a success. Supported values are ONE, TWO, THREE, ANY, ALL, EACH QUORUM, and LOCAL QUORUM. For more information, see Consistency levels and Azure Cosmos DB APIs.

Excluded Tables (String): When a wildcard is specified for Tables, you may specify here any tables you wish to exclude.
Specify the value as for Tables.

Flush Policy (String, default EventCount:1000, Interval:60): If data is not flushed properly with the default setting, you may use this property to specify how many events Striim will accumulate before writing and/or the maximum number of seconds that will elapse between writes. For example:
flushpolicy:'eventcount:5000'
flushpolicy:'interval:10s'
flushpolicy:'interval:10s, eventcount:5000'
Note that changing this setting may significantly degrade performance. With a setting of 'eventcount:1', each event will be written immediately. This can be useful during development, debugging, testing, and troubleshooting.

Ignorable Exception Code (String): By default, if the Cassandra API returns an error, the application will terminate. Specify a portion of an error message to ignore errors and continue. This property is not case-sensitive. When the input stream is the output of a SQL CDC source, and primary keys will be updated in the source, set this to primary key to ignore primary key errors and continue. Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).

Keyspace (String): the Cassandra keyspace containing the specified tables.

Load Balancing Policy (String, default TokenAwarePolicy(RoundRobinPolicy())): See Specifying load balancing policies for more information.

Overload Policy (String, default retryInterval=10, maxRetries=3): With the default setting, if Cassandra Cosmos DB Writer exceeds the number of Request Units per second provisioned for your Cosmos DB instance (see Request Units in Azure Cosmos DB) and the Cassandra API reports an overload error, the adapter will try again in ten seconds (retryInterval). If that attempt is unsuccessful, in ten seconds it will try a second time, and if that also fails, in ten seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.

Parallel Threads (Integer): See Creating multiple writer instances.

Port (String, default 10350): Port from the Azure Cosmos DB account's Connection String page.

Tables (String): Cassandra table names must be lowercase. The tables must exist in Cassandra. Since columns in Cassandra tables are not usually created in the same order they are specified in the CREATE TABLE statement, when the input stream of the target is the output of a DatabaseReader or CDC source, the ColumnMap option is usually required (see Mapping columns) and wildcards are not supported. You may omit ColumnMap if you verify that the Cassandra columns are in the same order as the source columns.

Cassandra Cosmos DB Writer sample application

CREATE TARGET CassandraTarget USING CassandraCosmosDBWriter (
  AccountEndpoint: 'myCosmosDBAccount.cassandra.cosmos.azure.com',
  AccountKey: '**************************************************************************************==',
  Keyspace: 'myKeyspace',
  Tables: ':/

Cassandra Writer properties

See cassandra-jdbc-wrapper for additional options and more information.

Excluded Tables (String): When a wildcard is specified for Tables, you may specify here any tables you wish to exclude from the query. Specify the value exactly as for Tables. For example, to include data from all tables whose names start with HR except HRMASTER:
Tables='HR%',
ExcludedTables='HRMASTER'

Ignorable Exception Code (String): By default, if the target DBMS returns an error, Cassandra Writer terminates the application. Use this property to specify errors to ignore, separated by commas.
For example, to ignore "com.datastax.driver.core.exceptions.InvalidQueryException: PRIMARY KEY part id found in SET part," specify:
IgnorableExceptionCode: 'PRIMARY KEY'
When an ignorable exception occurs, Striim will write an "Ignoring VendorExceptionCode" message to the log, including the error number, and increment the "Number of exceptions ignored" value for the target. To view the number of exceptions ignored in the web UI, go to the Monitor page, click the application name, click Targets, and click More Details next to the target. Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).
When replicating from MySQL/MariaDB, Oracle 12c, PostgreSQL, and SQL Server CDC readers, the following three generic (that is, not corresponding to any database-specific error code) exceptions can be specified:
NO_OP_UPDATE: could not update a row in the target (typically because there was no corresponding primary key)
NO_OP_PKUPDATE: could not update the primary key of a row in the target (typically because the "before" primary key could not be found); not supported when source is PostgreSQLReader
NO_OP_DELETE: could not delete a row in the target (typically because there was no corresponding primary key)
These exceptions typically occur when other applications besides Striim are writing to the target database. The unwritten events will be captured to the application's exception store, if one exists (see CREATE EXCEPTIONSTORE).

Parallel Threads (Integer): See Creating multiple writer instances. Enabling recovery for the application disables parallel threads.

Password (encrypted password): The password for the specified user. See Encrypted passwords.

Tables (String): Specify the name(s) of the table(s) to write to. Cassandra table names must be lowercase. The tables must exist in Cassandra. Since columns in Cassandra tables are not usually created in the same order they are specified in the CREATE TABLE statement, when the input stream of the target is the output of a DatabaseReader or CDC source, the ColumnMap option is usually required (see Mapping columns). You may omit ColumnMap if you verify that the Cassandra columns are in the same order as the source columns. If a specified target table does not exist, the application will terminate with an error. To skip writes to missing tables without terminating, specify TABLE_NOT_FOUND as an Ignorable Exception Code. When the target's input stream is a user-defined event, specify a single table. When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch Reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet (database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%).
Examples:
source.emp,target.emp
source.db1,target.db1;source.db2,target.db2
source.%,target.%
source.mydatabase.emp%,target.mydb.%
source1.%,target1.%;source2.%,target2.%
MySQL and Oracle names are case-sensitive, SQL Server names are not. Specify names as schema.table for MySQL and Oracle and as database.schema.table for SQL Server.

Username (String): The DBMS user name the adapter will use to log in to the server specified in Connection URL. The specified user must have MODIFY permission on the tables to be written to.

Vendor Configuration (String): Reserved.

Cassandra Writer sample application

The sample application below assumes that you have created the following table in Cassandra:

CREATE TABLE mykeyspace.testtable (
  merchantid text PRIMARY KEY,
  datetime timestamp, 
  authamount decimal, 
  zip text);

The following TQL will write to that table:

CREATE SOURCE PosSource USING FileReader (
  wildcard: 'PosDataPreview.csv',
  directory: 'Samples/PosApp/appData',
  positionByEOF:false )
PARSE USING DSVParser (
  header:Yes,
  trimquote:false )
OUTPUT TO PosSource_Stream;
 
CREATE CQ PosSource_Stream_CQ
INSERT INTO PosSource_TransformedStream
SELECT TO_STRING(data[1]) AS MerchantId,
  TO_DATE(data[4]) AS DateTime,
  TO_DOUBLE(data[7]) AS AuthAmount,
  TO_STRING(data[9]) AS Zip
FROM PosSource_Stream;

CREATE TARGET CassandraTarget USING CassandraWriter(
  connectionurl: 'jdbc:cassandra://203.0.113.50:9042/mykeyspace',
  Username:'striim',
  Password:'******',
  Tables: 'mykeyspace.testtable'
)
INPUT FROM PosSource_TransformedStream;

Cosmos DB Writer

Writes to Azure Cosmos DB collections using the Cosmos DB SQL API. (To write to Cosmos DB using the Cassandra API, see Cassandra Cosmos DB Writer.) It may be used in four ways:

With an input stream of a user-defined type, CosmosDBWriter writes events as documents to a single collection. In this case, the key field of the input stream is used as the document ID. If there is no key field, the document ID is generated by concatenating the values of all fields. Alternatively, you may specify a subset of fields to be concatenated using the keycolumns option in the Collections property. Target document field names are taken from the input stream's event type.

With an input stream of type JSONNodeEvent that is the output stream of a source using JSONParser, CosmosDBWriter writes events as documents to a single collection. When the JSON event contains an id field, its value is used as the Cosmos DB document ID. When the JSON event does not contain an id field, a unique ID (for example, 5abcD-56efgh0-ijkl43) is generated for each document.
Since Cosmos DB is case-sensitive, when the JSON event includes a field named Id, iD, or ID, it is imported as a separate field.

With an input stream of type JSONNodeEvent that is the output stream of a MongoDBReader source, CosmosDBWriter writes each MongoDB collection to a separate Cosmos DB collection. MongoDB collections may be replicated in Cosmos DB by using wildcards in the Collections property (see Replicating MongoDB data to Azure CosmosDB for details). Alternatively, you may manually map MongoDB collections to Cosmos DB collections as discussed in the notes for the Collections property. The MongoDB document ID (included in the JSONNodeEvent metadata map) is used as the Cosmos DB document ID. When a source event contains a field named id, in the target it will be renamed ID to avoid conflict with the Cosmos DB target document ID.

With an input stream of type WAEvent that is the output stream of a SQL CDC reader or DatabaseReader source, CosmosDBWriter writes data from each source table to a separate collection. The target collections may be in different databases. Source table data may be replicated to Cosmos DB collections of the same names by using wildcards in the Collections property. Note that data will be read only from tables that exist when the source starts; additional tables added later will be ignored until the source is restarted. Alternatively, you may manually map source tables to Cosmos DB collections as discussed in the notes for the Collections property. When the source is a CDC reader, updates and deletes in source tables can be replicated in the corresponding Cosmos DB target collections. See Replicating Oracle data to Azure Cosmos DB. Each source row's primary key value (which may be a composite) is used as the document ID for the corresponding Cosmos DB document. If the table has no primary key, the document ID is generated by concatenating the values of all fields in the row. Alternatively, you may select a subset of fields to be concatenated using the keycolumns option as discussed in the notes for the Collections property. Each row in a source table is written to a document in the target collection mapped to the table. Target document field names are taken from the source event's metadata map and their values from its data array (see WAEvent contents for change data). When a source event contains a field named id, in the target it will be renamed ID to avoid conflict with the Cosmos DB target document ID.

Caution: Cosmos DB limits the number of characters allowed in document IDs (see Per-item limits in Microsoft's documentation). When using wildcards or keycolumns, be sure that the generated document IDs will not exceed that limit.

Striim provides templates for creating applications that read from various sources and write to Cosmos DB. See Creating an application using a template for details.

Cosmos DB Writer properties

Access Key (encrypted password): read-write key from the Azure Cosmos DB account's Keys tab.

Batch Policy (String, default EventCount:10000, Interval:30): The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded. To disable batching, set to EventCount:1,Interval:0. With the default setting, events will be sent every 30 seconds or sooner if the cache contains 10,000 events.

Collections (String): The collection(s) to write to. The collection(s) must exist when the application is started. Partition keys must match one of the fields of the input and matching is case-sensitive. Unpartitioned collections are not supported (see Migrate non-partitioned containers to partitioned containers). When the input stream is of a user-defined type or of type JSONNodeEvent from JSONParser, specify a single collection. When the input stream is of type JSONNodeEvent from MongoDBReader, or of type WAEvent from DatabaseReader or a CDC reader, specify one or more source-target pairs. Note that Cosmos DB collection names are case-sensitive, so they must match the case of source collection / table names. You may use wildcards ($ for MongoDB, % for other sources and Cosmos DB) in place of collection and table names. If you are not using wildcards, you may override the default document ID by specifying one or more source column names to concatenate as the document ID, using the keycolumns option in the Collections property. You may use wildcards and specify multiple values separated by semicolons. A setting for a specific table overrides a wildcard setting, for example, db1.col%(throughput=4000); db1.col5(throughput=2000).

Connection Pool Size (Integer, default 1000): maximum number of database connections allowed (should not exceed the connection pool size in Cosmos DB).

Connection Retry Policy (String, default RetryInterval=60, MaxRetries=3): The connection retry policy includes retryInterval and maxRetries. With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 60 seconds (retryInterval). If the second attempt is unsuccessful, in 60 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.

Excluded Collections (String): Optionally, specify one or more collections to exclude from the set defined by the Collections property. Wildcards are not supported.

Ignorable Exception Code (String): By default, if Cosmos DB returns an error, CosmosDBWriter terminates the application. Use this property to specify errors to ignore, separated by commas. Supported values are:
PARTITION_KEY_NOT_FOUND (partition key is wrong or missing)
RESOURCE_ALREADY_EXISTS (the target collection has a document with the same id and partition key)
RESOURCE_NOT_FOUND (id is wrong or missing)
Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).

Key Separator (String, default :): Inserted between values when generating document IDs by concatenating column or field values. If the values might contain a colon, change this to something that will not occur in those values.

Parallel Threads (Integer): See Creating multiple writer instances.

Service Endpoint (String): read-write connection string from the Azure Cosmos DB account's Keys tab.

Upsert Mode (Boolean, default False): Set to True to process inserts and updates as upserts. Note: Cosmos DB does not support updating Id or partition key fields via the upsert API. If one of these fields is updated in a source document and the document is not present in the target, Cosmos DB will throw a "Resource not found" exception.
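For example, a minimal sketch of a Cosmos DB Writer target fed by a CDC source, mapping one source table to one collection and naming a key column for the document ID. The stream, table, and collection names are hypothetical, the endpoint and key values are placeholders, and the property spellings assume the usual TQL convention of removing the spaces shown in the property list above:

CREATE TARGET CosmosSQLTarget USING CosmosDBWriter (
  ServiceEndpoint: '<read-write connection string from the Keys tab>',
  AccessKey: '********',
  Collections: 'MYSCHEMA.EMP,mydb.emp keycolumns(EMP_ID)'
)
INPUT FROM OracleCDCStream;

In this sketch the target collection mydb.emp must already exist and be partitioned, and EMP_ID must match a field of the input events (matching is case-sensitive).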
Database Writer

Writes to one of the following:
HP NonStop SQL/MX (and SQL/MP via aliases in SQL/MX)
MemSQL
MariaDB and MariaDB Galera Cluster
MySQL
Oracle
PostgreSQL
SAP HANA
SQL Server
Sybase

Database Writer properties

Batch Policy (String, default eventcount:1000, interval:60): The batch policy includes eventcount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded. To disable batching, set to BatchPolicy:'-1'. With the default setting, events will be sent every 60 seconds or sooner if the buffer accumulates 1,000 events.

Bidirectional Marker Table (String): When performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive.

CDDL Action (String, default Process): See Handling schema evolution.

Checkpoint Table (String, default CHKPOINT): The table where DatabaseWriter will store recovery information when recovery is enabled. See Creating the checkpoint table below for DDL to create the table. Multiple instances of DatabaseWriter may share the same table. If the table is not in the Oracle or SQL/MX schema being written to, or the same MySQL or SQL Server database specified in the connection URL, specify a fully qualified name.

Column Name Escape Sequence (String): When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source, you may use this property to specify which characters Striim will use to escape column names that are on the List of reserved keywords. You may specify two characters to be added at the start and end of the name (for example, []), or one character to be added at both the start and end. If this value is blank, Striim will use the following escape characters for the specified target databases:
Oracle: " (ASCII / UTF-8 22)
MySQL: ` (ASCII / UTF-8 60)
PostgreSQL: " (ASCII / UTF-8 22)
SQL Server: []

Commit Policy (String, default eventcount:1000, interval:60): The commit policy controls how often transactions are committed in the target database. The syntax is the same as for BatchPolicy. CommitPolicy values must always be equal to or greater than BatchPolicy values.
To disable CommitPolicy, set to CommitPolicy:'-1'.
If BatchPolicy is disabled, each event is sent to the target database immediately and the transactions are committed as specified by CommitPolicy.
If BatchPolicy is enabled and CommitPolicy is disabled, each batch is committed as soon as it is received by the target database.
If BatchPolicy and CommitPolicy are both disabled, each event received by DatabaseWriter will be committed immediately. This may be useful in development and testing, but is inappropriate for a production environment.

Connection Retry Policy (String, default retryInterval=30, maxRetries=3): With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval). If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.

Connection URL (String):
for HP NonStop SQL/MX: jdbc:t4sqlmx://<host>:<port> or jdbc:t4sqlmx://<host>:<port>/catalog=<catalog>;schema=<schema>
for MariaDB: jdbc:mariadb://<host>:<port>/<database>
for MariaDB Galera Cluster: specify the IP address and port for each server in the cluster, separated by commas: jdbc:mariadb://<host>:<port>,<host>:<port>,...; optionally, append /<database>
for MemSQL: same as MySQL
for MySQL: jdbc:mysql://<host>:<port>/<database>; to use an Azure private endpoint to connect to Azure Database for MySQL, see Specifying Azure private endpoints in sources and targets
for Oracle: jdbc:oracle:thin:@<host>:<port>:<SID> (using Oracle 12c with PDB, use the SID for the PDB service) or jdbc:oracle:thin:@<host>:<port>/<service name>; if one or more source tables contain LONG or LONG RAW columns, append ?useFetchSizeWithLongColumn=true
for PostgreSQL: jdbc:postgresql://<host>:<port>/<database>
for SAP HANA: jdbc:sap://<host>:<port>/?databaseName=<database>&currentSchema=<schema>
for SQL Server: jdbc:sqlserver://<host>:<port>;DatabaseName=<database> or jdbc:sqlserver://<host>\<instance>:<port>;DatabaseName=<database>
for Sybase: jdbc:jtds:sybase:<host>:<port>/<database>
When writing to MySQL, performance may be improved by appending ?rewriteBatchedStatements=true to the connection URL (see Configuration Properties and MySQL and JDBC with rewriteBatchedStatements=true).

Excluded Tables (String): When a wildcard is specified for Tables, you may specify here any tables you wish to exclude from the query. Specify the value exactly as for Tables. For example, to include data from all tables whose names start with HR except HRMASTER:
Tables='HR%',
ExcludedTables='HRMASTER'

Ignorable Exception Code (String): By default, if the target DBMS returns an error, DatabaseWriter terminates the application. Use this property to specify errors to ignore, separated by commas. For example, to ignore Oracle ORA-00001 and ORA-00002, you would specify:
IgnorableExceptionCode: '1,2'
Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE). When replicating from MySQL/MariaDB, Oracle 12c, PostgreSQL, and SQL Server CDC readers, the following generic (that is, not corresponding to any database-specific error code) exceptions can be specified:
DUPLICATE_ROW_EXISTS: could not insert a row in the target because an identical row already exists
NO_OP_UPDATE: could not update a row in the target (typically because there was no corresponding primary key)
NO_OP_PKUPDATE: could not update the primary key of a row in the target (typically because the "before" primary key could not be found); not supported when source is PostgreSQLReader
NO_OP_DELETE: could not delete a row in the target (typically because there was no corresponding primary key)
These exceptions typically occur when other applications besides Striim are writing to the target database.
The unwritten events will be captured to the application's exception store, if one exists (see CREATE EXCEPTIONSTORE).See also Switching from initial load to continuous replication.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Enabling recovery for the application disables parallel threads.Passwordencrypted passwordThe password for the specified user. See Encrypted passwords.Preserve Source Transaction BoundaryBooleanFalseSet to True to ensure that all operations in each transaction are committed together.When the target's input stream is the output of an HP NonStop source or when writing to an HP NonStop database, this setting must be False.This setting interacts with CommitPolicy as follows:When PreserveSourceTransactionBoundary is\u00a0True and CommitPolicy is disabled,\u00a0each transaction will be committed when all of its operations have been received. For example, if you have a series of three transactions containing 300, 400, and 700 operations, there will be three commits.When PreserveSourceTransactionBoundary is\u00a0True and CommitPolicy has a positive EventCount value, that value is the minimum number of operations included in each commit. For example, if CommitPolicy includes\u00a0EventCount=1000 and you have a series of three transactions containing 300, 400, and 700 operations, there will be one commit, after the third transaction (because the first two transactions had a total of only 700 operations, less than the EventCount value).SSL ConfigStringWhen the target is Oracle and it uses SSL, specify the required SSL properties (see the notes on SSL Config in Oracle Reader properties).Statement Cache SizeInteger50The number of prepared statements that Database Writer can cache. When the number of cached statements exceeds this number, the least recently used statement is dropped. When a DatabaseWriter Oracle target in the same application fails with the error \"ORA-01000: maximum open cursors exceeded,\" increasing this value may resolve the problem.TablesStringThe name(s) of the table(s) to write to. The table(s) must exist in the DBMS and the user specified in Username must have insert permission.The table(s) or view(s) to be read. MySQL, Oracle, and PostgreSQL names are case-sensitive, SQL Server names are not. Specify names as .
<database name>.<table name> for MySQL, <schema name>.<table name>
for Oracle, PostgresQL, and\u00a0SQL Server (but see the note below about SQL Server source table names)..If a specified target table does not exist, the application will terminate with an error. To skip writes to missing tables without terminating, specify TABLE_NOT_FOUND as an Ignorable Exception Code.When the target's input stream is a user-defined event, specify a single table.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet ((database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:source.emp,target.emp\nsource.db1,target.db1;source.db2,target.db2\nsource.%,target.%\nsource.mydatabase.emp%,target.mydb.%\nsource1.%,target1.%;source2.%,target2.%\nIf some of the source table names are mixed-case and the target database's table names are case-sensitive, put the wildcard for the target in double quotes, for example, source.*,target.\"*\".See\u00a0Mapping columns for additional options.UsernameStringthe DBMS user name the adapter will use to log in to the server specified in ConnectionURLVendor ConfigurationStringWhen the target is SQL Server, the following configuration options are supported. If the target table contains an identity, rowversion, or timestamp column and you do not specify the relevant option(s), the application will terminate.enableidentityInsert=true: for insert operations only (not updates), replicate identity column values from the source to the target using identity insertsexcludeColTypes={identity|rowversion|timestamp}: ignore any identity, rowversion, or timestamp values in the source and have the target database supply values; to specify multiple options, separate them with a comma, for example, exludeColTypes=identity,rowversionTo combine both options, separate them with a semicolon. For example, enableidentityInsert=true; exludeColTypes=timestamp would replicate identity column values and have the target database supply timestamp values.NotePostgreSQL does not allow NULL (\\0x00) character values (not to be confused with database NULLs) in text columns. If writing to PostgreSQL from a source that contains such values, Contact Striim support for a workaround.Database Writer sample applicationThe following example uses an input stream of a user-defined type. 
When the input is the output of a CDC or DatabaseReader source, see Replicating data from one Oracle instance to another.The following TQL will write to a MySQL table created as follows in MySQL database mydb:CREATE TABLE mydb.testtable (merchantId char(36), dateTime datetime, amount decimal(10,2), zip char(5));The striim user must have insert permission on mydb.testtable.CREATE SOURCE PosSource USING FileReader (\n directory:'Samples/PosApp/AppData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:yes\n)\nOUTPUT TO RawStream;\n \nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream partition by merchantId\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM RawStream;\n\nCREATE TARGET WriteMySQL USING DatabaseWriter (\n connectionurl: 'jdbc:mysql://192.168.1.75:3306/mydb',\n Username:'striim',\n Password:'******',\n Tables: 'mydb.testtable'\n) INPUT FROM PosDataStream;Creating the checkpoint tableWhen recovery is not enabled, there is no need to create the checkpoint table.When recovery is enabled, DatabaseWriter uses the table specified by the CheckpointTable\u00a0property to store information used to ensure that there are no missing or duplicate events after recovery (see\u00a0Recovering applications). Before starting DatabaseWriter with recovery enabled, use the following DDL to create the table, and grant insert, update, and delete privileges to the user specified in the Username property. The table and column names are case-sensitive, do not change them.HP NonStop SQL/MX (replace . with the catalog and schema in which to create the table):CREATE TABLE ..CHKPOINT (\n ID VARCHAR(100) NOT NULL NOT DROPPABLE PRIMARY KEY,\n SOURCEPOSITION VARCHAR(30400),\n PENDINGDDL NUMERIC(1),\n DDL VARCHAR(2000)\n) ATTRIBUTES BLOCKSIZE 32768;MySQL:CREATE TABLE CHKPOINT (\n id VARCHAR(100) PRIMARY KEY, \n sourceposition BLOB, \n pendingddl BIT(1), \n ddl LONGTEXT);Oracle:CREATE TABLE CHKPOINT (\n ID VARCHAR2(100) PRIMARY KEY, \n SOURCEPOSITION BLOB, \n PENDINGDDL NUMBER(1), \n DDL CLOB);\nPostgreSQL:create table chkpoint (\n id character varying(100) primary key,\n sourceposition bytea,\n pendingddl numeric(1), \n ddl text);SQL Server:CREATE TABLE CHKPOINT (\n id NVARCHAR(100) PRIMARY KEY,\n sourceposition VARBINARY(MAX), \n pendingddl BIT, \n ddl VARCHAR(MAX));SybaseCREATE TABLE CHKPOINT (\n id VARCHAR(100) PRIMARY KEY NOT NULL,\n sourceposition IMAGE, \n pendingddl NUMERIC, \n ddl TEXT);Database Writer data type support and correspondenceUse the following when the input stream is of a user-defined type. (See the Change Data Capture Guide when the input is the output of a CDC or DatabaseReader source.)Most Striim data types can map to any one of several column types in the target DBMS.TQL typeCassandraMariaDB / MySQLOraclejava. lang. ByteblobBIGINTLONGTEXTMEDIUMINTMEDIUMTEXTSMALLINTTEXTTINYINTINTNUMBERjava. lang. DoubledoubleDOUBLEREALBINARY_DOUBLEBINARY_FLOATFLOATNUMBERjava. lang. FloatfloatFLOATBINARY_DOUBLEBINARY_FLOATFLOATNUMBERjava. lang. IntegerintBIGINTINTMEDIUMINTSMALLINTTINYINTINTNUMBERjava. lang. LongbigintBIGINTSMALLINTTINYINTINTNUMBERjava. lang. ShortintBIGINTSMALLINTTINYINTINTNUMBERjava. lang. StringvarcharCHARTINYTEXTVARCHARCHARNCHARNVARCHARVARCHARVARCHAR2VARRAYorg.joda. time. DateTimetimestampDATEDATETIMETIMESTAMPYEARDATETIMESTAMPTQL typePostgreSQLSAP HANASQL Serverjava. lang. Bytenot supportedBLOBVARBINARYBIGINTSMALLINTTEXTTINYINTjava. lang. 
Doubledouble precisionDOUBLEFLOATFLOATjava. lang. FloatfloatFLOATREALFLOATREALjava. lang. IntegerintegerserialINTEGERBIGINTNUMERICSMALLINTTINYINTjava. lang. LongbigintbigserialBIGINTEGERBIGINTSMALLINTTINYINTjava. lang. ShortsmallintsmallserialSMALLINTBIGINTSMALLINTTINYINTjava. lang. Stringcharactercharacter varyingdatenumerictexttimestamp with timezonetimestamp without timezoneALPHANUMNVARCHARVARCHARCHARNCHARNVARCHARTEXTUNIQUEIDENTIFERVARCHARXMLorg.joda. time. DateTimetimestamp with timezonetimestamp without timezoneDATESECONDDATETIMETIMESTAMPDATEDATETIMEDATETIME2TIMEIn this section: Database WriterDatabase Writer propertiesDatabase Writer sample applicationCreating the checkpoint tableDatabase Writer data type support and correspondenceSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/en/database-writer.html", "title": "Database Writer", "language": "en"}} {"page_content": "\n\nDatabricks WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsDatabricks WriterPrevNextDatabricks WriterDatabricks Writer writes to Delta Lake tables in Databricks on AWS or Azure. For more information, see:Databricks on AWS and Databricks documentation for Amazon Web Services on databricks.com and Databricks on AWS on aws.amazon.comAzure Databricks on databricks.com and Azure Databricks and Azure Databricks documentation on microsoft.comThe required JDBC driver is bundled with Striim.Delta Lake is an open-source tabular storage. It includes a transaction log that supports features such as ACID transactions and optimistic concurrency control typically associated with relational databases. For more information, see What is Delta Lake? for AWS or What is Delta Lake? for Azure.LimitationsWriting to Databricks requires a staging area. The native Databricks File System (DBFS) has as a 2\u00a0GB cap on storage, which can cause file corruption. To work around that limitation, we strongly recommend using an external stage instead: Azure Data Lake Storage (ADLS) Gen2 for Azure Databricks or Amazon S3 for Databricks on AWS. To use an external stage, your Databricks instance must use Databricks Runtime 10.4 or later.If you will use MERGE mode, we strongly recommend partitioning your target tables as this will significantly improve performance (see Partitions | Databricks on AWS or Learn / Azure / Azure Databricks / Partitions.Data is written in batch mode. 
Streaming mode is not supported in this release because it is not supported by Databricks Connect (see Databricks Connect - Limitations).Creating a Databricks target using a templateNoteIn this release, Auto Schema Creation is not supported when you are using Databricks' Unity Catalog.When you create a Databricks Writer target using a wizard template (see Creating apps using templates), you must specify three properties: Connection URL, Hostname, and Personal Access Token. The Tables property value will be set based on your selections in the wizard.Databricks does not have schemas. When the source database uses schemas, the tables will be mapped as 
<source database>.<source schema>.<table>,<target database>.<table>
, for example, mydb.myschema.%,mydb.%. Each schema in the source will be mapped to a database in the target. If the databases do not exist in the target, Striim will create them.Databricks authentication mechanismsDatabricks authentication supports the use of Personal Access Tokens or Azure Active Directory (Azure AD). Azure AD authentication proceeds through the following phases:Register the Striim app with the Azure AD identity provider (IdP).Note the registered app's Client ID, Client Secret, and Tenant IDMake a request to the /authorize endpoint using the Postman app or the browser.Authenticate to Azure AD.Consent to login at the consent dialog box to obtain the authorization code.Provide the authorization code and Client Secret to the /token endpoint to obtain the access and refresh tokens.Authenticating to Databricks with Azure ADIn the Striim Databricks app, set the Authentication Type drop-down to AzureAD to use Azure AD authentication.Log in to the Azure Portal.Register a new app.Note the Application ID (referred to as Client ID in this procedure), the OAuth v2 authorization endpoint, and the OAuth v2 token endpoint.Generate a new Client secret.Note the Client Secret for future use.Add the AzureDatabricks API permission.(When the external stage is ADLS Gen 2) Add the Azure Storage API permission.The following procedure uses curl and the Web browser to fetch the refresh token.Open the following URL in a Web browser.https://login.microsoftonline.com//oauth2/v2.0/authorize?\n \u00a0 client_id=&\n \u00a0 response_type=code&\n \u00a0 redirect_uri=http%3A%2F%2Flocalhost%3A7734%2Fstriim-callback&\n \u00a0 response_mode=query&\n \u00a0 scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default%20offline_accessReplace with with the tenant ID of the registered app. Replace with the client ID of the registered app. Provide valid authentication credentials if Azure Portal requests authentication.The web browser redirects to the specified redirect URI. The authorization code is the part of the URI after the code= string.Note the authorization code for future use.Execute the following curl command.curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \\\n \u00a0 https://login.microsoftonline.com//oauth2/v2.0/token \\\n \u00a0 -d 'client_id=' \\-d 'client_secret=' \\\n \u00a0 -d 'scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default%20offline_access' \\\n \u00a0 -d 'code=' \\\n \u00a0 -d 'redirect_uri=http%3A%2F%2Flocalhost%3A7734%2Fstriim-callback' \\\n \u00a0 -d 'grant_type=authorization_code'Replace with with the tenant ID of the registered app. Replace with the client ID of the registered app. Replace with the client secret of the registered app. Replace with the previously noted authorization code.The call returns an object that contains an access_token key and a refresh_token key.Note the value of the refresh_token key.The following procedure uses the Postman app to generate an access token.Open the Postman app.In the Authorization tab, set the authorization type to OAuth 2.0.Configure values for the Client ID, Client secret, authorization URL and access token URL.Set the value of the Scope field to 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default offline_access.Set the value of the Callback URL field to the redirect URL determined in earlier procedures.Click Get New Access Token.Sign into Microsoft Azure and accept the app privilege requests at the consent dialog box.The browser sends an access token and a refresh token as a response. 
Note the value of the refresh token.When the External Stage type is ADLS Gen 2 and the authentication type is Azure AD, you must grant the service principal account the Storage Blob Data Contributor privilege before generating the access and refresh tokens.Example\u00a02.\u00a0TQL Example for Azure AD with ADLS Gen 2 as External Stage typeCREATE OR REPLACE TARGET db USING Global.DeltaLakeWriter (\u00a0\n \u00a0\u00a0tenantID: '71bfeed5-1905-43da-a4a4-49d8490731da',\n\u00a0\u00a0\u00a0connectionUrl: 'jdbc:spark://adb-8073469162361072.12.azuredatabricks.net:443/default;\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 transportMode=http;ssl=1;\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 httpPath=sql/protocolv1/o/8073469162361072/0301-101350-kprc8x3a;\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 AuthMech=3;UID=token;PWD=',\n\u00a0\u00a0\u00a0stageLocation: '/',\n\u00a0\u00a0\u00a0CDDLAction: 'Process',\n\u00a0\u00a0\u00a0adapterName: 'DeltaLakeWriter',\n\u00a0\u00a0\u00a0authenticationType: 'AzureAD',\n\u00a0\u00a0\u00a0ConnectionRetryPolicy: 'initialRetryDelay=10s, retryDelayMultiplier=2, maxRetryDelay=1m, maxAttempts=5, totalTimeout=10m',\n\u00a0\u00a0\u00a0ClientSecret: 'untNjHnQOzsY90BjrKs2napohIP8WebUUcXybRdKVURH0XeklB5+Xw8NZgZUylqn',\n\u00a0\u00a0\u00a0ClientSecret_encrypted: 'true',\n\u00a0\u00a0\u00a0ClientID: 'dcf190e8-a315-42bb-a0b1-86063ff1c340',\n\u00a0\u00a0\u00a0RefreshToken_encrypted: 'true',\n\u00a0\u00a0\u00a0Mode: 'APPENDONLY',\n\u00a0\u00a0\u00a0externalStageType: 'ADLSGen2',\n\u00a0\u00a0\u00a0Tables: 'public.sample_pk,default.testoauth',\n\u00a0\u00a0\u00a0azureAccountName: 'samplestorage',\n\u00a0\u00a0\u00a0RefreshToken: '',\n\u00a0\u00a0\u00a0azureContainerName: 'striim-deltalakewriter-container',\n\u00a0\u00a0\u00a0uploadPolicy: 'eventcount:10000,interval:60s' )\n\u00a0INPUT FROM sysout;Example\u00a03.\u00a0TQL Example using Personal Access Token and ADLS Gen 2 as External Stage typeCREATE TARGET db USING Global.DeltaLakeWriter (\n\u00a0\u00a0\u00a0connectionUrl: 'jdbc:spark://adb-8073469162361072.12.azuredatabricks.net:443/default;\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 transportMode=http;ssl=1;\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 httpPath=sql/protocolv1/o/8073469162361072/0301-101350-kprc8x3a;\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 AuthMech=3;UID=token;PWD=',\n\u00a0\u00a0\u00a0azureAccountAccessKey: '2YoK5czZpmPjxSiSe7uFVXrb9jt9P4xrWp+NNKxWzjU=',\n\u00a0\u00a0\u00a0stageLocation: '/',\n\u00a0\u00a0\u00a0CDDLAction: 'Process',\n\u00a0\u00a0\u00a0ConnectionRetryPolicy: 'initialRetryDelay=10s, retryDelayMultiplier=2, maxRetryDelay=1m, maxAttempts=5, totalTimeout=10m',\n\u00a0\u00a0\u00a0authenticationType: 'PersonalAccessToken',\n\u00a0\u00a0\u00a0Mode: 'APPENDONLY',\n\u00a0\u00a0\u00a0externalStageType: 'ADLSGen2',\n\u00a0\u00a0\u00a0Tables: 'public.sample_pk,default.testoauth',\n\u00a0\u00a0\u00a0azureAccountName: 'samplestorage',\n\u00a0\u00a0\u00a0azureAccountAccessKey_encrypted: 'true',\n\u00a0\u00a0\u00a0personalAccessToken: 'GGR/zQHfh7wQa3vJhP6dcWtejN1UL+E8YEXc13g9+UZdTQmYN1h3E0d0jabboJsd',\n\u00a0\u00a0\u00a0personalAccessToken_encrypted: 'true',\n\u00a0\u00a0\u00a0uploadPolicy: 'eventcount:10000,interval:60s' )\n\u00a0INPUT FROM sysout;Databricks Writer propertiesWhen creating a Databricks Writer target in TQL, you must specify values for the Connection URL, Hostname, Personal Access Token, and Tables properties. 
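For instance, a minimal target that specifies only those four required properties might look like the following sketch, in which the workspace URL, access token, table names, and input stream ns1.sourceStream are placeholders:

CREATE TARGET DatabricksMinimal USING DeltaLakeWriter (
    connectionUrl: 'jdbc:databricks://adb-xxxx.xx.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=xxx;AuthMech=3;UID=token;',
    hostname: 'adb-xxxx.xx.azuredatabricks.net',
    personalAccessToken: '*************************',
    tables: 'mydb.employee,mydatabase.employee'
)
INPUT FROM ns1.sourceStream;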
If not specified, the other properties will use their default values.propertytypedefault valuenotesAuthentication TypeenumPersonalAccessTokenWith the default setting PersonalAccessToken, Striim's connection to Databricks is authenticated using the token specified in Personal Access Token.Set to AzureAD to authenticate using Azure Active Directory. In this case, specify Client ID, Client Secret, Refresh Token, and Tenant ID. See Databricks authentication mechanisms for details.CDDL ActionenumProcessSee Handling schema evolution.If TRUNCATE commands may be entered in the source and you do not want to delete events in the target, precede the writer with a CQ with the select statement ELECT * FROM WHERE META(x, OperationName).toString() != 'Truncate'; (replacing with the name of the writer's input stream). Note that there will be no record in the target that the affected events were deleted.Client IDstringThis property is required when AzureAD authentication is selected as the value of the Authentication Type property.Client Secretencrypted passwordThis property is required when AzureAD authentication is selected as the value of the Authentication Type property.Connection Retry PolicyStringinitialRetryDelay=10s, retryDelayMultiplier=2, maxRetryDelay=1m, maxAttempts=5, totalTimeout=10mDo not change unless instructed to by Striim support.Connection URLStringProvide the JDBC URL from the JDBC/ODBC tab of the Databricks cluster's Advanced options (see Get connection details for a cluster). If the URL starts with jdbc:spark:// change that to jdbc:databricks:// (this is required by the upgraded driver bundled with Striim).External Stage TypeenumDBFSROOTWith the default value (not recommended), events are staged to DBFS storage at the path specified in Stage Location. To use an external stage, your Databricks instance should be using Databricks Runtime 11.0 or later.If running Databricks on AWS, set to S3 and set the S3 properties as detailed below.If running Azure Databricks, set to ADLSGen2 and set the ADLS properties as detailed below.HostnameStringthe Server Hostname from the JDBC/ODBC tab of the Databricks cluster's Advanced options (see Get connection details for a cluster)Ignorable Exception CodeStringSet to TABLE_NOT_FOUND to prevent the application from terminating when Striim tries to write to a table that does not exist in the target. See Handling \"table not found\" errors for more information.Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).ModeenumAppendOnlyWith the default value AppendOnly:Updates and deletes from DatabaseReader, IncrementalBatchReader, and SQL CDC sources are handled as inserts in the target.Primary key updates result in two records in the target, one with the previous value and one with the new value. If the Tables setting has a ColumnMap that includes @METADATA(OperationName), the operation name for the first event will be DELETE and for the second INSERT.Set to Merge to handle updates and deletes as updates and deletes instead. In Merge mode:Since Delta Lake tables do not have primary keys, you may include the\u00a0keycolumns option in the Tables property to specify a column in the target table that will contain a unique identifier for each row: for example,\u00a0Tables:'SCOTT.EMP,mydatabase.employee keycolumns(emp_num)'.You may use wildcards for the source table provided key columns are specified for all the target tables. 
For example,\u00a0Tables:'DEMO.%,mydatabase.% KeyColumns(...)'.If you do not specify keycolumns , Striim will use the source table's keycolumns as a unique identifier. If the source table has no keycolumns, Striim will concatenate all column values and use that as a unique identifier.Optimized MergeBooleanfalseIn Flow Designer, this property will be displayed only when Mode is Merge.Set to True only when Mode is MERGE and the target's input stream is the output of an HP NonStop reader, MySQL Reader, or Oracle Reader source and the source events will include partial records. For example, with Oracle Reader, when supplemental logging has not been enabled for all columns, partial records are sent for updates. When the source events will always include full records, leave this set to false.Parallel ThreadsIntegerNot supported when Mode is Merge.See\u00a0Creating multiple writer instances.Personal Access Tokenencrypted passwordUsed to authenticate with the Databricks cluster (see Generate a personal access token). The user associated with the token must have read and write access to DBFS (see Important information about DBFS permissions). If table access control has been enabled, the user must also have MODIFY and READ_METADATA (see Data object privileges - Data governance model).Refresh Tokenencrypted passwordThis property is required when AzureAD authentication is selected as the value of the Authentication Type property.Stage LocationString/When the External Stage Type is DBFSROOT, the path to the staging area in DBFS, for example, /StriimStage/.TablesStringThe name(s) of the table(s) to write to. The table(s) must exist in the database.Specify target table names in uppercase as ..
<catalog name>.<database name>.<table name>. Not specifying the catalog (<database name>.<table name>
) may result in errors if a table in another catalog has the same name.When the target's input stream is a user-defined event, specify a single table.The only special character allowed in target table names is underscore (_).When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet ((database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:source.emp,target_database.emp\nsource_schema.%,target_catalog.target_database.%\nsource_database.source_schema.%,target_database.%\nsource_database.source_schema.%,target_catalog.target_database.%MySQL and Oracle names are case-sensitive, SQL Server names are not. Specify names as .
<database name>.<table name> for MySQL and Oracle and as <schema name>.<table name>
for SQL Server.See\u00a0Mapping columns for additional options.Tenant IDStringThis property is required when AzureAD authentication is selected as the value of the Authentication Type property.Upload PolicyStringeventcount:100000, interval:60sThe upload policy may include eventcount and/or interval (see Setting output names and rollover / upload policies for syntax). Buffered data is written to the storage account every time any of the specified values is exceeded. With the default value, data will be written every 60 seconds or sooner if the buffer contains 100,000 events. When the app is quiesced, any data remaining in the buffer is written to the storage account; when the app is undeployed, any data remaining in the buffer is discarded.Azure Data Lake Storage (ADLS) Gen2 properties for Databricks WriterTo use ADLS Gen2, your Databricks instance should be using Databricks Runtime 11.0 or later.propertytypedefault valuenotesAzure Account Access Keyencrypted passwordWhen Authentication Type is set to ServiceAccountKey, specify the account access key from Storage accounts > > Access keys.When Authentication Type is set to AzureAD, this property is ignored in TQL and not displayed in the Flow Designer.Azure Account NameStringthe name of the Azure storage account for the blob containerAzure Container NameStringstriim-deltalakewriter-containerthe blob container name from Storage accounts > > ContainersIf it does not exist, it will be created.Amazon S3 properties for Databricks WriterTo use Amazon S3, your Databricks instance should be using Databricks Runtime 11.0 or later.propertytypedefault valuenotesS3 Access KeyStringan AWS access key ID (created on the AWS Security Credentials page) for a user with read and write permissions on the bucketS3 Bucket NameStringstriim-deltalake-bucketSpecify the S3 bucket to be used for staging. 
If it does not exist, it will be created.S3 RegionStringus-west-1the AWS region of the bucketS3 Secret Access Keyencrypted passwordthe secret access key for the access keySample TQL application using Databricks WriterSample TQL in AppendOnly mode:CREATE TARGET DatabricksAppendOnly USING DeltaLakeWriter ( \n personalAccessToken: '*************************', \n hostname:'adb-xxxx.xx.azuredatabricks.net',\n tables: 'mydb.employee,mydatabase.employee', \n stageLocation: '/StriimStage/', \n connectionUrl:'jdbc:xxx.xx;transportMode=http;ssl=1;httpPath=xxx;AuthMech=3;UID=token;'\n)\nINPUT FROM ns1.sourceStream;Sample TQL in Merge mode with Optimized Merge set to True:CREATE TARGET DatabricksAppendOnly USING DeltaLakeWriter ( \n personalAccessToken: '*************************', \n hostname:'adb-xxxx.xx.azuredatabricks.net',\n tables: 'mydb.employee,mydatabase.employee', \n stageLocation: '/StriimStage/', \n connectionUrl:'jdbc:xxx.xx;transportMode=http;ssl=1;httpPath=xxx;AuthMech=3;UID=token;',\n mode: 'MERGE',\n optimizedMerge: 'true'\n)\nINPUT FROM ns1.sourceStream;Databricks Writer data type support and mappingTQL typeDelta Lake typejava.lang.Bytebinaryjava.lang.Doubledoublejava.lang.Floatfloatjava.lang.Integerintjava.lang.Longbigintjava.lang.Shortsmallintjava.lang.Stringstringorg.joda.time.DateTimetimestampFor additional data type mappings, see Data type support & mapping for schema conversion & evolution.In this section: Databricks WriterLimitationsCreating a Databricks target using a templateDatabricks authentication mechanismsAuthenticating to Databricks with Azure ADDatabricks Writer propertiesAzure Data Lake Storage (ADLS) Gen2 properties for Databricks WriterAmazon S3 properties for Databricks WriterSample TQL application using Databricks WriterDatabricks Writer data type support and mappingSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-05-16\n", "metadata": {"source": "https://www.striim.com/docs/en/databricks-writer.html", "title": "Databricks Writer", "language": "en"}} {"page_content": "\n\nFile WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsFile WriterPrevNextFile WriterWrites to files.File Writer propertiespropertytypedefault valuenotesData Encryption Key Passphraseencrypted passwordSee Setting encryption policies.DirectoryStringIf no directory is specified, the file will be written to the Striim program directory.If a directory name is specified without a path, it will be created in the the Striim program directory.If the specified directory does not exist, it will be created, provided the Striim server process has the necessary permissions.In a multi-server environment, if the directory is local to the Striim server, each server's file will contain only the events processed on that server.If the source is a File Reader and its Include Subdirectories property is True, the files will be written to subdirectories (with the same names as in the source) of the specified directory.See Setting output names and rollover / upload policies for advanced options.Encryption PolicyStringSee Setting encryption policies.File NameStringThe base name of the files to be written. See Setting output names and rollover / upload policies.Flush PolicyIntegerEventCount:10000, Interval:30sIf data is not flushed properly with the default setting, you may use this property to specify how many events FileWriter will accumulate before it writes to disk and/or the maximum number of seconds that will elapse between writes. For example:'eventcount:5000''interval:10''interval:10,eventcount:5000'With a setting of 'eventcount:1', each event will be written to disk immediately. This can be useful during development, debugging, testing, and troubleshooting.Rollover on DDLBooleanTrueHas effect only when the input stream is the output stream of a CDC reader source. With the default value of True, rolls over to a new file when a DDL event is received. Set to False to keep writing to the same file.Rollover PolicyStringEventCount:10000, Interval:30sSee Setting output names and rollover / upload policies.This adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsIn this section: File WriterFile Writer propertiesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
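As a hedged illustration of the File Writer properties described above (the directory, file name, policy values, and input stream PosDataStream are placeholders, not part of this documentation), a target that writes delimited output to local files might be defined as follows:

CREATE TARGET LocalFileOut USING FileWriter (
    directory: 'output/pos',
    filename: 'PosDataOut.csv',
    flushpolicy: 'eventcount:5000,interval:10',
    rolloverpolicy: 'eventcount:10000,interval:30s'
)
FORMAT USING DSVFormatter ()
INPUT FROM PosDataStream;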
Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/file-writer.html", "title": "File Writer", "language": "en"}} {"page_content": "\n\nGCS WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsGCS WriterPrevNextGCS WriterReads from Google Cloud Storage.GCS Writer propertiespropertytypedefault valuenotesBucket NameStringThe GCS bucket name. If it does not exist, it will be created (provided Location and Storage Class are specified).See Setting output names and rollover / upload policies for advanced options.Note the limitations in Google's Bucket and Object Naming Guidelines. Note particularly that bucket names must be unique not just within your project or account but across all Google Cloud Storage accounts.See Setting output names and rollover / upload policies for advanced Striim options.Client ConfigurationStringOptionally, specify one or more of the following property-value pairs, separated by commas,\u00a0to override Google's defaults (see\u00a0Class RetrySettings):connectionTimeout=: default is 30000msxErrorRetry=: default is 3retryDelay=: default is 30000Compression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.Data Encryption Key Passphraseencrypted passwordSee Setting encryption policies.Encryption PolicyStringSee Setting encryption policies.Folder NameStringOptionally, specify a folder within the specified bucket. If it does not exist, it will be created.See Setting output names and rollover / upload policies for advanced options.LocationStringThe location of the bucket, which you can find on the bucket's overview tab (see\u00a0Bucket Locations).Object NameStringThe base name of the files to be written.\u00a0See\u00a0Google's Bucket and Object Naming Guidelines.See Setting output names and rollover / upload policies for advanced Striim options.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Partition KeyStringIf you enable ParallelThreads, specify a field to be used to partition the events among the threads.\u00a0 Events will be distributed among multiple folders based on this field's values.\u00a0If the input stream is of any type except WAEvent, specify the name of one of its fields.If the input stream is of the WAEvent type, specify a field in the METADATA map (see WAEvent contents for change data) using the syntax\u00a0@METADATA(), or a field in the USERDATA map (see\u00a0Adding user-defined data to WAEvent streams), using the syntax\u00a0@USERDATA(). If appropriate, you may concatenate multiple METADATA and/or USERDATA fields.WAEvent contents for change dataPrivate Service Connect EndpointStringName of the Private Service Connect endpoint created in the target VPC.This endpoint name will be used to generate the private hostname internally and will be used for all connections.See Private Service Connect support in Google cloud adapters.Project IdStringThe Google Cloud Platform project for the bucket.Roll Over on DDLBooleanTrueHas effect only when the input stream is the output stream of a CDC reader source. 
With the default value of True, rolls over to a new file when a DDL event is received. Set to False to keep writing to the same file.Service Account KeyStringThe path (from root or the Striim program directory) and file name to the .json credentials file downloaded from Google (see Service Accounts).\u00a0This file must be copied to the same location on each Striim server that will run this adapter, or to a network location accessible by all servers.\u00a0The associated service account must have the Storage Legacy Bucket Writer role for the specified bucket.Storage ClassStringThe storage class of the bucket, which you can find on the bucket's overview tab (see\u00a0Bucket Locations).Upload PolicyStringeventcount:10000, interval:5mThe upload policy may include eventcount, interval, and/or filesize (see Setting output names and rollover / upload policies for syntax). Cached data is written to GCS every time any of the specified values is exceeded. With the default value, data will be written every five minutes or sooner if the cache contains 10,000 events. When the app is undeployed, all remaining data is written to GCS.This adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsGCS Writer sample applicationCREATE APPLICATION testGCS;\n\nCREATE SOURCE PosSource USING FileReader ( \n wildcard: 'PosDataPreview.csv',\n directory: 'Samples/PosApp/appData',\n positionByEOF:false )\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false ) \nOUTPUT TO PosSource_Stream;\n\nCREATE CQ PosSource_Stream_CQ \nINSERT INTO PosSource_TransformedStream\nSELECT TO_STRING(data[1]) AS MerchantId,\n TO_DATE(data[4]) AS DateTime,\n TO_DOUBLE(data[7]) AS AuthAmount,\n TO_STRING(data[9]) AS Zip\nFROM PosSource_Stream;\n\nCREATE TARGET GCSOut USING GCSWriter ( \n bucketName: 'mybucket',\n objectName: 'myobjectname',\n serviceAccountKey: 'conf/myproject-ec6f8b0e3afe.json',\n projectId: 'myproject' ) \nFORMAT USING DSVFormatter () \nINPUT FROM PosSource_TransformedStream;\n\nEND APPLICATION testGCS;In this section: GCS WriterGCS Writer propertiesGCS Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/en/gcs-writer.html", "title": "GCS Writer", "language": "en"}} {"page_content": "\n\nGoogle PubSub WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsGoogle PubSub WriterPrevNextGoogle PubSub WriterWrites to an existing topic in Google Cloud Pub/Sub.Only async mode is supported. Consequently, events may be written out of order.Google PubSub Writer propertiespropertytypedefault valuenotesBatch PolicyStringEventCount:1000, Interval:1m, Size:1000000Cached data is written to the target every time one of the specified values is exceeded. 
With the default value, data will be written once a minute or sooner if the buffer contains 1000 events or 1,000,000 bytes of data. When the application is stopped any remaining data in the buffer is discarded.Due to google-cloud-java issue #4757, the maximum supported value for EventCount is 1000.The MaxOutstandingRequestBytes value in PubSub Config must be equal to or higher than the Batch Policy size.;Message AttributesStringThe Message Attributes property allows you to specify one or more key-value pairs sent as part of the PubSub message attributes to filter by subscriber.\u00a0Setting this property produces messages with the Message Attributes for each event.You can specify the value as a static string or a dynamic value from the incoming stream.\u00a0For a WAEvent stream, you can extract the value from metadata or user data, while for a Typed stream, you can use any of the fields of the typed stream.Note: The keys do not support special characters. They can be only alphanumeric.Examples of static and dynamic values:Static value:CName=\"Striim\"Dynamic value:Table=@metadata(TableName)See Message Attributes and Ordering Key sample and client configuration.Ordering KeyStringThe Ordering Key property takes a single-string value known as a key, which is used to deliver messages to subscribers in the order in which the Pub/Sub system receives them. Setting this property produces messages with an OrderingKey for each event.You can specify the value as a static string or a dynamic value from the incoming stream.\u00a0The Ordering Key only supports a dynamic value event lookup for @metadata and @userdata.For a WAEvent stream, you can extract the value from metadata or user data, while for a Typed stream, you can use any of the fields of the typed stream.Note: The keys do not support special characters. They can be only alphanumeric.Examples of static and dynamic values:Static value:OrderingKey: \"Test1\"Dynamic value:OrderingKey : @metadata(TableName)See Message Attributes and Ordering Key sample and client configuration.Project IDStringthe project to which the PubSub instance belongsPubSub ConfigStringRetryDelay: 1, MaxRetryDelay:60, TotalTimeout:600, InitialRpcTimeout:10, MaxRpcTimeout:600, RetryDelayMultiplier:2.0, NumThreads:10, MaxOutstandingElementCount:1000, MaxOutstandingRequestBytes:1000000Do not change these values except as instructed by Striim support.Service Account KeyStringThe path (from root or the Striim program directory) and file name to the .json credentials file downloaded from Google (see Service Accounts).\u00a0This file must be copied to the same location on each Striim server that will run this adapter, or to a network location accessible by all servers.If a value for this property is not specified, Striim will use the $GOOGLE_APPLICATION_CREDENTIALS environment variable.The service account must have the PubSub Publisher or higher\u00a0role for the topic (see Access Control).TopicStringthe topic to publish toThis adapter has a choice of formatters. 
See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsGoogle PubSub Writer sample applicationCREATE APPLICATION GooglePubSubWriterTest;\n\nCREATE SOURCE PosSource USING FileReader (\n wildcard: 'PosDataPreview.csv',\n directory: 'Samples/PosApp/appData',\n positionByEOF:false )\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false )\nOUTPUT TO PosSource_Stream;\n\nCREATE CQ PosSource_Stream_CQ\nINSERT INTO PosSource_TransformedStream\nSELECT TO_STRING(data[1]) AS MerchantId,\n TO_DATE(data[4]) AS DateTime,\n TO_DOUBLE(data[7]) AS AuthAmount,\n TO_STRING(data[9]) AS Zip\nFROM PosSource_Stream;\n\nCREATE TARGET GooglePubSubTarget USING GooglePubSubWriter (\n ServiceAccountKey:'my-pubsub-cb179721c223.json',\n ProjectId:'my-pubsub',\n Topic:'mytopic'\n)\nFORMAT USING JSONFormatter ()\nINPUT FROM PosSource_TransformedStream;\n\nEND APPLICATION GooglePubSubWriterTest;Message Attributes and Ordering Key sample and client configurationThis sample adds the primary key value of the source table in Message Attributes and OrderingKey:CREATE OR REPLACE TARGET pubsubtest USING\u00a0Global.GooglePubSubWriter (\u00a0\n\u00a0 BatchPolicy: 'EventCount:1000,Interval:1m,Size:1000000',\u00a0\u00a0 \n MessageAttributes: 'CompanyName=\\\"Example.com Inc.\\\"',\u00a0\u00a0 \n Topic: 'OrderingKeyTest2',\u00a0\n ServiceAccountKey: '/Users/example/Documents/striimdev-612345678a5b.json',\u00a0\n \u00a0ProjectId: 'striimdev',\u00a0\u00a0 \n adapterName: 'GooglePubSubWriter',\u00a0\u00a0 \n OrderingKey: 'Test1',\u00a0\n\u00a0 PubSubConfig: 'RetryDelay:1,MaxRetryDelay:60,TotalTimeout:600,\n InitialRpcTimeout:10,MaxRpcTimeout:10,RetryDelayMultiplier:2.0,\n RpcTimeoutMultiplier:1.0,NumThreads:10,MaxOutstandingElementCount:1000,\n MaxOutstandingRequestBytes:1000000' )\u00a0\n\nFORMAT USING Global.JSONFormatter\u00a0 (\u00a0\u00a0 \n handler: 'com.webaction.proc.JSONFormatter',\u00a0\u00a0 \n jsonMemberDelimiter: '\\n',\u00a0\u00a0 \n EventsAsArrayOfJsonObjects: 'true',\u00a0\u00a0 \n formatterName: 'JSONFormatter',\u00a0\u00a0 \n jsonobjectdelimiter: '\\n' )\u00a0INPUT FROM DSVOP;\n\nEND APPLICATION FileReadertopubsub;You can set the message ordering property when you create a subscription using the Google Cloud console, the Google Cloud CLI, or the Pub/Sub API (see Ordering messages). For example:Subscription subscription =\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \n subscriptionAdminClient.createSubscription(\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \n Subscription.newBuilder()\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \n .setName(subscriptionName.toString())\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \n .setTopic(topicName.toString())\u00a0 \u00a0\n // Set message ordering to true for ordered messages in the subscription. \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0\n .setEnableMessageOrdering(true)\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \n .build());In this section: Google PubSub WriterGoogle PubSub Writer propertiesGoogle PubSub Writer sample applicationMessage Attributes and Ordering Key sample and client configurationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-15\n", "metadata": {"source": "https://www.striim.com/docs/en/google-pubsub-writer.html", "title": "Google PubSub Writer", "language": "en"}} {"page_content": "\n\nHazelcast WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsHazelcast WriterPrevNextHazelcast WriterWrites to Hazelcast maps.The following describes use of HazelcastWriter with an input stream of a user-defined type. For use with a CDC reader or DatabaseReader input stream of type WAEvent, see\u00a0Replicating Oracle data to a Hazelcast \"hot cache\".Hazelcast Writer propertiespropertytypedefault valuenotesBatchPolicyStringeventCount:10000, Interval:30This property is used only when the Mode is InitialLoad.The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded.\u00a0To disable batching, set to EventCount:1,Interval:0.With the default setting, data will be written every 30 seconds or sooner if the buffer accumulates 10,000 events.ClusterNameStringthe Hazelcast cluster group-name value, if requiredConnectionURLString: of the Hazelcast server (cannot be on the same host as Striim)MapsStringWith an input stream of a user-defined type, the name of the Hazelcast map to write to. See the example below.With an input stream of type WAEvent,\u00a0,;,;...\u00a0. For an example, see\u00a0Replicating Oracle data to a Hazelcast \"hot cache\".ModeStringincrementalWith an input stream of a user-defined type, do not change the default. See\u00a0Replicating Oracle data to a Hazelcast \"hot cache\" for more information.ORMFileStringfully qualified filename of the Hazelcast object-relational mapping (ORM) filePasswordencrypted passwordthe group-password value corresponding to the group-name value specified in ClusterNameUsing Hazelcast WriterTo use Hazelcast Writer, you must:Write a Java class defining the Plain Old Java Objects (POJOs) corresponding to the input stream's type (see\u00a0http://stackoverflow.com/questions/3527264/how-to-create-a-pojo for more information on POJOs), compile the Java class to a .jar file, copy it to the Striim/lib\u00a0directory of each Striim server that will run the HazelcastWriter target, and restart the server. 
If the class is missing, a \"ClassNotFound\" error will be written to the log.Write an XML file defining the object-relational mapping to be used to map stream fields to Hazelcast maps (the \"ORM file\") and save it in a location accessible to the Striim cluster.The following example assumes the following stream definition:CREATE TYPE invType (\n SKU String key,\n STOCK String,\n NAME String,\n LAST_UPDATED DateTime);\nCREATE STREAM invStream OF invType;The following Java class defines a POJO corresponding to the stream:package com.customer.vo;\nimport java.io.Serializable;\nimport java.util.Date;\npublic class ProductInvObject implements Serializable {\n\n public Long sku = 0;\n public double stock = 0;\n public String name = null;\n public Date lastUpdated = null;\n\n public ProductInvObject ( ) { }\n\n @Override\n public String toString() {\n return \"sku : \" + sku + \", STOCK:\" + stock + \", NAME:\" + name + \", LAST_UPDATED:\" + lastUpdated ;\n }\n}\nThe following ORM file maps the input stream fields to Hazelcast maps (see the discussion of data type support below):\n\n\n \n
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\nData types are converted as specified in the OML file. Supported types on the Hazelcast side are:binary (byte[])Character, charDouble, doubleFloat, floatint, Integerjava.util.DateLong, longShort, shortStringAssuming that the ORM file has been saved to Striim/Samples/TypedData2HCast/invObject_orm.xml, the\u00a0following TQL will write the input stream events to Hazelcast:CREATE TARGET HazelOut USING HazelcastWriter (\n ConnectionURL: '203.0.1113.50:5702',\n ormFile:\"Samples/TypedData2HCast/invObject_orm.xml\",\n mode: \"incremental\",\n maps: 'invCache'\n)\nINPUT FROM invStream;If the application terminates, after recovery (see Recovering applications)\u00a0the Hazelcast map may contain some duplicate events. If Hazelcast crashes, stop the application, restart Hazelcast, then restart the application.In this section: Hazelcast WriterHazelcast Writer propertiesUsing Hazelcast WriterSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/hazelcast-writer.html", "title": "Hazelcast Writer", "language": "en"}} {"page_content": "\n\nHBase WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsHBase WriterPrevNextHBase WriterWrites to an HBase database.The following describes use of HBaseWriter with an input stream of a user-defined type. For use with a CDC reader or DatabaseReader input stream of type WAEvent, see Replicating Oracle data to HBase.For information on which firewall ports must be open, see the hbase-site.xml file specified by HBaseConfigurationPath.HBase Writer propertiespropertytypedefault valuenotesAuthentication PolicyStringIf the target HBase instance is unsecured, leave this blank. If it uses Kerberos authentication, provide credentials in the format Kerberos, Principal:, KeytabPath:. For example: authenticationpolicy:'Kerberos, Principal:nn/ironman@EXAMPLE.COM, KeytabPath:/etc/security/keytabs/nn.service.keytab'Batch PolicyStringeventCount:1000, Interval:30The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded.\u00a0To disable batching, set to EventCount:1,Interval:0.With the default setting, data will be written every 30 seconds or sooner if the buffer accumulates 1000 events.HBase Configuration PathStringFully-qualified name of the hbase-site.xml file. 
Contact your HBase administrator to obtain a valid copy of the file if necessary, or to mark the host as a Hadoop client so that the file gets distributed automatically.PK Update Handling ModeStringERRORWith the default value, when the input stream contains an update to a primary key, the application stops and its status is ERROR. With the value IGNORE, primary key update events are ignored and the application continues.To support primary key updates, set to\u00a0DELETEANDINSERT. The Compression property in the CDC source reader must be set to False.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.TablesStringthe input stream type and name of the HBase table to write to, in the format\u00a0
. (case-sensitive; multiple tables are supported only when\u00a0Replicating Oracle data to HBase)The input stream's type must define a key field.HBase Writer sample applicationThe following TQL will write to the HBase table posdata of ColumnFamily striim:CREATE SOURCE PosSource USING FileReader (\n directory:'Samples/PosApp/AppData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:yes\n)\nOUTPUT TO RawStream;\n\nCREATE TYPE PosData(\n merchantId String KEY, \n dateTime DateTime, \n amount Double, \n zip String\n);\nCREATE STREAM PosDataStream OF PosData;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream \nSELECT TO_STRING(data[1]), \n TO_DATEF(data[4],'yyyyMMddHHmmss'),\n TO_DOUBLE(data[7]),\n TO_STRING(data[9])\nFROM RawStream;\n\nCREATE TARGET WriteToHBase USING HBaseWriter(\n HBaseConfigurationPath:\"/usr/local/HBase/conf/hbase-site.xml\",\n Tables: 'posdata.striim'\nINPUT FROM PosDataStream;In HBase, merchantId, dateTime, hourValue, amount, and zip columns will automatically be created under the striim ColumnFamily if they do not already exist.In this section: HBase WriterHBase Writer propertiesHBase Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/hbase-writer.html", "title": "HBase Writer", "language": "en"}} {"page_content": "\n\nHDFS WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsHDFS WriterPrevNextHDFS WriterWrites to files in the Hadoop Distributed File System (HDFS).\u00a0WarningIf your version of Hadoop does not include the fix for\u00a0HADOOP-10786, HDFSWriter may terminate due to Kerberos ticket expiration.To write to MapR-FS, use MapRFSWriter. HDFSWriter and MapRFSWriter use the same properties except for the difference in hadoopurl noted below and the different names for the configuration path property.HDFS Writer propertiespropertytypedefault valuenotesauthentication policyStringIf the HDFS cluster uses Kerberos authentication, provide credentials in the format Kerberos, Principal:, KeytabPath:. Otherwise, leave blank. For example: authenticationpolicy:'Kerberos, Principal:nn/ironman@EXAMPLE.COM, KeytabPath:/etc/security/keytabs/nn.service.keytab'DirectoryStringThe full path to the directory in which to write the files. See Setting output names and rollover / upload policies for advanced options.File NameString\u00a0The base name of the files to be written. See Setting output names and rollover / upload policies.flush policyStringeventcount:10000, interval:30sIf data is not flushed properly with the default setting, you may use this property to specify how many events Striim will accumulate before writing and/or the maximum number of seconds that will elapse between writes. 
For example:flushpolicy:'eventcount:5000'flushpolicy:'interval:10s'flushpolicy:'interval:10s, eventcount:5000'Note that changing this setting may significantly degrade performance.With a setting of 'eventcount:1', each event will be written immediately. This can be useful during development, debugging, testing, and troubleshooting.hadoopConfigurationPathStringIf using Kerberos authentication, specify the path to Hadoop configuration files such as core-site.xml and hdfs-site.xml. If this path is incorrect or the configuration changes, authentication may fail.hadoopurlStringThe URI for the HDFS cluster NameNode. See below for an example. The default HDFS NameNode IPC port is 8020 or 9000 (depending on the distribution). Port 50070 is for the web UI and should not be specified here.For an HDFS cluster with high availability, use the value of the dfs.nameservices property from hdfs-site.xml with the syntax hadoopurl:'hdfs://', for example,\u00a0hdfs://'mycluster'.\u00a0 When the current NameNode fails, Striim will automatically connect to the next one.When using MapRFSWriter, you may start the URL with\u00a0hdfs:// or\u00a0maprfs:/// (there is no functional difference).MapRDBConfigurationPathStringsee notes for hadoopConfigurationPathRollover on DDLBooleanTrueHas effect only when the input stream is the output stream of a CDC reader source. With the default value of True, rolls over to a new file when a DDL event is received. Set to False to keep writing to the same file.Rollover PolicyStringinterval:60sSee Setting output names and rollover / upload policies.This adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsHDFS Writer sample applicationThe following sample writes some of the PosApp sample data to the file /output/hdfstestOut in the specified HDFS instance:CREATE SOURCE CSVSource USING FileReader (\n directory:'Samples/PosApp/AppData',\n WildCard:'posdata.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:'yes'\n)\nOUTPUT TO CsvStream;\n\nCREATE TYPE CSVType (\n merchantId String,\n dateTime DateTime,\n hourValue Integer,\n amount Double,\n zip String\n);\nCREATE STREAM TypedCSVStream OF CSVType;\n\nCREATE CQ CsvToPosData\nINSERT INTO TypedCSVStream\nSELECT data[1],\n TO_DATEF(data[4],'yyyyMMddHHmmss'),\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')),\n TO_DOUBLE(data[7]),\n data[9]\nFROM CsvStream;\n\nCREATE TARGET hdfsOutput USING HDFSWriter(\n filename:'hdfstestOut.txt',\n hadoopurl:'hdfs://node8057.example.com:8020',\n flushpolicy:'interval:10,eventcount:5000',\n authenticationpolicy:'Kerberos,Principal:striim/node8057.example.com@STRIIM.COM,\n KeytabPath:/etc/security/keytabs/striim.service.keytab',\n hadoopconfigurationpath:'/etc/hadoop/conf',\n directory:'/user/striim/PosAppOutput' \n)\nFORMAT USING DSVFormatter (\n)\nINPUT FROM TypedCSVStream;In this section: HDFS WriterHDFS Writer propertiesHDFS Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
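As a supplementary sketch (not part of the original sample): when the HDFS cluster runs high-availability NameNodes, hadoopurl takes the dfs.nameservices name from hdfs-site.xml instead of a host and port. Assuming that name is mycluster, reusing the TypedCSVStream from the sample above, and omitting the Kerberos properties for brevity, the target might be defined as follows; the file name, directory, and configuration path are placeholders.

CREATE TARGET hdfsHAOutput USING HDFSWriter (
  filename:'hdfstestOut.txt',
  hadoopurl:'hdfs://mycluster',
  hadoopconfigurationpath:'/etc/hadoop/conf',
  directory:'/user/striim/PosAppOutput'
)
FORMAT USING DSVFormatter ()
INPUT FROM TypedCSVStream;

With this form, when the active NameNode fails, Striim connects to the next NameNode in the nameservice automatically, as described in the hadoopurl notes above.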
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/hdfs-writer.html", "title": "HDFS Writer", "language": "en"}} {"page_content": "\n\nHive WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsHive WriterPrevNextHive WriterWrites to one or more tables in Apache Hive.When the input stream of a HiveWriter target is of a user-defined type, it can write to Hive tables that use Avro, ORC, Parquet, or text file storage formats, and writes use SQL APPEND or INSERT INTO.When the input stream is the output stream of a DatabaseReader or CDC source:and the Mode is initialload, the storage format may be Avro, ORC, Parquet, or text file, and writes use SQL APPEND or INSERT INTO.the Mode is incremental, and the storage format is Avro or Parquet, writes use SQL APPEND or INSERT INTO.the Mode is incremental, and the storage format is ORC, Hive ACID transactions must be enabled and writes use SQL MERGE (which your version of Hive must support). In this case, there will be no duplicate events written to Hive (\"exactly-once processing\") after recovery (Recovering applications), as may happen when using SQL APPEND or INSERT INTO.Limitations:When the input stream is the output steam of a CDC reader, the reader's Compression property must be False.DDL is not supported. If you need to alter the source tables, quiesce the application, change the source and target tables, and restart.Columns specified in the Tables property's keycolumns option may not be updated. Any attempted update will be silently discarded.Bucketed or partitioned columns may not be updated. This is a limitation of Hive, not HiveWriter.Multiple instances of HiveWriter cannot write to the same table. When a HiveWriter target is deployed on multiple Striim servers, partition the input stream or use an environment variable in table mappings to ensure that they do not write to the same tables.Hive Writer propertiespropertytypedefault valuenotesAuthentication PolicyStringIf the HDFS cluster uses Kerberos authentication, provide credentials in the format Kerberos, Principal:, KeytabPath:. Otherwise, leave blank. For example: authenticationpolicy:'Kerberos, Principal:nn/ironman@EXAMPLE.COM, KeytabPath:/etc/security/keytabs/nn.service.keytab'Connection URLStringthe JDBC connection URL, for example,\u00a0ConnectionURL= 'jdbc:hive2:@192.0.2.5:10000'DirectoryStringBy default, Striim will create an HDFS directory on the Hive server to use as a staging area. If Striim does not have permission to create the necessary directory, HiveWriter will terminate with a \"File Not Found\" exception. To resolve that issue, create a staging directory manually and specify it here..Hadoop Configuration PathStringIf using Kerberos authentication, specify the path to Hadoop configuration files such as core-site.xml and hdfs-site.xml. If this path is incorrect or the configuration changes, authentication may fail.Hadoop URLStringThe URI for the HDFS cluster NameNode. See below for an example. The default HDFS NameNode IPC port is 8020 or 9000 (depending on the distribution). 
Port 50070 is for the web UI and should not be specified here.For an HDFS cluster with high availability, use the value of the dfs.nameservices property from hdfs-site.xml with the syntax hadoopurl:'hdfs://', for example,\u00a0hdfs://'mycluster'.\u00a0 When the current NameNode fails, Striim will automatically connect to the next one.Ignorable Exception CodeStriimSet to TABLE_NOT_FOUND to prevent the application from terminating when Striim tries to write to a table that does not exist in the target. See Handling \"table not found\" errors for more information.Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Merge PolicyStringeventcount:10000, interval:5mWith the default setting, events are written every five minutes or sooner if there are 10,000 events.ModeStringincrementalWith an input stream of a user-defined type, do not change the default. See\u00a0Replicating Oracle data to Hive.Passwordencrypted passwordThe password for the specified user. See Encrypted passwords.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.TablesStringThe name(s) of the table(s) to write to. The table(s) must exist in Hive.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet ((database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:source.emp,target.emp\nsource.db1,target.db1;source.db2,target.db2\nsource.%,target.%\nsource.mydatabase.emp%,target.mydb.%\nsource1.%,target1.%;source2.%,target2.%\nSince HIve does not have primary keys, you must use the\u00a0keycolumns option to define a unique identifier for each row in the target table: for example,\u00a0Tables:'DEMO.EMPLOYEE,employee keycolumns(emp_id)'. If necessary to ensure uniqueness, specify multiple columns with the syntax\u00a0keycolumns(,,...). You may use wildcards for the source table provided all the tables have the key columns: for example,\u00a0ables:'DEMO.Ora%,HIVE.Hiv% KeyColumns(...)'.See\u00a0Mapping columns for additional options.UsernameStringA Hive user for the server specified in ConnectionURL. 
The user must have\u00a0INSERT, UPDATE, DELETE, TRUNCATE, CREATE, DROP, and ALTER privileges on the specified tables.Hive Writer sample applicationThe following sample code writes data from\u00a0PosDataPreview.csv \u00a0to Hive\u00a0 (to run this code, you must first create the target table in Hive):CREATE SOURCE PosSource USING FileReader (\n directory:'Samples/PosApp/AppData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:yes\n)\nOUTPUT TO RawStream;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM RawStream;\n\nCREATE TARGET HiveSample USING HiveWriter (\n ConnectionURL:'jdbc:hive2://192.0.2.76:10000',\n Username:'hiveuser', \n Password:'********',\n hadoopurl:'hdfs://192.0.2.76:9000/',\n Tables:'posdata'\n)\nFORMAT USING DSVFormatter ()\nINPUT FROM PosDataStream;HiveWriter data type support and correspondenceWhen the input stream is of a user-defined type:TQL typeHive typejava.lang.ByteBINARYjava.lang.DoubleDOUBLEjava.lang.FloatFLOATjava.lang.IntegerINTEGERjava.lang. LongBIGINTjava.lang.ShortSMALLINT, TINYINTjava.lang.StringCHAR, DECIMAL, INTERVAL, NUMERIC, STRING, VARCHARorg.joda.time.DateTimeTIMESTAMPWhen the input stream of a HiveWriter target is the output of an Oracle source (DatabaseReader or OracleReader):Oracle typeHive typeBINARY_DOUBLEDOUBLEBINARY_FLOATFLOATBLOBBINARYCHARCHAR, STRING, VARCHARCLOBSTRINGDATETIMESTAMPDECIMALDECIMAL, DOUBLE, FLOATFLOATFLOATINTEGERBIGINT, INT, SMALLINT TINYINTLONGBIGINTNCHARSTRING, VARCHARNUMBERINT when the scale is 0 and the precision is less than 10BIGINT when the scale is 0 and the precision is less than 19DECIMAL when the scale is greater than 0 or the precision is greater than 19NVARCHAR2STRING, VARCHARSMALLINTSMALLINTTIMESTAMPTIMESTAMPTIMESTAMP WITH LOCAL TIME ZONETIMESTAMPTIMESTAMP WITH TIME ZONETIMESTAMPVARCHAR2STRING, VARCHARCloudera Hive WriterClouderaHiveWriter is identical to HiveWriter except that, since\u00a0Cloudera's Hive distribution does not support SQL MERGE, there is no\u00a0Mode property.\u00a0ClouderaHiveWriter is always in InitialLoad mode, and\u00a0writes always use SQL APPEND or INSERT INTO.See\u00a0Hive Writer for further discussion and documentation of the properties.Hortonworks Hive WriterExcept for the name of the adapter, HortonworksHiveWriter is identical to HiveWriter. See\u00a0Hive Writer for documentation of the properties.In this section: Hive WriterHive Writer propertiesHive Writer sample applicationHiveWriter data type support and correspondenceCloudera Hive WriterHortonworks Hive WriterSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
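As a supplementary illustration of the Tables syntax described above (the schema, table, and column names here are hypothetical): because Hive has no primary keys, every mapping needs a keycolumns clause that gives Striim a unique row identifier, and a wildcard mapping is valid only when all of the matched tables contain those key columns. For example:

Tables:'SCOTT.EMP,default.emp keycolumns(empno);SCOTT.DEPT,default.dept keycolumns(deptno)'

Tables:'SCOTT.%,default.% keycolumns(id)'

The first form maps two source tables to two Hive tables, each with its own key column; the second maps every table in the SCOTT schema to a same-named Hive table, which works only if each of those tables has an id column.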
Last modified: 2023-06-05\n", "metadata": {"source": "https://www.striim.com/docs/en/hive-writer.html", "title": "Hive Writer", "language": "en"}} {"page_content": "\n\nHTTP WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsHTTP WriterPrevNextHTTP WriterSends a custom response to an HTTP Reader source when the source's Defer Response property is set to True. If HTTPWriter does not return a response before the Defer Response Timeout specified in HTTPReader, HTTPReader will respond with error 408 with the body, \"Request timed out. Make sure that there is an HTTPWriter in the current application with property Mode set to RESPOND and HTTPWriter property RequestContextKey is mapped correctly, or set HTTPReader property DeferResponse to FALSE, or check Striim server log for details.\"HTTP Writer propertiespropertytypedefault valuenotesModeStringRESPONDDo not change default value.Request Context FieldString@metadata(RequestContext)The name of the input stream field that contains the UUID of the HTTP Reader source that will send the response. This UUID is the value of the RequestContextField metadata field of the HTTPReader's output stream (see HttpCacheResponseApp.tql for an example).Response Code FieldString\"200\"Status code for the custom response. Typically the default value of \"200\" will be appropriate unless your application has multiple instances of HTTP Writer that will handle events with various characteristics (see HttpCacheResponseApp.tql for an example).Response HeadersStringOptionally, specify one or more header fields to be added to the custom response using the format
=. Separate multiple headers with semicolons. The value can be a static string (for example, Server=\"Striim\") or the value of a specified input stream field (for example, Table=@metadata(TableName); Operation=@metadata{OperationName)).HTTP Writer sample applicationYou can download the following example TQL files as HTTPWriter.zip from https://github.com/striim/doc-downloads.To see how this works, run striim/docs/HTTPWriter/HttpCacheResponseApp.tql, open a terminal or command prompt, and enter the following (if Striim is not running on your local system, change 127.0.0.1 to the IP address of the Striim server):curl --location --request POST '127.0.0.1:8765' \\\n--header 'Content-Type: text/csv' \\\n--data-raw 'COMPANY 1'That will return the cache entry for Company 1:{\n \"BUSINESS_NAME\":\"COMPANY 1\",\n \"MERCHANT_ID\":\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",\n \"PRIMARY_ACCOUNT_NUMBER\":\"6705362103919221351\",\n \"POS_DATA_CODE\":0,\n \"DATETIME\":\"2607-11-27T09:22:53.210-08:00\",\n \"EXP_DATE\":\"0916\",\n \"CURRENCY_CODE\":\"USD\",\n \"AUTH_AMOUNT\":2.2,\n \"TERMINAL_ID\":\"5150279519809946\",\n \"ZIP\":\"41363\",\n \"CITY\":\"Quicksand\"\n }\nThen enter the following:curl --location --request POST '127.0.0.1:8765' \\\n--header 'Content-Type: text/csv' \\\n--data-raw 'COMPANY 99'Since there is no entry for Company 99 in the cache, that will return:{\n\u00a0 \"MESSAGE\":\"Requested entry was not found in cache.\"\n}See the comments in HttpCacheResponseApp.tql for more information.In this section: HTTP WriterHTTP Writer propertiesHTTP Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/http-writer.html", "title": "HTTP Writer", "language": "en"}} {"page_content": "\n\nHP NonStopSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsHP NonStopPrevNextHP NonStopSee Database Writer.Database WriterIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-02-01\n", "metadata": {"source": "https://www.striim.com/docs/en/hp-nonstop-readers-old.html", "title": "HP NonStop", "language": "en"}} {"page_content": "\n\nJMS WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsJMS WriterPrevNextJMS WriterWrites data using the JMS API.JMS Writer propertiespropertytypedefault valuenotesConnection Factory NameStringthe name of the ConnectionFactory containing the queue or topicCtxStringthe JNDI initial context factory nameMessage TypeStringTextMessagethe other supported value is BytesMessagePasswordencrypted passwordsee Encrypted passwordsProviderStringQueue NameStringleave blank if Topic is specifiedTopicStringleave blank if QueueName is specifiedUsernameStringa messaging system user with the necessary permissionsThis adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsJMS Writer sample applicationSample code using DSVFormatter:CREATE SOURCE JMSCSVSource USING FileReader (\n directory:'/opt/Striim/Samples/PosApp/appData',\n WildCard:'posdata.csv',\n positionByEOF:false,\n charset:'UTF-8'\n)\nPARSE USING DSVParser (\n header:'yes'\n)\nOUTPUT TO CsvStream;\n\nCREATE TYPE CSVType (\n merchantName String,\n merchantId String,\n dateTime DateTime,\n hourValue Integer,\n amount Double,\n zip String\n);\n\nCREATE STREAM TypedCSVStream of CSVType;\n\nCREATE CQ CsvToPosData\nINSERT INTO TypedCSVStream\nSELECT data[0],data[1],\n TO_DATEF(data[4],'yyyyMMddHHmmss'),\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')),\n TO_DOUBLE(data[7]),\n data[9]\nFROM CsvStream;\n\nCREATE TARGET JmsTarget USING JMSWriter (\n Provider:'tcp://192.168.123.101:61616',\n Ctx:'org.apache.activemq.jndi.ActiveMQInitialContextFactory',\n UserName:'striim',\n Password:'******',\n Topic:'dynamicTopics/Test'\n)\nFORMAT USING DSVFormatter (\n)\nINPUT FROM TypedCSVStream;In this section: JMS WriterJMS Writer propertiesJMS Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/jms-writer.html", "title": "JMS Writer", "language": "en"}} {"page_content": "\n\nJPA WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsJPA WriterPrevNextJPA WriterUse only as directed by Striim support.In this section: Search resultsNo results foundWould you like to provide feedback? 
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2019-07-16\n", "metadata": {"source": "https://www.striim.com/docs/en/jpa-writer.html", "title": "JPA Writer", "language": "en"}} {"page_content": "\n\nKafka WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsKafka WriterPrevNextKafka WriterWrites to a topic in\u00a0Apache Kafka 0.11, 2.1, or 3.3.2. Support for Kafka 0.8, 0.9, and 0.10 is deprecated.Use the version of KafkaWriter that corresponds to the target Kafka broker. For example, to write against a Kafka 2.1 or 3.3 cluster, the syntax is\u00a0CREATE TARGET USING KafkaWriter VERSION '2.1.0'.\u00a0If writing to the internal Kafka instance, use 0.11.0.Kafka Writer propertiespropertytypedefault valuenotesBroker AddressStringKafka ConfigStringSpecify any properties required by the authentication method used by the specified Kafka broker (see Configuring authentication in Kafka Config.Optionally, specify Kafka producer properties, separated by semicolons. See the table below for details.When writing to a topic in Confluent Cloud, specify the appropriate SASL properties.To send messages in Confluent wire format, specify value.deserializer=io.confluent.kafka.serializers.KafkaAvroSrializer. When Mode is Sync, also specify batch.size=-1 (not necessary when Mode is Async).Kafka Config Property SeparatorString;Available only in Kafka 0.11 and later versions. Specify a different separator if one of the producer property values specified in KafkaConfig contains a semicolon.Kafka Config Value SeparatorString=Available only in Kafka 0.11 and later versions. Specify a different separator if one of the producer property values specified in KafkaConfig contains an equal symbol.Message HeaderStringOptionally, if using Kafka 0.11 or later in async mode, or in sync mode with KafkaConfig batch.size=-1, specify one or more custom headers to be added to messages as key-value pairs. Values may be:a field name from an in put stream of a user-defined type: for example, MerchantID=merchantIDa static string: for example, Company=\"My Company\"a function: for example, to get the source table name from a WAEvent input stream that is the output of a CDC reader, Table Name=@metadata(TableName) (for more information, see WAEvent functions)To specify multiple custom headers, separate them with semicolons.Message KeyStringOptionally, if using Kafka 0.11 or later in async mode, or in sync mode with KafkaConfig batch.size=-1, specify one or more keys to be added to messages as key-value pairs. The property value may be a static string, one or more fields from the input stream, or a combination of both. 
Examples:MessageKey : CName=\u201dStriim\u201d\n\nMessageKey : Table=@metadata(TableName);\n Operation=@metadata(OperationName);key1=@userdata(key1)\n\nMessageKey : CityName=City; Zipcode=zip\n\nMessageKey : CName=\u201dStriim\u201d;Table=@metadata(TableName);\n Operation=@metadata(OperationName)Among other possibilities, you may use this property to support log compaction or to allow downstream applications to use queries based on the message payload..ModeStringSyncsee\u00a0Setting KafkaWriter's mode property: sync versus asyncParallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Partition KeyStringThe name of a field in the input stream whose values determine how events are distributed among multiple partitions. Events with the same partition key field value will be written to the same partition.If the input stream is of any type except WAEvent, specify the name of one of its fields.If the input stream is of the WAEvent type, specify a field in the METADATA map (see WAEvent contents for change data) using the syntax\u00a0@METADATA(), or a field in the USERDATA map (see\u00a0Adding user-defined data to WAEvent streams), using the syntax\u00a0@USERDATA(). If appropriate, you may concatenate multiple METADATA and/or USERDATA fields.WAEvent contents for change dataTopicStringThe existing Kafka topic to write to (will not be created if it does not exist). If more than one Kafka Writer writes to the same topic, recovery is not supported (see Recovering applications. (Recovery is supported when using Parallel Threads.)Recovering applicationsThis adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsNotes on the KafkaConfig propertyWith the exceptions noted in the following table, you may specify any Kafka producer property in KafkaConfig.\u00a0Kafka producer propertynotesacksin sync mode, may be set to 1 or allin async mode, may be set to 0, 1, or allbatch.sizelinger.msretriesIn sync mode, to prevent out-of-order events, the producer properties set in Kafka with will be unchanged and ignored, and Striim will handle these internally.In async mode, Striim will update the Kafka producer properties and these will be handled by Kafka.In sync mode, you may set batch.size=-1 to write one event per Kafka message. This will seriously degrade performance so is not recommended in a production environment. With this setting, messages will be similar to those in async mode.enable.idempotenceWhen using version 2.1.0 and async mode, set to true to write events in order.value.serializerdefault value is org.apache.kafka.common.serialization.ByteArrayDeserializer; to write messages in Confluent wire format, set to io.confluent.kafka.serializers.KafkaAvroSerializerInternally,\u00a0KafkaWriter invokes KafkaConsumer for various purposes, and the WARNING from the consumer API due to passing KafkaConfig\u00a0 properties can be safely ignored.KafkaWriter sample applicationThe following sample code writes data from PosDataPreview.csv to the Kafka topic KafkaWriterSample. This topic already exists in Striim's internal Kafka instance. 
If you are using an external Kafka instance, you must create the topic before running the application.CREATE SOURCE PosSource USING FileReader (\n directory:'Samples/PosApp/AppData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:yes\n)\nOUTPUT TO RawStream;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM RawStream;\n\nCREATE TARGET KW11Sample USING KafkaWriter VERSION '0.11.0'(\n brokeraddress:'localhost:9092',\n topic:'KafkaWriterSample'\n)\nFORMAT USING DSVFormatter ()\nINPUT FROM PosDataStream;\nYou can verify that data was written to Kafka by running the\u00a0Kafka Reader sample application.The first field in the output (position) stores information required to avoid lost or duplicate events after recovery (see\u00a0Recovering applications). If recovery is not enabled, its value is NULL.mon output (see Using the MON command)\u00a0for targets using KafkaWriter includes:in async mode only, Sent Bytes Rate: how many megabytes per second were sent to the brokersin both sync and async mode, Write Bytes Rate: how many megabytes per second were written by the brokers and acknowledgement received by StriimEnabling compressionWhen you enable compression in KafkaWriter, the broker and consumer should handle the compressed batches automatically. No additional configuration should be required in Kafka.To enable compression for version 0.11, include the\u00a0compression.type property in KafkaConfig. Supported values are\u00a0gzip,\u00a0lz4,\u00a0snappy. For example:KafkaConfig:'compression.type=snappy'Writing to multiple Kafka partitionsIf the INPUT FROM stream is partitioned, events will be distributed among Kafka partitions based on the values in the input stream's PARTITION BY property. All events with the same value in the PARTITION BY field will be written to the same randomly selected partition. Striim will distribute the data evenly among the partitions to the extent allowed by the frequency of the various PARTITION BY field values (for example, if 80% of the events have the same value, then one of the Kafka partitions will contain at least 80% of the events). In the example above, to enable partitioning by city, you would revise the definition of TransformedDataStream as follows:CREATE STREAM TransformedDataStream OF TransformedDataType PARTITION BY City;To override this default behavior and send events to specific partitions based on the PARTITION BY field values, see\u00a0Creating a custom Kafka partitioner.Setting KafkaWriter's mode property: sync versus asyncKafkaWriter performs differently depending on whether the mode property value is sync or async and whether recovery (see Recovering applications)\u00a0is enabled for the application. The four possibilities are:notessync with recovery\u00a0Provides the most accurate output. 
Events are written in order with no duplicates (\"exactly-once processing,\" also known as E1P), provided that you do not change the partitioner logic, number of partitions, or IP address used by Striim while the application is stopped.To avoid duplicate events after recovery, the Kafka topic's retention period must be longer than the amount of time that elapses before recovery is initiated (see Recovering applications) and, if writing to multiple partitions, the brokers must be brought up in reverse of the order in which they went down.With this configuration, two KafkaWriter targets (even if in the same application) cannot write to the same topic. Instead, use a single KafkaWriter, a topic with multiple partitions (see\u00a0Writing to multiple Kafka partitions), and, if necessary, parallel threads (see\u00a0Creating multiple writer instances).async with recoveryProvides higher throughput with the tradeoff that events are written out of order (unless using KafkaWriter version 2.1 with enable.idempotence set to true in KafkaConfig) and recovery may result in some duplicates (\"at-least-once processing,\" also known as A1P).sync without recoveryAppropriate with non-recoverable sources (see Recovering applications) when you need events to be written in order. Otherwise, async will give better performance.async without recoveryAppropriate with non-recoverable sources when you don't care whether events are written in order. Throughput will be slightly faster than async with recovery.When using sync, multiple events are batched in a single Kafka message. The number of messages in a batch is controlled by the\u00a0batch.size parameter, which by default is 1 million bytes. The maximum amount of time KafkaWriter will wait between messages is set by the linger.ms\u00a0parameter, which by default is 1000 milliseconds. 
Thus, by default, KafkaWriter will write a message after it has received a million bytes or one second has elapsed since the last message was written, whichever occurs first.\u00a0Batch.size must be larger than the largest event KafkaWriter will receive, but must not exceed the\u00a0max.message.bytes\u00a0size in the Kafka topic configuration.The following setting would write a message every time KafkaWriter has received 500,000 bytes or two seconds has elapsed since the last message was written:KafkaConfig:'batch.size=500000,linger.ms=2000'\nKafkaWriter output with DSVFormatterEach output record will begin with a field containing information Striim can use to ensure that no duplicate records are written during recovery (see Recovering applications).For\u00a0input events of this user-defined Striim type:CREATE TYPE emptype ( id int, name string);output would be similar to:1234002350,1,User OneFor input events of type WAEvent, output (Position, TableName, CommitTimestamp, Operation, data) would be similar to:SCN:1234002344,Employee,12-Dec-2016 19:13:00,INSERT,1,User OneKafkaWriter output with JSONFormatterEach JSON output node will contain a field named __striimmetadata with a nested field position\u00a0containing information Striim can use to ensure that no duplicate records are written during recovery (see Recovering applications).For\u00a0input events of this user-defined Striim type:CREATE TYPE emptype ( id int, name string);output would be similar to:{\n \"ID\": 1,\n \"Name\": \"User One\", \n \"__striimmetadata\" : {\"position\" : \"SCN:1234002344\" } \n}\nFor input events of type WAEvent, output would be similar to:{\n\"metadata\" : { \"TABLENAME\" : \"Employee\",\"CommitTimestamp\" : \"12-Dec-2016 19:13:00\", \n \"OperationName\" : \"INSERT\" }\n\"data\" : { \n \"ID\" : \"1\",\n \"NAME\" : \"User One\"},\n\"__striimmetadata\" : { \"position\" : \"SCN:1234002350\" } // but in binary format\n}KafkaWriter output with XMLFormatterEach output record will include an extra element __striimmetadata with a nested element position\u00a0containing information Striim can use to ensure that no duplicate records are written during recovery (see Recovering applications).For\u00a0input events of this user-defined Striim type:CREATE TYPE emptype ( id int, name string);output would be similar to:...\n 1 \n User One \n<__striimmetadata>\n\u00a0\u00a01234002344\n ...In this section: Kafka WriterKafka Writer propertiesNotes on the KafkaConfig propertyKafkaWriter sample applicationEnabling compressionWriting to multiple Kafka partitionsSetting KafkaWriter's mode property: sync versus asyncKafkaWriter output with DSVFormatterKafkaWriter output with JSONFormatterKafkaWriter output with XMLFormatterSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
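As a supplementary sketch combining several of the properties described above (the broker address, topic, and input stream names are placeholders, and property-value casing follows the samples in this section): the following target writes the output of a CDC source to a Kafka 2.1 broker in async mode, keys each message by source table and operation, sets enable.idempotence so events are written in order, and compresses batches with snappy.

CREATE TARGET KafkaCDCSample USING KafkaWriter VERSION '2.1.0' (
  brokeraddress:'localhost:9092',
  topic:'OrdersCDC',
  mode:'ASync',
  MessageKey:'Table=@metadata(TableName);Operation=@metadata(OperationName)',
  KafkaConfig:'compression.type=snappy;enable.idempotence=true'
)
FORMAT USING JSONFormatter ()
INPUT FROM OracleCDCStream;

As noted in the Topic property description, the OrdersCDC topic must already exist; KafkaWriter will not create it.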
Last modified: 2023-06-05\n", "metadata": {"source": "https://www.striim.com/docs/en/kafka-writer.html", "title": "Kafka Writer", "language": "en"}} {"page_content": "\n\nTesting KafkaWriter performanceSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsKafka WriterTesting KafkaWriter performancePrevNextTesting KafkaWriter performanceStriim includes two utility scripts that may be useful in tuning your Kafka configuration.\u00a0Striim/tools/enhanced-producer-perf.sh writes to Kafka directly from the Striim host.\u00a0Striim/tools/standalone-kafkawriter-perf.sh writes with KafkaWriter.Script argumentsThe arguments are the same for both scripts:argumentnotesfileThe file to be used for testing.Specify\u00a0none to use an in-memory data generator, in which case you must also specify record-size and num-records.\u00a0input-formatIf using a file, set to\u00a0avro or\u00a0dsv, or omit or set to none\u00a0to read the file as is.topicThe Kafka topic to write to.producer-propsA file containing the Kafka server properties. See\u00a0striim/tools/props for an example.output-formatSet to\u00a0json to format the data or omit or set to\u00a0none to write as is.record-sizeIf using\u00a0--file none, the size of each generated record in bytes.num-recordsIf using\u00a0--file none, the number of records to write.modeSet to\u00a0sync\u00a0for synchronous writes, or omit or set to\u00a0async\u00a0for asynchronous writes (see\u00a0Setting KafkaWriter's mode property: sync versus async).single-partitionSet to true\u00a0to write to all partitions of the topic. 
Omit or set to\u00a0false to write only to partition 0.Script usage examplesFor example, the following will generate one million 120-byte records and write them directly to Kafka../enhanced-producer-perf.sh --file none --topic test01\u00a0 --producer-props props\n --record-size 120 --num-records 1000000\u00a0Output from that command would be similar to:Configuration used for testing Apache Kafka Producer:\nFile : none\nTopic : test01\nPartitions : 1\nSingle Partition : false\nMode : async\nOutput format : none\nNum Records : 1000000\nRecord Size : 120\n---------------------------------\nFinal Stats\nRecords sent : 1000000\nRecords/sec : 521920.6680584551\nMB/sec : 62.63048016701461\nAvg Latency(ms) : 559.206501\nMax Latency(ms) : 1054\nAvg message size (bytes): 120.0\nNumber of times Producer send called : 1000000\nAvg time between 2 Producer send calls(ms) : 0.003199727455364971\nMax time between 2 Producer send calls(ms) : 210\n---------------------------------The following will generate the same number of records of the same size but write them using KafkaWriter:./standalone-kafkawriter-perf.sh --file none --topic test02\u00a0 --producer-props props\n --record-size 120 --num-records 1000000Output would be similar to:Configuration used for testing Striim Kafka Writer:\nFile : none\nTopic : test02\nPartitions : 1\nSingle Partition : false\nMode : async\nOutput format : none\nNum Records : 1000000\nRecord Size : 120\n---------------------------------\nFinal Stats\nRecords sent : 1000000\nRecords/sec : 84918.47826086957\nMB/sec : 3.9817331861413043\nAvg Latency(ms) : 0.912115\nMax Latency(ms) : 10018\nAvg message size (bytes): 46.0\nNumber of times Producer send called : 1000000\nAvg time between 2 Producer send calls(ms) : 0.011775\nMax time between 2 Producer send calls(ms) : 10011\nIn this section: Testing KafkaWriter performanceScript argumentsScript usage examplesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/testing-kafkawriter-performance.html", "title": "Testing KafkaWriter performance", "language": "en"}} {"page_content": "\n\nKafkaWriter output with AvroFormatterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsKafka WriterKafkaWriter output with AvroFormatterPrevNextKafkaWriter output with AvroFormatterIn async mode, each Kafka message will contain a single event. The event will start with four bytes containing its length.In sync mode, one message can contain multiple events. How many depends on the\u00a0batch.size setting (see\u00a0Setting KafkaWriter's mode property: sync versus async. Each event in the message will start with four bytes containing its length.In sync mode, output will contain a nested record named __striimmetadata with a field position. 
With recovery on, this field will contain\u00a0information Striim can use to ensure that no duplicate records are written during recovery (see Recovering applications). With recovery off, the value of this field will be null.Schema example for input events of the user-defined Striim typeFor\u00a0input events of this user-defined Striim type:Create Type PERSON (\n \u00a0ID int,\n \u00a0City String,\n \u00a0Code String,\n \u00a0Name String);the schema would be:{\n \"type\": \"record\",\n \"name\": \"PERSON\",\n \"namespace\": \"AVRODEMO\",\n \"fields\": [{\n \"name\": \"ID\",\n \"type\": [\"null\", \"int\"]\n }, {\n \"name\": \"CITY\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"CODE\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"NAME\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [{\n \"name\": \"position\",\n \"type\": [\"null\", \"string\"]\n }]\n }\n }]\n}\nSchema example for input events of type WAEventFor input events of type WAEvent, the schema would be similar to:{\n \"namespace\": \"WAEvent.avro\",\n \"type\": \"record\",\n \"name\": \"WAEvent\",\n \"fields\": [{\n \"name\": \"metadata\",\n \"type\": [\"null\",\n {\"type\": \"map\", \"values\": [\"null\", \"string\"] }\n ]\n },\n {\n \"name\": \"data\",\n \"type\": [\"null\",\n {\"type\": \"map\",\"values\": [\"null\", \"string\"] }\n ]\n },\n {\n \"name\": \"before\",\n \"type\": [\"null\",\n {\"type\": \"map\", \"values\": [\"null\", \"string\"] }\n ]\n },\n {\n \"name\": \"userdata\",\n \"type\": [\"null\",\n {\"type\": \"map\", \"values\": [\"null\", \"string\"] }\n ]\n },\n {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [\n {\"name\": \"position\", \"type\": [\"null\", \"string\"] }\n ]\n }\n }\n ]\n}\nand output would be similar to:{\n\"data\" : { \"ID\" : \"1\" , \"NAME\" : \"User One\" },\n\"before\":{ \"null\" },\n\"metadata\" : {\n \"TABLENAME\" : \"Employee\",\n \"CommitTimestamp\" : \"12-Dec-2016 19:13:01\",\n \"OperationName\" : \"UPDATE\" },\n\"userdata\":{ \"null\" },\n\"__striimmetadata\" : { \"position\" : SCN:1234002356\" }\n}\nIn this section: KafkaWriter output with AvroFormatterSchema example for input events of the user-defined Striim typeSchema example for input events of type WAEventSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/kafkawriter-output-with-avroformatter.html", "title": "KafkaWriter output with AvroFormatter", "language": "en"}} {"page_content": "\n\nUsing the Confluent or Hortonworks schema registrySkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsKafka WriterUsing the Confluent or Hortonworks schema registryPrevNextUsing the Confluent or Hortonworks schema registryNoteThis feature requires Kafka 0.10 or later (0.11 or later recommended), except when the schema registry is in Confluent Cloud it requires 0.11 or later.You may use the Confluent or Hortonworks schema registry by selecting AvroFormatter and specifying its schemaRegistryURL. property, for example,\u00a0schemaRegistryURL:'http://198.51.100.55:8081.Tracking schema evolution of database tablesWhen a KafkaWriter target's input stream is\u00a0the output stream of a DatabaseReader or CDC reader source,\u00a0this allows you to track the evolution of database tables over time. The first time the Striim application is run, KafkaWriter creates a record in the schema registry for each table being read. Except when using Confluent's wire format each schema record has a unique ID which which is stored in the next four bytes (bytes 4-7) after the record length (bytes 0-3) of the data records for the associated table.Each time KafkaWriter receives a DDL event, it writes a new schema record for the referenced table. From then until the next schema change, that schema record's unique ID is stored in data records for the associated table.In the source, if it has the CDDL Capture property, it must be set to True and the CDDL Acton must be Process.In this case, in addition to specifying AvroFormatter's\u00a0schemaRegistryURL\u00a0property, you must set its\u00a0formatAs property to\u00a0table\u00a0or\u00a0native.When the \u00a0formatAs property is\u00a0table\u00a0or\u00a0native and any of the special characters listed in Using non-default case and special characters in table identifiers are used in source column names, they will be included as aliases in the Avro schema fields.Using non-default case and special characters in table identifiersTracking evolution of Striim stream typesWhen a\u00a0KafkaWriter target's input stream is of a user-defined type, the schema registry allows you to track the evolution of that type over time. The first time\u00a0the Striim application is run, KafkaWriter creates a record in the schema registry for the input stream's type. Each time\u00a0the application the KafkaWriter's input stream's type is changed using ALTER and RECOMPILE, KafkaWriter will create a new schema record. 
As with database tables, the schema records' unique IDs associate them with their corresponding data records.In this case, specify AvroFormatter's\u00a0schemaRegistryURL\u00a0property and leave\u00a0formatAs unspecified or set to default.Reading schema and data records togetherConsuming applications can use the following code to combine schema and data records into Avro recordspackage test.kafka.avro;\n \nimport org.apache.avro.generic.GenericRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecords;\nimport org.apache.kafka.clients.consumer.KafkaConsumer;\n \nimport org.apache.kafka.common.TopicPartition;\n \nimport java.io.FileInputStream;\nimport java.io.InputStream;\nimport java.util.ArrayList;\nimport java.util.Properties;\nimport java.util.List;\n \npublic class KafkaAvroConsumerUtilWithDeserializer {\n \n private KafkaConsumer consumer;\n \n public KafkaAvroConsumerUtilWithDeserializer(String configFileName) throws Exception {\n \n Properties props = new Properties();\n InputStream in = new FileInputStream(configFileName);\n props.load(in);\n \n this.consumer = new KafkaConsumer(props);\n \n TopicPartition tp = new TopicPartition(props.getProperty(\"topic.name\"), 0);\n List tpList = new ArrayList();\n tpList.add(tp);\n this.consumer.assign(tpList);\n this.consumer.seekToBeginning(tpList);\n }\n \n public void consume() throws Exception {\n while(true) {\n ConsumerRecords records = consumer.poll(1000);\n for(ConsumerRecord record : records) {\n System.out.println(\"Topic \" + record.topic() + \" partition \" + record.partition()\n + \" offset \" + record.offset() + \" timestamp \" + record.timestamp());\n List avroRecordList = (List) record.value();\n for(GenericRecord avroRecord : avroRecordList) {\n System.out.println(avroRecord);\n }\n }\n }\n } \n \n public void close() throws Exception {\n if(this.consumer != null) {\n this.consumer.close();\n this.consumer = null;\n }\n }\n \n public static void help() {\n System.out.println(\"Usage :\\n x.sh {path_to_config_file}\");\n }\n \n public static void main(String[] args) throws Exception {\n if(args.length != 1) {\n help();\n System.exit(-1);\n }\n String configFileName = args[0];\n System.out.println(\"KafkaConsumer config file : \" + configFileName);\n KafkaAvroConsumerUtilWithDeserializer consumerutil = null;\n try {\n consumerutil = new KafkaAvroConsumerUtilWithDeserializer(configFileName);\n consumerutil.consume();\n } finally {\n if(consumerutil != null) {\n consumerutil.close();\n consumerutil = null;\n }\n }\n }\n}The pom.xml for that class must include the following dependencies. Adjust the Kafka version to match your environment.\n \n org.apache.kafka\n kafka-clients\n 0.11.0.2\n \n \n org.apache.avro\n avro\n 1.7.7\n \nStriimKafkaAvroDeserializer.jar must be in the classpath when you start your application. 
You can download StriimKafkaAvroDeserializer.jarrom https://github.com/striim/doc-downloads.The config file for your Kafka consumer should be similar to:bootstrap.servers=192.168.1.35:9092\ntopic.name=test_sync_registry\nschemaregistry.url=http://192.168.1.35:8081/\nkey.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer\nvalue.deserializer=com.striim.kafka.deserializer.KafkaAvroDeserializer\ngroup.id=KafkaAvroDemoConsumerThe command to start the consumer with the deserializer should be something like:java -Xmx1024m -Xms256m -Djava.library.path=/usr/local/lib:/usr/bin/java \\\n -cp \"target/classes:target/dependency/*:\" com.striim.kafka.KafkaAvroConsumerUtil $*Schema registry REST API callsThe following Kafka schema registry REST API calls may be useful in using the schema registry.To list all the subjects in the schema registry, use\u00a0curl -X GET http://localhost:8081/subjects. This will return a list of subjects:[\"AVRODEMO.EMP\",\"AVRODEMO.DEPT\",\"DDLRecord\"]This shows that there are schemas for the EMP and DEPT tables. DDLRecord defines the schemas for storing DDL events.To list the versions for a particular schema, use\u00a0curl -X GET http://localhost:8081/subjects//versions. If there were three versions of the EMP table's schema,\u00a0curl -X GET http://localhost:8081/subjects/AVRODEMO.EMP/versions would return:[1,2,3]To see the second version of that schema, you would use\u00a0curl -X GET http://localhost:8081/subjects/AVRODEMO.EMP/versions/2:{\n \"subject\": \"AVRODEMO.EMP\",\n \"version\": 2,\n \"id\": 261,\n \"schema\": {\n \"type\": \"record\",\n \"name\": \"EMP\",\n \"namespace\": \"AVRODEMO\",\n \"fields\": [{ ...The first three lines are Kafka metadata. The\u00a0subject property, which identifies the table, is the same for all schema records associated with that table. The\u00a0 version\u00a0property records the order in which the schema versions were created.\u00a0id is the schema's unique identifier. The rest of the output is the schema definition in one of the formats shown below.Record formats for database schema evolutionFor example, say you have the following Oracle table:CREATE TABLE EMP( \n EMPNO NUMBER(4,0), \n ENAME VARCHAR2(10), \n JOB VARCHAR2(9), \n HIREDATE TIMESTAMP, \n SAL BINARY_DOUBLE, \n COMM BINARY_FLOAT, \n DEPTNO INT ...Let's start by looking at the default format of an Avro-formatted Kafka record for an input type of WAEvent:{\n \"type\": \"record\",\n \"name\": \"WAEvent\",\n \"namespace\": \"WAEvent.avro\",\n \"fields\": [{\n \"name\": \"metadata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"data\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"before\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"userdata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [{\n \"name\": \"position\",\n \"type\": [\"null\", \"string\"]\n }]\n }\n }]\n}\nFor more information, see\u00a0WAEvent contents for change data\u00a0and\u00a0Parsing the fields of WAEvent for CDC readers.AvroFormatter's\u00a0formatAs property allows two other formats,\u00a0table and\u00a0native. 
When you use one of these, the first time an event is received from the table, a record is written in the schema registry in the following format:formatAs: 'table'formatAs: 'native'{\n \"type\": \"record\",\n \"name\": \"EMP\",\n \"namespace\": \"AVRODEMO\",\n \"fields\": [{\n \"name\": \"EMPNO\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"ENAME\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"JOB\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"HIREDATE\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"SAL\",\n \"type\": [\"null\", \"double\"]\n }, {\n \"name\": \"COMM\",\n \"type\": [\"null\", \"float\"]\n }, {\n \"name\": \"DEPTNO\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [{\n \"name\": \"position\",\n \"type\": [\"null\", \"string\"]\n }]\n }\n }]\n}\n{\n \"type\": \"record\",\n \"name\": \"EMP\",\n \"namespace\": \"AVRODEMO\",\n \"fields\": [{\n \"name\": \"data\",\n \"type\": [\"null\", {\n \"type\": \"record\",\n \"name\": \"data_record\",\n \"namespace\": \"data_record\",\n \"fields\": [{\n \"name\": \"EMPNO\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"ENAME\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"JOB\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"HIREDATE\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"SAL\",\n \"type\": [\"null\", \"double\"]\n }, {\n \"name\": \"COMM\",\n \"type\": [\"null\", \"float\"]\n }, {\n \"name\": \"DEPTNO\",\n \"type\": [\"null\", \"string\"]\n }]\n }]\n }, {\n \"name\": \"before\",\n \"type\": [\"null\", {\n \"type\": \"record\",\n \"name\": \"before_record\",\n \"namespace\": \"before_record\",\n \"fields\": [{\n \"name\": \"EMPNO\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"ENAME\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"JOB\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"HIREDATE\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"SAL\",\n \"type\": [\"null\", \"double\"]\n }, {\n \"name\": \"COMM\",\n \"type\": [\"null\", \"float\"]\n }, {\n \"name\": \"DEPTNO\",\n \"type\": [\"null\", \"string\"]\n }]\n }]\n }, {\n \"name\": \"metadata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"userdata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"datapresenceinfo\",\n \"type\": [\"null\", {\n \"type\": \"record\",\n \"name\": \"datapresenceinfo_record\",\n \"namespace\": \"datapresenceinfo_record\",\n \"fields\": [{\n \"name\": \"EMPNO\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"ENAME\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"JOB\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"HIREDATE\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"SAL\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"COMM\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"DEPTNO\",\n \"type\": \"boolean\"\n }]\n }]\n }, {\n \"name\": \"beforepresenceinfo\",\n \"type\": [\"null\", {\n \"type\": \"record\",\n \"name\": \"beforepresenceinfo_record\",\n \"namespace\": \"beforepresenceinfo_record\",\n \"fields\": [{\n \"name\": \"EMPNO\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"ENAME\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"JOB\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"HIREDATE\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"SAL\",\n \"type\": \"boolean\"\n }, {\n \"name\": 
\"COMM\",\n \"type\": \"boolean\"\n }, {\n \"name\": \"DEPTNO\",\n \"type\": \"boolean\"\n }]\n }]\n }, {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [{\n \"name\": \"position\",\n \"type\": [\"null\", \"string\"]\n }]\n }\n }]\n}\nThe\u00a0native format includes more WAEvent data than the\u00a0table format.Records are also created for ALTER TABLE and CREATE TABLE DDL operations. The schema for DDL events is:{\n \"type\": \"record\",\n \"name\": \"DDLRecord\",\n \"namespace\": \"DDLRecord.avro\",\n \"fields\": [{\n \"name\": \"metadata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"data\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"userdata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [{\n \"name\": \"position\",\n \"type\": [\"null\", \"string\"]\n }]\n }\n }]\n}Records for DDL events have this format:{\n \"metadata\": {\n \"CURRENTSCN\": \"386346466\",\n \"OperationName\": \"ALTER\",\n \"OperationSubName\": \"ALTER_TABLE_ADD_COLUMN\",\n \"TimeStamp\": \"2018-04-17T04:31:48.000-07:00\",\n \"ObjectName\": \"EMP\",\n \"COMMITSCN\": \"386346473\",\n \"TxnID\": \"9.10.60399\",\n \"COMMIT_TIMESTAMP\": \"2018-04-17T04:31:48.000-07:00\",\n \"CatalogName\": null,\n \"CatalogObjectType\": \"TABLE\",\n \"OperationType\": \"DDL\",\n \"SchemaName\": \"AVRODEMO\",\n \"STARTSCN\": \"386346462\",\n \"SCN\": \"386346466\"\n },\n \"data\": {\n \"DDLCommand\": \"ALTER TABLE AVRODEMO.EMP\\n ADD phoneNo NUMBER(10,0)\"\n },\n \"userdata\": null,\n \"__striimmetadata\": {\n \"position\": \"TQAAQA=\"\n }\n}\nThis example is for an\u00a0ALTER TABLE EMP ADD phoneNo NUMBER(10,0); command.Records are also created for\u00a0BEGIN, COMMIT, and ROLLBACK control events. 
The schema for control events is:{\n \"type\": \"record\",\n \"name\": \"DDLRecord\",\n \"namespace\": \"DDLRecord.avro\",\n \"fields\": [{\n \"name\": \"metadata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"data\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"userdata\",\n \"type\": [\"null\", {\n \"type\": \"map\",\n \"values\": [\"null\", \"string\"]\n }]\n }, {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [{\n \"name\": \"position\",\n \"type\": [\"null\", \"string\"]\n }]\n }\n }]\n}Records for control events have this format:{\n\t\"metadata\": {\n\t\t\"RbaSqn\": \"2031\",\n\t\t\"AuditSessionId\": \"439843\",\n\t\t\"MockedBegin\": null,\n\t\t\"CURRENTSCN\": \"8328113\",\n\t\t\"OperationName\": \"COMMIT\",\n\t\t\"SQLRedoLength\": \"6\",\n\t\t\"BytesProcessed\": \"424\",\n\t\t\"ParentTxnID\": \"5.17.9129\",\n\t\t\"SessionInfo\": \"UNKNOWN\",\n\t\t\"RecordSetID\": \" 0x0007ef.000001e3.00e0 \",\n\t\t\"TimeStamp\": \"2018-06-16T17:41:57.000-07:00\",\n\t\t\"TxnUserID\": \"QATEST\",\n\t\t\"RbaBlk\": \"483\",\n\t\t\"COMMITSCN\": \"8328113\",\n\t\t\"TxnID\": \"5.17.9129\",\n\t\t\"Serial\": \"2943\",\n\t\t\"ThreadID\": \"1\",\n\t\t\"SEQUENCE\": \"1\",\n\t\t\"COMMIT_TIMESTAMP\": \"2018-06-16T17:41:57.000-07:00\",\n\t\t\"TransactionName\": \"\",\n\t\t\"STARTSCN\": \"8328112\",\n\t\t\"SCN\": \"832811300005716756777309964480000\",\n\t\t\"Session\": \"135\"\n\t},\n\t\"userdata\": null\n}Record format for Striim stream type evolutionCREATE TYPE Person_TYpe (\n ID int KEY, \n CITY string,\n CODE string,\n NAME string);The schema record for the Striim type above would be:{\n \"type\": \"record\",\n \"name\": \"Person_Type\",\n \"namespace\": \"AVRODEMO\",\n \"fields\": [{\n \"name\": \"ID\",\n \"type\": [\"null\", \"int\"]\n }, {\n \"name\": \"CITY\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"CODE\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"NAME\",\n \"type\": [\"null\", \"string\"]\n }, {\n \"name\": \"__striimmetadata\",\n \"type\": {\n \"type\": \"record\",\n \"name\": \"StriimMeta_Record\",\n \"fields\": [{\n \"name\": \"position\",\n \"type\": [\"null\", \"string\"]\n }]\n }\n }]\n}This shows the input stream's type name (Person_Type), which will be the same in all schema registry records for this KafkaWriter target, and\u00a0 the names, default values, and types of its four fields at the time this record was created.Sample record in sync mode:{\n \"ID\": \"1\",\n \"Code\": \"IYuqAbAQ07NS3lZO74VGPldfAUAGKwzR2k3\",\n \"City\": \"Chennai\",\n \"Name\": \"Ramesh\",\n \"__striimmetadata\": {\n \"position\": \"iUAB6JoOLYlCsaKn8nWYw\"\n }\n}Schema registry sample application: OracleReader to KafkaWriterThe following example assumes you have the following tables in Oracle:create table EMP(\n empno number(4,0),\n ename varchar2(10),\n job varchar2(9),\n hiredate Timestamp,\n sal binary_double,\n comm binary_float,\n deptno int,\n constraint pk_emp primary key (empno),\n constraint fk_deptno foreign key (deptno) references dept (deptno)\n);\ncreate table DEPT(\n deptno int,\n dname varchar2(30),\n loc varchar2(13),\n constraint pk_dept primary key (deptno)\n);This application will read from those tables and write to Kafka in native format:CREATE APPLICATION Oracle2Kafka;\n\nCREATE OR REPLACE SOURCE OracleSource USING OracleReader (\n DictionaryMode: 'OfflineCatalog',\n 
FetchSize: 1,\n Username: 'sample',\n Password: 'sample',\n ConnectionURL: '10.1.10.11:1521:orcl',\n Tables: 'sample.emp;sample.dept'\n )\nOUTPUT TO OracleStream ;\n \nCREATE OR REPLACE TARGET WriteToKafka USING KafkaWriter VERSION '0.11.0' (\n Mode: 'Sync',\n Topic: 'test',\n brokerAddress: 'localhost:9093',\n KafkaConfig: 'request.timeout.ms=60001;session.timeout.ms=60000'\n )\nFORMAT USING AvroFormatter (\n schemaregistryurl: 'http://localhost:8081/',\n formatAs: 'Native'\n )\nINPUT FROM OracleStream;\n\nEND APPLICATION Oracle2Kafka;See\u00a0Avro Parser for a sample application that reads from that Kafka topic.In this section: Using the Confluent or Hortonworks schema registryTracking schema evolution of database tablesTracking evolution of Striim stream typesReading schema and data records togetherSchema registry REST API callsRecord formats for database schema evolutionRecord format for Striim stream type evolutionSchema registry sample application: OracleReader to KafkaWriterSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/using-the-confluent-or-hortonworks-schema-registry.html", "title": "Using the Confluent or Hortonworks schema registry", "language": "en"}} {"page_content": "\n\nKinesis WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsKinesis WriterPrevNextKinesis WriterWrites to an Amazon Kinesis stream.Limitations:Only sync mode is supported.Parallel threads are not supported.Do not write to a target stream with more than 249 shards.Do not merge, split, start or stop encryption on, or change the shard count\u00a0of the target stream.QUIESCE is not supported in this release.If multiple writers write to the same shard, there may be duplicate events after\u00a0Recovering applications (in other words, exactly-once processing cannot be guaranteed).The maximum record length cannot exceed 1MB, including the metadata Striim adds to support recovery. (This is a limitation in Kinesis, not Striim.)Kinesis Writer propertiespropertytypedefault valuenotesAccess Key IDStringSpecify an AWS access key ID (created on the AWS Security Credentials page) for a user with write permission on the stream.When Striim is running in Amazon EC2 and there is an IAM role with that permission associated with the VM, leave accesskeyid and secretaccesskey blank to use the IAM role.Batch PolicyStringSize:900000, Interval:1The batch policy includes size and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. 
When the app is stopped, any remaining data in the buffer is discarded. To disable batching, set to Size:-1.With the default setting, data will be written every second or sooner if an event pushes the buffer past 900,000 bytes. The buffer will expand as necessary to include that last event in the batch.ModeStringSyncDo not change from default.Partition KeyStringOptionally, specify a field to be used to partition the events among multiple shards. See Kafka Writer for more details.Region NameStringthe AWS region of the stream (see AWS Regions and Endpoints)Secret Access KeyStringthe secret access key for the streamSession TokenStringIf you are using a session token (see GetSessionToken), specify it here. See also Temporary Security Credentials.Stream NameStringThe existing Kinesis stream to write to (will not be created if it does not exist). If more than one Kinesis Writer writes to the same stream, recovery is not supported (see Recovering applications).This adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Kinesis Writer sample applicationExample:CREATE TARGET KinesisTest USING KinesisWriter (\n regionName:'us-east-1',\n streamName:'myStream',\n accesskeyid:'********************',\n secretaccesskey:'****************************************',\n partitionKey: 'merchantId'\n)\nFORMAT USING JSONFormatter ()\nINPUT FROM PosSource_TransformedStream;Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/kinesis-writer.html", "title": "Kinesis Writer", "language": "en"}} {"page_content": "\n\nKudu WriterWrites to Apache Kudu 1.4 or later.Kudu Writer propertiespropertytypedefault valuenotesBatch PolicyStringEventCount:1000, Interval:30The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded. To disable batching, set to EventCount:1,Interval:0.With the default setting, events will be sent every 30 seconds or sooner if the buffer accumulates 1000 events.Each batch may include events for only one table.
When writing to multiple tables, the current batch will be sent and a new one started every time an event is received for a different table.Checkpoint TableStringCHKPOINTA table with the specified value will be created automatically in Kudu and used by Striim for internal purposes.Connection Retry PolicyStringretryInterval=30, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Ignorable ExceptionStringBy default, if the target returns an error, KuduWriter terminates the application. Use this property to specify errors to ignore, separated by commas. Supported values are\u00a0ALREADY_PRESENT and NOT_FOUND.For example, to ignore ALREADY_PRESENT and NOT_FOUND errors, you would specify:IgnorableExceptionCode: 'ALREADY_PRESENT,NOT_FOUND'\nIgnored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Kudu Client ConfigStringSpecify the master address, socket read timeout, and operation timeout properties for Kudu. For example:master.addresses->192.168.56.101:7051;\nsocketreadtimeout->10000;\noperationtimeout->30000In a high availability environment, specify multiple master addresses, separated by commas. For example:master.addresses->192.168.56.101:7051, \n192.168.56.102:7051; ...Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.PK Update Handling ModeStringERRORThis property controls how KuduWriter will handle events that update the primary key, which is not supported by Kudu.With the default setting of ERROR, the application will terminate.Set to IGNORE to ignore such events and continue.Set to DELETEANDINSERT to drop the existing row and insert the one with the updated primary key. When using this setting, the Compression property in the CDC reader must be set to False.TablesStringThe name(s) of the table(s) to write to. The table(s) must exist in Kudu and the user specified in Username must have access. Table names are case-sensitive. The columns must have only supported data types as described below.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet ((database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:source.emp,target.emp\nsource.db1,target.db1;source.db2,target.db2\nsource.%,target.%\nsource.mydatabase.emp%,target.mydb.%\nsource1.%,target1.%;source2.%,target2.%\nMySQL and Oracle names are case-sensitive, SQL Server names are not. Specify names as .
schema.table for MySQL and Oracle and as database.schema.table
for SQL Server.Primary key columns must be first in Kudu (see\u00a0Known Issues and Limitations), so you may need to map columns if the source table columns are not in the same order (see\u00a0Mapping columns).Update As UpsertBooleanFalseWith the default value of False, if an update fails, KuduWriter will terminate. When set to True, if an update fails, KuduWriter will insert the row instead. Do not set to True when a source table has no primary key.Kudu Writer sample applicationThe following TQL will replicate data for the specified tables from Oracle to Kudu:CREATE SOURCE OracleCDCIn USING OracleReader (\n Username:'striim',\n Password:'passwd',\n ConnectionURL:'203.0.113.49:1521:orcl',\n Tables:'MYSCHEMA.NAME,MYSCHEMA.DEPT'\n)\nOUTPUT TO OracleCDCStream;\n\nCREATE TARGET KuduOut USING KuduWriter(\n KuduClientConfig:\"master.addresses->203.0.113.88:7051;\n socketreadtimeout->10000;operationtimeout->30000\",\n Tables: 'MYSCHEMA.NAME,name;MYSCHEMA.DEPT,dept'\nINPUT FROM OracleCDCStream;Kudu Writer data type support and correspondenceColumns in target tables must use only the following supported data types.If using Cloudera's Kudu, see Apache Kudu Schema Design and\u00a0Apache Kudu Usage Limitations.Striim data typeKudu data typejava.lang.Byte[]binaryjava.lang.Doubledoublejava.lang.Floatfloatjava.lang.Integerint32, int64java.lang.Longint64java.lang.Shortint16java.lang.Stringstringorg.joda.time.DateTimeunixtime_microsWhen the input stream for a KuduwriterTarget is the output of an OracleReader source, the following combinations are supported:Oracle typeKudu typeBINARY_DOUBLEdoubleBINARY_FLOATfloatBLOBbinary, stringCHARstringCHAR(1)boolCLOBstringDATEunixtime_microsDECfloatDECIMALfloatFLOATfloatINTint32INTEGERint32LONGint64, stringNCHARstringNUMBERint64NUMBER(1,0)boolNUMBER(10)int64NUMBER(19,0)int64NUMERICfloatNVARCHAR2stringSMALLINTint16TIMESTAMPunixtime_microsTIMESTAMP WITH LOCAL TIME ZONEunixtime_microsTIMESTAMP WITH TIME ZONEunixtime_microsVARCHAR2stringWhen the input stream for a KuduwriterTarget is the output of an MSSQLReader source, the following combinations are supported:SQL Server typeKudu typebigintint64bitbool or int64charstringdateunixtime_microsdatetimeunixtime_microsdatetime2unixtime_microsdecimalfloatfloatdouble or floatimagebinaryintint64moneydoublencharstringntextstringnumericfloatnvarcharstringnvarchar(max)stringrealfloatsmalldatetimeunixtime_microssmallintint64smallmoneydoubletextstringtinyintint64varbinarybinaryvarbinary(max)binaryvarcharstringvarchar(max)stringxmlstringIn this section: Kudu WriterKudu Writer propertiesKudu Writer sample applicationKudu Writer data type support and correspondenceSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
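To illustrate the Kudu Writer properties documented above, here is a minimal hedged sketch of a target that maps Oracle CDC tables to Kudu tables, ignores ALREADY_PRESENT and NOT_FOUND errors, and converts primary key updates to delete-plus-insert. Only KuduClientConfig, Tables, and IgnorableExceptionCode appear literally in the samples above; the concatenated spellings PKUpdateHandlingMode and UpdateAsUpsert are assumptions inferred from the property labels, and OracleCDCStream is the source stream from the Kudu Writer sample application.
CREATE TARGET KuduSketch USING KuduWriter (
  KuduClientConfig: "master.addresses->203.0.113.88:7051;socketreadtimeout->10000;operationtimeout->30000",
  Tables: 'MYSCHEMA.NAME,name;MYSCHEMA.DEPT,dept',
  IgnorableExceptionCode: 'ALREADY_PRESENT,NOT_FOUND',
  /* assumed TQL spellings of the "PK Update Handling Mode" and "Update As Upsert" properties */
  PKUpdateHandlingMode: 'DELETEANDINSERT',
  UpdateAsUpsert: false
)
INPUT FROM OracleCDCStream;
Note that, per the PK Update Handling Mode description above, DELETEANDINSERT requires the Compression property of the CDC reader to be set to False.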
Last modified: 2023-06-05\n", "metadata": {"source": "https://www.striim.com/docs/en/kudu-writer.html", "title": "Kudu Writer", "language": "en"}} {"page_content": "\n\nMapR DB WriterWrites to a table in MapR Converged Data Platform version 5.1. Except for the name of the adapter and the name of the configuration path property, MapRDBWriter is identical to HBaseWriter. See HBase Writer for documentation of the properties.CREATE TARGET Target2 USING MapRDBWriter(\n  MapRDBConfigurationPath:\"/opt/mapr/hbase/hbase-1.1.8/conf\",\n  BatchPolicy: \"eventCount:1\",\n  Tables: 'SCOTT.BUSINESS,/tables/business.data'\n)\nINPUT FROM DataStream;Last modified: 2021-09-27\n", "metadata": {"source": "https://www.striim.com/docs/en/mapr-db-writer.html", "title": "MapR DB Writer", "language": "en"}} {"page_content": "\n\nMapR FS WriterExcept for the name of the adapter, MapRFSWriter is identical to HDFSWriter. See HDFS Writer for documentation of the properties.
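Since the MapR FS Writer page above simply defers to HDFS Writer, the following is only a rough sketch of what a MapRFSWriter target could look like. The hadoopurl and filename property names and the maprfs:/// URI are assumptions based on HDFS Writer and MapR-FS conventions, not taken from this page; check the HDFS Writer documentation for the authoritative property list. TypedCSVStream is the same illustrative stream name used in the MapR Stream Writer sample below.
CREATE TARGET MapRFSOut USING MapRFSWriter (
  /* property names assumed to match HDFS Writer; verify against the HDFS Writer page */
  hadoopurl: 'maprfs:///mapr/my.cluster.com/user/striim/',
  filename: 'output.csv'
)
FORMAT USING DSVFormatter ()
INPUT FROM TypedCSVStream;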
Last modified: 2019-07-16\n", "metadata": {"source": "https://www.striim.com/docs/en/mapr-fs-writer.html", "title": "MapR FS Writer", "language": "en"}} {"page_content": "\n\nMapR Stream WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsMapR Stream WriterPrevNextMapR Stream WriterWrites to a stream in\u00a0MapR Converged Data Platform version 5.1.MapR Stream Writer propertiespropertytypedefault valuenotesMapR Stream ConfigStringOptionally specify Kafka producer properties, separated by semicolons.ModeStringSyncSee\u00a0Setting KafkaWriter's mode property: sync versus async for discussion of this property.TopicStringSpecify the fully-qualified topic name. The syntax is /:, where \u00a0includes its path (which may be displayed using the command hadoop fs -ls /), for example, /tmp/myfirststream:topic1. The stream must already exist. Striim will create the topic if it does not exist.This adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsMapR Stream Writer sample applicationCREATE TARGET MapRStreamSample USING MapRStreamWriter (\n MapRStreamConfig: \"acks=0;batch.size=250000;max.request.size=40000000\",\n Mode: \"ASync\",\n Topic:'/striim/myfirststream:topic1'\n)\nFORMAT USING JSONFormatter ()\nINPUT FROM TypedCSVStream;\nIf the MapR topic is partitioned, events will be distributed among the partitions based on the target's input stream's\u00a0PARTITION BY field. If the input stream is not partitioned, all events will be written to partition 0.MapRStreamWriter is based on Kafka Writer\u00a0and the KafkaWriter sample code can be used as a model by replacing the target with the sample code above and creating a MapR stream named\u00a0striim. See\u00a0Apache Kafka and MapR Streams: Terms, Techniques and New Designs for more information about MapR's Kafka implementation.In this section: MapR Stream WriterMapR Stream Writer propertiesMapR Stream Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/mapr-stream-writer.html", "title": "MapR Stream Writer", "language": "en"}} {"page_content": "\n\nMariaDBSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsMariaDBPrevNextMariaDBSee Database Writer.Database WriterIn this section: Search resultsNo results foundWould you like to provide feedback? 
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-02-01\n", "metadata": {"source": "https://www.striim.com/docs/en/mariadb-readers-old.html", "title": "MariaDB", "language": "en"}} {"page_content": "\n\nMemSQLSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsMemSQLPrevNextMemSQLSee Database Writer.Database WriterIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2020-08-31\n", "metadata": {"source": "https://www.striim.com/docs/en/memsql.html", "title": "MemSQL", "language": "en"}} {"page_content": "\n\nMongoDB WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsMongoDB WriterPrevNextMongoDB WriterWrites to MongoDB collections.Using the adapterThis adapter may be used in four ways:With an input stream of a user-defined type, MongoDB Writer writes events as documents to a single collection.Target document field names are taken from the input stream's event type.The value of the key field of the input event is used as the document key (_id field value). If the input stream's type has no key, the target document's key is generated by concatenating the values of all fields, separated by the Key Separator string. Alternatively, you may specify a subset of fields to be concatenated using the syntax . keycolumns(, , ...) in the Collections property.With an input stream of type JSONNodeEvent\u00a0that is the output stream of a source using JSONParser,\u00a0MongoDB Writer writes events as documents to a single collection.Target document field names are taken from the input events' JSON field names.When the JSON event contains an _id\u00a0field, its value is used as the MongoDB Writer document key. Otherwise, MongoDB will generate an ObjectId for the document key.With an input stream of type JSONNodeEvent that is the output stream of a MongoDBReader source, MongoDB Writer writes each MongoDB collection to a separate MongoDB collection. 
Inserts, updates, and deletes in the source are handled as inserts, updates, and deletes in the target.MongoDB collections may be replicated in another MongoDB instance by using wildcards in the Collections property.\u00a0Alternatively, you may manually map source collections to target collections as discussed in the notes for the Collections property.The source document's primary key and field names are used as the target document's key and field names.With an input stream of type WAEvent that is the output stream of a SQL CDC reader or DatabaseReader source, MongoDB Writer writes data from each source table to a separate collection. The target collections may be in different databases. In order to process updates and deletes, compression must be disabled in the source adapter (that is, WAEvents for insert and delete operations must contain all values, not just primary keys and, for inserts, the modified values)..Each row in a source table is written to a document in the target collection mapped to the table. Target document field names are taken from the source event's metadata map and their values from its\u00a0data array\u00a0(see\u00a0WAEvent contents for change data).Source table data may be replicated to MongoDB collections of the same names by using wildcards in the Collections property.\u00a0Note that data will be read only from tables that exist when the source starts. Additional tables added later will be ignored until the source is restarted. Alternatively, you may manually map source tables to MongoDB collections as discussed in the notes for the Collections property. When the source is a CDC reader, updates and deletes in source tables are replicated in the corresponding MongoDB target collections.Each source row's primary key value (which may be a composite) is used as the key (_id field value) for the corresponding MongoDB document. If the table has no primary key, the target document's key is generated by concatenating the values of all fields in the row, separated by the Key Separator string. Alternatively, you may select a subset of fields to be concatenated using the\u00a0keycolumns option as discussed in the notes for the Collections property.MongoDB Writer propertiespropertytypedefault valuenotesAuth DBStringadminSpecify the authentication database for the specified username. If not specified, uses the\u00a0admin database.Auth TypeStringSCRAM_SHA_1Specify the authentication mechanism used by your MongoDB instance. The default setting uses MongoDB's default authentication mechanism, SCRAMSHA1. Other supported choices are\u00a0KERBEROS_GSSAPI, MONGODB_CR, SCRAM_SHA_256, and X_509.\u00a0Set to NoAuth if authentication is not enabled.\u00a0Set to KERBEROS_GSSAPI if you are using Kerberos.Batch PolicyStringEventCount:1000, Interval:30The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded.\u00a0To disable batching, set to EventCount:1,Interval:0.With the default setting, data will be written every 30 seconds or sooner if the buffer accumulates 1,000 events.Checkpoint CollectionStringOptionally, specify the fully-qualified name of an existing empty MongoDB collection in the target. 
The user specified in Username must have the readwrite role on this collection.When no checkpoint collection is specified, Striim guarantees at-least-once processing (A1P) with MongoDB Writer. That is, after recovery, there may be some duplicate events, but none will be missing.When writing to replica sets in MongoDB 4.0 or later or to sharded clusters in Mongo 4.2 or later, specifying a checkpoint collection enables exactly-once processing (E1P). That is, after recovery, there will be no duplicate or missing events.CollectionsStringThe fully-qualified name(s) of the MongoDB collection(s) to write to, for example, mydb.mycollection. Separate multiple collections by commas.You may use the % wildcard, for example, mydb.%. Note that data will be written only to collections that exist when the Striim application starts. Additional collections added later will be ignored until the application is restarted.When multiple source collections are mapped to the target, the combination of shard key and _id field values must be unique across all source collections.For MongoDB 4.2 or later only: When the source is MongoDB Reader, the source colleciton is unsharded, and the target collection is sharded, the shard key fields in the source and target must match.When the source is MongoDB Reader and the source and target are both unsharded, in MongoDB Reader FullDocumentUpdateLookup must be True.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source, it can write to multiple collections. In this case, specify the names of both the source tables and target collections (schema.table,database.collection). You may use the % wildcard only for tables and documents, not for schemas or databases (schema.%,collection.%). If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,collection.%) but in three parts when the source is Oracle Reader or OJet (database.schema.%,collection.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,collection.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,collection.%).Connection RetryStringretryInterval=60, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Connection URLStringWhen connecting to MongoDB with DNS SRV, specify mongodb+srv:///, for example, mongodb+srv://abcdev3.gcp.mongodb.net/mydb. If you do not specify a database, the connection will use admin.When connecting to a sharded MongoDB instance with mongos, specify : of the mongos instance.When connecting to a sharded instance of MongoDB without mongos, specify : for all instances of the replica set, separated by commas. For example: 192.168.1.1:27107, 192.168.1.2:27107, 192.168.1.3:27017.To use an Azure private endpoint to connect to MongoDB Atlas, see Specifying Azure private endpoints in sources and targets.Excluded CollectionsStringAny collections to be excluded from the set specified in the Collections property. 
Specify as for the Collections property.Ignorable Exception CodeStringBy default, if the target returns an error, the application will terminate. Specify DUPLICATE_KEY or KEY_NOT_FOUND to ignore such errors and continue.By default, if MongoDB Writer attempts to update a shard key field without providing the previous value of the field, the Striim application will halt with a No Operation Exception error. To instead ignore such errors and continue without updating the shard key field, specify SHARD_KEY_UPDATE.To specify multiple ignorable exception codes, separate them with a comma.Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Key SeparatorString:Inserted between values when generating document keys by concatenating column or field values. If the values might contain a colon, change this to something that will not occur in those values.Ordered WritesBooleanTrueIf you do not care that documents may be written out of order (typically the case during initial load), set to False to improve performance.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Passwordcom. webaction. security. PasswordThe password for the specified Username.Retriable Error CodesStringSpecify any error codes for which you want to trigger a connection retry rather than a halt or termination, for example, {\"ConnectRetryCodes\" : [\"301\"]Each version of MongoDB has its own error code reference, for example, https://github.com/mongodb/mongo/blob/v2.6/docs/errors.md for 2.6 and https://github.com/mongodb/mongo/blob/r5.0.7/src/mongo/base/error_codes.yml for 5.0.Security ConfigStringSee Using SSL or Kerberos or X.509 authentication with MongoDB.SSL ConfigStringTo enable SSL for the connection, or if individual Atlas MongoDB instances are specified in the Connection URL, set to public. To enable SSL and create a TLS protocol SSL context, set to TLS.Upsert ModeBooleanFalseSet to True to process inserts and updates as upserts. This is required if the input stream of this writer is the output stream of a Cosmos DB Reader or Mongo Cosmos DB Reader source.UsernameStringA MongoDB user with the readwrite role on the target collection(s).MongoDB sample applicationsThis application writes data from a CSV file to MongoDB. It has an input stream of a user-defined type.CREATE SOURCE FileSource USING FileReader ( \n directory: '/Users/user/Desktop', \n wildcard: 'data.csv', \n positionbyeof: false ) \nPARSE USING DSVParser() \nOUTPUT TO FileStream;\n\nCREATE TYPE CqStream_Type (\n uid java.lang.Integer, \n name java.lang.String , \n zip java.lang.Long, \n city java.lang.String);\nCREATE STREAM CqStream OF CqStream_Type;\n\nCREATE CQ Cq1 INSERT INTO CqStream\n SELECT TO_INT(data[0]) as uid,\n data[1] as name, \n TO_LONG(data[2]) as zip, \n data[3] as city \nFROM FileStream;\n\nCREATE TARGET MongoTarget USING MongoDBWriter ( \n Collections: 'test.emp keycolumns(uid,name)', \n ConnectionURL: 'localhost:27017', \n AuthDB: 'admin', \n UserName: 'waction', \n keyseparator: ':', \n Password: '********') \nINPUT FROM CqStream;This application writes data from a JSON file to MongoDB. 
It has an input stream of type JSONNodeEvent from JSONParser.CREATE SOURCE JsonSource USING FileReader ( \n directory: '/Users/user/Desktop', \n wildcard: 'jsondata.txt', \n positionbyeof: false\n) \nPARSE USING JSONParser() \nOUTPUT TO JsonStream;\n\nCREATE TARGET MongoTgt USING MongoDBWriter ( \nAuthType: 'SCRAM_SHA_1', \n ConnectionURL: 'localhost:27017', \n AuthDB: 'admin', \n Collections: 'test.emp1', \n UserName: 'waction', \n Password: '********', \n) \nINPUT FROM JsonStream;This initial load application writes data from one MongoDB collection to another. It has an input stream of type JSONNodeEvent from MongoDB Reader.CREATE SOURCE Mongoource USING MongoDBReader (\n Mode: 'InitialLoad',\n collections: 'qatest.col1',\n connectionUrl: 'localhost:27017'\n )\nOUTPUT TO Mongostream ;\n\nCREATE TARGET MongoTarget USING MongoDBWriter (\n Collections: 'qatest.col1,db2.TEST',\n ConnectionURL: 'localhost:27017'\n)\nINPUT FROM Mongostream;This streaming integration application writes data from Oracle to MongoDB. It has an input stream type of WAEvent from Oracle Reader (a SQL CDC source).CREATE SOURCE Oracle_Source USING OracleReader ( \n Username: 'miner',\n Password: 'miner',\n ConnectionURL: 'jdbc:oracle:thin:@//192.168.1.49:1521/orcl',\n Tables: 'QATEST.%'\n ) \nOUTPUT TO DataStream;\n\nCREATE TARGET MongoDBTarget1 USING MongoDBWriter ( \n Username: 'admin',\n Password: 'admin',\n ConnectionURL : 'localhost:27017',\n Collections: 'QATEST.%,MongoRegrDB01.%'\n ) \nINPUT FROM DataStream;In this section: MongoDB WriterUsing the adapterMongoDB Writer propertiesMongoDB sample applicationsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-07\n", "metadata": {"source": "https://www.striim.com/docs/en/mongodb-writer.html", "title": "MongoDB Writer", "language": "en"}} {"page_content": "\n\nMongoDB Cosmos DB WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsMongoDB Cosmos DB WriterPrevNextMongoDB Cosmos DB WriterWrites to Cosmos DB using the Azure Cosmos DB API for MongoDB version 3.6 or 4.0, allowing you to write to a CosmosDB target as if it were a MongoDB target. For general information, see Azure Cosmos DB API for MongoDB and Connect a MongoDB application to Azure Cosmos DB.Azure Cosmos DB API for MongoDB 3.2 is not supported.NoteIf the writer exceeds the number of Request Units per second provisioned for your Cosmos DB instance (see Request Units in Azure Cosmos DB), the application may halt. 
The Azure Cosmos DB Capacity Calculator can give you an estimate of the appropriate number of RUs to provision:You may need more RUs during initial load than for continuing replication.See Optimize your Azure Cosmos DB application using rate limiting and Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations for more information.Using the adapterThis adapter may be used in four ways:With an input stream of a user-defined type, MongoDB CosmosDB Writer writes events as documents to a single Cosmos DB collection.Target document field names are taken from the input stream's event type.The value of the key field of the input event is used as the document key (_id field value). If the input stream's type has no key, the target document's key is generated by concatenating the values of all fields, separated by the Key Separator string. Alternatively, you may specify a subset of fields to be concatenated using the syntax . keycolumns(, , ...) in the Collections property.With an input stream of type JSONNodeEvent\u00a0that is the output stream of a source using JSONParser,\u00a0MongoDB Cosmos DB Writer writes events as documents to a single Cosmos DB collection.Target document field names are taken from the input events' JSON field names.When the JSON event contains an _id\u00a0field, its value is used as the document key. Otherwise, Cosmos DB will generate an ObjectId for the document key.With an input stream of type JSONNodeEvent that is the output stream of a MongoDB Reader source, MongoDB Cosmos DB Writer writes each MongoDB collection to a separate Cosmos DB collection.MongoDB collections may be replicated in a Cosmos DB instance by using wildcards in the Collections property.\u00a0Alternatively, you may manually map source collections to target collections as discussed in the notes for the Collections property.The source document's primary key and field names are used as the target document's key and field names.With an input stream of type WAEvent that is the output stream of a SQL CDC reader or Database Reader source, MongoDB Cosmos DB Writer writes data from each source table to a separate collection. The target collections may be in different databases. In order to process updates and deletes, compression must be disabled in the source adapter (that is, WAEvents for update and delete operations must contain all values, not just primary keys and, for updates, the modified values)..Each row in a source table is written to a document in the target collection mapped to the table. Target document field names are taken from the source event's metadata map and their values from its\u00a0data array\u00a0(see\u00a0WAEvent contents for change data).Source table data may be replicated to Cosmos DB collections of the same names by using wildcards in the Collections property.\u00a0Note that data will be read only from tables that exist when the source starts. Additional tables added later will be ignored until the source is restarted. Alternatively, you may manually map source tables to Cosmos DB collections as discussed in the notes for the Collections property. When the source is a CDC reader, updates and deletes in source tables are replicated in the corresponding Cosmos DB target collections.Each source row's primary key value (which may be a composite) is used as the key (_id field value) for the corresponding Cosmos DB document. 
If the table has no primary key, the target document's key is generated by concatenating the values of all fields in the row, separated by the Key Separator string. Alternatively, you may select a subset of fields to be concatenated using the\u00a0KeyColumns option as discussed in the notes for the Collections property.Cosmos DB limits the number of characters allowed in document IDs (see Per-item limits in Microsoft's documentation). When using wildcards or keycolumns, be sure that the generated document IDs will not exceed that limit.MongoDB Cosmos DB Writer propertiespropertytypedefault valuenotesBatch PolicyStringEventCount:1000, Interval:30The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded.\u00a0To disable batching, set to EventCount:1,Interval:0.With the default setting, data will be written every 30 seconds or sooner if the buffer accumulates 1,000 events.CollectionsStringThe fully-qualified name(s) of the CosmosDB collection(s) to write to, for example, mydb.mycollection. Separate multiple collections by commas.You may use the % wildcard, for example, mydb.%. Note that data will be written only to collections that exist when the Striim application starts. Additional collections added later will be ignored until the application is restarted.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source, it can write to multiple collections. In this case, specify the names of both the source tables and target collections (schema.table,database.collection). You may use the % wildcard only for tables and documents, not for schemas or databases (schema.%,collection.%). If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,collection.%) but in three parts when the source is Oracle Reader or OJet (database.schema.%,collection.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,collection.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,collection.%).Connection RetryStringretryInterval=60, maxRetries=3With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval. If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.Connection URLStringSpecify :, for example, mymongcos.mongo.cosmos.azure.com:10255. Copy the host and port values from the Connection String page under Settings for your Azure Cosmos DB API for MongoDB account.Excluded CollectionsStringAny collections to be excluded from the set specified in the Collections property. Specify as for the Collections property.Ignorable Exception CodeStringBy default, if the target returns an error, the application will terminate. Specify DUPLICATE_KEY, KEY_NOT_FOUND, or NO_OP_UPDATE to ignore such errors and continue. 
To specify both, separate them with a comma.Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Key SeparatorString:Inserted between values when generating document keys by concatenating column or field values. If the values might contain a colon, change this to something that will not occur in those values.Ordered WritesBooleanTrueIf you do not care that documents may be written out of order (typically the case during initial load), set to False to improve performance.Overload Retry PolicyStringretryInterval=1, maxRetries=10With the default setting, if CosmosDB rejects a write because it exceeds the throughput limit, the adapter will try again in one second (retryInterval. If the second attempt is unsuccessful, in one second it will try a third time, and so on through ten attempts (maxRetries). If the tenth retry is unsuccessful, the adapter will halt and log an exception.Negative values are not supported.See Prevent rate-limiting errors for Azure Cosmos DB API for MongoDB operations.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Passwordcom. webaction. security. PasswordThe password for the specified Username.Retriable Error CodesString{\"ThrottlingErrorCodes\" : [16500,50]}Specify any error codes for which you want to trigger a connection retry or overload retry rather than a halt or termination.The default value {\"ThrottlingErrorCodes\" : [16500,50]} specifies error codes 16500 and 50 will result in an overload retry. {\"ConnectRetryCodes\" : [\"301\"], \"ThrottlingErrorCodes\" : [16500, 50]}\u201d would also specify that error code 301 will result in a connection retry.For information about these and other MongoDB error codes, see Common errors and solutions.Upsert ModeBooleanFalseSet to True to process inserts and updates as upserts. This is required if the input stream of this writer is a Cosmos DB Reader JSONNodeEvent stream.UsernameStringA MongoDB user with the readwrite role on the target collection(s).In this section: MongoDB Cosmos DB WriterUsing the adapterMongoDB Cosmos DB Writer propertiesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-05\n", "metadata": {"source": "https://www.striim.com/docs/en/mongodb-cosmos-db-writer.html", "title": "MongoDB Cosmos DB Writer", "language": "en"}} {"page_content": "\n\nMQTT WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsMQTT WriterPrevNextMQTT WriterWrites messages to an MQTT broker.See the MQTT FAQ for information on firewall settings.MQTT Writer propertiespropertytypedefault valuenotesBroker URIStringformat is tcp://:Client IDStringMQTT client ID (maximum 23 characters). Must be unique (not used by any other client) in order to identify this instance of MQTTWriter. 
The MQTT broker will use this ID to close the connection when MQTTWriter goes offline and to resend events after it restarts.QoSInteger00: at most once1: at least once2: exactly onceTopicStringThis adapter has a choice of formatters. See Supported writer-formatter combinations for more information.MQTT Writer sample applicationCREATE CQ ConvertTemperatureToControl\nINSERT INTO controlstream\nSELECT TO_STRING(data.get('roomName') ),\n TO_STRING(data.get('temperature'))\nFROM tempstream;\nCREATE TARGET accontrol USING MQTTWriter(\n brokerUri:'tcp://m2m.eclipse.org:1883',\n Topic:'/striim/room687/accontrol',\n QoS:0,\n clientId:\"StriimMqttSource\"\n) \nFORMAT USING JSONFormatter (\n members:'roomName,targetTemperature'\n) \nINPUT FROM controlstream;Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/mqtt-writer.html", "title": "MQTT Writer", "language": "en"}} {"page_content": "\n\nMySQLSee Database Writer.Last modified: 2022-02-01\n", "metadata": {"source": "https://www.striim.com/docs/en/mysql-readers-old.html", "title": "MySQL", "language": "en"}} {"page_content": "\n\nOracle DatabaseSee Database Writer.
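The MySQL and Oracle Database pages above (and the PostgreSQL page that follows) all point to Database Writer. As a quick orientation, here is a minimal hedged sketch of a DatabaseWriter target replicating CDC data into MySQL; ConnectionURL, Username, Password, and Tables are standard Database Writer properties, but the JDBC URL, credentials, and table mappings are placeholders, and OracleCDCStream is assumed to be the output of a CDC source such as Oracle Reader. See the Database Writer page for the full property list.
CREATE TARGET MySQLOut USING DatabaseWriter (
  ConnectionURL: 'jdbc:mysql://203.0.113.20:3306/mydb',
  Username: 'striim',
  Password: '******',
  /* map source tables to target tables, as described for the other writers in this section */
  Tables: 'MYSCHEMA.EMP,mydb.emp;MYSCHEMA.DEPT,mydb.dept'
)
INPUT FROM OracleCDCStream;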
Last modified: 2022-06-21\n", "metadata": {"source": "https://www.striim.com/docs/en/oracle-database-writers.html", "title": "Oracle Database", "language": "en"}} {"page_content": "\n\nPostgreSQLSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsPostgreSQLPrevNextPostgreSQLSee Database Writer.Database WriterIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-02-01\n", "metadata": {"source": "https://www.striim.com/docs/en/postgresql-writers.html", "title": "PostgreSQL", "language": "en"}} {"page_content": "\n\nRedshift WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsRedshift WriterPrevNextRedshift WriterWrites to one or more table(s) in a Amazon Redshift store via an Amazon S3 staging area.Before you create a RedshiftWriter target, we suggest you first create an S3Writer for the staging area (see S3 Writer) and verify that Striim can write to it. We recommend that the Redshift cluster's zone be in the same region as the S3 bucket (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). If it is not, you must set the S3region property.After the data has been written to Redshift, the files in the S3 bucket are moved to a subdirectory called archive. They are not deleted automatically, so you should periodically delete them. This may be automated (see https://aws.amazon.com/code/Amazon-S3/943).Specify either the access key and secret access key or an IAM role.Redshift Writer propertiespropertytypedefault valuenotesAccess Key IDStringan AWS access key ID (created on the AWS Security Credentials page) for a user with read permission on the S3 bucket (leave blank if using an IAM role)Bucket NameStringthe S3 bucket nameColumn DelimiterString| (UTF-8 007C)The character(s) used to delimit fields in the delimited text files in which the adapter accumulates batched data. 
If the data will contain the | character, change the default value to a sequence of characters that will not appear in the data.Connection URLStringcopy this from the JDBC URL field on the AWS Dashboard cluster-details page for the target clusterConversion ParamsStringOptionally, specify one or more of the following Redshift\u00a0Data Conversion Parameters, separated by commas:ACCEPTINVCHARS=\u201d\"EMPTYASNULLENCODING=EXPLICIT_IDSFILLRECORDIGNOREBLANKLINESIGNOREHEADER=NULL AS=\"\"ROUNDECTIMEFORMAT=\"\"TRIMBLANKSTRUNCATECOLUMNSFor example, ConversionParams: 'IGNOREHEADER=2, NULL AS=\"NULL\", ROUNDEC'ModeStringincrementalWith an input stream of a user-defined type, do not change the default. See\u00a0Replicating Oracle data to Amazon Redshift for more information.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Passwordencrypted passwordthe password for the Redshift userQuote CharacterString\" (UTF-8 0022)The character(s) used to quote (escape) field values in the delimited text files in which the adapter accumulates batched data. If the data will contain \", change the default value to a sequence of characters that will not appear in the data.S3 IAM RoleStringan AWS IAM role with\u00a0read write permission on the bucket (leave blank if using an access key)S3 RegionStringIf the S3 staging area is in a different AWS region (not recommended), specify it here (see\u00a0AWS Regions and Endpoints). Otherwise, leave blank.Secret Access Keyencrypted passwordthe secret access key for the S3 staging areaTablesStringThe name(s) of the table(s) to write to. The table(s) must exist in Redshift and the user specified in Username must have insert permission.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet ((database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:source.emp,target.emp\nsource.db1,target.db1;source.db2,target.db2\nsource.%,target.%\nsource.mydatabase.emp%,target.mydb.%\nsource1.%,target1.%;source2.%,target2.%\nSee\u00a0Replicating Oracle data to Amazon Redshift for an example.Upload PolicyStringeventcount:10000, interval:5msee S3 WriterUsernameStringa Redshift userThe staging area in S3 will be created at the path / / /
.Redshift Writer sample applicationThe following describes use of RedshiftWriter with an input stream of a user-defined type. When the input is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source, see Replicating Oracle data to Amazon Redshift.The following example would write to a table called MyTable:CREATE SOURCE PosSource USING FileReader (\n wildcard: 'PosDataPreview.csv',\n directory: 'Samples/PosApp/appData',\n positionByEOF:false )\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false )\nOUTPUT TO PosSource_Stream;\n \nCREATE CQ PosSource_Stream_CQ\nINSERT INTO PosSource_TransformedStream\nSELECT TO_STRING(data[1]) AS MerchantId,\n TO_DATE(data[4]) AS DateTime,\n TO_DOUBLE(data[7]) AS AuthAmount,\n TO_STRING(data[9]) AS Zip\nFROM PosSource_Stream;\n \nCREATE TARGET testRedshiftTarget USING RedshiftWriter(\n ConnectionURL: 'jdbc:redshift://mys3bucket.c1ffd5l3urjx.us-west-2.redshift.amazonaws.com:5439/dev',\n Username:'mys3user',\n Password:'******',\n bucketname:'mys3bucket',\n/* for striimuser */\n accesskeyid:'********************',\n secretaccesskey:'****************************************',\n Tables:'mytable'\n)\nINPUT FROM PosSource_TransformedStream;If this application were deployed to the namespace RS1, the staging area in S3 would be mys3bucket / RS1 / PosSource_TransformedStream_Type / mytable.The target table must match PosSource_TransformedStream_Type:create table mytable(\nMerchantId char(35),\nDateTime timestamp,\nAuthAmount float,\nZip char(5));After the data is written to Redshift, the intermediate files will be moved to mys3bucket / RS1 / PosSource_TransformedStream_Type / mytable / archive.Redshift data type correspondenceStriim data typeRedshift data typejava.lang.BooleanBOOLEANjava.lang.DoubleDOUBLE PRECISIONjava.lang.FloatREALjava.lang.IntegerINTEGERjava.lang.LongBIGINTjava.lang.ShortSMALLINTjava.lang.StringCHAR or VARCHARorg.joda.time.DateTimeTIMESTAMPIn this section: Redshift WriterRedshift Writer propertiesRedshift Writer sample applicationRedshift data type correspondenceSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/redshift-writer.html", "title": "Redshift Writer", "language": "en"}} {"page_content": "\n\nS3 WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsS3 WriterPrevNextS3 WriterWrites to Amazon S3 or Dell EMC ECS Enterprise Object Storage.See Port Requirements for information on firewall settings.S3 Writer propertiespropertytypedefault valuenotesAccess Key IDStringSpecify an AWS access key ID (created on the AWS Security Credentials page) for a user with \"Write objects\" permission on the bucket.When Striim is running in Amazon EC2 and there is an IAM role with that permission associated with the VM, leave accesskeyid and secretaccesskey blank to use the IAM role.For Dell EMC ECS, specify the S3 Access Key string from the All Credentials page.Bucket NameStringThe S3 bucket name. If you specify the Region property and the bucket does not already exist, S3 Writer will create it. Otherwise, you must create the bucket manually before running S3 Writer.See Setting output names and rollover / upload policies for advanced options. To use dynamic bucket names, you must specify a value for the Region property.Note the limitations in Amazon's\u00a0Rules for Bucket Naming.Client ConfigurationStringOptionally, specify one or more of the following property-value pairs, separated by commas.If you access S3 through a proxy server, specify it here using the syntax\u00a0ProxyHost=,ProxyPort=,ProxyUserName=,ProxyPassword=. Omit the user name and password if not required by your proxy server.Specify any of the following to override Amazon's defaults:ConnectionTimeout=: how long to wait to establish the HTTP connection, default is 50000MaxErrorRetry=: the number of times to retry failed requests (for example, 5xx errors), default is 3SocketErrorSizeHints=: TCP buffer size, default is 2000000See\u00a0http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/section-client-configuration.html for more information about these settings.For Dell EMC ECS, specify endpointConfiguration= followed by the S3 End Point string from the All Credentials page.Compression TypeStringSet to gzip when the input is in gzip format. Otherwise, leave blank.Folder NameStringOptionally, specify a folder within the specified bucket. If it does not exist, it will be created.See Setting output names and rollover / upload policies for advanced options.Object NameStringThe base name of the files to be written. See Setting output names and rollover / upload policies.Object TagsStringOptionally, specify one or more object tags (see Object Tagging) to be associated with the file as key-value pairs = separated by commas. 
Values may include field, metadata, and/or userdata values (see Setting output names and rollover / upload policies) and/or environment variables (specified as $).Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Partition KeyStringIf you enable ParallelThreads, specify a field to be used to partition the events among the threads.\u00a0 Events will be distributed among multiple S3 folders based on this field's values.\u00a0If the input stream is of any type except WAEvent, specify the name of one of its fields.If the input stream is of the WAEvent type, specify a field in the METADATA map (see WAEvent contents for change data) using the syntax\u00a0@METADATA(), or a field in the USERDATA map (see\u00a0Adding user-defined data to WAEvent streams), using the syntax\u00a0@USERDATA(). If appropriate, you may concatenate multiple METADATA and/or USERDATA fields.WAEvent contents for change dataRegionStringOptionally, specify an AWS region, for example, us-west-1. This is required to use dynamic bucket names (see\u00a0Setting output names and rollover / upload policies).Rollover on DDLBooleanTrueHas effect only when the input stream is the output stream of a CDC reader source. With the default value of True, rolls over to a new file when a DDL event is received. Set to False to keep writing to the same file.Secret Access Keyencrypted passwordSpecify the AWS secret access key for the specified access key.For Dell EMC ECS, specify the S3 Secret Key 1 string from the All Credentials page.Upload PolicyStringeventcount:10000, interval:5mThe upload policy may include eventcount, interval, and/or filesize (see Setting output names and rollover / upload policies for syntax). Cached data is written to S3 every time any of the specified values is exceeded. With the default value, data will be written every five minutes or sooner if the cache contains 10,000 events. When the app is undeployed, all remaining data is discarded.When uploading configurations to a bucket protected by Object Lock, specify AWSS3ObjectLockEnabled=true in the request.This adapter has a choice of formatters. See Supported writer-formatter combinations for more information.Supported writer-formatter combinationsS3 Writer sample applicationCREATE APPLICATION testS3;\n\nCREATE SOURCE PosSource USING FileReader ( \n wildcard: 'PosDataPreview.csv',\n directory: 'Samples/PosApp/appData',\n positionByEOF:false )\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false ) \nOUTPUT TO PosSource_Stream;\n\nCREATE CQ PosSource_Stream_CQ \nINSERT INTO PosSource_TransformedStream\nSELECT TO_STRING(data[1]) AS MerchantId,\n TO_DATE(data[4]) AS DateTime,\n TO_DOUBLE(data[7]) AS AuthAmount,\n TO_STRING(data[9]) AS Zip\nFROM PosSource_Stream;\n\nCREATE TARGET testS3target USING S3Writer (\n bucketname:'mybucket',\n objectname:'myfile.json',\n accesskeyid:'********************',\n secretaccesskey:'******************************',\n foldername:'myfolder')\nFORMAT USING JSONFormatter ()\nINPUT FROM PosSource_TransformedStream;\n\nEND APPLICATION tests3;Note that since the test data set is less than 10,000 events, and the application is using the default upload policy, the data will be uploaded to S3 after five minutes, or when you undeploy the application.In this section: S3 WriterS3 Writer propertiesS3 Writer sample applicationSearch resultsNo results foundWould you like to provide feedback? 
Just click here to suggest edits. \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/s3-writer.html", "title": "S3 Writer", "language": "en"}} {"page_content": "\n\nSalesforce Writer
Writes data to standard or custom Salesforce objects using either single-row or bulk REST APIs. Your Salesforce instance must support access to the API you use.
Limitations:
Changes to a parent object do not affect its child objects.
The maximum integer value supported by Salesforce is 100000000000000000 (100 quadrillion). Any insert of an integer over that value is written as 100000000000000000. Any update of an integer over that value will cause Salesforce Writer to halt with a MALFORMED_QUERY exception.
A Salesforce row error occurs when a batch of data being written succeeds but Salesforce discards one or more rows. These errors do not generate an exception, and the events discarded by Salesforce are not recoverable, though Striim stores them in the application's exception store.
Row errors are typically caused by the following conditions:
invalid data type conversions
duplicate values in unique fields when in APPENDONLY mode
a unique field has a null value
field delimiter characters are in use but unescaped due to Use Quotes being False
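Row errors and other ignorable exceptions can be bounded with the Application Error Count Threshold and Ignorable Error Codes properties described below. As a minimal sketch only, reusing the sf1 stream and object mapping from the Oracle initial load sample later on this page (the threshold property name is written here as applicationErrorCountThreshold and should be checked against the properties list):
CREATE TARGET SF_tolerant USING SalesforceWriter (
 sObjects: 'qatest.SRC_TEST,TGT__c',
 apiEndPoint: 'https://ap2.salesforce.com',
 authToken: '',
 /* ignore Salesforce INVALID_FIELD errors, but halt once more than 10 have been ignored */
 ignorableErrorCode: 'INVALID_FIELD',
 applicationErrorCountThreshold: '10'
)
INPUT FROM sf1;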
Salesforce Writer properties
API End Point (String): Endpoint of the Salesforce REST API.
Application Error Count Threshold (Integer, default 0): Application will halt if the number of ignored errors exceeds this number (see Ignorable Error Codes, below).
Auth Token (encrypted password): If autoAuthTokenRenewal is set to false, specify your Salesforce access token (see Set Up Authorization on developer.salesforce.com: the first section, "Setting Up OAuth 2.0," explains how to create a "connected app"; the second section, "Session ID Authorization," explains how to get the token using curl). If autoAuthTokenRenewal is set to true, leave blank.
Auto Auth Token Renewal (Boolean, default false): With the default value of False, when the specified Auth Token expires the application will halt and you will need to modify it to update the auth token before restarting. This setting is recommended only for development and testing, not in a production environment. When this property is False, you must specify Auth Token, Password, and Username. Set to True to renew the auth token automatically. In this case, leave Auth Token blank and set the Consumer Key, Consumer Secret, Password, Security Token, and Username properties.
Batch Policy (String, default eventCount:100000, Interval:300): When Use Bulk API is False, this property is ignored and not shown in the UI. The batch policy includes eventCount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, any remaining data in the buffer is discarded. To disable batching, set to EventCount:1,Interval:0.
Connection Retry Policy (String, default retryInterval = 30, maxRetries = 3): With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval). If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.
Consumer Key (String): If Auto Auth Token Renewal is set to true, specify the Consumer Key (see Set Up Authorization on developer.salesforce.com).
Consumer Secret (encrypted password): If Auto Auth Token Renewal is set to true, specify the Consumer Secret (see Set Up Authorization on developer.salesforce.com).
Field Delimiter (enum, default COMMA): When Use Bulk API is False, this property is ignored and not shown in the UI. Supported values: BACKQUOTE, CARET, COMMA, PIPE, SEMICOLON.
Hard Delete (Boolean, default False): When Use Bulk API is False, this property is ignored and not shown in the UI. With the default value of False, deleted objects will be moved to Salesforce's recycle bin (see Salesforce Help > Docs > Extend Salesforce with Clicks, Not Code > Manage Deleted Custom Objects). Set to True to bypass the recycle bin and permanently delete the objects.
Ignorable Error Codes (String, default blank): By default, if Salesforce returns an error, Striim halts the application. Use this property to specify errors (such as INVALID_FIELD) to ignore, separated by commas. For example: IgnorableErrorCode: 'INVALID_FIELD'. When you specify an ignorable error, also set the Application Error Count Threshold property to an appropriate number. Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).
In Memory (Boolean, default True): When Use Bulk API is False, this property is ignored and not shown in the UI. With the default value of True, batches are buffered in memory. Set to False to buffer batches on disk. Use this setting if batches consume too much memory, resulting in lower performance or out-of-memory errors.
JWT Certificate Name (String): When OAuth Authorization Flows is not JWT_BEARER, this property is ignored and does not appear in the UI. See Salesforce Help > Docs > Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.
JWT Keystore Password (encrypted password): When OAuth Authorization Flows is not JWT_BEARER, this property is ignored and does not appear in the UI. See Salesforce Help > Docs > Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.
JWT Keystore Path (String): When OAuth Authorization Flows is not JWT_BEARER, this property is ignored and does not appear in the UI. See Salesforce Help > Docs > Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration.
Mode (enum, default APPENDONLY): With the default mode of APPENDONLY, update and delete operations in the source are handled as inserts in Salesforce. The input stream's type may be user-defined or WAEvent from a Database Reader, Incremental Batch Reader, or SQL CDC source. Set to MERGE to handle update and delete operations as updates and deletes in Salesforce. The input stream must be of type WAEvent from a Database Reader, Incremental Batch Reader, or SQL CDC source.
The source events must contain at least one field (such as a primary key) that uniquely identifies them, and that field must be mapped to an External ID field in the target object using ColumnMap in the Tables property. If an External ID matches an object in the target, it will be updated. If an External ID field is not present in the Salesforce object, Salesforce Writer will halt.OAuth Authorization FlowsenumPASSWORDWith the default value of PASSWORD, Salesforce Writer will authorize using OAuth 2.0 username and password (see Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 Username-Password Flow for Special Scenarios). In this case, you must specify values for the Consumer Key, Consumer Secret, Password, Security Token, and Username properties.Set to JWT_BEARER to authorize using OAuth 2.0 JWT bearer tokens instead (see Salesforce Help> Docs> Identify Your Users and Manage Access > OAuth 2.0 JWT Bearer Flow for Server-to-Server Integration). In this case, you must specify the Consumer Key, JWT Certificate Name, JWT Keystore Password, JWT Keystore Path, and Username properties.ObjectsStringSee SObjects.Parallel ThreadsIntegerWhen Use Bulk API is False, this property is ignored and not shown in the UI.See Creating multiple writer instances.Passwordencrypted passwordWhen Auto Auth Token Renewal is set to true, specify the password for the specified\u00a0Username (see Encrypted passwords).Security Tokenencrypted passwordWhen Auto Auth Token Renewal is set to true, specify the security token for the specified username (see Reset Your Security Token on help.salesforce.com).SObjectsStringIn the Flow Designer this property is shown as Objects.The name of an object or objects that Striim will write to. Objects must exist at the time of application start. Multiple objects can be specified when the input stream is replicating from one database to another. Object names can use the wildcard % and are case insensitive.Changes to a parent object do not affect its child objects.If the Salesforce user specified in Username is not an admin, it should have the Read, Create, Edit, Delete, View All, and Modify All permissions.Use Bulk APIBooleanTrueWith the default of True, the adapter uses Bulk API 2.0 calls. Incoming records are batched using the specified Batch Policy. We recommend this API for all purposes, particularly:when you have more than 1000 events per secondwhen batching is desirablewhen you want Hard Delete or Parallel ThreadsA single batch may include up to 150MB of data. Each upload uses the following API calls:/services/data/v/jobs/ingest creates a job./services/data/v/jobs/ingest//batches uploads job data./services/data/v/jobs/ingest// closes a job and submits the data for processing.You may check the output of the /services/data/v/jobs/ingest//successfulResults/ and /services/data/v/jobs/ingest//failedResults/ API calls to determine the status of a given job.When using the Bulk API, when the Striim application is stopped, any batches already sent to Salesforce will continue processing until their jobs are complete.When set to False, the adapter uses the Force.com REST API and a separate synchronous API call handles each record. 
We recommend this API only during development, for row-by-row troubleshooting and debugging.Use QuotesBooleanFalseWhen Use Bulk API is False, this property is ignored and not shown in the UI.With the default value of False, the input must contain no special characers.Set to True to specify that this data contains special characters that must be escaped with double quotes. See Introduction to Bulk API 2.0 and Bulk API / Bulk API 2.0 / Bulk API 2.0 Ingest / Sample CSV Files for details on how the source data must be escaped.UsernameStringWhen autoAuthTokenRenewal is set to true, specify an appropriate user name.Salesforce Writer sample applicationsThe following section lists sample TQL code to accomplish several common tasks that involve the Salesforce writer.Oracle Initial load to Salesforce using auth token onlyCREATE SOURCE ORA_SRC1 USING DatabaseReader (\n Tables: 'qatest.SRC_TEST',\n QuiesceOnILCompletion: true,\n Username: 'qatest',\n ConnectionURL: 'jdbc:oracle:thin:@localhost:1521:XE',\n Password: ''\n )\nOUTPUT TO sf1;\n\nCREATE TARGET Salesforce_write1 USING SalesforceWriter\n( \u00a0\n sObjects: 'qatest.SRC_TEST,TGT__c',\n apiEndPoint: 'https://ap2.salesforce.com',\n authToken: ''\n )\nINPUT FROM sf1;\nOracle CDC to Salesforce using auth token onlyCREATE SOURCE oracle_cdc_src2 USING OracleReader (\n ConnectionURL: 'jdbc:oracle:thin:@localhost:1521:XE',\n Password: '',\n Username: 'qatest',\n Tables: 'qatest.CDC1'\n)\nOUTPUT TO sf_cdc2;\n\nCREATE TARGET SF_target2 USING SalesforceWriter (\n sObjects: 'qatest.CDC1,CDC_TARGET1__c COLUMNMAP(externalID__c=Id)',\n apiEndPoint: 'https://ap2.salesforce.com',\n authToken: '',\n Mode: 'MERGE'\n)\nINPUT FROM sf_cdc2;\nUsing the Force.com REST APICREATE TARGET SF_target2 USING SalesforceWriter (\n sObjects: 'qatest.CDC1,CDC_TARGET1__c COLUMNMAP(externalID__c=Id)',\n apiEndPoint: 'https://ap2.salesforce.com',\n authToken: '',\n Mode: 'MERGE',\n useBulkApi: 'false'\n)\nINPUT FROM sf_cdc2;\nUsing Auth Token renewalCREATE TARGET SF_target3 USING SalesforceWriter (\n autoAuthTokenRenewal: 'true', \u00a0\n Username: '',\n consumerSecret: '',\n sObjects: 'qatest.CDC1,CDC_TARGET1__c COLUMNMAP(externalID__c=Id)',\n Password: '',\n consumerKey: '',\n securityToken: '',\n apiEndPoint: 'https://ap2.salesforce.com'\n)\nINPUT FROM sf_cdc3;\nKafka to SalesforceCREATE STREAM KafkaDSVStream OF WAEvent;\nCREATE SOURCE KafkaSource USING KafkaReader VERSION '0.11.0' (\n brokerAddress:'localhost:9092',\n Topic:'salesforceTest7',\n startOffset:0\n)\nPARSE USING DSVParser ()\nOUTPUT TO KafkaDSVStream;\n\nCREATE TYPE AccessLogType (\n merchantName String,\n area string\n);\nCREATE STREAM TypedAccessLogStream OF AccessLogType;\n\nCREATE CQ AccesslogCQ\nINSERT INTO TypedAccessLogStream\nSELECT data[0],\ndata[1]\nFROM KafkaDSVStream;\n\nCREATE TARGET SF_target2 USING SalesforceWriter (\n sObjects: 'kafkatest__c',\n apiEndPoint: 'https://ap2.salesforce.com',\n authToken: '',\n batchpolicy: 'eventcount:4,interval:10s',\n InMemory: false\n)\nINPUT FROM TypedAccessLogStream;\nSalesforce Writer data type support and correspondenceThe following apply when the input stream is of a user-defined type.Striim typeSalesforce data typejava.lang.Objectbase64StringbooleanBytebyteorg.joda.time.LocalDatedateorg.joda.time.DateTimedateTimeDoubledoubleLongintStringstringStringtimeThe following apply when the input stream is the output of an Oracle Reader source.Oracle typeSalesforce typeADTnot supportedARRAYnot supportedBFILEnot supportedBINARY_DOUBLEDoubleBINARY_FLOATDoubleBFILEnot 
supportedBLOBStringCHARStringCLOBStringDATEDatetimeFLOATDoubleINTERVALDAYTOSECONDStringINTERVALYEARTOMONTHStringLONGfor Oracle Reader, results may be inconsistent, Oracle recommends using CLOB insteadfor OJet, StringLONG RAWnot supportedNCHARStringNCLOBStringNESTED TABLEnot supportedNUMBERDoubleNVARCHAR2StringRAWStringREFnot supportedROWIDStringTIMESTAMPDatetimeTIMESTAMP WITHLOCALTIMEZONEDatetimeTIMESTAMP WITHTIMEZONEDatetimeUDTnot supportedUROWIDnot supportedVARCHAR2StringVARRAYnot supportedXMLTYPEnot supportedIn this section: Salesforce WriterSalesforce Writer propertiesSalesforce Writer sample applicationsSalesforce Writer data type support and correspondenceSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-16\n", "metadata": {"source": "https://www.striim.com/docs/en/salesforce-writer.html", "title": "Salesforce Writer", "language": "en"}} {"page_content": "\n\nSAP HanaSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsSAP HanaPrevNextSAP HanaSee Database Writer.Database WriterIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2020-08-31\n", "metadata": {"source": "https://www.striim.com/docs/en/sap-hana.html", "title": "SAP Hana", "language": "en"}} {"page_content": "\n\nServiceNow Writer
The Striim ServiceNow Writer enables Striim to write data to ServiceNow instances.
User configuration
In order to write data to a ServiceNow instance, Striim requires a user account on the ServiceNow instance with API access and specific roles.
Creating a ServiceNow user account
Creating a user account requires credentials to an admin account on the ServiceNow instance that can use session-limited elevated privileges.
Log in to an admin account on the ServiceNow instance.
Apply the security_admin elevated role to the current session.
Create the user account according to the ServiceNow documentation.
Confirm that the user account is active, has Web access, and has the Internal Integration User role.
Assign the following roles to the user account: snc_platform_rest_api_access, admin.
Store the username and password for the account for future use.
ServiceNow Writer Properties
Connection URL (Text box): URL for the ServiceNow instance.
Username (Encrypted text): User ID for the ServiceNow account.
Password (Encrypted text): Password for the ServiceNow account.
OAuth (Radio button): Authentication mechanism.
Client ID (Encrypted text): Client ID of the ServiceNow account user for third-party access.
Client secret (Encrypted text): Client secret of the ServiceNow account user for third-party access.
Batch API (Toggle, default Selected): When selected, the adapter joins multiple requests for different tables into a single Batch API request. Otherwise, the adapter writes data to the ServiceNow instance using single-table API requests.
Mode (Drop-down: APPENDONLY or MERGE, default APPENDONLY): In APPENDONLY mode, all Update or Delete events from the source are treated as Insert operations. In MERGE mode, Update and Delete operations are supported when the target object has an External ID field that uniquely identifies a record.
Batch Policy (Text): The Striim server buffers events and sends a batch of events to the target whenever a specified event count is exceeded or a specified interval (in seconds) elapses. Event buffers are discarded when the app halts. Set EventCount to 1 and Interval to 0 to disable batching.
Tables (String): The names of the tables to write to. These tables must exist at the time the application starts. When the input stream for the target is the output of a Database Reader, Incremental Batch Reader, or a SQL CDC source, this adapter can write the stream to multiple objects. To write to multiple objects, specify the name of both source and target objects. Object names support wildcards, but not partial wildcards. When readers use three-part names, use the three-part format to specify the objects as well. SQL Server source table names require the three-part format when the source is a Database Reader or Incremental Batch Reader source.
SQL Server source table names require the two-part format when the source is a MS SQL Reader or MS Jet source. Table and column names are case-insensitive.
Example 4. Table name specification examples
source.emp,Employee_c: Matches a specific table from the source to a specific object at the target.
source.mydatabase.Emp%,%: Writes all objects starting with Emp to the target instance. All objects must exist in the target.
source1.%,%: Attempts to write all matching objects at the target instance.
source1.tab1,Tab1;source2.tab2,Tab2: Writes to multiple ServiceNow tables.
For information on how to map columns in a source table to columns in a target table, see Mapping columns.
Connection retries (Text box, default 3): Specifies the number of retry attempts after a request failure.
Connection timeout (Text box, default 60): Specifies the timeout for creating a socket connection, in seconds.
Max connections (Text box, default 40): Specifies the number of connections used for the HTTP client pool.
Parallel threads (Integer): When Use Bulk API is False, this property is ignored and not shown in the UI. See Creating multiple writer instances.
Application error count threshold (Integer, default 0): Specifies a number of errors. The application halts when the number of errors exceeds the specified value.
Ignorable error codes (String, default FORBIDDEN, NOTFOUND): A comma-separated list of error codes. The listed error codes do not increment the total error count for purposes of halting the application. The errors are logged and stored in the Exception Store as usual for errors. By default, applications ignore the FORBIDDEN and NOTFOUND error codes. A FORBIDDEN error code occurs when an attempt to insert a record in ServiceNow encounters a duplicate of that record. A NOTFOUND error code occurs when an attempt to update or delete a record is unable to find that record.
Configuring OAuth
The OAuth plugin is active by default on new and upgraded ServiceNow instances. If your ServiceNow instance requires activation or installation, consult the ServiceNow documentation. Create an OAuth API endpoint at the instance. Creating the endpoint requires the following information:
Name: A unique identifier for the application.
Client ID: Generated by the ServiceNow instance.
Client secret: Generated by the ServiceNow instance.
Refresh Token Lifespan: Specifies an interval in seconds. Refresh tokens expire after the interval elapses. By default, this value is 8,640,000 (100 days).
Access Token Lifespan: Specifies an interval in minutes. Access tokens expire after the interval elapses.
By default, this value is 30.Generating the initial access and refresh token pairUse the following command to generate the initial pair of access and refresh tokens:curl --location --request POST 'https://dev9679.service-now.com/oauth_token.do' \\\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Cookie: BIGipServerpool_dev96679=2592364554.41278.0000; JSESSIONID=DD71F5EA2CB0D8D7F8921E81F05925D8; glide_session_store=F371B3362F10111005D5837CF699B660; glide_user_route=glide.ac850c71294cc9d30599c382c534f414' \\\n--data-urlencode 'grant_type=password' \\\n--data-urlencode 'client_id=fb03f7b6f101110d70314f8a47a5a9c' \\\n--data-urlencode 'client_secret=LmSB[O$zI' \\\n--data-urlencode 'username=rest.user' \\\n--data-urlencode 'password=Test1234'This command can also be used to renew the refresh token.Using the refresh token to renew the access token.Use the following command to renew the access token:curl --location --request POST 'https://{instance ID}.service-now.com/oauth_token.do' \\\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--header 'Cookie: {cookie values}' \\\n--data-urlencode 'grant_type=refresh_token' \\\n--data-urlencode 'client_id={client ID}' \\\n--data-urlencode 'client_secret={client secret}' \\\n--data-urlencode 'refresh_token={refresh token value}'Managing data in existing tablesUse a POST request to add data to an existing table on the ServiceNow instance. Use a PUT request to update data. Use a DELETE request to delete data. The URI for a given table is of the form {instance-id}.service-now.com/api/now/table/{table name}. Use the following command to insert data:curl --location --request POST 'https://{instance ID}.service-now.com/api/now/table/{table name}/{unique ID}' \\\n--header 'Content-Type: application/json' \\\n--header 'Accept: application/json' \\\n--header 'Authorization: Bearer {auth token}' \\\n--header 'Cookie: {cookie values}' \\\n--data-raw '{'\\''short_description'\\'':'\\''Unable to connect to office wifi 5'\\'','\\''assignment_group'\\'':'\\''287ebd7da9fe198100f92cc8d1d2154e'\\'','\\''urgency'\\'':'\\''2'\\'','\\''impact'\\'':'\\''2'\\''}'For data update or delete operations, add a unique ID to the URI after the table.curl --location --request PUT 'https://{instance ID}.service-now.com/api/now/table/{table name}/{unique ID}' \\\n--header 'Content-Type: application/json' \\\n--header 'Accept: application/json' \\\n--header 'Authorization: Bearer {auth token}' \\\n--header 'Cookie: {cookie values}' \\\n--data-raw '{'\\''short_description'\\'':'\\''Unable to connect to office wifi 8'\\'','\\''assignment_group'\\'':'\\''287ebd7da9fe198100f92cc8d1d2154e'\\'','\\''urgency'\\'':'\\''2'\\'','\\''impact'\\'':'\\''2'\\''}'RecoveryServiceNow supports hard deletes, which cannot be recovered, and soft deletes, which are recoverable. Hard deletes can be audited. By default, hard delete operations for records beginning with the sys prefix are not held for audit. Consult ServiceNow documentation for instructions on auditing hard deletes.Soft deletes are stored in the sys_audit_delete table for up to seven days. 
The ServiceNow documentation has instructions on disabling audits of soft deletes and on recovering soft deleted information.TQL ExamplesThis sample TQL writes the output of a DBReader into ServiceNow.STOP APPLICATION admin.MySQL2SNWriter;\nUNDEPLOY APPLICATION admin.MySQL2SNWriter;\nDROP APPLICATION admin.MySQL2SNWriter CASCADE;\n\nCREATE OR REPLACE APPLICATION MySQL2SNWriter;\n\nCREATE OR REPLACE SOURCE MySQL2SNWriter_src USING Global.DatabaseReader ( \n DatabaseProviderType: 'Default', \n FetchSize: 100, \n QuiesceOnILCompletion: true, \n Tables: 'waction.authors', \n adapterName: 'DatabaseReader', \n Password: 'w@ct10n', \n ConnectionURL: 'jdbc:mysql://localhost:3306', \n Username: 'root' ) \n OUTPUT TO MySQL2SNWriter_OutputStream1;\n\n CREATE OR REPLACE TARGET servicenowwriter USING Global.ServiceNowWriter ( \n ConnectionTimeOut: '60', \n ConnectionRetries: '3', \n\n ConnectionUrl: 'https://dev849543232.service-now.com',\n UserName: 'snr',\n Password: '^Pre&$EMO%6O.e_{96h+$R?rJd,=[4Vt=K)Szh?6gou-z#GjBw[u8x', \n ClientID: 'ce4fd5af894a11103d2c5c3a8fe075e1', \n ClientSecret: '6Wa-cv`I7x', \n \n BatchAPI: 'true', \n ApplicationErrorCountThreshold: '20', \n \n MaxConnections: '20', \n Tables: 'waction.authors,u_authors ColumnMap(u_birthdate=birthdate,u_email=email,u_first_name=first_name,u_id=id,u_last_name=last_name)', \n ParallelThreads: '20', \n Mode: 'MERGE', \n \n adapterName: 'ServiceNowWriter', \n BatchPolicy: 'eventCount:1000, Interval:30' ) \nINPUT FROM MySQL2SNWriter_OutputStream1;\n\n\nEND APPLICATION MySQL2SNWriter;This sample TQL writes the contents of a file to ServiceNow:CREATE OR REPLACE APPLICATION KW RECOVERY 1 SECOND INTERVAL;\n\nCREATE TYPE AccessLogType1 (\n merchantName java.lang.String,\n merchantId java.lang.String);\n\nCREATE OR REPLACE SOURCE CSVSource USING Global.FileReader ( \n adapterName: 'FileReader', \n rolloverstyle: 'Default', \n positionByEOF: false, \n WildCard: 'posdata.csv', \n blocksize: 64, \n skipbom: true, \n includesubdirectories: false, \n directory: '/Users/vishwanath.shindhe/Documents/Project/product/Samples/Customer/PosApp/appData/' ) \nPARSE USING Global.DSVParser ( \n trimwhitespace: false, \n linenumber: '-1', \n columndelimiter: ',', \n columndelimittill: '-1', \n trimquote: true, \n ignoreemptycolumn: false, \n separator: ':', \n parserName: 'DSVParser', \n quoteset: '\\\"', \n handler: 'com.webaction.proc.DSVParser_1_0', \n charset: 'UTF-8', \n ignoremultiplerecordbegin: 'true', \n ignorerowdelimiterinquote: false, \n blockascompleterecord: false, \n rowdelimiter: '\\n', \n nocolumndelimiter: false, \n headerlineno: 0, \n header: true ) \nOUTPUT TO FileStream;\n\nCREATE STREAM TypedAccessLogStream1 OF AccessLogType1 PARTITION BY merchantId;\n\nCREATE OR REPLACE TARGET snow USING Global.ServiceNowWriter ( \n ClientSecret: 'RmmXEqB8GI2xGl5IfVEdiw==', \n BatchPolicy: 'eventCount:10000, Interval:60', \n ClientID_encrypted: 'true', \n UserName: 'snr', \n Password_encrypted: 'true', \n Tables: 'u_merchant ColumnMap(u_businessName=merchantName,u_merchID=merchantId)', \n ClientSecret_encrypted: 'true', \n Mode: 'APPENDONLY', \n ConnectionUrl: 'https://dev84954.service-now.com', \n MaxConnections: '20', \n ClientID: '3bKQkHNl8EbV6xdRLPdMCK7gkLrzmWa+Bv8ZNJ2rIy/AsM+2Gvk3dCgKfzF/QqSL', \n Password: 'eO29sCEwzZmYPDFfOs+6JWUBYa6QGDaRLtWvm3FBP+d06UkuCjMnTQqYcTjYAo7K86p16KoJ5+LIZayJteb8QnjlbARe8rO8X5BgQrCzsX8f7w1c9gyPF4Yu/VPqTlV+/IfBA0MzPfmu7Uw9S9H4XQ==', \n ConnectionRetries: '3', \n ConnectionTimeOut: '60', \n ApplicationErrorCountThreshold: '0', \n 
adapterName: 'ServiceNowWriter', \n BatchAPI: 'false' ) \nINPUT FROM TypedAccessLogStream1;\n\nCREATE CQ AccesslogCQ1 \nINSERT INTO TypedAccessLogStream1 \nSELECT data[0],data[1]\nFROM FileStream;\n\nCREATE TARGET DsvWriter USING KafkaWriter VERSION '0.11.0'( \n brokerAddress: 'localhost:9092', \n PartitionKey: 'merchantId', \n KafkaConfig: 'retries=3;retry.backoff.ms=500', \n Mode: 'Sync', \n Topic: 'dsve1ptest' ) \nFORMAT USING DSVFormatter ( \n ) \nINPUT FROM TypedAccessLogStream1;\n\nCREATE TARGET sys USING Global.SysOut ( \n name: 'ss' ) \nINPUT FROM TypedAccessLogStream1;\n\nEND APPLICATION KW;In this section: ServiceNow WriterUser configurationCreating a ServiceNow user accountServiceNow Writer PropertiesConfiguring OAuthGenerating the initial access and refresh token pairUsing the refresh token to renew the access token.Managing data in existing tablesRecoveryTQL ExamplesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-24\n", "metadata": {"source": "https://www.striim.com/docs/en/servicenow-writer.html", "title": "ServiceNow Writer", "language": "en"}} {"page_content": "\n\nSnowflake WriterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsSnowflake WriterPrevNextSnowflake WriterWrites to one or more existing tables in Snowflake. Events are staged to local storage, AWS S3, or Azure Storage, then written to Snowflake as per the Upload Policy setting. Striim connects to Snowflake over JDBC with SSL enabled. Files are uploaded to Snowflake's staging area using Snowflake's PUT command and are encrypted using 128-bit keys.If this reader will be deployed to a Forwarding Agent, install the driver as described in Install the Snowflake JDBC driver.To evaluate Striim with Snowflake, see Getting your free trial of Striim for Snowflake.Snowflake Writer log levelsSnowflake Writer logs use the log4j tool. Event logs use the log levels specified by the properties configured for log4j. Log levels for Snowflake Writer cannot change at runtime and Snowflake Writer ignores the Tungsten console command set loglevel during runtime.To change log levels for Snowflake Writer, edit the properties for log4j and restart the server.Snowflake Writer propertiespropertytypedefault valuenotesAppend OnlyBooleanFalseWith the default value of False, updates and deletes in the source are handled as updates and deletes in the target.Set to True to handle updates and deletes as inserts in the target. With this setting:Updates and deletes from DatabaseReader, IncrementalBatchReader, and SQL CDC sources are handled as inserts in the target.Primary key updates result in two records in the target, one with the previous value and one with the new value. 
If the Tables setting has a ColumnMap that includes @METADATA(OperationName), the operation name for the first event will be DELETE and for the second INSERT.Authentication TypeDrop-downPasswordSelects the type of user authentication. Select Password to use username/password pairs for authentication. Select OAuth to use OAuth for authentication. Select Key-pair to use private/public key pairs for authentication. Authenticating with Key-pair removes the requirement to pass the Private key and User role properties separately when using streaming uploads. Key-pair authentication supports but does not require encrypted private keys.CDDL ActionStringProcessSee Handling schema evolution.Client ConfigurationStringIf using a proxy, specify ProxyHost=,ProxyPort=.Client IDStringThis property is required when OAuth authentication is selected.Client SecretpasswordThis property is required when OAuth authentication is selected.Column DelimiterString| (UTF-8 007C)The character(s) used to delimit fields in the delimited text files in which the adapter accumulates batched data. If the data will contain the | character, change the default value to a sequence of characters that will not appear in the data.Connection URLStringThe JDBC driver connection string for your Snowflake account. The syntax is jdbc:snowflake://.snowflakecomputing.com?db=. The account identifier is part of the URL you use to log in to Snowflake: for example, if your login URL were https://ef12345.west-us-2.azure.snowflakecomputing.com/console/login, the account identifier would be ef12345.west-us-2.azure. (For more information, see Docs > Managing Your Snowflake Account > Account Identifiers.) The JDBC connection uses SSL.External Stage TypeStringlocalWith the default value of local, stages to a Snowflake internal named stage.To stage to Azure Storage, set to AzureBlob and set the Azure Storage properties as described below.To stage to S3, set to S3 and set the S3 properties as described below.File Format Optionsnull_if = \"\"Do not change unless instructed to by Striim support.Ignorable Exception CodeStringSet to TABLE_NOT_FOUND to prevent the application from terminating when Striim tries to write to a table that does not exist in the target. See Handling \"table not found\" errors for more information.Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Null MarkerStringOptionally, specify a string inserted into fields in the stage files to indicate that a field has a null value. These are converted back to nulls in the target tables. If any field might contain the string NULL, change this to a sequence of characters that will not appear in the data.When you set a value for Null Marker, set the same value for File Format Options. For example, if Null Marker is xnullx, File Format Options must be null_if=\"xnullx\".Optimized MergeBooleanfalseSet to True only when Mode is MERGE and the target's input stream is the output of an HP NonStop reader, MySQL Reader, or Oracle Reader source and the source events will include partial records. For example, with Oracle Reader, when supplemental logging has not been enabled for all columns, partial records are sent for updates. When the source events will always include full records, leave this set to false.Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Passwordencrypted passwordThe password for the specified user. 
See\u00a0Encrypted passwords.Private KeypasswordWhen Streaming Upload is True, specify the private key with which to authenticate the user. The key may be stored in a vault. This property is required when using public/private key pair authentication. This property supports, but does not require, encrypted private keys.Private Key Passphraseencrypted passwordThis property is required when using public/private key pair authentication with an encrypted private key.Refresh TokenpasswordThis property is required when OAuth authentication is selected.Streaming ConfigurationStringMaxParallelRequests=5, MaxRequestSizeInMB=5, MaxRecordsPerRequest=10000When Streaming Upload is False, this setting is ignored.MaxParallelRequests:When Append Only is False, specifies the number of streaming requests (threads) that will be executed in parallel.When Append Only is True and you are not doing an initial load, set to 1 to ensure records are not written out of order.MaxRecordsPerRequest: the maximum number of records per streaming requestMaxRequestSizeInMB: size in MB which denotes the maximum size of each streaming request with maximum value of 10 MB.Streaming UploadBooleanFalseWith the default value of False, Snowflake Writer will use the Snowflake JDBC driver.Set to True to use the Snowpipe Streaming API.When set to True, the adapter uses public/private key authentication, ignoring other settings that affect authentication type. When set to True,specify the Private Key and User Role properties, and, optionally, adjust the Streaming Configuration property as appropriate.TablesStringThe name(s) of the table(s) to write to. The table(s) must exist in the DBMS and the user specified in Username must have insert permission.Specify Snowflake target table names in uppercase as .
. The database is specified in the connection URL.When the target's input stream is a user-defined event, specify a single table.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet ((database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:source.emp,MYSCHEMA.EMP\nsource.%,MYSCHEMA.%See\u00a0Mapping columns for additional options.Upload PolicyStringeventcount:10000, interval:5mThe upload policy may include eventcount, interval, and/or filesize (see Setting output names and rollover / upload policies for syntax). Cached data is written to the storage account every time any of the specified values is exceeded. With the default value, data will be written every five minutes or sooner if the cache contains 10,000 events. When the app is undeployed, all remaining data is written to the storage account.UsernameStringSpecify the username you use to log in to Snowflake. Alternatively, specify the name of a Snowflake user with SELECT, INSERT, UPDATE, and DELETE privileges on the tables to be written to and the CREATE TABLE privileges on the database specified in the connection URL.User RoleStringWhen Streaming Upload is True, specify the role to use for the session (see Docs \u00bb Managing Security in Snowflake \u00bb Administration & Authorization \u00bb Access Control in Snowflake \u00bb Overview of Access Control).Authentication mechanismsThis adapter supports the following authentication mechanisms:Username and passwordPublic/private key pairOAuthUsername and password authenticationSet the values of the username and password properties as normal.Public/private key pair authenticationThis procedure generates the public/private key pair. For details, see the Snowflake documentation on configuring key pair authentication.From a terminal, execute the following command.openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out keyname.p8 -nocryptThis command generates an unencrypted private key. Remove the -nocrypt option to generate an encrypted private key. Replace keyname with a file name for the private key.From a terminal, execute the following command.\u00a0openssl rsa -in keyname.p8 -pubout -out pubkeyname.pubReplace keyname with the name chosen in the previous step. 
Replace pubkeyname with a file name for the public key.In the console, execute the following command to assign the public key to the user account.ALTER USER any_snowflake_user SET RSA_PUBLIC_KEY='code string';Replace code string with the public key, not including the start and end key delimiters.Example\u00a05.\u00a0Example: Snowflake Writer with an encrypted private keyCREATE TARGET SfTgt USING SnowflakeWriter\n(\n\u00a0\u00a0ConnectionURL:'jdbc:snowflake://striim.snowflakecomputing.com/?db=DEMO_DB',\n\u00a0\u00a0username:'infra_1780_oauth_bearer_encrypted',\n\u00a0\u00a0appendOnly:'true',\n\u00a0\u00a0IgnorableExceptionCode:'TABLE_NOT_FOUND',\n\u00a0\u00a0Tables:'QATEST.oracleRawSRC,QATEST1679489873.SNOWFLAKERAWTGT',\n\u00a0\u00a0uploadpolicy:'eventcount:1,interval:10s',\n privateKey:'keydata',\n\u00a0\u00a0streamingUpload:'TRUE',\n\u00a0\u00a0userRole:'SYSADMIN',\n\u00a0\u00a0authenticationType:'KeyPair',\n\u00a0\u00a0privateKeyPassphrase:'striim'\n)\n\u00a0INPUT FROM OracleInitStream;Example\u00a06.\u00a0Example: Snowflake Writer with an unencrypted private keyCREATE OR REPLACE TARGET sf USING Global.SnowflakeWriter (\n\u00a0\u00a0\u00a0userRole: 'sysadmin',\n\u00a0\u00a0\u00a0connectionUrl: 'jdbc:snowflake://striim.snowflakecomputing.com/?db=SAMPLEDB&schema=SANJAYPRATAP',\n\u00a0\u00a0\u00a0streamingUpload: 'true',\n\u00a0\u00a0\u00a0tables: 'public.sample_pk,SAMPLEDB.SANJAYPRATAP.SAMPLE_TB',\n\u00a0\u00a0\u00a0CDDLAction: 'Process',\n\u00a0\u00a0\u00a0optimizedMerge: 'false',\n\u00a0\u00a0\u00a0columnDelimiter: '|',\n\u00a0\u00a0\u00a0privateKey_encrypted: 'true',\n\u00a0\u00a0\u00a0appendOnly: 'false',\n\u00a0\u00a0\u00a0authenticationType: 'KeyPair',\n\u00a0\u00a0\u00a0username: 'rahul_mishra',\n\u00a0\u00a0\u00a0uploadPolicy: 'eventcount:10000,interval:5m',\n\u00a0\u00a0\u00a0privateKey: 'keydata',\n\u00a0\u00a0\u00a0externalStageType: 'Local',\n\u00a0\u00a0\u00a0adapterName: 'SnowflakeWriter',\n\u00a0\u00a0\u00a0fileFormatOptions: 'null_if = \\\"\\\"' \n)\n\u00a0INPUT FROM sysout;OAuth authenticationSnowflake enables OAuth integration with Striim through the Security Integration Snowflake object.The following procedure creates the Security Integration object.Log in to the Snowflake Web Interface with a user account with the privilege to create the Security Integration.Create the Security Integration object.create or replace security integration DEMO_OAUTH\n\u00a0\u00a0\u00a0\u00a0\t\ttype=oauth\n\u00a0\u00a0\u00a0\u00a0\t\tenabled=true\n\u00a0\u00a0\u00a0\u00a0\t\toauth_client=CUSTOM\n\u00a0\u00a0\u00a0\u00a0\t\toauth_client_type='CONFIDENTIAL'\n\u00a0\u00a0\u00a0\u00a0\t\toauth_redirect_uri='https://localhost.com:7734/striim-callback'\n\u00a0\u00a0\u00a0\u00a0\t\toauth_issue_refresh_tokens = true\n oauth_refresh_token_validity = 7776000\n oauth_allow_non_tls_redirect_uri = true;\nNoteWhen you are using Custom Role, add the following line to grant the Usage on Integration role:grant usage on integration DEMO_OAUTH to role api_admin;In the Snowflake console, issue the following command:desc integration DEMO_OAUTH;Note the following values for later use.Postman nameValue nameClient IDOAUTH_CLIENT_IDAuthorization URLOAUTH_AUTHORIZATION_ENDPOINTToken URLOAUTH_TOKEN_ENDPOINTRefresh Token Expires InOAUTH_REFRESH_TOKEN_VALIDITYIn the Snowflake console, issue the following command:select system$show_oauth_client_secrets('DEMO_OAUTH');The first value is the Client Secret. 
Note the Client Secret for future use.In Postman, create a new token using the noted values.Sign in to Snowflake.After authenticating, Snowflake sends an authentiction access and refresh token.Copy the refresh token.The following procedure uses curl and the Web browser to fetch the refresh token.In the Snowflake Console, issue the following command:desc integration DEMO_OAUTH;Note the Client ID.URL encode the Client ID.Set the encoded Client ID as the value of the ENCODED_OAUTH_CLIENT_ID property.Construct the endpoint call.https:///oauth/authorize?\n response_type=code&\n client_id=&\n redirect_uri=https%3A%2F%2Flocalhost%3A7734%2Fstriim-callbackReplace with your account URL in the format myorg-account_xyz.snowflakecomputing.com. Replace with the URL encoded Client ID. Optionally, add a scope to the URL with &scope=. URL encode the scope definition. The call uses the default scope for the user when no scope is provided.Open the endpoint URL in a browser.The Snowflake login screen appears.Authenticate to Snowflake. If a consent dialog box displays, click Allow.The web browser redirects to the specified redirect URI. The authorization code is the part of the URI after the code= string.Note the authorization code for future use.From a terminal shell, execute the curl command with the token endpoint call.curl -X POST 'https:///oauth/token-request' \\\n\u00a0-H 'Authorization: Basic '\\\n\u00a0-H 'Content-Type: application/x-www-form-urlencoded' \\\n\u00a0-d 'grant_type=authorization_code&code=&redirect_uri=https%3A%2F%2Flocalhost%3A7734%2Fstriim-callback'Replace with your account URL. Replace with a Base64 encoding of the Client ID and Client Secret separated by a colon (:) character. Replace \u00a0with the previously noted authorization code.Snowflake sends the refresh token as a response to the command.Note the refresh token for future use.Azure Storage properties for Snowflake Writerpropertytypedefault valuenotesAzure Account Access Keyencrypted passwordthe account access key from Storage accounts > > Access keysAzure Account NameStringthe name of the Azure storage account for the blob containerAzure Container NameStringthe blob container name from Storage accounts > > ContainersIf it does not exist, it will be created.S3 properties for Snowflake WriterSpecify either the access key and secret access key or an IAM role.propertytypedefault valuenotesS3 Access KeyStringan AWS access key ID (created on the AWS Security Credentials page) for a user with read and write permissions on the bucket (leave blank if using an IAM role)S3 Bucket NameStringSpecify the S3 bucket to be used for staging. If it does not exist, it will be created.S3 IAM RoleStringan AWS IAM role with\u00a0read and write permissions on the bucket (leave blank if using an access key)S3 RegionStringthe AWS region of the bucketS3 Secret Access Keyencrypted passwordthe secret access key for the access keySnowflake Writer sample applicationThe following sample application will write data from PosDataPreview.csv to Snowflake. 
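Tying the OAuth pieces above together (the security integration, Client ID, Client Secret, and refresh token), a target configured for OAuth authentication might look like the following sketch. This is only an illustration: the connection URL, credentials, and table mapping are placeholders, the input stream is the PosSource_TransformedStream defined in the sample application below, and the exact TQL spellings of the Client ID, Client Secret, and Refresh Token properties should be verified against the properties table above.
CREATE TARGET SnowflakeOAuthDemo USING SnowflakeWriter (
 ConnectionURL: 'jdbc:snowflake://ef12345.west-us-2.azure.snowflakecomputing.com?db=DEMO_DB',
 /* OAuth instead of username/password or key-pair authentication */
 authenticationType: 'OAuth',
 clientId: 'client-id-from-DEMO_OAUTH',
 clientSecret: 'client-secret-from-DEMO_OAUTH',
 refreshToken: 'refresh-token-from-oauth-token-request',
 Tables: 'MYSCHEMA.EMP',
 uploadPolicy: 'eventcount:10000, interval:5m'
)
INPUT FROM PosSource_TransformedStream;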
The target table must exist.
CREATE SOURCE PosSource USING FileReader (
 wildcard: 'PosDataPreview.csv',
 directory: 'Samples/PosApp/appData',
 positionByEOF:false )
PARSE USING DSVParser (
 header:Yes,
 trimquote:false )
OUTPUT TO PosSource_Stream;

CREATE CQ PosSource_Stream_CQ
INSERT INTO PosSource_TransformedStream
SELECT TO_STRING(data[1]) AS MerchantId,
 TO_DATE(data[4]) AS DateTime,
 TO_DOUBLE(data[7]) AS AuthAmount,
 TO_STRING(data[9]) AS Zip
FROM PosSource_Stream;

CREATE TARGET SnowflakeDemo USING SnowflakeWriter (
 ConnectionURL: '...',
 ...
);
Snowflake Writer data type support and correspondence
Oracle type and corresponding Snowflake type:
NUMBER(,): NUMBER(,)
NVARCHAR2: VARCHAR
RAW: BINARY
TIMESTAMP: TIMESTAMP_NTZ
TIMESTAMP WITH LOCAL TIMEZONE: TIMESTAMP_LTZ
TIMESTAMP WITH TIMEZONE: TIMESTAMP_TZ
VARCHAR2: VARCHAR
XMLTYPE: VARCHAR
See Oracle Reader and OJet WAEvent fields for additional information including limitations for some types.
\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-08\n", "metadata": {"source": "https://www.striim.com/docs/en/snowflake-writer.html", "title": "Snowflake Writer", "language": "en"}} {"page_content": "\n\nSpanner Writer
For a hands-on tutorial, see Continuous data replication to Cloud Spanner using Striim on cloud.google.com.
Writes to one or more tables in Google Cloud Spanner.
Spanner Writer properties
Batch Policy (String, default eventCount: 1000, Interval: 60s): The batch policy includes eventcount and interval (see Setting output names and rollover / upload policies for syntax). Events are buffered locally on the Striim server and sent as a batch to the target every time either of the specified values is exceeded. When the app is stopped, all remaining events are sent to the target. To disable batching, set to EventCount:1,Interval:0. With the default setting, data will be written every 60 seconds or sooner if the buffer accumulates 1000 events.
CDDL Action (String, default Process): See Handling schema evolution.
Checkpoint Table (String, default CHKPOINT): To support recovery (see Recovering applications), a checkpoint table must be created in each target database using the following DDL:
CREATE TABLE CHKPOINT (
 ID STRING(MAX) NOT NULL,
 SOURCEPOSITION BYTES(MAX)
) PRIMARY KEY (ID);
If necessary you may use a different table name, in which case change the value of this property. All databases must use the same checkpoint table name.
Excluded Tables (String): When a wildcard is specified for Tables, you may specify here any tables you wish to exclude.
Specify the value as for Tables.Ignorable Exception CodeStringBy default, if the target DBMS returns an error, Striim terminates the application. Use this property to specify one or more error codes (see Cloud Spanner > Documentation > Reference > Code) to ignore, separated by semicolons, for example, NOT_FOUND;ALREADY_EXISTS. (You may also specify error numbers from legacy documentation.)Ignored exceptions will be written to the application's exception store (see CREATE EXCEPTIONSTORE).Instance IDStringSpecify the instance ID for the databases containing the tables to be written to. (Note: the instance ID may not be the same as the instance name.)Parallel ThreadsIntegerSee\u00a0Creating multiple writer instances.Private Service Connect EndpointStringName of the Private Service Connect endpoint created in the target VPC.This endpoint name will be used to generate the private hostname internally and will be used for all connections.See Private Service Connect support in Google cloud adapters.Project IDStringTo use a service account key other than the one associated with the Spanner instance's project, specify its project ID here. Otherwise leave blank.Service Account KeyStringThe path (from root or the Striim program directory) and file name to the .json credentials file downloaded from Google (see Service Accounts).\u00a0This file must be copied to the same location on each Striim server that will run this adapter, or to a network location accessible by all servers.\u00a0The associated service account must have the Cloud Spanner Database User or higher role for the instance (see Cloud Spanner Roles).To use a service account key other than the one associated with the Spanner instance's project, specify a value for the Project ID property.TablesStringThe name(s) of the table(s) to write to, in\u00a0the format\u00a0.
. The table(s) must exist when the application is started.The target table name(s) specified here must match the case shown in the Spanner UI. See Naming conventions.When the target's input stream is a user-defined event, specify a single table.When the input stream of the target is the output of a DatabaseReader, IncrementalBatchReader, or SQL CDC source (that is, when replicating data from one database to another), it can write to multiple tables. In this case, specify the names of both the source and target tables. You may use the % wildcard only for tables, not for schemas or databases. If the reader uses three-part names, you must use them here as well. Note that Oracle CDB/PDB source table names must be specified in two parts when the source is Database Reader or Incremental Batch reader (schema.%,schema.%) but in three parts when the source is Oracle Reader or OJet ((database.schema.%,schema.%). Note that SQL Server source table names must be specified in three parts when the source is Database Reader or Incremental Batch Reader (database.schema.%,schema.%) but in two parts when the source is MS SQL Reader or MS Jet (schema.%,schema.%). Examples:source.emp,target.emp\nsource.db1,target.db1;source.db2,target.db2\nsource.%,target.%\nsource.mydatabase.emp%,target.mydb.%\nsource1.%,target1.%;source2.%,target2.%\nWhen a target table has a commit timestamp column, by default its value will be Spanner's current system time when the transaction is committed. To use a different value, use ColumnMap. For example, to use the time the source transaction was committed in Oracle: ORADB1.%,spandb1.% ColumnMap (Ts @metadata(DBCommitTimestamp))See\u00a0Mapping columns for additional options.Spanner Writer sample applicationThe following sample application will copy all tables from two Oracle source schemas to tables with the same names in two Spanner databases. 
All source and target tables must exist before the application is started.CREATE SOURCE OracleSource1 USING OracleReader (\n Username:'myname',\n Password:'******',\n ConnectionURL: 'localhost:1521:XE\u2019,\n Tables:'MYDB1.%;MYDB2.%\u2019\n) \nOUTPUT TO sourceStream;\n\nCREATE TARGET SpannerWriterTest USING SpannerWriter(\n Tables:'ORADB1.%,spandb1.%;ORADB2.%,spandb2.%',\n ServiceAccountKey: '/.json',\n instanceId: 'myinstance'\n)\nINPUT FROM sourceStream;Spanner Writer data type support and correspondenceTQL typeSpanner typenotesBooleanBOOLbyte[]BYTESDateTimeDATE, TIMESTAMPDouble, FloatFLOAT64, NUMERICInteger, LongINT64StringSTRINGmaximum permitted length in Spanner is\u00a02,621,440If a String represents a timestamp value, use one of the TO_ZONEDDATETIME functions (see Date functions) to convert it to java.time.ZonedDateTime.When the input of a SpannerWriter target is the output of an Oracle source (DatabaseReader, IncremenatlBatchReader, or MySQLReader):Oracle typeSpanner typenotesBINARY_DOUBLEFLOAT64, NUMERICBINARYFLOAT64,\u00a0NUMERICBLOBBYTESCHARSTRINGmaximum permitted length in Spanner is\u00a02,621,440CLOBDATEDATEFLOATFLOAT64,\u00a0NUMERICLONGSTRINGmaximum permitted length in Spanner is\u00a02,621,440NCHARSTRINGmaximum permitted length in Spanner is\u00a02,621,440NCLOBSTRINGmaximum permitted length in Spanner is\u00a02,621,440NVARCHAR2STRINGmaximum permitted length in Spanner is\u00a02,621,440NUMBERINT64NUMBER(precision,scale)FLOAT64,\u00a0NUMERICRAWBYTESROWIDSTRINGmaximum permitted length in Spanner is\u00a02,621,440TIMESTAMPTIMESTAMPTIMESTAMPTIMESTAMP WITH LOCAL TIMEZONETIMESTAMPTIMESTAMPTIMESTAMP WITH TIMEZONETIMESTAMPTIMESTAMPUROWIDSTRINGmaximum permitted length in Spanner is\u00a02,621,440VARCHAR2STRINGmaximum permitted length in Spanner is\u00a02,621,440XMLTYPEWhen the input of a SpannerWriter target is the output of a SQL Server source (DatabaseReader, IncremenatlBatchReader, or MySQLReader):SQL Server typeSpanner typebigintINT64binarybitbitcharSTRINGmaximum permitted length in Spanner is\u00a02,621,440datedatetimedatetime2datetimeoffsetdecimalFLOAT64, NUMERICfloatFLOAT64, NUMERICimageintINT64moneyncharntextnumericnvarcharnvarchar(max)realsmalldatetimesmallintINT64smallmoneytextSTRINGmaximum permitted length in Spanner is\u00a02,621,440timetinyintINT64uniqueidentifiervarbinaryvarcharSTRINGmaximum permitted length in Spanner is\u00a02,621,440xmlIn this section: Spanner WriterSpanner Writer propertiesSpanner Writer sample applicationSpanner Writer data type support and correspondenceSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
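The Batch Policy and Ignorable Exception Code properties are described above only in prose. As a minimal sketch, assuming illustrative table names, an illustrative service account key path, and TQL property names matching the property table above, they might be set on a Spanner Writer target like this:

CREATE TARGET SpannerBatchedDemo USING SpannerWriter (
  Tables: 'MYDB.ORDERS,mydb.ORDERS',
  ServiceAccountKey: '/striim/keys/spanner-key.json',
  instanceId: 'myinstance',
  BatchPolicy: 'eventCount: 5000, Interval: 30s',
  IgnorableExceptionCode: 'NOT_FOUND;ALREADY_EXISTS'
)
INPUT FROM sourceStream;

With these values, buffered events are flushed every 5,000 events or every 30 seconds, whichever comes first, and NOT_FOUND and ALREADY_EXISTS errors are written to the application's exception store rather than halting the application.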
Last modified: 2023-06-05\n", "metadata": {"source": "https://www.striim.com/docs/en/spanner-writer.html", "title": "Spanner Writer", "language": "en"}} {"page_content": "\n\nSQL ServerSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsSQL ServerPrevNextSQL ServerSee Database Writer.Database WriterIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-02-01\n", "metadata": {"source": "https://www.striim.com/docs/en/sql-server-writers.html", "title": "SQL Server", "language": "en"}} {"page_content": "\n\nFormattersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsFormattersPrevNextFormattersWhen a writer supports multiple output file formats, the format is specified by selecting the appropriate parser. For example, a target using the FileWriter adapter can format its output as DSV, JSON, or XML depending on whether the DSV Formatter, JSONFormatter, or XMLFormatter is selected. See Supported writer-formatter combinations for more information.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2020-06-02\n", "metadata": {"source": "https://www.striim.com/docs/en/formatters.html", "title": "Formatters", "language": "en"}} {"page_content": "\n\nAvro FormatterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsFormattersAvro FormatterPrevNextAvro FormatterFormats a writer's output for use by Apache Avro and generates an Avro schema file. Avro is a schema-based serialization utility that accepts schemas as input. For more information on compatible writers, see Supported writer-formatter combinations.A common use case with AvroFormatter is to move data of type JSON from a file or Kafka source (or any other source which emits JSONNodeEvents) to Kafka topics as Avro Records. 
A record data type in Avro is a collection of multiple attributes.Avro Formatter propertiespropertytypedefault valuenotesFormat AsStringdefaultDo not change default value unless Using the Confluent or Hortonworks schema registry.Schema File NameStringA string specifying the\u00a0path and name of the Avro schema file Striim will create based on the type of the target's input stream.\u00a0(Be sure Striim has write permission for the specified directory.) If no path is specified, the file will be created in the Striim program directory. To generate the schema file, deploy the application. Then compile the schema file as directed in the Avro documentation for use in your application.Schema Registry ConfigurationStringWhen using Confluent Cloud's schema registry, specify the required authentication properties in the format basic.auth.user.info=,basic.auth.credentials.source=. Otherwise, leave blank.Schema Registry URLStringLeave blank unless\u00a0Using the Confluent or Hortonworks schema registry.Schema Registry Subject NameStringThe name of the subject against which the formatted Avro record's schema will be registered in the schema registry. The name and namespace of the Avro records will be the same as the subject name.This is a required property if you are using a message bus writer (such as Kafka, or EventHub) and if you are using the Avro schema registry to record the schema evolution.There are two values that are accepted:UseTopicName: all the Avro records will be registered under the topic name (configured in Kafka writer). If the topic has more than one type of AvroRecords in the topic, then the same subject will have multiple versions pointing to different types of records.UseDynamicValues: each type of the record will have its own subject name and clearly shows the evolution of the type. The value of the subject name will be picked from the field specified in the \u201cSchemaRegistrySubjectNameMapping\u201d property.Schema Registry Subject Name MappingStringIf the Schema Registry Subject Name property was set to \"UseDynamicValues\" then this property is mandatory. The value can be one of the following types:@metadata()You can pick the subject name from the metadata map. For example: @metadata(Directory) or @metadata(TableName).@userdata()You can pick the subject name from the userdata map. For example: @userdata(key1).A static name enclosed within quotes. For example: \"TestSubjectName\".An incoming field name if the incoming stream was a typed event. 
For example: EmpId.Avro Formatter sample applicationsThe following sample application filters and parses part of PosApp's data and writes it to a file using AvroFormatter:CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO CsvStream;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')) as hourValue,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM CsvStream;\n\nCREATE TARGET AvroFileOut USING FileWriter(\n filename:'AvroTestOutput'\n)\nFORMAT USING AvroFormatter (\n schemaFileName:'AvroTestParsed.avsc'\n)\nINPUT FROM PosDataStream;If you deploy the above application in the namespace avrons, AvroTestParsed.avsc is created with the following contents:{\"namespace\": \"PosDataStream_Type.avrons\",\n \"type\" : \"record\",\n \"name\": \"Typed_Record\",\n \"fields\": [\n{\"name\" : \"merchantId\", \"type\" : \"string\"},\n{\"name\" : \"dateTime\", \"type\" : \"string\"},\n{\"name\" : \"hourValue\", \"type\" : \"int\"},\n{\"name\" : \"amount\", \"type\" : \"double\"},\n{\"name\" : \"zip\", \"type\" : \"string\"}\n ]\n}The following application simply writes the raw data in WAEvent format:CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO CsvStream;\n\nCREATE TARGET AvroFileOut USING FileWriter(\n filename:'AvroTestOutput'\n)\nFORMAT USING AvroFormatter (\n schemaFileName:'AvroTestRaw.avsc'\n)\nINPUT FROM CsvStream;If you deploy the above application in the namespace avrons, AvroTestRaw.avsc is created with the following contents:{\"namespace\": \"WAEvent.avrons\",\n \"type\" : \"record\",\n \"name\": \"WAEvent_Record\",\n \"fields\": [\n {\"name\" : \"data\",\n \"type\" : [\"null\" , { \"type\": \"map\",\"values\":[ \"null\" , \"string\"] }]\n },\n {\"name\" : \"before\",\n \"type\" : [\"null\" , { \"type\": \"map\",\"values\":[ \"null\" , \"string\"] }]\n },\n {\"name\" : \"metadata\",\n \"type\" : { \"type\": \"map\",\"values\":\"string\" }\n }\n ]\n}See Parsing the data field of WAEvent and Using the META() function for information about this format.For additional Avro Formatter examples, see Reading from and writing to Kafka using Avro.In this section: Avro FormatterAvro Formatter propertiesAvro Formatter sample applicationsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
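The schema registry properties are easiest to see in context. The following is a sketch only: the Kafka Writer version, broker address, topic, registry URL, and input stream name are assumptions, and the Format As value required for a particular registry is covered in Using the Confluent or Hortonworks schema registry. It shows CDC output written to Kafka as Avro records, with each source table registered under its own subject:

CREATE TARGET AvroToKafka USING KafkaWriter VERSION '2.1.0' (
  brokerAddress: 'localhost:9092',
  Topic: 'cdc_avro_topic'
)
FORMAT USING AvroFormatter (
  schemaRegistryURL: 'http://localhost:8081',
  schemaRegistrySubjectName: 'UseDynamicValues',
  schemaRegistrySubjectNameMapping: '@metadata(TableName)'
)
INPUT FROM OracleCDCStream;

Because Schema Registry Subject Name is set to UseDynamicValues, the subject for each record is taken from the TableName metadata field, so each source table's schema evolves under its own subject.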
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/avro-formatter.html", "title": "Avro Formatter", "language": "en"}} {"page_content": "\n\nDSV FormatterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsFormattersDSV FormatterPrevNextDSV FormatterFormats a writer's output as delimited text.DSV Formatter propertiespropertytypedefault valuenotesCharsetStringColumn DelimiterString,HeaderBooleanFalseSet to True to add a header to the output files for FileWriter or S3Writer.When the target's input stream is of a user-defined type, the header will include the field names.When the target's input stream is the output of a DatabaseReader or CDC reader source, the header will include the source table's column names. In this case, the writer's properties must include\u00a0directory: '%@metadata(TableName)%' or, in S3,\u00a0bucketname: '%@metadata(TableName)%' or foldername: '%@metadata(TableName)%', and events for each table will be written to a separate directory or S3 bucket.When Header is True, if any of the special characters listed in Using non-default case and special characters in table identifiers are used in source column names, they will be preserved in the output.Using non-default case and special characters in table identifiersKnown issue DEV-14229: in this release, headers may be incorrect if the writer's rolloverpolicy includes an interval.\u00a0Workaround: use only eventcount and/or filesize in the rolloverpolicy.MembersStringcomma-separated list of fields to be selected from the writer's input stream; if left\u00a0 blank, selects all fieldsOne use for this property is to remove fields used only to name the output directory in the target (see\u00a0Setting output names and rollover / upload policies).Null ValueStringNULLQuote CharacterString\"Row DelimiterString\\nsee Setting rowdelimiter valuesStandardString\u00a0noneset to\u00a0RFC4180 to format output using the\u00a0RFC 4180 standard and ignore any conflicting values in other propertiesUse QuotesBooleanFalseset to True to escape values of type String using the quotecharacterDSV Formatter sample applicationFor example, this variation on the PosApp sample application writes to a file using DSVFormatter:CREATE SOURCE DSVFormatterTestSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n blocksize: 10240,\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO DSVSource_Stream;\n\nCREATE CQ CsvToPosData\nINSERT INTO DSVTransformed_Stream\nSELECT TO_STRING(data[1]),\n TO_DATEF(data[4],'yyyyMMddHHmmss'),\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')),\n TO_DOUBLE(data[7]),\n TO_STRING(data[9])\nFROM DSVSource_Stream;\n\nCREATE TARGET DSVFormatterOut using FileWriter(\n filename:'DSVFormatterOutput')\nFORMAT USING DSVFormatter ()\nINPUT FROM DSVTransformed_Stream;The first lines of DSVFormatterOutput 
are:D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu,2013-03-12T17:32:10.000-07:00,17,2.2,41363\nOFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1,2013-03-12T17:32:10.000-07:00,17,22.78,16950\nljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx,2013-03-12T17:32:10.000-07:00,17,218.57,18224If you set DSVFormatter to escape the strings, as follows:FORMAT USING DSVFormatter (\n usequotes:True,\n quotecharacter:'\"')Then the first lines of DSVFormatterOutput would be:\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",2013-03-12T17:32:10.000+01:00,17,2.2,\"41363\"\n\"OFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1\",2013-03-12T17:32:10.000+01:00,17,22.78,\"16950\"\n\"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\",2013-03-12T17:32:10.000+01:00,17,218.57,\"18224\"In this section: DSV FormatterDSV Formatter propertiesDSV Formatter sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/dsv-formatter.html", "title": "DSV Formatter", "language": "en"}} {"page_content": "\n\nJSON FormatterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsFormattersJSON FormatterPrevNextJSON FormatterFormats a writer's output as JSON.JSON Formatter propertiespropertytypedefault valuenotesCharsetStringEvents as Array of JSON ObjectsBooleanTrueWith the default value True, output is an array:[\n {field1:value1,field2:value2,.....} ,\n {field1:value1,field2:value2,.....} ,\n {field1:value1,field2:value2,.....} ]Set to False to output a collection:{field1:value1,field2:value2,.....} \n{field1:value1,field2:value2,.....} \n{field1:value1,field2:value2,.....}JSON Member DelimiterString\\nJSON Object DelimiterString\\nMembersStringcomma-separated list of fields to be selected from the writer's input stream; if left\u00a0 blank, selects all fieldsOne use for this property is to remove fields used only to name the output directory in the target (see\u00a0Setting output names and rollover / upload policies).If source column names contain any of the special characters listed in Using non-default case and special characters in table identifiers, they will be used in the corresponding field names in the JSON output.Using non-default case and special characters in table identifiersJSON Formatter sample applicationFor example, this variation on the PosApp sample application writes to a file using JSONFormatter:CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO CsvStream;\n \nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM CsvStream;\n\nCREATE TARGET JFFileOut USING FileWriter(\n filename:'JFTestOutput.json'\n)\nFORMAT USING JSONFormatter()\nINPUT FROM PosDataStream;The first lines of 
JSONFormatterOutput are:[\n {\n \"merchantId\":\"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\",\n \"dateTime\":\"2013-03-12T17:32:10.000-07:00\",\n \"amount\":2.2\n },\n {\n \"merchantId\":\"OFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1\",\n \"dateTime\":\"2013-03-12T17:32:10.000-07:00\",\n \"amount\":22.78\n },In this section: JSON FormatterJSON Formatter propertiesJSON Formatter sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/json-formatter.html", "title": "JSON Formatter", "language": "en"}} {"page_content": "\n\nParquet FormatterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsFormattersParquet FormatterPrevNextParquet FormatterFormats a writer's output for use by Apache Parquet and generates one or more schema files. See Supported writer-formatter combinations.Notes:Encryption Policy cannot be set for the associated writer.Data written using Parquet Formatter cannot be consumed until the target file is closed (rolls over).Parquet Formatter propertiespropertytypedefault valuenotesBlock SizeLong128000000Sets the parquet.block.size property in Parquet.Compression TypeStringUNCOMPRESSEDOptionally, specify the target's compression format. Supported types are GZIP, LZO, and SNAPPY.Format AsStringDefaultWith the Default setting, a single schema file is created.With an input stream of a user-defined type, the output includes the name and value of each field.With an input stream of type WAEvent (from any source), the output includes all contents of\u00a0the event excluding dataPresenceBitMap, beforePresenceBitMap, and typeUUID.The two other settings are supported only with an input stream of type WAEvent from a Database Reader, Incremental Batch Reader, or SQL CDC reader source. A dynamic directory, folder, or bucket name must be specified in the writer (see Setting output names and rollover / upload policies).A schema file with a timestamp appended to its name is created in each directory, folder, or bucket. With S3 Writer, if both the bucket name and folder name are dynamic, each combination of bucket and folder will have its own schema file. 
If there is a DDL change in the source, a new schema file is created and the output file(s) rolls over.When Format As is set to Native, the output includes all contents of\u00a0the WAEvent except for typeUUID.When Format As is set to Table, the output includes only the column names and values.See Parquet Formatter examples for sample schema files and output for each setting.MembersStringOptionally:With an input stream of type WAEvent, specify a comma-separated list of elements to include in the output.With an input stream of type WAEvent from a Database Reader, Incremental Batch Reader, or SQL CDC reader source, specify additional elements (for example, from the METADATA or USERDATA maps) to include in addition to the data array values. See Parquet Formatter examples for a sample schema file and output.Schema File NameStringThe fully qualified name of the Parquet schema file Striim will create when the application runs. When a dynamic directory is specified in the writer, Striim in some cases writes the files in the target directories and/or appends a timestamp to the file names. See the notes for Format As for more details.Parquet Formatter data type support and correspondenceThe following apply when the input stream is of a user-defined type.Striim typeParquet typeNotesByteINT32DateTimeINT32Unix epoch (number of days from 1 January 1970)DoubleDOUBLEIEEE 64-bitFloatFLOATIEEE 32-bitIntegerINT3232-bit signedLongINT6464-bit signedShortINT32StringBYTE_ARRAYUTF-8The following apply when the input stream is the output of a Database Reader or Incremental Batch Reader source.JDBC column typeParquet typeNotesTypes.BIGINTINT3232-bit signedTypes.BITINT32Types.CHARBYTE_ARRAYUTF-8Types.DATEINT32Unix epoch (number of days from 1 January 1970)Types.DECIMALBYTE_ARRAYUTF-8Types.DOUBLEDOUBLEIEEE 64-bitTypes.FLOATFLOATIEEE 32-bitTypes.INTEGERINT3232-bit signedTypes.NUMERICBYTE_ARRAYUTF-8Types.REALFLOATIEEE 32-bitTypes.SMALLINTINT32Unix epoch (number of days from 1 January 1970)Types.TIMESTAMPINT32Types.TINYINTINT32Types.VARCHARCHARBYTE_ARRAYUTF-8other typesBYTE_ARRAYUTF-8The following apply when the input stream is the output of an Oracle Reader source.Oracle typeParquet typeADTunsupportedARRAYunsupportedBFILEunsupportedBINARY_DOUBLEDOUBLEBINARY_FLOATFLOATBFILEunsupportedBLOBBYTE_ARRAYCHARBYTE_ARRAYCLOBBYTE_ARRAYDATEBYTE_ARRAYFLOATBYTE_ARRAYINTERVALDAYTOSECONDunsupportedINTERVALYEARTOMONTHunsupportedLONGunsupportedLONG RAWunsupportedNCHARBYTE_ARRAYNCLOBBYTE_ARRAYNESTED TABLEunsupportedNUMBERBYTE_ARRAYNVARCHAR2unsupportedRAWunsupportedREFunsupportedROWIDunsupportedTIMESTAMPBYTE_ARRAYTIMESTAMP WITHLOCALTIMEZONEBYTE_ARRAYTIMESTAMP WITHTIMEZONEBYTE_ARRAYUDTunsupportedUROWIDunsupportedVARCHAR2BYTE_ARRAYVARRAYsupported for primitive data types (see Oracle Reader and OJet WAEvent fields)XMLTYPEunsupportedParquet Formatter examplesThe output of Parquet Formatter varies depending on the type of the input stream and the Format As setting.Input stream of user-defined type, Format As = DefaultInput stream type:Create Type PERSON (\n\u00a0\u00a0ID Integer,\n\u00a0\u00a0City String,\n\u00a0\u00a0Code String,\n\u00a0\u00a0Name String);Schema:message Person.Person {\n\u00a0\u00a0optional int32 ID;\u00a0\u00a0\n optional binary city (UTF8);\n\u00a0\u00a0optional binary code (UTF8);\n\u00a0\u00a0optional binary name (UTF8);\n}Sample output:{\"ID\":1216,\"city\":\"South Kent\",\"code\":\"USD\",\"name\":\"COMPANY 4999992\"}Input stream of Oracle Reader WAEvent, Format As = DefaultSource table:CREATE TABLE TABLES1 (\n \"EMPNO\" 
NUMBER(4),\n \"ENAME\" VARCHAR2(10),\n \"JOB\" VARCHAR(20),\n \"HIREDATE\" TIMESTAMP(6),\n \"SAL\" NUMBER DEFAULT 0\n);Schema:message WAEvent.avro.WAEvent {\n optional group metadata (MAP) {\n repeated group map (MAP_KEY_VALUE) {\n required binary key (UTF8);\n optional binary value (UTF8);\n }\n }\n optional group data (MAP) {\n repeated group map (MAP_KEY_VALUE) {\n required binary key (UTF8);\n optional binary value (UTF8);\n }\n }\n optional group before (MAP) {\n repeated group map (MAP_KEY_VALUE) {\n required binary key (UTF8);\n optional binary value (UTF8);\n }\n }\n optional group userdata (MAP) {\n repeated group map (MAP_KEY_VALUE) {\n required binary key (UTF8);\n optional binary value (UTF8);\n }\n }\n}Sample output, insert:{\n \"metadata\": {\n \"map\": [\n {\n \"key\": \"RbaSqn\",\n \"value\": \"29\"\n },\n {\n \"key\": \"AuditSessionId\",\n \"value\": \"174320\"\n },\n {\n \"key\": \"TableSpace\",\n \"value\": \"USERS\"\n },\n {\n \"key\": \"CURRENTSCN\",\n \"value\": \"1215123\"\n },\n {\n \"key\": \"SQLRedoLength\",\n \"value\": \"152\"\n },\n {\n \"key\": \"BytesProcessed\"\n },\n {\n \"key\": \"ParentTxnID\",\n \"value\": \"1.20.571\"\n },\n {\n \"key\": \"SessionInfo\",\n \"value\": \"UNKNOWN\"\n },\n {\n \"key\": \"RecordSetID\",\n \"value\": \" 0010 \"\n },\n {\n \"key\": \"DBCommitTimestamp\",\n \"value\": \"1587561609000\"\n },\n {\n \"key\": \"COMMITSCN\",\n \"value\": \"1215124\"\n },\n {\n \"key\": \"SEQUENCE\",\n \"value\": \"1\"\n },\n {\n \"key\": \"Rollback\",\n \"value\": \"0\"\n },\n {\n \"key\": \"STARTSCN\",\n \"value\": \"1215123\"\n },\n {\n \"key\": \"SegmentName\",\n \"value\": \"TABLES1\"\n },\n {\n \"key\": \"OperationName\",\n \"value\": \"INSERT\"\n },\n {\n \"key\": \"TimeStamp\",\n \"value\": \"2020-04-22T13:20:09.000-07:00\"\n },\n {\n \"key\": \"TxnUserID\",\n \"value\": \"QATEST\"\n },\n {\n \"key\": \"RbaBlk\",\n \"value\": \"2808\"\n },\n {\n \"key\": \"SegmentType\",\n \"value\": \"TABLE\"\n },\n {\n \"key\": \"TableName\",\n \"value\": \"QATEST.TABLES1\"\n },\n {\n \"key\": \"TxnID\",\n \"value\": \"1.20.571\"\n },\n {\n \"key\": \"Serial\",\n \"value\": \"12481\"\n },\n {\n \"key\": \"ThreadID\",\n \"value\": \"1\"\n },\n {\n \"key\": \"COMMIT_TIMESTAMP\",\n \"value\": \"2020-04-22T13:20:09.000-07:00\"\n },\n {\n \"key\": \"OperationType\",\n \"value\": \"DML\"\n },\n {\n \"key\": \"ROWID\",\n \"value\": \"AAAFSWAAEAAAApcAAB\"\n },\n {\n \"key\": \"DBTimeStamp\",\n \"value\": \"1587561609000\"\n },\n {\n \"key\": \"TransactionName\",\n \"value\": \"\"\n },\n {\n \"key\": \"SCN\",\n \"value\": \"121512300000081627745086341280001\"\n },\n {\n \"key\": \"Session\",\n \"value\": \"142\"\n }\n ]\n },\n \"data\": {\n \"map\": [\n {\n \"key\": \"ENAME\",\n \"value\": \"tanuja\"\n },\n {\n \"key\": \"EMPNO\",\n \"value\": \"1\"\n },\n {\n \"key\": \"JOB\",\n \"value\": \"So\"\n },\n {\n \"key\": \"HIREDATE\",\n \"value\": \"2020-04-22T13:20:09.074-07:00\"\n },\n {\n \"key\": \"SAL\",\n \"value\": \"60000\"\n }\n ]\n }\nSample output, update:{\n \"metadata\": {\n \"map\": [\n {\n \"key\": \"RbaSqn\",\n \"value\": \"59\"\n },\n {\n \"key\": \"AuditSessionId\",\n \"value\": \"174782\"\n },\n {\n \"key\": \"TableSpace\",\n \"value\": \"USERS\"\n },\n {\n \"key\": \"CURRENTSCN\",\n \"value\": \"1397647\"\n },\n {\n \"key\": \"SQLRedoLength\",\n \"value\": \"189\"\n },\n {\n \"key\": \"BytesProcessed\"\n },\n {\n \"key\": \"ParentTxnID\",\n \"value\": \"8.10.809\"\n },\n {\n \"key\": \"SessionInfo\",\n \"value\": \"UNKNOWN\"\n },\n {\n \"key\": 
\"RecordSetID\",\n \"value\": \" 0x00003b.00013dfb.0010 \"\n },\n {\n \"key\": \"DBCommitTimestamp\",\n \"value\": \"1587636418000\"\n },\n {\n \"key\": \"COMMITSCN\",\n \"value\": \"1397648\"\n },\n {\n \"key\": \"SEQUENCE\",\n \"value\": \"1\"\n },\n {\n \"key\": \"Rollback\",\n \"value\": \"0\"\n },\n {\n \"key\": \"STARTSCN\",\n \"value\": \"1397647\"\n },\n {\n \"key\": \"SegmentName\",\n \"value\": \"TABLES4\"\n },\n {\n \"key\": \"OperationName\",\n \"value\": \"UPDATE\"\n },\n {\n \"key\": \"TimeStamp\",\n \"value\": \"2020-04-23T10:06:58.000-07:00\"\n },\n {\n \"key\": \"TxnUserID\",\n \"value\": \"QATEST\"\n },\n {\n \"key\": \"RbaBlk\",\n \"value\": \"81403\"\n },\n {\n \"key\": \"SegmentType\",\n \"value\": \"TABLE\"\n },\n {\n \"key\": \"TableName\",\n \"value\": \"QATEST.TABLES4\"\n },\n {\n \"key\": \"TxnID\",\n \"value\": \"8.10.809\"\n },\n {\n \"key\": \"Serial\",\n \"value\": \"9289\"\n },\n {\n \"key\": \"ThreadID\",\n \"value\": \"1\"\n },\n {\n \"key\": \"COMMIT_TIMESTAMP\",\n \"value\": \"2020-04-23T10:06:58.000-07:00\"\n },\n {\n \"key\": \"OperationType\",\n \"value\": \"DML\"\n },\n {\n \"key\": \"ROWID\",\n \"value\": \"AAAFprAAEAAAAqlAAA\"\n },\n {\n \"key\": \"DBTimeStamp\",\n \"value\": \"1587636418000\"\n },\n {\n \"key\": \"TransactionName\",\n \"value\": \"\"\n },\n {\n \"key\": \"SCN\",\n \"value\": \"139764700000166070289607557280000\"\n },\n {\n \"key\": \"Session\",\n \"value\": \"9\"\n }\n ]\n },\n \"data\": {\n \"map\": [\n {\n \"key\": \"ENAME\",\n \"value\": \"tanuja\"\n },\n {\n \"key\": \"EMPNO\",\n \"value\": \"6\"\n },\n {\n \"key\": \"JOB\",\n \"value\": \"So\"\n },\n {\n \"key\": \"HIREDATE\",\n \"value\": \"2020-04-23T10:05:03.381-07:00\"\n },\n {\n \"key\": \"SAL\",\n \"value\": \"70000\"\n }\n ]\n },\n \"before\": {\n \"map\": [\n {\n \"key\": \"ENAME\",\n \"value\": \"tanuja\"\n },\n {\n \"key\": \"EMPNO\",\n \"value\": \"6\"\n },\n {\n \"key\": \"JOB\",\n \"value\": \"So\"\n },\n {\n \"key\": \"HIREDATE\",\n \"value\": \"2020-04-23T10:05:03.381-07:00\"\n },\n {\n \"key\": \"SAL\",\n \"value\": \"60000\"\n }\n ]\n }\n}Sample output, delete:{\n \"metadata\": {\n \"map\": [\n {\n \"key\": \"RbaSqn\",\n \"value\": \"59\"\n },\n {\n \"key\": \"AuditSessionId\",\n \"value\": \"174782\"\n },\n {\n \"key\": \"TableSpace\",\n \"value\": \"USERS\"\n },\n {\n \"key\": \"CURRENTSCN\",\n \"value\": \"1397672\"\n },\n {\n \"key\": \"SQLRedoLength\",\n \"value\": \"174\"\n },\n {\n \"key\": \"BytesProcessed\"\n },\n {\n \"key\": \"ParentTxnID\",\n \"value\": \"3.33.844\"\n },\n {\n \"key\": \"SessionInfo\",\n \"value\": \"UNKNOWN\"\n },\n {\n \"key\": \"RecordSetID\",\n \"value\": \" 0x00003b.00013dfe.0010 \"\n },\n {\n \"key\": \"DBCommitTimestamp\",\n \"value\": \"1587636481000\"\n },\n {\n \"key\": \"COMMITSCN\",\n \"value\": \"1397673\"\n },\n {\n \"key\": \"SEQUENCE\",\n \"value\": \"1\"\n },\n {\n \"key\": \"Rollback\",\n \"value\": \"0\"\n },\n {\n \"key\": \"STARTSCN\",\n \"value\": \"1397672\"\n },\n {\n \"key\": \"SegmentName\",\n \"value\": \"TABLES4\"\n },\n {\n \"key\": \"OperationName\",\n \"value\": \"DELETE\"\n },\n {\n \"key\": \"TimeStamp\",\n \"value\": \"2020-04-23T10:08:01.000-07:00\"\n },\n {\n \"key\": \"TxnUserID\",\n \"value\": \"QATEST\"\n },\n {\n \"key\": \"RbaBlk\",\n \"value\": \"81406\"\n },\n {\n \"key\": \"SegmentType\",\n \"value\": \"TABLE\"\n },\n {\n \"key\": \"TableName\",\n \"value\": \"QATEST.TABLES4\"\n },\n {\n \"key\": \"TxnID\",\n \"value\": \"3.33.844\"\n },\n {\n \"key\": \"Serial\",\n \"value\": \"9289\"\n 
},\n {\n \"key\": \"ThreadID\",\n \"value\": \"1\"\n },\n {\n \"key\": \"COMMIT_TIMESTAMP\",\n \"value\": \"2020-04-23T10:08:01.000-07:00\"\n },\n {\n \"key\": \"OperationType\",\n \"value\": \"DML\"\n },\n {\n \"key\": \"ROWID\",\n \"value\": \"AAAFprAAEAAAAqlAAA\"\n },\n {\n \"key\": \"DBTimeStamp\",\n \"value\": \"1587636481000\"\n },\n {\n \"key\": \"TransactionName\",\n \"value\": \"\"\n },\n {\n \"key\": \"SCN\",\n \"value\": \"139767200000166070289609523360000\"\n },\n {\n \"key\": \"Session\",\n \"value\": \"9\"\n }\n ]\n },\n \"data\": {\n \"map\": [\n {\n \"key\": \"ENAME\",\n \"value\": \"tanuja\"\n },\n {\n \"key\": \"EMPNO\",\n \"value\": \"6\"\n },\n {\n \"key\": \"JOB\",\n \"value\": \"So\"\n },\n {\n \"key\": \"HIREDATE\",\n \"value\": \"2020-04-23T10:05:03.381-07:00\"\n },\n {\n \"key\": \"SAL\",\n \"value\": \"70000\"\n }\n ]\n }\n}Input stream of Oracle Reader WAEvent, Format As = NativeSource table:CREATE TABLE TABLES1 (\n \"EMPNO\" NUMBER(4),\n \"ENAME\" VARCHAR2(10),\n \"JOB\" VARCHAR(20),\n \"HIREDATE\" TIMESTAMP(6),\n \"SAL\" NUMBER DEFAULT 0\n);Schema:message QATEST.TABLES3 {\n optional group data {\n optional binary EMPNO (UTF8);\n optional binary ENAME (UTF8);\n optional binary JOB (UTF8);\n optional binary HIREDATE (UTF8);\n optional binary SAL (UTF8);\n }\n optional group before {\n optional binary EMPNO (UTF8);\n optional binary ENAME (UTF8);\n optional binary JOB (UTF8);\n optional binary HIREDATE (UTF8);\n optional binary SAL (UTF8);\n }\n optional group metadata (MAP) {\n repeated group map (MAP_KEY_VALUE) {\n required binary key (UTF8);\n optional binary value (UTF8);\n }\n }\n optional group userdata (MAP) {\n repeated group map (MAP_KEY_VALUE) {\n required binary key (UTF8);\n optional binary value (UTF8);\n }\n }\n optional group datapresenceinfo {\n required boolean EMPNO;\n required boolean ENAME;\n required boolean JOB;\n required boolean HIREDATE;\n required boolean SAL;\n }\n optional group beforepresenceinfo {\n required boolean EMPNO;\n required boolean ENAME;\n required boolean JOB;\n required boolean HIREDATE;\n required boolean SAL;\n }\n}Sample output, insert:{\n \"data\": {\n \"EMPNO\": \"5\",\n \"ENAME\": \"tanuja\",\n \"JOB\": \"So\",\n \"HIREDATE\": \"2020-04-23T09:01:46.172-07:00\",\n \"SAL\": \"60000\"\n },\n \"metadata\": {\n \"map\": [\n {\n \"key\": \"RbaSqn\",\n \"value\": \"52\"\n },\n {\n \"key\": \"AuditSessionId\",\n \"value\": \"174782\"\n },\n {\n \"key\": \"TableSpace\",\n \"value\": \"USERS\"\n },\n {\n \"key\": \"CURRENTSCN\",\n \"value\": \"1353271\"\n },\n {\n \"key\": \"SQLRedoLength\",\n \"value\": \"152\"\n },\n {\n \"key\": \"BytesProcessed\"\n },\n {\n \"key\": \"ParentTxnID\",\n \"value\": \"2.31.778\"\n },\n {\n \"key\": \"SessionInfo\",\n \"value\": \"UNKNOWN\"\n },\n {\n \"key\": \"RecordSetID\",\n \"value\": \" 0x000034.00006bf0.0010 \"\n },\n {\n \"key\": \"DBCommitTimestamp\",\n \"value\": \"1587632506000\"\n },\n {\n \"key\": \"COMMITSCN\",\n \"value\": \"1353272\"\n },\n {\n \"key\": \"SEQUENCE\",\n \"value\": \"1\"\n },\n {\n \"key\": \"Rollback\",\n \"value\": \"0\"\n },\n {\n \"key\": \"STARTSCN\",\n \"value\": \"1353271\"\n },\n {\n \"key\": \"SegmentName\",\n \"value\": \"TABLES3\"\n },\n {\n \"key\": \"OperationName\",\n \"value\": \"INSERT\"\n },\n {\n \"key\": \"TimeStamp\",\n \"value\": \"2020-04-23T09:01:46.000-07:00\"\n },\n {\n \"key\": \"TxnUserID\",\n \"value\": \"QATEST\"\n },\n {\n \"key\": \"RbaBlk\",\n \"value\": \"27632\"\n },\n {\n \"key\": \"SegmentType\",\n \"value\": \"TABLE\"\n },\n 
{\n \"key\": \"TableName\",\n \"value\": \"QATEST.TABLES3\"\n },\n {\n \"key\": \"TxnID\",\n \"value\": \"2.31.778\"\n },\n {\n \"key\": \"Serial\",\n \"value\": \"9289\"\n },\n {\n \"key\": \"ThreadID\",\n \"value\": \"1\"\n },\n {\n \"key\": \"COMMIT_TIMESTAMP\",\n \"value\": \"2020-04-23T09:01:46.000-07:00\"\n },\n {\n \"key\": \"OperationType\",\n \"value\": \"DML\"\n },\n {\n \"key\": \"ROWID\",\n \"value\": \"AAAFpNAAEAAAAqdAAG\"\n },\n {\n \"key\": \"DBTimeStamp\",\n \"value\": \"1587632506000\"\n },\n {\n \"key\": \"TransactionName\",\n \"value\": \"\"\n },\n {\n \"key\": \"SCN\",\n \"value\": \"135327100000146367005998448800006\"\n },\n {\n \"key\": \"Session\",\n \"value\": \"9\"\n }\n ]\n },\n \"datapresenceinfo\": {\n \"EMPNO\": true,\n \"ENAME\": true,\n \"JOB\": true,\n \"HIREDATE\": true,\n \"SAL\": true\n },\n \"beforepresenceinfo\": {\n \"EMPNO\": false,\n \"ENAME\": false,\n \"JOB\": false,\n \"HIREDATE\": false,\n \"SAL\": false\n }\n}Sample output, update:{\n \"data\": {\n \"EMPNO\": \"3\",\n \"ENAME\": \"tanuja\",\n \"JOB\": \"So\",\n \"HIREDATE\": \"2020-04-23T09:53:17.459-07:00\",\n \"SAL\": \"70000\"\n },\n \"before\": {\n \"EMPNO\": \"3\",\n \"ENAME\": \"tanuja\",\n \"JOB\": \"So\",\n \"HIREDATE\": \"2020-04-23T09:53:17.459-07:00\",\n \"SAL\": \"60000\"\n },\n \"metadata\": {\n \"map\": [\n {\n \"key\": \"RbaSqn\",\n \"value\": \"55\"\n },\n {\n \"key\": \"AuditSessionId\",\n \"value\": \"174782\"\n },\n {\n \"key\": \"TableSpace\",\n \"value\": \"USERS\"\n },\n {\n \"key\": \"CURRENTSCN\",\n \"value\": \"1366172\"\n },\n {\n \"key\": \"SQLRedoLength\",\n \"value\": \"189\"\n },\n {\n \"key\": \"BytesProcessed\"\n },\n {\n \"key\": \"ParentTxnID\",\n \"value\": \"5.30.788\"\n },\n {\n \"key\": \"SessionInfo\",\n \"value\": \"UNKNOWN\"\n },\n {\n \"key\": \"RecordSetID\",\n \"value\": \" 0x000037.000128c2.0010 \"\n },\n {\n \"key\": \"DBCommitTimestamp\",\n \"value\": \"1587635716000\"\n },\n {\n \"key\": \"COMMITSCN\",\n \"value\": \"1366173\"\n },\n {\n \"key\": \"SEQUENCE\",\n \"value\": \"1\"\n },\n {\n \"key\": \"Rollback\",\n \"value\": \"0\"\n },\n {\n \"key\": \"STARTSCN\",\n \"value\": \"1366172\"\n },\n {\n \"key\": \"SegmentName\",\n \"value\": \"TABLES4\"\n },\n {\n \"key\": \"OperationName\",\n \"value\": \"UPDATE\"\n },\n {\n \"key\": \"TimeStamp\",\n \"value\": \"2020-04-23T09:55:16.000-07:00\"\n },\n {\n \"key\": \"TxnUserID\",\n \"value\": \"QATEST\"\n },\n {\n \"key\": \"RbaBlk\",\n \"value\": \"75970\"\n },\n {\n \"key\": \"SegmentType\",\n \"value\": \"TABLE\"\n },\n {\n \"key\": \"TableName\",\n \"value\": \"QATEST.TABLES4\"\n },\n {\n \"key\": \"TxnID\",\n \"value\": \"5.30.788\"\n },\n {\n \"key\": \"Serial\",\n \"value\": \"9289\"\n },\n {\n \"key\": \"ThreadID\",\n \"value\": \"1\"\n },\n {\n \"key\": \"COMMIT_TIMESTAMP\",\n \"value\": \"2020-04-23T09:55:16.000-07:00\"\n },\n {\n \"key\": \"OperationType\",\n \"value\": \"DML\"\n },\n {\n \"key\": \"ROWID\",\n \"value\": \"AAAFprAAEAAAAqlAAA\"\n },\n {\n \"key\": \"DBTimeStamp\",\n \"value\": \"1587635716000\"\n },\n {\n \"key\": \"TransactionName\",\n \"value\": \"\"\n },\n {\n \"key\": \"SCN\",\n \"value\": \"136617200000154811286978560160000\"\n },\n {\n \"key\": \"Session\",\n \"value\": \"9\"\n }\n ]\n },\n \"datapresenceinfo\": {\n \"EMPNO\": true,\n \"ENAME\": true,\n \"JOB\": true,\n \"HIREDATE\": true,\n \"SAL\": true\n },\n \"beforepresenceinfo\": {\n \"EMPNO\": true,\n \"ENAME\": true,\n \"JOB\": true,\n \"HIREDATE\": true,\n \"SAL\": true\n }\n}Sample output, delete:{\n \"data\": 
{\n \"EMPNO\": \"3\",\n \"ENAME\": \"tanuja\",\n \"JOB\": \"So\",\n \"HIREDATE\": \"2020-04-23T09:53:17.459-07:00\",\n \"SAL\": \"70000\"\n },\n \"metadata\": {\n \"map\": [\n {\n \"key\": \"RbaSqn\",\n \"value\": \"55\"\n },\n {\n \"key\": \"AuditSessionId\",\n \"value\": \"174782\"\n },\n {\n \"key\": \"TableSpace\",\n \"value\": \"USERS\"\n },\n {\n \"key\": \"CURRENTSCN\",\n \"value\": \"1366251\"\n },\n {\n \"key\": \"SQLRedoLength\",\n \"value\": \"174\"\n },\n {\n \"key\": \"BytesProcessed\"\n },\n {\n \"key\": \"ParentTxnID\",\n \"value\": \"7.31.697\"\n },\n {\n \"key\": \"SessionInfo\",\n \"value\": \"UNKNOWN\"\n },\n {\n \"key\": \"RecordSetID\",\n \"value\": \" 0x000037.000128df.0010 \"\n },\n {\n \"key\": \"DBCommitTimestamp\",\n \"value\": \"1587635881000\"\n },\n {\n \"key\": \"COMMITSCN\",\n \"value\": \"1366252\"\n },\n {\n \"key\": \"SEQUENCE\",\n \"value\": \"1\"\n },\n {\n \"key\": \"Rollback\",\n \"value\": \"0\"\n },\n {\n \"key\": \"STARTSCN\",\n \"value\": \"1366251\"\n },\n {\n \"key\": \"SegmentName\",\n \"value\": \"TABLES4\"\n },\n {\n \"key\": \"OperationName\",\n \"value\": \"DELETE\"\n },\n {\n \"key\": \"TimeStamp\",\n \"value\": \"2020-04-23T09:58:01.000-07:00\"\n },\n {\n \"key\": \"TxnUserID\",\n \"value\": \"QATEST\"\n },\n {\n \"key\": \"RbaBlk\",\n \"value\": \"75999\"\n },\n {\n \"key\": \"SegmentType\",\n \"value\": \"TABLE\"\n },\n {\n \"key\": \"TableName\",\n \"value\": \"QATEST.TABLES4\"\n },\n {\n \"key\": \"TxnID\",\n \"value\": \"7.31.697\"\n },\n {\n \"key\": \"Serial\",\n \"value\": \"9289\"\n },\n {\n \"key\": \"ThreadID\",\n \"value\": \"1\"\n },\n {\n \"key\": \"COMMIT_TIMESTAMP\",\n \"value\": \"2020-04-23T09:58:01.000-07:00\"\n },\n {\n \"key\": \"OperationType\",\n \"value\": \"DML\"\n },\n {\n \"key\": \"ROWID\",\n \"value\": \"AAAFprAAEAAAAqlAAA\"\n },\n {\n \"key\": \"DBTimeStamp\",\n \"value\": \"1587635881000\"\n },\n {\n \"key\": \"TransactionName\",\n \"value\": \"\"\n },\n {\n \"key\": \"SCN\",\n \"value\": \"136625100000154811286997565600000\"\n },\n {\n \"key\": \"Session\",\n \"value\": \"9\"\n }\n ]\n },\n \"datapresenceinfo\": {\n \"EMPNO\": true,\n \"ENAME\": true,\n \"JOB\": true,\n \"HIREDATE\": true,\n \"SAL\": true\n },\n \"beforepresenceinfo\": {\n \"EMPNO\": false,\n \"ENAME\": false,\n \"JOB\": false,\n \"HIREDATE\": false,\n \"SAL\": false\n }\n}Input stream of Oracle Reader WAEvent, Format As = TableSource table:CREATE TABLE TABLES1 (\n \"EMPNO\" NUMBER(4),\n \"ENAME\" VARCHAR2(10),\n \"JOB\" VARCHAR(20),\n \"HIREDATE\" TIMESTAMP(6),\n \"SAL\" NUMBER DEFAULT 0\n);Schema:message QATEST.TABLES1 {\n optional binary EMPNO (UTF8);\n optional binary ENAME (UTF8);\n optional binary JOB (UTF8);\n optional binary HIREDATE (UTF8);\n optional binary SAL (UTF8);\n}Sample output, insert:{\"EMPNO\":\"1\",\"ENAME\":\"tanuja\",\"JOB\":\"So\",\"HIREDATE\":\"2020-04-22T13:30:25.432-07:00\",\"SAL\":\"60000\"}Sample output, update:{\"EMPNO\":\"5\",\"ENAME\":\"tanuja\",\"JOB\":\"So\",\"HIREDATE\":\"2020-04-23T10:01:10.614-07:00\",\"SAL\":\"70000\"}Sample output, delete:{\"EMPNO\":\"5\",\"ENAME\":\"tanuja\",\"JOB\":\"So\",\"HIREDATE\":\"2020-04-23T10:01:10.614-07:00\",\"SAL\":\"70000\"}Input stream of Database Reader WAEvent, Format As = Table, using MembersSource table:CREATE TABLE ORACLETOPARQUET1 (\n ID INTEGER,\n NAME VARCHAR(30),\n COST BINARY_FLOAT,\n CREATED_TS TIMESTAMP WITH TIME ZONE,\n LARGE_DATA CLOB\n);Members property value:Table=@metadata(TableName),OpName=@metadata(OperationName)Schema:message QATEST.ORACLETOPARQUET1 
{\n optional binary ID (UTF8);\n optional binary NAME (UTF8);\n optional float COST;\n optional binary CREATED_TS (UTF8);\n optional binary LARGE_DATA (UTF8);\n optional binary Table (UTF8);\n optional binary OpName (UTF8);\n}Sample output:{\"ID\":\"79\",\"NAME\":\"abc\",\"COST\":39.5,\"CREATED_TS\":\"2020-05-18T05:54:30.000Z\",\n\"LARGE_DATA\":\"This is a character literal 1\",\"Table\":\"QATEST.ORACLETOPARQUET1\",\n\"OpName\":\"INSERT\"}\nIn this section: Parquet FormatterParquet Formatter propertiesParquet Formatter data type support and correspondenceParquet Formatter examplesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-30\n", "metadata": {"source": "https://www.striim.com/docs/en/parquet-formatter.html", "title": "Parquet Formatter", "language": "en"}} {"page_content": "\n\nXML FormatterSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 TargetsFormattersXML FormatterPrevNextXML FormatterFormats a writer's output as XML.XML Formatter propertiespropertytypedefault valuenotesCharsetStringElement TupleStringdefines the XML outputFormat Column Value AsStringxmlattributeIf the target's input is the output of an Oracle Reader source that will emit VARRAY data, set to xmlelement (see Oracle Reader and OJet WAEvent fields).Root ElementStringRow DelimiterString\\nsee Setting rowdelimiter valuesXML Formatter sample applicationFor example, this variation on the PosApp sample application writes to a file using XMLFormatter:CREATE source XMLFormatterTestSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n blocksize: 10240,\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO XMLSource_Stream;\n\nCREATE CQ CsvToPosData\nINSERT INTO XMLTransformed_Stream\nSELECT TO_STRING(data[0]),\n TO_STRING(data[1]),\n TO_DATEF(data[4],'yyyyMMddHHmmss'),\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')),\n TO_DOUBLE(data[7]),\n TO_STRING(data[9])\nFROM XMLSource_Stream;\n\nCREATE TARGET XMLFormatterOut using FileWriter(\n filename:'XMLFormatterOutput')\nFORMAT USING XMLFormatter (\n rootelement:'document',\n elementtuple:'MerchantName:merchantid:text=merchantname'\n)\nINPUT FROM XMLTransformed_Stream; The first lines of XMLFormatterOutput are:\n\n COMPANY 1 \n COMPANY 2 \n COMPANY 3 In this section: XML FormatterXML Formatter propertiesXML Formatter sample applicationSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/xml-formatter.html", "title": "XML Formatter", "language": "en"}} {"page_content": "\n\nProgrammer's GuideSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuidePrevNextProgrammer's GuideThis section of the documentation provides detailed documentation and sample code for the Tungsten Query Language (TQL) and other development tools for creating Striim applications.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-15\n", "metadata": {"source": "https://www.striim.com/docs/en/programmer-s-guide.html", "title": "Programmer's Guide", "language": "en"}} {"page_content": "\n\nStriim conceptsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideStriim conceptsPrevNextStriim conceptsStriim's key concepts include applications, flows, sources, streams,\u00a0Kafka streams, types, events, windows, continuous queries (CQs), caches, WActions and WActionStores, targets, and subscriptions.Tungsten Query Language (TQL) and applicationsStriim lets you develop and run custom applications that acquire data from external sources, process it, and deliver it for consumption through the Striim dashboard or to other applications. As in a SQL environment, the core of every application is one or more queries. As detailed in the rest of this Concepts Guide, an application also contains sources, targets, and other logical components organized into one or more flows, plus definitions for any charts, maps, or other built-in visualizations it uses.Applications may be created graphically using the web client or coded using the Tungsten Query Language (TQL), a SQL-like language that can be extended with Java (see Sample applications for programmers for examples). 
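As a minimal sketch of how these pieces fit together in TQL (the application, stream, and output file names are illustrative; the source and target reuse adapters and properties shown elsewhere in this documentation), a complete application wraps its components between CREATE APPLICATION and END APPLICATION statements:

CREATE APPLICATION SimpleFileCopy;

CREATE SOURCE CsvSource USING FileReader (
  directory: 'Samples/PosApp/appData',
  wildcard: 'PosDataPreview.csv',
  positionByEOF: false
)
PARSE USING DSVParser (
  header: Yes,
  trimquote: false
)
OUTPUT TO RawStream;

CREATE TARGET RawJsonOut USING FileWriter (
  filename: 'SimpleFileCopyOutput'
)
FORMAT USING JSONFormatter ()
INPUT FROM RawStream;

END APPLICATION SimpleFileCopy;

Deploying and starting this application reads PosDataPreview.csv from the beginning, parses each delimited record, and writes the events back out as JSON.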
TQL is also used by the Tungsten console, the platform's command-line client.FlowFlows define what data an application receives, how it processes the data, and what it does with the results.Flows are made up of several kinds of components:sources to receive real-time event data from adaptersstreams to define the flow of data among the other componentswindows to bound the event data by time or countcontinuous queries to filter, aggregate, join, enrich, and transform the datacaches of historical, or reference data to enrich the event dataWActionStores to populate the built-in reports and visualizations and persist the processed datatargets to pass data to external applicationsAn application may contain multiple flows to organize the components into logical groups. See MultiLogApp for an example.An application is itself a flow that can contain other flows, so when an application contains only a single flow, it does not need to be explicitly created. See PosApp for an example.SourceA source is a start point of a flow and defines how data is acquired from an external data source. A flow may have multiple sources.Each source specifies:an input adapter (reader) for collection of real-time data from external sources such as database tables or log files (for more detailed information, see Sources)properties required by the selected reader, such as a host name, directory path, authentication credentials, and so onwith some readers, a parser that defines what to do with the data from the source (for example, DSVParser to parse delimited files, or FreeFormTextParser to parse using regex)an output stream to pass the data to other flow componentsHere is the TQL code for one of the sources in the MultiLogApp sample application:CREATE SOURCE Log4JSource USING FileReader (\n directory:'Samples/MultiLogApp/appData',\n wildcard:'log4jLog.xml',\n positionByEOF:false\n) \nPARSE USING XMLParser(\n rootnode:'/log4j:event',\n columnlist:'log4j:event/@timestamp,\n log4j:event/@level,\n log4j:event/log4j:message,\n log4j:event/log4j:throwable,\n log4j:event/log4j:locationInfo/@class,\n log4j:event/log4j:locationInfo/@method,\n log4j:event/log4j:locationInfo/@file,\n log4j:event/log4j:locationInfo/@line'\n)\nOUTPUT TO RawXMLStream;Log4JSource uses the FileReader adapter to read \u2026/Striim/Samples/MultiLogApp/appData/log4jLog.xml, parses it with XMLParser, and outputs the data to RawXMLStream. In the UI, the same source looks like this:Note: The other examples in the Concepts Guide appear in their TQL form only, but they all have UI counterparts similar to the above.StreamA stream passes one component\u2019s output to one or more other components. For example, a simple flow that only writes to a file might have this sequence:source > stream1 > queryA > stream2 > FileWriterThis more complex flow branches at stream2 in order to send alerts and populate the dashboard:source > stream1 > queryA > stream2 > ...\n... stream2 > queryB > stream3 > Subscription\n... stream2 > queryC > WActionStoreKafka streamsStriim natively integrates Apache Kafka, a high-throughput, low-latency, massively scalable message broker. For a technical explanation, see kafka.apache.org.In simple terms, what Kafka offers Striim users is the ability to persist real-time streaming source data to disk at the same time Striim loads it into memory, then replay it later. 
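As a sketch of what persistence looks like in TQL (the type definition and the property set name DefaultKafkaProperties are assumptions here; see Persisting a stream to Kafka for the exact syntax and the property sets available in your environment), a stream is declared with a PERSIST clause so that its events are also written to a Kafka topic:

CREATE TYPE PersistedOrderType (
  storeId String KEY,
  orderId String,
  orderAmount Double,
  dateTime DateTime
);
CREATE STREAM PersistedOrderStream OF PersistedOrderType
PERSIST USING DefaultKafkaProperties;

Components in the same or other applications can then subscribe to PersistedOrderStream as usual, while the persisted copy remains available for replay.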
If data comes in too fast to be handled by the built-in Kafka broker, an external Kafka system may be used instead, and scaled up as necessary.Replaying from Kafka has many potential uses. For example:If you put a source persisted to a Kafka stream in one application and the associated CQs, windows, caches, targets, and WActionStores in another, you can bring down the second application to update the code, and when you restart it processing of source data will automatically continue from the point it left off, with zero data loss and no duplicates.Developers can use a persisted stream to do A/B testing of various TQL application options, or to perform any other useful experiments.You can perform forensics on historical data, mining a persisted stream for data you didn't know would be useful. For example, if you were troubleshooting a security alert, you could write new queries against a persisted stream to gather additional data that was not captured in a WActionStore.By persisting sources to an external Kafka broker, you can enable zero-data-loss recovery after a Striim cluster failure for sources that are normally not recoverable, such as HTTPReader, TCPReader, and UDPReader (see Recovering applications).Persisting to an external Kafka broker can also allow recovery of sources running on a remote host using the Forwarding Agent.You can use a Kafka stream like any other stream, by referencing it in a CQ, putting a window over it, and so on. Alternatively, you can also use it as a Kafka topic:You can read the Kafka topic with KafkaReader, allowing events to be consumed later using messaging semantics rather than immediately using event semantics.You can read the Kafka topic with an external Kafka consumer, allowing development of custom applications or integration with third-party Kafka consumers.For additional information, see:Persisting a stream to KafkaReading a Kafka stream with KafkaReaderTypeA stream is associated with a Striim data model type, a type being a named set of fields, each of which has a name and a Java data type, such as Integer or String (see Supported data types for a full list). Any other Java type may be imported and used, though with some restrictions, for example regarding serializability. One field may have a key for use in generating WActions.A stream that receives its input from a source is automatically assigned the Striim type associated with the reader specified in the source. For other streams, you must create an appropriate Striim type. Any casting or other manipulation of fields is performed by queries.Here is sample TQL code for a Striim type suitable for product order data:CREATE TYPE OrderType(\n storeId String KEY,\n orderId String,\n sku String,\n orderAmount Double,\n dateTime DateTime\n);Each event of this type will have the ID of the store where it was purchased, the order ID, the SKU of the product, the amount of the order, and the timestamp of the order.EventA stream is composed of a series of events, much as a table in a SQL environment is composed of rows. Each event is a fixed sequence of data elements corresponding to the stream's type.WindowA window bounds real-time data by time (for example, five minutes), event count (for example, 10,000 events), or both. A window is required for an application to aggregate or perform calculations on data, populate the dashboard, or send alerts when conditions deviate from normal parameters. 
Without a window to bound the data, an application is limited to evaluating and acting on individual events.Striim supports three types of windows: sliding, jumping, and session. Windows send data to downstream queries when their contents change (sliding) or expire (jumping), or when there has been a gap in use activity (session).Sliding windows always contain the most recent events in the data stream. For example, at 8:06 am, a five-minute sliding window would contain data from 8:01 to 8:06, at 8:07 am, it would contain data from 8:02 am to 8:07 am, and so on. The time values may be taken from an attribute of the incoming stream (see the ON dateTime example below).If the window's size is specified as a number of events, each time a new event is received, the oldest event is discarded.If the size is specified as a length of time, each event is discarded after the specified time has elapsed since it was added to the window, so the number of events in the window may vary. Be sure to keep this in mind when writing queries that make calculations.If both a number of events and a length of time are specified, each event is discarded after it has been in the window for the specified time, or sooner if necessary to avoid exceeding the specified number.Jumping windows are periodically updated with an entirely new set of events. For example, a five-minute jumping window would output data sets for 8:00:00-8:04:59 am, 8:05:00-8:09:59 am, and so on. A 10,000-event jumping window would output a new data set for every 10,000 events. If both five minutes and 10,000 events were specified, the window would output a new data set every time it accumulates 10,000 events or five minutes has elapsed since the previous data set was output.To put it another way, a jumping window slices the data stream into chunks. The query, WActionStore, or target that receives the events will process each chunk in turn. For example, a map visualization for a five-minute jumping window would refresh every five minutes.For better performance, filter out any unneeded fields using a query before the data is sent to the window.This window breaks the RetailOrders stream (discussed above) into chunks:CREATE JUMPING WINDOW ProductData_15MIN \nOVER RetailOrders \nKEEP WITHIN 15 MINUTE ON dateTime;Each chunk contains 15 minutes worth of events, with the 15 minutes measured using the timestamp values from the events' dateTime field (rather than the Striim host's system clock).The PARTITION BY field_name option applies the KEEP clause separately for each value of the specified field. For example, this window would contain 100 orders per store:CREATE JUMPING WINDOW Orders100PerStore\nOVER RetailOrders\nKEEP 100 ROWS \nPARTITION BY storeId;Session windows break a stream up into chunks when there are gaps in the flow of events; that is, when no new event has been received for a specified period of time (the idle timeout). Session windows are defined by user activity, and represent a period of activity followed by a defined gap of inactivity. For example, this window has a defined inactivity gap of ten minutes, as shown in IDLE TIMEOUT. If a new order event arrives after ten minutes have passed, then a new session is created.CREATE SESSION WINDOW NewOrders\nOVER RetailOrders\nIDLE TIMEOUT 10 MINUTE\nPARTITION BY storeId;For more information about window syntax, see CREATE WINDOW.Continuous query (CQ)Most of an application\u2019s logic is specified by continuous queries. 
Striim queries are in most respects similar to SQL, except that they are continually running and act on real-time data instead of relational tables.Queries may be used to filter, aggregate, join, enrich, and transform events. A query may have multiple input streams to combine data from multiple sources, windows, caches, and/or WActionStores.Some example queries illustrating common use cases:Filtering eventsThe GetErrors query, from the MultiLogApp sample application, filters the log file data in Log4ErrorWarningStream to pass only error messages to ErrorStream:CREATE CQ GetErrors \nINSERT INTO ErrorStream \nSELECT log4j \nFROM Log4ErrorWarningStream log4j WHERE log4j.level = 'ERROR';Warning messages are discarded.Filtering fieldsThe TrackCompanyApiDetail query, also from the MultiLogApp sample application, inserts a subset of the fields in a stream into a WActionStore:CREATE CQ TrackCompanyApiDetail\nINSERT INTO CompanyApiActivity(company,companyZip,companyLat,companyLong,state,ts)\nSELECT company,companyZip,companyLat,companyLong,state,ts\nFROM CompanyApiUsageStream;Values for the fields not inserted by TrackCompanyApiDetail are picked up from the most recent insertion by TrackCompanyApiSummary with the same company value.AlertingCREATE CQ SendErrorAlerts \nINSERT INTO ErrorAlertStream \nSELECT 'ErrorAlert', ''+logTime, 'error', 'raise', 'Error in log ' + message \nFROM ErrorStream;The SendErrorAlerts query, from the MultiLogApp sample application, sends an alert whenever an error message appears in ErrorStream.AggregationThis portion of the GenerateMerchantTxRateOnly query, from the PosApp sample application, aggregates the data from the incoming PosData5Minutes stream and outputs one event per merchant per five-minute batch of transactions to MerchantTxRateOnlyStream:CREATE CQ GenerateMerchantTxRateOnly\nINSERT INTO MerchantTxRateOnlyStream\nSELECT p.merchantId,\n FIRST(p.zip),\n FIRST(p.dateTime),\n COUNT(p.merchantId),\n SUM(p.amount) ...\nFROM PosData5Minutes p ...\nGROUP BY p.merchantId;\nEach output event includes the zip code and timestamp of the first transaction, the total number of transactions in the batch, and the total amount of those transactions.EnrichmentThe GetUserDetails query, from the MultiLogApp sample application, enhances the event log message events in InfoStream by joining the corresponding user and company names and zip codes from the MLogUserLookup cache:CREATE CQ GetUserDetails \nINSERT INTO ApiEnrichedStream \nSELECT a.userId, a.api, a.sobject, a.logTime, u.userName, u.company, u.userZip, u.companyZip \nFROM InfoStream a, MLogUserLookup u \nWHERE a.userId = u.userId;A subsequent query further enhances the data with latitude and longitude values corresponding to the zip codes, and uses the result to populate maps on the dashboard.Handling nullsThe following will return values from the stream when there is no match for the join in the cache:SELECT ...\nFROM stream S\nLEFT OUTER JOIN cache C\nON S.joinkey=C.joinkey WHERE C.joinkey IS NULLCacheA memory-based cache of non-real-time historical or reference data acquired from an external source, such as a static file of postal codes and geographic data used to display data on dashboard maps, or a database table containing historical averages used to determine when to send alerts. 
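As a sketch of what a cache declaration looks like in TQL (the reader and parser properties, the keytomap setting, and all names below are illustrative assumptions; see CREATE CACHE for the actual options):

-- Illustrative sketch: a lookup cache loaded from a static file of zip codes.
-- All names, file locations, and property values here are assumptions,
-- not taken from a sample application.
CREATE TYPE ZipCodeEntry (
    zip String KEY,
    city String,
    state String,
    latVal Double,
    longVal Double
);
CREATE CACHE ZipLookupSketch USING FileReader (
    directory: 'Samples/AppData',
    wildcard: 'zipdata.txt'
)
PARSE USING DSVParser (
    header: 'yes'
)
QUERY (keytomap: 'zip') OF ZipCodeEntry;

A CQ can then join a stream against ZipLookupSketch on the zip field, much like the enrichment example above.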
If the source is updated regularly, the cache can be set to refresh the data at an appropriate interval.Cached data is typically used by queries to enrich real-time data by, for example, adding detailed user or company information, or adding latitude and longitude values so the data can be plotted on a map. For example, the following query, from the PosApp sample application, enriches real-time data that has previously been filtered and aggregated with company name and location information from two separate caches, NameLookup and ZipLookup:CREATE CQ GenerateWactionContext\nINSERT INTO MerchantActivity\nSELECT m.merchantId,\n m.startTime,\n n.companyName,\n m.category,\n m.status,\n m.count,\n m.hourlyAve,\n m.upperLimit,\n m.lowerLimit,\n m.zip,\n z.city,\n z.state,\n z.latVal,\n z.longVal\nFROM MerchantTxRateWithStatusStream m, NameLookup n, ZipLookup z\nWHERE m.merchantId = n.merchantId AND m.zip = z.zip\nLINK SOURCE EVENT;NoteA cache is loaded into memory when it is deployed, so deployment of an application or flow with a large cache may take some time.WAction and WActionStoreA WActionStore stores event data from one or more sources based on criteria defined in one or more queries. These events may be related using common key fields. The stored data may be queried by CQs (see CREATE CQ (query)), by dashboard visualizations (see Defining dashboard queries), or manually using the console (see Browsing data with ad-hoc queries). This data may also be directly accessed by external applications using the REST API (see Querying a WActionStore using the REST API).A WActionStore may exist only in memory or it may be persisted to disk (see CREATE WACTIONSTORE). If a WActionStore exists only in memory, when the available memory is full, older events will be removed to make room for new ones. If a WActionStore is persisted to disk, older events remain available for use in queries and visualizations and by external applications.A WAction typically consists of:detail data for a set of related real-time events (optional)results of calculations on those eventscommon context informationFor example, a WAction of logins for a user might contain:source IP, login timestamp, and device type for each login by the user (detail data)number of logins (calculation)username and historical average number of logins (context information)If the number of logins exceeded the historical average by a certain amount, the application could send an alert to the appropriate network administrators.Including the detail data (by including the LINK SOURCE EVENT option in the query) allows you to drill down in the visualizations to see specific events. If an application does not require that, detail data may be omitted, reducing memory requirements.See PosApp for a discussion of one example.TargetA target is an end point of a flow and defines how data is passed to an external application for storage, analysis, or other purposes. 
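As a sketch of the shape of a target declaration (the FileWriter property and formatter settings shown here are assumptions rather than a tested configuration; see Targets for the available writers and their properties):

-- Illustrative sketch: deliver a processed stream to a file as JSON.
-- Property names and values are assumptions; check the FileWriter and
-- JSONFormatter reference pages for the supported properties.
CREATE TARGET ProcessedOrdersOut
USING FileWriter (
    filename: 'processedOrders.json'
)
FORMAT USING JSONFormatter ()
INPUT FROM ProcessedOrdersStream;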
A flow may have multiple targets.

Each target specifies:

- an input stream
- an output adapter (writer) to pass data to an external system such as a database, data warehouse, or cloud storage (for more detailed information, see Targets)
- with some adapters, a formatter that defines how to write the data (for example, JSONFormatter or XMLFormatter)
- properties required by the selected adapter, such as a host name, directory path, authentication credentials, and so on

See PosApp for a discussion of one example.

Subscription

A subscription sends an alert to specified users by a specified channel.

Each subscription specifies:

- an input stream
- an alert adapter
- properties required by the selected adapter, such as an SMTP server and email address

See Sending alerts from applications for more information.

Fundamentals of TQL programming

This section covers the TQL language, basic programming tasks, and best practices.

TQL programming rules and best practices

This section covers the rules you must follow when writing TQL applications as well as best practices that will make development easier and more efficient. 
Refer to the\u00a0Striim concepts or\u00a0Glossary if you are unfamiliar with any of the terminology.NamespacesNamespaces are logical domains within the Striim environment that contain applications, flows, their components such as sources, streams, and so on, and dashboards.Every user account has a personal namespace with the same name. For example, the admin user has the admin namespace.Every Striim application exists in a namespace. For example, if you install an evaluation version using the installer, the PosApp application is in the Samples namespace.See Using namespaces for more information.ConnectionsA Striim application consists of a set of components and the connections between them. Some components may be connected directly, others must use an intermediate stream, as follows:source (OUTPUT TO) > stream > CQ (SELECT FROM)source (OUTPUT TO) > stream > window (OVER)source (OUTPUT TO) > stream > target (INPUT FROM)cache > CQ (SELECT FROM)WActionStore > CQ (SELECT FROM)window > CQ (SELECT FROM)window > CQ (SELECT FROM window, INSERT INTO stream) > stream > window (OVER)CQ (INSERT INTO) > WActionStoreCQ (INSERT INTO) > stream > target (INPUT FROM)NoteThe output from a cache or window must be processed by a CQ's SELECT statement before it can be passed to a target. In other words, cache > stream > target and window > stream > target are invalid sequences.JoinsInner joins may be performed implicitly by specifying multiple sources in the FROM clause. For example:FROM PosData5Minutes p, HourlyAveLookup lOther joins must be explicitly declared. See CREATE CQ (query).A join must include bound data, in other words, at least one cache, event table, WActionStore, or window. For example, assuming thatStoreOrders is a stream that originated from a source and ZipCodeLookup is a cache, the following is valid:SELECT s.storeId, z.latVal, z.longVal\nFROM StoreOrders s, ZipCodeLookup z\nWHERE s.zip = z.zipWhen joining events from multiple streams, the CQ's SELECT FROM should reference windows rather than streams. See the LargeRTCheck and ZeroContentCheck flows in the MultiLogApp sample application for examples. Define the windows so that they are large enough to include the events to be joined. 
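For example, the following sketch (all component names are hypothetical) joins order events with shipment events by putting a window over each stream and joining the two windows in a CQ:

-- Hypothetical sketch: join orders to shipments on orderId.
-- Both inputs are windows, which satisfies the bound-data requirement above.
CREATE JUMPING WINDOW OrdersWindow
OVER OrdersStream KEEP WITHIN 5 MINUTE ON dateTime;

CREATE JUMPING WINDOW ShipmentsWindow
OVER ShipmentsStream KEEP WITHIN 5 MINUTE ON dateTime;

-- In a full application, OrderShipmentStream and its type would be created
-- before this CQ (see Dependencies below).
CREATE CQ JoinOrdersToShipments
INSERT INTO OrderShipmentStream
SELECT o.orderId, o.amount, s.carrier, s.shipTime
FROM OrdersWindow o, ShipmentsWindow s
WHERE o.orderId = s.orderId;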
If necessary, use the GRACE PERIOD option (see CREATE STREAM) and/or a sorter (see CREATE SORTER) to ensure that the events' timestamps are in sync.

Dependencies

You must create a component before you can reference it in the CREATE statement for another component.

- application: create before any other component.
- flow*: create after its containing application; create before the components it contains.
- source: create before any window for which its output stream is an input (OVER) and any query for which its output stream is an input (SELECT FROM).
- type: create before any stream for which it is the type (OF)**, any cache for which it is the type (OF), and before referencing it in a WActionStore's CONTEXT OF or EVENT TYPES clause.
- stream: create after its type**; create before any source for which it is the output (OUTPUT TO), any window for which it is the input (OVER), any query for which it is an input (SELECT FROM), and any targets for which it is the input (INPUT FROM).
- cache: create before any query for which it is an input (SELECT FROM).
- window: create after its input stream (OVER); create before any query for which it is an input (SELECT FROM).
- WActionStore: create after its types (CONTEXT OF and EVENT TYPES); create before any query for which it is an input (SELECT FROM) and any query for which it is the output (INSERT INTO).
- CQ (query): create after all input streams, caches, and windows (SELECT FROM) and its output stream (INSERT INTO); create before referencing its output stream in a target.
- target: create after its input stream (INPUT FROM).

*When an application contains only one flow, it does not need to be explicitly declared by a CREATE FLOW statement. See PosApp for an example of an application with a single flow and MultiLogApp for an example with multiple flows.

**When a stream is created automatically by referencing it in a source's OUTPUT TO clause, it will use the built-in type associated with the source's adapter, so it is not necessary to manually create the type first.

Component names

Names of applications, flows, sources, and so on:

- must contain only alphanumeric characters and underscores
- may not start with a numeric character
- must be unique within the namespace
- must not be reserved keywords (see List of reserved keywords)

Components cannot be renamed. In the Flow Designer, you can copy a component and give it a new name.

Grouping statements and commenting

TQL supports SQL-style comments. For example:

-- The PosApp sample application demonstrates how a credit card
-- payment processor might use Striim to generate reports on current
-- transaction activity by merchant and send alerts when transaction
-- counts for a merchant are higher or lower than average for the time
-- of day.

CREATE APPLICATION PosApp; ...

To make your TQL easier to read, we recommend grouping statements by CQ or flow and preceding each group with explanatory comments as necessary. See the TQL source code for the Sample applications for programmers for examples.

Including one TQL file in another

You can include one TQL file in another by reference. For example, if part of PosApp were in another file, you could include it in the main file using:

@Samples/PosApp/PosApp_part_2.tql;

You must specify the path relative to the Striim program directory or from root. If you do not, the include will fail with a "file not found" error.
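Putting the connection and dependency rules above together, a minimal hypothetical application skeleton declares its components in a valid order like this (all names, file locations, field positions, and property values are assumptions for illustration only):

-- Hypothetical skeleton showing a creation order that satisfies the
-- dependency rules: application, type, stream, source, CQ, target.
CREATE APPLICATION OrderDemo;

CREATE TYPE DemoOrderType (
    orderId String KEY,
    amount Double,
    dateTime DateTime
);

CREATE STREAM DemoOrderStream OF DemoOrderType;

-- RawOrderStream is created automatically by OUTPUT TO, using the
-- built-in WAEvent type of FileReader, so it is not declared first.
CREATE SOURCE DemoOrderSource USING FileReader (
    directory: 'Samples/DemoData',
    wildcard: 'orders.csv'
)
PARSE USING DSVParser ( header: 'yes' )
OUTPUT TO RawOrderStream;

CREATE CQ ParseDemoOrders
INSERT INTO DemoOrderStream
SELECT TO_STRING(data[0]),
       TO_DOUBLE(data[1]),
       TO_DATEF(data[2],'yyyyMMddHHmmss')
FROM RawOrderStream;

CREATE TARGET DemoOrdersOut
USING FileWriter ( filename: 'demoOrders.json' )
FORMAT USING JSONFormatter ()
INPUT FROM DemoOrderStream;

END APPLICATION OrderDemo;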
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/tql-programming-rules-and-best-practices.html", "title": "TQL programming rules and best practices", "language": "en"}} {"page_content": "\n\nLoading and reloading TQL applications during developmentSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideFundamentals of TQL programmingLoading and reloading TQL applications during developmentPrevNextLoading and reloading TQL applications during developmentBefore you can start a new TQL application, you must first create and deploy it (see Managing deployment groups). For example, the following series of console commands creates, deploys, and runs simple.tql:W (admin): @Samples/simple.tql;\n Processing - CREATE APPLICATION simple\n ...\n Processing - END APPLICATION simple\n Elapsed time: 2473 ms\nW (admin): deploy application simple in default;\n Processing - deploy application simple in default\n Elapsed time: 180 m\nW (admin): start application simple;\n Processing - start application simple\n Elapsed time: 73 msBefore editing the TQL file, stop the application:W (admin) > stop application simple;\n Processing - stop application simple\n Elapsed time: 1336 ms After editing the TQL file, to run the new version of the application, you must:undeploy and drop the old versionload, deploy, and start the new versionFor example, the following series of commands will drop the application loaded by the above commands.W (admin) > undeploy application simple;\n Processing - undeploy application simple\n Elapsed time: 358 ms\nW (admin) > drop application simple cascade;\n Now repeat the @, DEPLOY, and START commands you used the first time you ran the application.You can automate this by adding the commands to your application:UNDEPLOY APPLICATION PosApp;\nDROP APPLICATION PosApp CASCADE;\nCREATE APPLICATION PosApp;\n...\nEND APPLICATION PosApp;\nDEPLOY APPLICATION PosApp;\nSTART APPLICATION PosApp;In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-02-09\n", "metadata": {"source": "https://www.striim.com/docs/en/loading-and-reloading-tql-applications-during-development.html", "title": "Loading and reloading TQL applications during development", "language": "en"}} {"page_content": "\n\nParsing the data field of WAEventSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideFundamentals of TQL programmingParsing the data field of WAEventPrevNextParsing the data field of WAEventWAEvent is the data type used by the output stream of many readers. Its data field is an array containing one event's field values. Here is a sample event in WAEvent format from the output stream of CsvDataSource in the PosApp sample application:WAEvent{\n data: [\"COMPANY 1159\",\"IQ6wCy3k7PnAiRAN71ROxcNBavvVoUcwp7y\",\"8229344557372754288\",\"1\",\"20130312173212\",\n\"0614\",\"USD\",\"329.64\",\"2094770823399082\",\"79769\",\"Odessa\"]\n metadata: {\"RecordStatus\":\"VALID_RECORD\",\"FileName\":\"posdata.csv\",\"FileOffset\":154173}\n before: null\n dataPresenceBitMap: \"AAA=\"\n beforePresenceBitMap: \"AAA=\"\n typeUUID: null\n}; For information on metadata, see Using the META() function.dataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.To parse the data array, PosApp uses the following TQL:CREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_STRING(data[1]) AS merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') AS dateTime,\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')) AS hourValue,\n TO_DOUBLE(data[7]) AS amount,\n TO_STRING(data[9]) AS zip\nFROM CsvStream;PosDataStream is created automatically using the data types and AS strings in the SELECT statement.The order of the data[#] functions in the SELECT clause determines the order of the fields in the output. These may be specified in any order: for example, data[1] could precede data[0].Fields not referenced by the the SELECT clause are discarded.The data[#] function counts the fields in the array starting from 0, so in this example the first field in the array (COMPANY 1159) is omitted.Non-string values are converted to the types required by the output stream (as defined by the PosData type) by the TO_DATEF, DHOURS, and TO_DOUBLE functions (see Functions for more information).In the PosDataStream output, the parsed version of the sample event shown above is:merchantId: \"IQ6wCy3k7PnAiRAN71ROxcNBavvVoUcwp7y\"\ndateTime: 1363134732000\nhourValue: 17\namount: 329.64\nzip: \"79769\"See PosApp for more information. See also the discussions of ParseAccessLog and ParseLog4J in MultiLogApp for additional examples.To put the raw, unparsed\u00a0data array into a field in a stream, use this syntax:CREATE CQ CsvRawData\nINSERT INTO PosRawDataStream\nSELECT data AS object[]\nFROM CsvStream;The field type will be an array of Java objects.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. 
All rights reserved. Last modified: 2022-10-31\n", "metadata": {"source": "https://www.striim.com/docs/en/parsing-the-data-field-of-waevent.html", "title": "Parsing the data field of WAEvent", "language": "en"}} {"page_content": "\n\nUsing regular expressions (regex)Skip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideFundamentals of TQL programmingUsing regular expressions (regex)PrevNextUsing regular expressions (regex)Striim supports the use of regular expressions (regex) in your TQL applications. It is important to remember that the Striim implementation of regex is Java-based (see java.util.regex.Pattern), so there are a few things to keep in mind as you develop your regex expressions:The backslash character ( \\ ) is recognized as an escape character in Java strings, so if you want to define something like \\w in regex, use \\\\w in such cases.In regex, \\\\ matches a single backslash literal. Therefore if you want to use the backslash character as a literal in the Striim Java implementation of regex, you must actually use \\\\\\\\.The java.lang.String class provides you with these methods supporting regex: matches(), split(), replaceFirst(), replaceAll(). Note that the String.replace() methods do not support regex.TQL supports the regex syntax and constructs from java.util.regex. Note that this has some differences from POSIX regex. If you are new to using regular expressions, refer to the following resources to get started: java.util.regex.PatternOracle: The Java Tutorials. Lesson: Regular ExpressionsLars Vogel: Java Regex - TutorialYou may use regex in LIKE and NOT LIKE expressions. For example:WHERE ProcessName NOT LIKE '%.tmp%': filter out data from temp filesWHERE instance_applications LIKE '%Apache%': select only applications with Apache in their namesWHERE MerchantID LIKE '45%': select only merchants with IDs that start with 45.The following entry from the MultiLogApp sample Apache access log data includes information about a REST API call in line 4:0: 206.130.134.68\n1: -\n2: AWashington\n3: 25/Oct/2013:11:28:36.960 -0700\n4: GET http://cloud.saas.me/query?type=ChatterMessage&id=01e33d9a-34ee-ccd0-84b9-\n 14109fcf2383&jsessionId=01e33d9a-34c9-1c68-84b9-14109fcf2383 HTTP/1.1\n5: 200\n6: 0\n7: -\n8: Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu \n Chromium/28.0.1500.71 Chrome/28.0.1500.71 Safari/537.36\n9: 1506Regex is also used by the MATCH function. 
The MATCH function in the ParseAccessLog CQ parses the information in line 4 in to extract the session ID:MATCH(data[4], \".*jsessionId=(.*) \")The parsed output is:sessionId: \"01e33d9a-34c9-1c68-84b9-14109fcf2383\"The following, also from MultiLogApp, is an example of the data[2] element of a RawXMLStream WAEvent data array:\"Problem in API call [api=login] [session=01e3928f-e975-ffd4-bdc5-14109fcf2383] \n[user=HGonzalez] [sobject=User]\",\"com.me.saas.SaasMultiApplication$SaasException: \nProblem in API call [api=login] [session=01e3928f-e975-ffd4-bdc5-14109fcf2383] \n[user=HGonzalez] [sobject=User]\\n\\tat com.me.saas.SaasMultiApplication.login\n(SaasMultiApplication.java:1253)\\n\\tat \nsun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)\\n\\tat \nsun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\\n\\tat java.lang.reflect.Method.invoke(Method.java:606)\\n\\tat \ncom.me.saas.SaasMultiApplication$UserApiCall.invoke(SaasMultiApplication.java:360)\\n\\tat \ncom.me.saas.SaasMultiApplication$Session.login(SaasMultiApplication.java:1447)\\n\\tat \ncom.me.saas.SaasMultiApplication.main(SaasMultiApplication.java:1587)\"This is parsed by the ParseLog4J CQ as follows:MATCH(data[2], '\\\\\\\\[api=([a-zA-Z0-9]*)\\\\\\\\]'),\nMATCH(data[2], '\\\\\\\\[session=([a-zA-Z0-9\\\\-]*)\\\\\\\\]'),\nMATCH(data[2], '\\\\\\\\[user=([a-zA-Z0-9\\\\-]*)\\\\\\\\]'),\nMATCH(data[2], '\\\\\\\\[sobject=([a-zA-Z0-9]*)\\\\\\\\]')The parsed output is:api: \"login\"\nsessionId: \"01e3928f-e975-ffd4-bdc5-14109fcf2383\"\nuserId: \"HGonzalez\"\nsobject: \"User\"See Parsing sources with regular expressions, FreeFormTextParser, and MultiFileReader for additional examples.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/using-regular-expressions--regex-.html", "title": "Using regular expressions (regex)", "language": "en"}} {"page_content": "\n\nSending alerts from applicationsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideFundamentals of TQL programmingSending alerts from applicationsPrevNextSending alerts from applicationsSee also Sending alerts about servers and applications.Applications can send alerts via email, Microsoft Teams, Slack, or the web UI. To send alerts, generate a stream of type AlertEvent and use it as the input for a subscription (a kind of target).The syntax for subscriptions is:CREATE SUBSCRIPTION name \nUSING [ EmailAdapter | SlackAlertAdapter | TeamsAlertAdapter | WebAlertAdapter] () \nINPUT FROM Alerts generated by the WebAlertAdapter appear only in the alert counter in the upper right corner of some pages of the Striim web UI. This delivery method is suitable mostly for development purposes since the counter may be reset before the user sees an alert. 
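For example, assuming an AlertEvent stream named AlertStream (such as the one created in the sample code below), a web-UI-only subscription is simply:

-- Minimal sketch: show alerts only in the web UI alert counter.
-- AlertStream must be a stream of type Global.AlertEvent.
CREATE SUBSCRIPTION DevWebAlerts
USING WebAlertAdapter ()
INPUT FROM AlertStream;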
You do not need to specify any properties for this adapter.

Email Adapter properties

The EmailAdapter properties are:

- bccEmailList (java.lang.String): "bcc" address(es) for the alerts (separate addresses with commas) or %%
- ccEmailList (java.lang.String): "cc" address(es) for the alerts (separate addresses with commas) or %%
- contentType (java.lang.String, default text/html; charset=utf-8): the other supported value is text/plain; charset=utf-8
- emailList (java.lang.String): "to" address(es) for the alerts (separate addresses with commas) or %%
- senderEmail (java.lang.String): "from" address for the alerts (if this is not a valid, monitored mailbox, the alert text should instruct the user not to reply) or %%
- smtp_auth (Boolean, default True): set to False if the SMTP server does not require authentication, in which case leave smtpUser and smtpPassword blank
- smtpPassword (com.webaction.security.Password): password for the SMTP account (see Encrypted passwords); leave blank if smtpUser is not specified
- smtpPropertiesName (string): a Striim property set containing SMTP server properties (any properties specified in the EmailAdapter override those in the property set)
- smtpUrl (string): network_name:port for the SMTP server (if port is not specified, defaults to 587)
- smtpUser (string): user name of the account on the SMTP server; leave blank if authentication is not required
- starttls_enable (Boolean, default False): set to True if required by the SMTP server
- subject (string): subject line for the alerts
- threadCount (int, default 4): number of threads on the Striim server to be used to send alerts
- userids (java.lang.String): Striim user(s) to receive alerts at the email address(es) specified in their Striim account properties or %%

The following would create a property set smtpprop which could then be specified as the value for smtpPropertiesName:

CREATE PROPERTYSET smtpprop (
  SMTPUSER:'xx@example.com',
  SmtpPassword:'secret',
  smtpurl:'smtp.example.com',
  threadCount:"5",
  senderEmail:"alertsender@example.com" );

To change properties in an existing property set, see ALTER PROPERTYSET.

Slack Alert Adapter properties

The SlackAlertAdapter properties are:

- Channel Name: name of the channel where the Slack alert adapter posts alert messages.
- OAuth Token: Slack bot user OAuth authorization token for the Slack workspace.

See Configure Slack to receive alerts from Striim for more about these properties.

Teams Alert Adapter properties

The TeamsAlertAdapter properties are:

- Channel URL: a URL that specifies a channel in Microsoft Teams.
- Client ID: a unique identifier for a specific Microsoft Teams application.
- Client Secret: a secret key that authenticates the client.
- Refresh Token: a unique token that enables the generation of a new Client Secret/Refresh Token pair.

See Configure Teams to receive alerts from Striim for more about these properties.

AlertEvent fields

The input stream for a subscription must use the AlertEvent type. Its fields are:

- name (string): reserved
- keyVal (string): for any given keyVal, an alert will be sent for the first event with a flag value of raise. Subsequent events with the same keyVal and a flag value of raise will be ignored until a cancel is received for that keyVal.
- severity (string): valid values: error, warning, or info
- flag (string): valid values: raise or cancel
- message (string): specify the text of the alert, typically passed from a log entry. When the target is Microsoft Teams, the message must not contain newlines.

The following sample code (based on PosApp) generates both types of alerts:

CREATE STREAM AlertStream OF Global.AlertEvent;

CREATE CQ GenerateAlerts
INSERT INTO AlertStream
SELECT n.CompanyName,
    m.MerchantId,
    CASE
      WHEN m.Status = 'OK' THEN 'info'
      ELSE 'warning' END,
    CASE
      WHEN m.Status = 'OK' THEN 'cancel'
      ELSE 'raise' END,
    CASE
      WHEN m.Status = 'OK'
        THEN 'Merchant ' + n.companyName + ' count of ' + m.count +
          ' is back between ' + ROUND_DOUBLE(m.lowerLimit,0) + ' and ' +
          ROUND_DOUBLE(m.upperLimit,0)
      WHEN m.Status = 'TOOHIGH'
        THEN 'Merchant ' + n.companyName + ' count of ' + m.count +
          ' is above upper limit of ' + ROUND_DOUBLE(m.upperLimit,0)
      WHEN m.Status = 'TOOLOW'
        THEN 'Merchant ' + n.companyName + ' count of ' + m.count +
          ' is below lower limit of ' + ROUND_DOUBLE(m.lowerLimit,0)
      ELSE ''
    END
FROM MerchantTxRateWithStatusStream m, NameLookup n
WHERE m.merchantId = n.merchantId;

CREATE SUBSCRIPTION PosAppEmailAlert
USING EmailAdapter (
  SMTPUSER:'sender@example.com',
  SMTPPASSWORD:'********',
  smtpurl:'smtp.gmail.com',
  starttls_enable:'true',
  subject:"test subject",
  emailList:"recipient@example.com,recipient2@example.com",
  senderEmail:"alertsender@example.com"
)
INPUT FROM AlertStream;

CREATE SUBSCRIPTION PosAppWebAlert
USING WebAlertAdapter( )
INPUT FROM AlertStream;

When a merchant's status changes to TOOLOW or TOOHIGH, Striim will send an alert such as, "WARNING - alert from Striim - POSUnusualActivity - 2013-12-20 13:55:14 - Merchant Urban Outfitters Inc. count of 12012 is below lower limit of 13304.347826086958." The "raise" value for the flag field instructs the subscription not to send another alert until the status returns to OK.

Using field values in email alerts

When sending alerts with the EmailAdapter, you can populate the subject, sender address, and recipient addresses with values from the fields of the subscription's input stream.

To do this, first create a custom alert stream with the extra fields you want to use. The first five fields must be identical to Global.AlertEvent. To those, you may add fields containing the subjects, sender addresses, and recipient addresses.

CREATE TYPE CustomEmailAlert (
  name String,
  keyVal String,
  severity String,
  flag String,
  message String,
  emailsubject String,
  senderEmail String,
  receipientList String
);
CREATE STREAM CustomAlertStream OF CustomEmailAlert;

Reference the subject, sender, and recipient fields in the EmailAdapter properties as follows:

CREATE SUBSCRIPTION alertSubscription USING EmailAdapter (
  smtpurl:'localhost:25',
  subject: '%emailsubject%',
  senderEmail:,
  emailList:
)
INPUT FROM MyAlertStream;
CREATE SUBSCRIPTION PosAppCustomEmailAlert
USING EmailAdapter (
  SMTPUSER:'sender@example.com',
  SMTPPASSWORD:'********',
  smtpurl:'smtp.gmail.com',
  starttls_enable:'true',
  subject:'%emailsubject%',
  emailList:'%receipientlist%',
  senderEmail:'%senderEmail%'
)
INPUT FROM AlertStream;

You do not need to use all three. 
For example, you could populate only emailList with field values: subject:\"test subject\",\n emailList:'%receipientlist%',\n senderEmail:\"alertsender@example.com\"\nThe values in the recipientlist field may include multiple email addresses separated by commas (no spaces).In this section: Sending alerts from applicationsEmail Adapter propertiesSlack Alert Adapter propertiesTeams Alert Adapter propertiesAlertEvent fieldsUsing field values in email alertsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-04-07\n", "metadata": {"source": "https://www.striim.com/docs/en/sending-alerts-from-applications.html", "title": "Sending alerts from applications", "language": "en"}} {"page_content": "\n\nHandling exceptionsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideFundamentals of TQL programmingHandling exceptionsPrevNextHandling exceptionsBy default, when Striim encounters a non-fatal exception, it ignores it and continues. You may add an EXCEPTIONHANDLER clause to your CREATE APPLICATION statement to log exceptions and take various actions. The syntax is:CREATE APPLICATION ... EXCEPTIONHANDLER ([:'',...]);\nSupported exceptions are:AdapterExceptionArithmeticExceptionClassCastExceptionConnectionExceptionInvalidDataExceptionNullPointerExceptionNumberFormatExceptionSystemExceptionUnexpectedDDLExceptionUnknownExceptionSupported actions are:IGNORETERMINATE (\"Stop Processing\" in Flow Designer)See also\u00a0Writing exceptions to a WActionStore.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-12-15\n", "metadata": {"source": "https://www.striim.com/docs/en/handling-exceptions.html", "title": "Handling exceptions", "language": "en"}} {"page_content": "\n\nSample applications for programmersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideSample applications for programmersPrevNextSample applications for programmersPosApp demonstrates how a credit card payment processor might use Striim to generate reports on current transaction activity by merchant and send alerts when transaction counts for a merchant are higher or lower than average for the time of day. 
This application is explained in greater detail than the other two, so you should read about it first. The posApp.tql file is also the more extensively commented of the two.MultiLogApp demonstrates how Striim could be used to monitor and correlate logs from web and application server logs from the same web application.The source code and data for the sample applications are installed with the server in \u2026/Striim/Samples. You may also download them from www.striim.com/docs/samples/PosApp.zip or www.striim.com/docs/samples/MultiLogApp.zip.The alternative PosAppThrottled.tql and MultiLogAppThrottled.tql versions introduce delays in the parsing of the source streams in order to simulate real-time data in the dashboards. See the comments in those TQL files for more information.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-01-14\n", "metadata": {"source": "https://www.striim.com/docs/en/sample-applications-for-programmers.html", "title": "Sample applications for programmers", "language": "en"}} {"page_content": "\n\nPosAppSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideSample applications for programmersPosAppPrevNextPosAppThe PosApp sample application demonstrates how a credit card payment processor might use Striim to generate reports on current transaction activity by merchant and send alerts when transaction counts for a merchant are higher or lower than average for the time of day.OverviewIn the web UI, from the top menu, select Apps > View All Apps.If you don't see PosApp anywhere on the page (you may need to expand the Samples group) , select Create App > Import TQL file, navigate to\u00a0Striim/Samples/PosApp, double-click PosApp.tql, enter Samples as the namespace, and click Import.At the bottom right corner of the PosApp tile, select ... > Manage Flow. The Flow Designer displays a graphical representation of the application flow:The following is a simplified diagram of that flow:Step 1: acquire dataThe flow starts with a source:Double-clicking CsvDataSource displays its properties:This is the primary data source for this application. In a real-world application, it would be real-time data. Here, the data comes from a comma-delimited file, posdata.csv. Here are the first two lines of that file:BUSINESS NAME, MERCHANT ID, PRIMARY ACCOUNT NUMBER, POS DATA CODE, DATETIME, EXP DATE, \nCURRENCY CODE, AUTH AMOUNT, TERMINAL ID, ZIP, CITY\nCOMPANY 1,D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu,6705362103919221351,0,20130312173210,\n0916,USD,2.20,5150279519809946,41363,QuicksandIn Striim terms, each line of the file is an event, which in many ways is comparable to a row in a SQL database table, and can be used in similar ways. 
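Expressed in TQL, a source like this one is declared roughly as follows (a sketch based on the properties visible in the UI rather than a copy of PosApp.tql; the directory value in particular is an assumption):

-- Sketch of the PosApp primary source: read posdata.csv with FileReader,
-- parse each line with DSVParser, and emit WAEvents on CsvStream.
-- Property values are assumptions; see Samples/PosApp/PosApp.tql for the
-- actual definition.
CREATE SOURCE CsvDataSource USING FileReader (
    directory: 'Samples/PosApp/appData',
    wildcard: 'posdata.csv',
    positionByEOF: false
)
PARSE USING DSVParser (
    header: 'yes'
)
OUTPUT TO CsvStream;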
Under Parser, click Show Advanced Settings to see the DSVParser properties:The true (toggle on) setting for the Header property indicates that the first line contains field labels that are not to be treated as data.The Output to stream CsvStream uses the WAEvent type associated with DSVParser:The only field used by this application is data, an array that contains the delimited fields.Step 2: filter the data streamCsvDataSource outputs the data to CsvStream, which is the input for the query CsvToPosData:This CQ converts the comma-delimited fields from the source into typed fields in a stream that can be consumed by other Striim components. Here, data refers to the array shown above. The number in brackets specifies a field from the array, counting from zero. Thus data[1] is MERCHANT ID, data[4] is DATETIME, data[7] is AUTH AMOUNT, and data[9] is ZIP.TO_STRING, TO_DATEF, and TO_DOUBLE functions cast the fields as the types to be used in the Output to stream. The DATETIME field from the source is converted to both a dateTime value, used as the event timestamp by the application, and (via the DHOURS function) an integer hourValue, which is used to look up historical hourly averages from the HourlyAveLookup cache.The other fields are discarded. Thus the first line of data from posdata.csv has at this point been reduced to the following values:D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu (merchantId)20130312173210 (DateTime)17 (hourValue)2.20 (amount)41363 (zip)The CsvToPosDemo query outputs the processed data to PosDataStream:PosDataStream assigns the remaining fields the names and data types in the order listed above:PRIMARY ACCOUNT NUMBER to merchantIDDATETIME to dateTimethe DATETIME substring to hourValueAUTH AMOUNT to amountZIP to zipStep 3: define the data setPosDataStream passes the data to the window PosData5Minutes:A window is in many ways comparable to a table in a SQL database, just as the events it contains are comparable to rows in a table. The Mode and Size settings determine how many events the window will contain and how it will be refreshed. With the Mode set to Jumping, this window is refreshed with a completely new set of events every five minutes. For example, if the first five-minute set of events received when the application runs from 1:00 pm through 1:05 pm, then the next set of events will run from 1:06 through 1:10, and so on. If the Mode were set to Sliding, the window continuously adds new events and drops old ones so as to always contain the events of the most recent five minutes.Step 4: process and enhance the dataThe PosData5Minutes window sends each five-minute set of data to the GenerateMerchantTxRateOnly query. As you can see from the following schema diagram, this query is fairly complex:The GenerateMerchantTxRateOnly query combines data from the PosData5Minutes event stream with data from the HourlyAveLookup cache. A cache is similar to a source, except that the data is static rather than real-time. In the real world, this data would come from a periodically updated table in the payment processor's system containing historical averages of the number of transactions processed for each merchant for each hour of each day of the week (168 values per merchant). 
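For reference, the five-minute jumping window described in Step 3 corresponds to TQL along these lines (a sketch; the actual PosApp definition may differ, for example by partitioning by merchant):

-- Sketch of the Step 3 window: slice PosDataStream into five-minute batches,
-- measured by the event timestamps in the dateTime field.
CREATE JUMPING WINDOW PosData5Minutes
OVER PosDataStream
KEEP WITHIN 5 MINUTE ON dateTime;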
In this sample application, the source is a file, hourlyData.txt, which to simplify the sample data set has only 24 values per merchant, one for each hour in the day.For each five-minute set of events received from the PosData5Minutes window, the GenerateMerchantTxRateOnly query outputs one event for each merchantID found in the set to MerchantTxRateOnlyStream, which applies the MerchantTxRate type. The easiest way to summarize what is happening in the above diagram is to describe where each of the fields in the MerchantTxRateOnlySteam comes from:fielddescriptionTQLmerchantIdthe merchantID field from PosData5MinutesSELECT p.merchantIDzipthe zip field from PosData5MinutesSELECT ... p.zipstartTimethe dateTime field for the first event for the merchantID in the five-minute set from PosData5MinutesSELECT ... FIRST(p.dateTime)countcount of events for the merchantID in the five-minute set from PosData5MinutesSELECT ... COUNT(p.merchantID)totalAmountsum of amount field values for the merchantID in the five-minute set from PosData5MinutesSELECT ... SUM(p.amount)hourlyAvethe hourlyAve value for the current hour from HourlyAveLookup, divided by 12 to give the five-minute averageSELECT \u2026 l.hourlyAve / 12 ...\n WHERE ...p.hourValue = l.hourValueupperLimitthe hourlyAve value for the current hour from HourlyAveLookup, divided by 12, then multiplied by 1.15 if the value is 200 or less, 1.2 if the value is between 201 and 800, 1.25 if the value is between 801 and 10,000, or 1.5 if the value is over 10,000SELECT \u2026 l.hourlyAve / 12 * CASE\n WHEN l.hourlyAve / 12 > 10000 THEN 1.15 \n WHEN l.hourlyAve / 12 > 800 THEN 1.2 \n WHEN l.hourlyAve / 12 > 200 THEN 1.25 \n ELSE 1.5 ENDlowerLimitthe hourlyAve value for the current hour from HourlyAveLookup, divided by 12, then divided by 1.15 if the value is 200 or less, 1.2 if the value is between 201 and 800, 1.25 if the value is between 801 and 10,000, or 1.5 if the value is over 10,000SELECT \u2026 l.hourlyAve / 12 / CASE\n WHEN l.hourlyAve / 12 > 10000 THEN 1.15 \n WHEN l.hourlyAve / 12 > 800 THEN 1.2 \n WHEN l.hourlyAve / 12 > 200 THEN 1.25\n ELSE 1.5 ENDcategory, statusplaceholders for values to be addedSELECT ... ''The MerchantTxRateOnlyStream passes this output to the GenerateMerchantTxRateWithStatus query, which populates the category and status fields by evaluating the count, upperLimit, and lowerLimit fields:\nSELECT merchantId,\n zip,\n startTime,\n count,\n totalAmount,\n hourlyAve,\n upperLimit,\n lowerLimit,\n CASE\n WHEN count > 10000 THEN 'HOT'\n WHEN count > 800 THEN 'WARM'\n WHEN count > 200 THEN 'COOL'\n ELSE 'COLD' END,\n CASE\n WHEN count > upperLimit THEN 'TOOHIGH'\n WHEN count < lowerLimit THEN 'TOOLOW'\n ELSE 'OK' END\nFROM MerchantTxRateOnlyStream\nThe category values are used by the Dashboard to color-code the map points. 
The status values are used by the GenerateAlerts query.The output from the GenerateMerchantTxRateWithStatus query goes to MerchantTxRateWithStatusStream.Step 5: populate the dashboardThe GenerateWactionContent query enhances the data from MerchantTxRateWithStatusStream with the merchant's company, city, state, and zip code, and the latitude and longitude to position the merchant on the map, then populates the MerchantActivity WActionStore:In a real-world application, the data for the NameLookup cache would come from a periodically updated table in the payment processor's system, but the data for the ZipLookup cache might come from a file such as the one used in this sample application.When the application finishes processing all the test data, the WActionStore will contain 423 WActions, one for each merchant. Each WAction includes the merchant's context information (MerchantId, StartTime, CompanyName, Category, Status, Count, HourlyAve, UpperLimit, LowerLimit, Zip, City, State, LatVal, and LongVal) and all events for that merchant from the MerchantTxRateWithStatusStream (merchantId, zip, String, startTime, count, totalAmount, hourlyAvet, upperLimit, lowerLimit, category, and status for each of 40 five-minute blocks). This data is used to populate the dashboard, as detailed in PosAppDash.Step 6: trigger alertsMerchantTxRateWithStatusStream sends the detailed event data to the GenerateAlerts query, which triggers alerts based on the Status value:When a merchant's status changes to TOOLOW or TOOHIGH, Striim will send an alert such as, \"WARNING - alert from Striim - POSUnusualActivity - 2013-12-20 13:55:14 - Merchant Urban Outfitters Inc. count of 12012 is below lower limit of 13304.347826086958.\" The \"raise\" value for the flag field instructs the subscription not to send another alert until the status returns to OK.When the status returns to OK, Striim will send an alert such as, \"INFO - alert from Striim - POSUnusualActivity - 2013-12-20 14:02:27 - Merchant Urban Outfitters Inc. count of 15853 is back between 13304.347826086958 and 17595.0.\" The cancel value for the flag field instructs the subscription to send an alert the next time the status changes to TOOLOW or TOOHIGH. See Sending alerts from applications for more information on info, warning, raise, and cancel.In this section: PosAppOverviewStep 1: acquire dataStep 2: filter the data streamStep 3: define the data setStep 4: process and enhance the dataStep 5: populate the dashboardStep 6: trigger alertsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/posapp.html", "title": "PosApp", "language": "en"}} {"page_content": "\n\nPosAppDashSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideSample applications for programmersPosAppDashPrevNextPosAppDashSee Viewing dashboards for instructions on getting to the PosAppDash Main page shown above.The following discussion gives a brief overview of the underlying queries and settings for some of the visualizations in this dashboard. For more details, see Dashboard Guide. To explore the settings yourself, click the Edit button at the top right.Main page vector mapTo view a query for a visualization, click Edit Query. When done, click x to exit.The basic query for this map is SELECT * FROM MerchantActivity ORDER BY StartTime, which gets all fields from the MerchantActivity WActionStore. The ORDER BY clause ensures that the map looks the same every time you run the application.To view the properties for a visualization, click Configure. When done, click x to exit.The fields are assigned to map properties as follows:Data Retention Type = Current means the map shows the latest events for each map point. Group By CompanyName means each company gets a single dot on the map. Color By Category means the colors are based on the Category field value (more about that below). The Longitude and Latitude properties are set to the fields with the coordinate values. Value = Count means that the size of the dot on the map varies according to the number of transactions in the company's latest event.The colors of the map points are set manually:If the map had a legend, it would use the alias strings:See Vector map for more information about these settings.The map's title is a \"value\" visualization, which can add text, query results, and almost any valid HTML to a dashboard. Its basic query is select count(distinct(MerchantId)) as mcount from MerchantActivity, which returns the number of merchants displayed on the map (currently 423). The underlying code is:
Latest event count for \n{{ mcount }} merchants (map and scatter plot)
See Value (text label) for more information.Main page scatter plotThe scatter plot uses the same basic query and Group By Company setting as the map plus SAMPLE BY Count MAXLIMIT 500. The Data Retention Type is All, so it shows a range of Count values over time for each company. If you changed Data Retention Type to Current, it would show only the most recent Count value for each company (the same data as on the map):Main page bar chartHover on the upper right of the chart to access a menu where you can:download its .csv datasearch for textfilter by attributes or timeThe basic query for this bar chart is\u00a0select\u00a0sum(Count) as Count, State from MerchantActivity group by State order by sum(Count) desc limit 10, which returns the top ten total counts (for all merchants) by state. For more information, see Bar chart.Main page heat mapClick the link to view additional representations of the data, such as pie charts and maps.The heat map for PosAppDash represents the total counts for 12 combinations of Status and Category, with blue representing the lowest counts and red the highest. See Heat map for a discussion of the query and settings.Company details page line chartIf you are in edit mode, click Done to stop editing. Then, in the map, click on the big red dot for Recreational Equipment in California to drill down to the \"Company details\" page.The green line represents the average count, red and orange represent the upper and lower bounds, and blue is the count itself. You can see that between 7:00 and 7:30 the count dropped below the lower bound, which means its current status is TOOLOW.The basic query is select CompanyName, Count, UpperLimit, LowerLimit, HourlyAve, StartTime, City, State, Zip, MerchantId, Status, Category from MerchantActivity. The queries also include where MerchantID=:mid, which limits the results to the company you drilled down on (see Creating links between pages). Data Retention Type is All because we\u2019re plotting the values over time. Group By is blank because we are not grouping the data. Color By is blank because colors are set manually as for the map.The horizontal axis for all series is is StartTime. The vertical axis uses a different field for each of the four series (Count, UpperLimit, HourlyAve, and LowerLimit). Above, series 1 (Count, the blue line) is selected. The Series Alias defines the label for the legend. To change settings for another series, click its number in the selector at upper right.Here we see the settings for series 2 (UpperLimit, the red line).For information on the other visualizations on this page, see Gauge, Icon, and Table.Interactive HeatMap page donut chartsTo return to the main page, click PosAppDash.Then click click here for additional visualizations in the heat map title.The basic query for both donut charts is SELECT count(*) AS Count, Status, Category FROM MerchantActivity.The left chart (settings shown above) shows the total Count for each Status (OK, TOOLOW, TOOHIGH). The right chart shows the total Count for each Category (COLD, COOL, WARM, HOT).See Making visualizations interactive for discussion of how clicking segments in the donut charts filters the data for the page's vector map and heat map.In this section: PosAppDashMain page vector mapMain page scatter plotMain page bar chartMain page heat mapCompany details page line chartInteractive HeatMap page donut chartsSearch resultsNo results foundWould you like to provide feedback? 
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/posappdash.html", "title": "PosAppDash", "language": "en"}} {"page_content": "\n\nMultiLogAppSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideSample applications for programmersMultiLogAppPrevNextMultiLogAppThis sample application shows how Striim could be used to monitor and correlate logs from web and application server logs from the same web application. The following is a relatively high-level explanation. For a more in-depth examination of a sample application with more detail about the components and how they interact, see PosApp.MultiLogApp contains 12 flows that analyze the data from one or both logs and take appropriate actions:MonitorLogs parses the log files to create two event streams (AccessStream for access log events and Log4JStream for application log events) used by the other flows. See the detailed discussion below.ErrorsAndWarnings selects application log error and warning messages for use by the ErrorHandling and WarningHandling flows, and creates a sliding window containing the 300 most recent errors and warnings for use by the LargeRTCheck and ZeroContentCheck flows, which join it with web server data.The following flows send alerts regarding the web server logs and populate the dashboard's Overview page world map and the Detail - UnusualActivity page:HackerCheck sends an alert when an access log srcIp value is on a blacklist.LargeRTCheck sends an alert when an access log responseTime value exceeds 2000 microseconds.ProxyCheck sends an alert when an access log srcIP value is on a list of suspicious proxies.ZeroContentCheck sends an alert when an access log entry's code value is 200 (that is, the HTTP request succeeded) but the size value is 0 (the return had no content).The following flows send alerts regarding the application server log and populate the dashboard's Overview page pie chart and API detail pages:ErrorHandling sends an alert when an error message appears in the application server log.WarningHandling sends an alert once an hour with the count of warnings for each API call for which there has been at least one alert.InfoFlow joins application log events with user information from the MLogUserLookup cache to create the ApiEnrichedStream used by ApiFlow, CompanyApiFlow, and UserApiFlow.ApiFlow populates the Detail - ApiActivity page.CompanyApiFlow populates the Detail - CompanyApiActivity page and the bar chart on the Overview page. It also sends an alert when an API call is used by a company more than 1500 times during the flow's one-hour jumping window.UserApiFlow populates the dashboard's Detail - UserApiActivity page and the US map on the Overview page. 
It also sends an alert when an API call is used by a user more than 125 times during the flow's one-hour window.MonitorLogs: web server log dataThe web server logs are in Apache NCSA extended/combined log format plus response time:\"%h %l %u %t \\\"%r\\\" %>s %b \\\"%{Referer}i\\\" \\\"%{User-agent}i\\\" %D\"(See apache.org for more information.) Here are four sample log entries:216.103.201.86 - EHernandez [10/Feb/2014:12:13:51.037 -0800] \"GET http://cloud.saas.me/login&jsessionId=01e3928f-e059-6361-bdc5-14109fcf2383 HTTP/1.1\" 200 21560 \"-\" \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)\" 1606\n216.103.201.86 - EHernandez [10/Feb/2014:12:13:52.487 -0800] \"GET http://cloud.saas.me/create?type=Partner&id=01e3928f-e05a-9be1-bdc5-14109fcf2383&jsessionId=01e3928f-e059-6361-bdc5-14109fcf2383 HTTP/1.1\" 200 63523 \"-\" \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)\" 1113\n216.103.201.86 - EHernandez [10/Feb/2014:12:13:52.543 -0800] \"GET http://cloud.saas.me/query?type=ChatterMessage&id=01e3928f-e05a-9be2-bdc5-14109fcf2383&jsessionId=01e3928f-e059-6361-bdc5-14109fcf2383 HTTP/1.1\" 200 46556 \"-\" \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)\" 1516\n216.103.201.86 - EHernandez [10/Feb/2014:12:13:52.578 -0800] \"GET http://cloud.saas.me/retrieve?type=ContractHistory&id=01e3928f-e05a-9be3-bdc5-14109fcf2383&jsessionId=01e3928f-e059-6361-bdc5-14109fcf2383 HTTP/1.1\" 200 44556 \"-\" \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)\" 39In MultiLogApp, these logs are read by AccessLogSource:CREATE SOURCE AccessLogSource USING FileReader (\n directory:'Samples/MultiLogApp/appData',\n wildcard:'access_log',\n blocksize: 10240,\n positionByEOF:false\n)\nPARSE USING DSVParser (\n columndelimiter:' ',\n ignoreemptycolumn:'Yes',\n quoteset:'[]~\"',\n separator:'~'\n)\nOUTPUT TO RawAccessStream;The log format is space-delimited, so the columndelimiter value is one space. With these quoteset and separator values, both square brackets and double quotes are recognized as delimiting strings that may contain spaces. With these settings, the first log entry above is output as a WAEvent data array with the following values:\"216.103.201.86\",\n\"-\",\n\"EHernandez\",\n\"10/Feb/2014:12:13:51.037 -0800\",\n\"GET http://cloud.saas.me/login&jsessionId=01e3928f-e059-6361-bdc5-14109fcf2383 HTTP/1.1\",\n\"200\",\n\"21560\",\n\"-\",\n\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)\",\n\"1606\"This in turn is processed by the ParseAccessLog CQ:CREATE CQ ParseAccessLog \nINSERT INTO AccessStream\nSELECT data[0],\n data[2],\n MATCH(data[4], \".*jsessionId=(.*) \"),\n TO_DATE(data[3], \"dd/MMM/yyyy:HH:mm:ss.SSS Z\"),\n data[4],\n TO_INT(data[5]),\n TO_INT(data[6]),\n data[7],\n data[8],\n TO_INT(data[9])\nFROM RawAccessStream;After the AccessLogEntry type is applied, the event looks like this:srcIp: \"216.103.201.86\"\nuserId: \"EHernandez\"\nsessionId: \"01e3928f-e059-6361-bdc5-14109fcf2383\"\naccessTime: 1392063231037\nrequest: \"GET http://cloud.saas.me/login&jsessionId=01e3928f-e059-6361-bdc5-14109fcf2383 HTTP/1.1\"\ncode: 200\nsize: 21560\nreferrer: \"-\"\nuserAgent: \"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)\"\nresponseTime: 1606The web server log data is now in a format that Striim can analyze. AccessStream is used by the HackerCheck, LargeRTCheck, ProxyCheck, and ZeroContentCheck flows.MonitorLogs: application server log dataThe application server logs are in Apache's Log4J format. 
Log4J is a standard Java logging framework used by many web-based applications. In a real-world implementation, this application could be reading many log files on many hosts. Here is a sample message:\n\n\n\nLog4JSource retrieves data from \u2026/Striim/Samples/MultiLOgApp/appData/log4jLog. This file contains around 1.45 million errors, warnings, and informational messages. The XMLParser portion of Log4JSource specifies the portions of the message that will be used by this application:CREATE SOURCE Log4JSource USING FileReader (\n directory:'Samples/MultiLogApp/appData',\n wildcard:'log4jLog.xml',\n positionByEOF:false\n) \nPARSE USING XMLParser(\n rootnode:'/log4j:event',\n columnlist:'log4j:event/@timestamp,\n log4j:event/@level,\n log4j:event/log4j:message,\n log4j:event/log4j:throwable,\n log4j:event/log4j:locationInfo/@class,\n log4j:event/log4j:locationInfo/@method,\n log4j:event/log4j:locationInfo/@file,\n log4j:event/log4j:locationInfo/@line'\n)\nOUTPUT TO RawXMLStream;For example, for the sample log message above, log4j:event/@level returns WARN and log4j:event/log4j:locationInfo/@line returns 1133. These elements are output as a WAEvent data array with the following values:\"1392067731765\",\n\"ERROR\",\n\"Problem in API call [api=login] [session=01e3928f-e975-ffd4-bdc5-14109fcf2383] [user=HGonzalez] [sobject=User]\",\"com.me.saas.SaasMultiApplication$SaasException: Problem in API call [api=login] [session=01e3928f-e975-ffd4-bdc5-14109fcf2383] [user=HGonzalez] [sobject=User]\\n\\tat com.me.saas.SaasMultiApplication.login(SaasMultiApplication.java:1253)\\n\\tat sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)\\n\\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\\n\\tat java.lang.reflect.Method.invoke(Method.java:606)\\n\\tat com.me.saas.SaasMultiApplication$UserApiCall.invoke(SaasMultiApplication.java:360)\\n\\tat com.me.saas.SaasMultiApplication$Session.login(SaasMultiApplication.java:1447)\\n\\tat com.me.saas.SaasMultiApplication.main(SaasMultiApplication.java:1587)\",\n\"com.me.saas.SaasMultiApplication\",\n\"login\",\n\"SaasMultiApplication.java\",\n\"1256\"This array in turn is processed by the ParseLog4J CQ:CREATE CQ ParseLog4J\nINSERT INTO Log4JStream\nSELECT TO_DATE(TO_LONG(data[0])),\n data[1],\n data[2], \n MATCH(data[2], '\\\\\\\\[api=([a-zA-Z0-9]*)\\\\\\\\]'),\n MATCH(data[2], '\\\\\\\\[session=([a-zA-Z0-9\\\\-]*)\\\\\\\\]'),\n MATCH(data[2], '\\\\\\\\[user=([a-zA-Z0-9\\\\-]*)\\\\\\\\]'),\n MATCH(data[2], '\\\\\\\\[sobject=([a-zA-Z0-9]*)\\\\\\\\]'),\n data[3],\n data[4],\n data[5],\n data[6],\n data[7]\nFROM RawXMLStream;The elements in the array are numbered from zero, so data[0] returns the timestamp, data[1] returns the level, and so on. The MATCH functions use regular expressions to return the api, session, user, and sobject portions of the message string. (See Using regular expressions (regex) for discussion of the multiple escapes for [ and ] in the regular expressions.) 
After processing by the CQ, the event looks like this:logTime: 1392067731765\nlevel: \"ERROR\"\nmessage: \"Problem in API call [api=login] [session=01e3928f-e975-ffd4-bdc5-14109fcf2383] [user=HGonzalez] [sobject=User]\"\napi: \"login\"\nsessionId: \"01e3928f-e975-ffd4-bdc5-14109fcf2383\"\nuserId: \"HGonzalez\"\nsobject: \"User\"\nxception: \"com.me.saas.SaasMultiApplication$SaasException: Problem in API call [api=login] [session=01e3928f-e975-ffd4-bdc5-14109fcf2383] [user=HGonzalez] [sobject=User]\\n\\tat com.me.saas.SaasMultiApplication.login(SaasMultiApplication.java:1253)\\n\\tat sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)\\n\\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\\n\\tat java.lang.reflect.Method.invoke(Method.java:606)\\n\\tat com.me.saas.SaasMultiApplication$UserApiCall.invoke(SaasMultiApplication.java:360)\\n\\tat com.me.saas.SaasMultiApplication$Session.login(SaasMultiApplication.java:1447)\\n\\tat com.me.saas.SaasMultiApplication.main(SaasMultiApplication.java:1587)\"\nclassName: \"com.me.saas.SaasMultiApplication\"\nmethod: \"login\"\nfileName: \"SaasMultiApplication.java\"\nlineNum: \"1256\"The application server log data is now in a format that Striim can analyze. Log4JStream is used by the ErrorsAndWarnings and InfoFlow flows.ErrorsAndWarningsThe CQ GetLog4JErrorWarning filters Log4JStream, selecting only WARN and ERROR messages and discarding all others.CREATE CQ GetLog4JErrorWarning\nINSERT INTO Log4ErrorWarningStream\nSELECT l FROM Log4JStream l\nWHERE l.level = 'ERROR' OR l.level = 'WARN';Log4ErrorWarningStream is used by the ErrorHandling and WarningHandling flows and by the Log4JErrorWarningActivity sliding window, which contains the most recent 300 events:CREATE WINDOW Log4JErrorWarningActivity \nOVER Log4ErrorWarningStream KEEP 300 ROWS;This window is used by the LargeRTCheck and ZeroContentCheck flows.HackerCheckThis flow sends an alert when an access log srcIp value is on a blacklist. 
The BlackListLookup cache contains the blacklist:CREATE CACHE BlackListLookup using FileReader (\n directory: 'Samples/MultiLogApp/appData',\n wildcard: 'multiLogBlackList.txt'\n)\nPARSE USING DSVParser ( )\nQUERY (keytomap:'ip') OF IPEntry;The CQ FindHackers selects access log events that match a blacklist entry:CREATE CQ FindHackers\nINSERT INTO HackerStream\nSELECT ale \nFROM AccessStream ale, BlackListLookup bll\nWHERE ale.srcIp = bll.ip;The CQ SendHackingAlerts sends an alert for each such event:CREATE CQ SendHackingAlerts \nINSERT INTO HackingAlertStream \nSELECT 'HackingAlert', ''+accessTime, 'warning', 'raise',\n 'Possible Hacking Attempt from ' + srcIp + ' in ' + IP_COUNTRY(srcIp)\nFROM HackerStream;\n\nCREATE SUBSCRIPTION HackingAlertSub \nUSING WebAlertAdapter( ) \nINPUT FROM HackingAlertStream;This flow also creates the UnusualActivity WActionStore that populates various charts and tables on the dashboard:CREATE TYPE UnusualContext (\n typeOfActivity String,\n accessTime DateTime,\n accessSessionId String,\n srcIp String KEY,\n userId String,\n country String,\n city String,\n lat double,\n lon double\n);\nCREATE WACTIONSTORE UnusualActivity \nCONTEXT OF UnusualContext ...The CQ GenerateHackerContext populates UnusualActivity:CREATE CQ GenerateHackerContext\nINSERT INTO UnusualActivity\nSELECT 'HackAttempt', accessTime, sessionId, srcIp, userId,\n IP_COUNTRY(srcIp), IP_CITY(srcIP), IP_LAT(srcIP), IP_LON(srcIP)\nFROM HackerStream\nLINK SOURCE EVENT;HackAttempt is a literal string that identifies the type of activity. That will distinguish events from this flow from those from the three other flows that populate UnusualActivity.LargeRTCheckLargeRTCheck sends an alert whenever an access log responseTime value exceeds 2000 microseconds.CREATE CQ FindLargeRT\nINSERT INTO LargeRTStream\nSELECT ale\nFROM AccessStream ale\nWHERE ale.responseTime > 2000;The alert code is similar to HackerCheck's.The typeOfActivity string for events written to the UnusualActivity WActionStore is LargeResponseTime.ProxyCheckProxyCheck sends an alert when an access log srcIP value is on a list of suspicious proxies. This works exactly like HackerCheck but with a different list. The typeOfActivity string for events written to the UnusualActivity WActionStore is ProxyAccess.ZeroContentCheckZeroContentCheck sends an alert when an access log entry's code value is 200 (that is, the HTTP request succeeded) but the size value is 0 (the return had no content).CREATE CQ FindZeroContent\nINSERT INTO ZeroContentStream\nSELECT ale\nFROM AccessStream ale\nWHERE ale.code = 200 AND ale.size = 0;The alert code is similar to HackerCheck's.The typeOfActivity string for events written to the UnusualActivity WActionStore is ZeroContent).ErrorHandlingThis flow sends an alert immediately when an error appears in Log4ErrorWarningStream.CREATE CQ GetErrors \nINSERT INTO ErrorStream \nSELECT log4j \nFROM Log4ErrorWarningStream log4j WHERE log4j.level = 'ERROR';\n\nCREATE CQ SendErrorAlerts \nINSERT INTO ErrorAlertStream \nSELECT 'ErrorAlert', ''+logTime, 'error', 'raise', 'Error in log ' + message \nFROM ErrorStream;CQ GetErrors discards all WARN messages and passes only ERROR messages. 
In CQ SendErrorAlerts, since the key value is logTime (which is different for every event) and the flag is raise (see Sending alerts from applications), an alert will be sent for every message in ErrorStream.WarningHandlingThis flow sends an alert once an hour with the count of warnings for each API call for which there has been at least one warning. The following code creates a one-hour jumping window of application log warning messages:CREATE CQ GetWarnings \nINSERT INTO WarningStream \nSELECT log4j \nFROM Log4ErrorWarningStream log4j WHERE log4j.level = 'WARN';\n\nCREATE JUMPING WINDOW WarningWindow \nOVER WarningStream KEEP WITHIN 60 MINUTE ON logTime;The HAVING clause in the CQ SendWarningAlerts filters out API calls that have had no warnings.CREATE CQ SendWarningAlerts \nINSERT INTO WarningAlertStream \nSELECT 'WarningAlert', ''+logTime, 'warning', 'raise', \n COUNT(logTime) + ' Warnings in log for api ' + api \nFROM WarningWindow \nGROUP BY api \nHAVING count(logTime) > 1;InfoFlow, APIFlow, CompanyApiFlow, and UserApiFlowInfoFlow joins application log INFO events with user information from the MLogUserLookup cache:CREATE CQ GetInfo \nINSERT INTO InfoStream \nSELECT log4j \nFROM Log4JStream log4j WHERE log4j.level = 'INFO';\n\nCREATE CQ GetUserDetails \nINSERT INTO ApiEnrichedStream \nSELECT a.userId, a.api, a.sobject, a.logTime,\n u.userName, u.company, u.userZip, u.companyZip \nFROM InfoStream a, MLogUserLookup u \nWHERE a.userId = u.userId;Otherwise this portion of the application is generally similar to PosApp. APIFlow, CompanyApiFlow, and UserAPIFlow populate dashboard charts and send alerts as described in the summary above.In this section: MultiLogAppMonitorLogs: web server log dataMonitorLogs: application server log dataErrorsAndWarningsHackerCheckLargeRTCheckProxyCheckZeroContentCheckErrorHandlingWarningHandlingInfoFlow, APIFlow, CompanyApiFlow, and UserApiFlowSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/multilogapp.html", "title": "MultiLogApp", "language": "en"}} {"page_content": "\n\nMultiLogDashSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideSample applications for programmersMultiLogDashPrevNextMultiLogDashThis dashboard includes only visualization types previously discussed in PosAppDash. There are numerous examples of tables that may be instructive.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/multilogdash.html", "title": "MultiLogDash", "language": "en"}} {"page_content": "\n\nUsing source and target adapters in applicationsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideUsing source and target adapters in applicationsPrevNextUsing source and target adapters in applicationsAn adapter is a process that connects the Striim platform to a specific type of external application or file. Which adapter is selected determines which properties must be specified for a source or target.For a list of readers, see Readers overview.For a list of writers, see Writers overview.Adapter property data typesAdapter properties use the same Supported data types as TQL, plus Encrypted passwords.Some property data types are enumerated: that is, only documented values are allowed. If setting properties in TQL, be careful not to use other values for these properties.Connecting with sources and targets over the internetThere are several ways to connect with sources and targets over the internet.... using cloud provider keysSome cloud sources and targets, such as Cosmos DB, secure their connections using keys. No additional configuration is required on your part, you simply provide the appropriate key in the source or target properties.... using an SSH tunnelSee Using an SSH tunnel to connect to a source or target.... by adding an inbound port rule to your firewall or cloud security groupIn the firewall or cloud security group for your source or target, create an inbound port rule for Striim's IP address and the port for your database (typically 3306 for MariaDB or MySQL, 1521 for Oracle, 5432 for PostgreSQL, or 1433 for SQL Server).To get the IP address for a Striim Cloud service:In Striim Cloud Console, go to the Services page.Next to the service, click More and select Security.Click the Copy IP icon next to the IP address.... using port forwardingIn your router configuration, create a port forwarding rule for your database's port. If supported by your router, set the source IP to your database's IP address and the target IP to Striim's IP address (which you can get as described above).Encrypted passwordsStriim encrypts adapter properties of the type com.webaction.security.Password when the adapter is created or altered and decrypts them when providing the values for authentication by a source or target host or service. The cleartext value is not shown in the UI or exported TQL. See also CREATE PROPERTYVARIABLE.If you are using Oracle JDK 8 or OpenJDK 8 version 1.8.0_161 or later, encryption will be AES-256. With earlier versions, encryption will be AES-128.To specify a cleartext property in TQL, include\u00a0Password_encrypted: false in the adapter properties. This will cause the compiler to encrypt the value when the TQL is loaded.To encrypt a password for use in TQL, use \u00a0striim/bin/passwordEncryptor.sh . If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped:\u00a0, . 
_ + : @ % / -When exporting TQL, you may protect encrypted passwords by specifying a passphrase, which you will need to provide when importing the TQL. This will allow import to a different Striim cluster. Alternatively, you may export without a passphrase, in which case the encrypted passwords in the exported TQL can be decrypted only when imported to the same cluster.Encrypting other property valuesSee Using vaults.Setting rowdelimiter valuesDefines the newline string to be used to identify or separate lines when parsing or formatting a file.\\n (default) is ASCII 010 (line feed / LF), used by UNIX, Linux, and Mac OS X\\r is ASCII 013 (carriage return / CR), used by earlier versions of Mac OS and still used by Excel for Mac when exporting text files\\r\\n is CR+LF, used by WindowsUsing non-default case and special characters in table identifiersStriim supports table and column names with non-default case and/or containing special characters in Tables property values for the following databases when read by their own CDC readers, Database Reader, or Incremental Batch Reader and when written to by Database Writer:MariaDBMySQLOracle DatabaseOracle GoldenGatePostgreSQLSQL ServerStriim also supports special characters inAzure Synapse WriterSnowflake WriterSupported special characterscharacternameASCII codenotesspace32!exclamation mark33#number sign35not supported in Azure Synapse table names$dollar36%percent sign37not supported in Azure Synapse table names; also see note below&ampersand38(open parenthesis40)close parenthesis41+plus43not supported in Azure Synapse table names,comma44-hyphen45:colon58;semicolon59<less than60=equals61>greater than62?question mark63not supported in Azure Synapse table names@at symbol64[opening bracket91not supported for MSJet]closing bracket93not supported for MSJet^caret94_underscore95With most sources and targets, this is supported without escaping the name in double quotes.{opening brace123|vertical bar124}closing brace125~tilde126Notes on using special charactersIdentifiers containing special characters must be escaped using double quotes: for example, MySchema.\"My@Table\"The following characters must be further escaped as follows:characterexample identifier in databaseexample escaped in TQLexample escaped in UIdouble quote*ab\"c\"ab\\\\\"c\"\"ab\\\"c\"backslashab\\c\"ab\\\\\\\\c\"\"ab\\\\c\"percent*ab%c\"ab\\\\%c\"\"ab\\%c\"*not supported in Azure Synapse table namesIn multi-part table names, each identifier containing special characters must be escaped separately: for example, \"My@Schema\".\"My@Table\".In three-part table names, special characters are not supported the first part: for example, MyDB.\"My@Schema\".\"My@Table\".When table names are not escaped, Striim will use the database's default case. For example, If an Oracle table is named MyTable, to read it you must specify the Tables property as \"MyTable\". If you omit the double quotes, Striim will attempt to read MYTABLE and will fail with an error that the table is not found.Special characters are supported in the ColumnMap function, for example, ColumnMap(\"ID\"=\"Pid\").When replicating data, if the source and target Tables properties use the % wildcard in double quotes (for example, \"My@Schema\".\"%\", case and special characters are preserved, provided the WAEvent output of the source is not parsed (see Parsing the data field of WAEvent) before reaching the target. If the output of the source is parsed, special characters will be lost unless stored in a field as strings. 
For example:CREATE TYPE ParsedDataType(\n TableName String,\n ...\n);\nCREATE STREAM OracleTypedStream OF ParsedDataType;\nCREATE CQ ParseOracleCDCStream\n INSERT INTO OracleParsedStream\n SELECT META(x, \"TableName\").toString(),\n ...\n FROM OracleCDCStream x;In this section: Using source and target adapters in applicationsAdapter property data typesConnecting with sources and targets over the internetEncrypted passwordsEncrypting other property valuesSetting rowdelimiter valuesUsing non-default case and special characters in table identifiersSupported special charactersNotes on using special charactersSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-02\n", "metadata": {"source": "https://www.striim.com/docs/en/using-source-and-target-adapters-in-applications.html", "title": "Using source and target adapters in applications", "language": "en"}} {"page_content": "\n\nData type support & mapping for schema conversion & evolutionSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideData type support & mapping for schema conversion & evolutionPrevNextData type support & mapping for schema conversion & evolutionThe data types and mappings detailed in this section are used by:schema evolution (see Handling schema evolution)Handling schema evolutionNoteSchema evolution with CDDL Action set to Process in both source and target requires that all data types in your source tables are supported by the target.templates that support auto schema conversion (see Creating apps using templates)Creating apps using templatesIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-05-04\n", "metadata": {"source": "https://www.striim.com/docs/en/data-type-support---mapping-for-schema-conversion---evolution.html", "title": "Data type support & mapping for schema conversion & evolution", "language": "en"}} {"page_content": "\n\nData type support & mapping for MariaDB and MySQL sourcesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideData type support & mapping for schema conversion & evolutionData type support & mapping for MariaDB and MySQL sourcesPrevNextData type support & mapping for MariaDB and MySQL sourcesMariaDB / MySQL source typeBigQueryDatabricksMySQLOraclePostgreSQLSnowflakeSpannerSQL Server / Azure Synapsebigint / bigint unsignedint64 / numericbigintbigint / bigint unsignedintbigInt / double precisionintegerint64 / float64bigInt / numeric(38)binarybytesbinarybinary(s)blobbyteabinarybytesbinary(s)bit / bit(n)not supportednot supportedbit / bit(n)blobbit / bit varying(s)binarystringvarchar(n)blobbytesbinarylongblobblobbyteabinarybytes(65535)bytes(max)varbinary(p) if <= 8000varbinary(max) if > 8000char(s)stringstringcharacter(s) if s<255longtext if s>255character(s) if s<=2000clob if s>2000character(s)character(s)string(s)character(s) if s<=8000varchar(max) if s>8000datedatedatedatedatedatedatedatedatedatetimedatetimetimestampdatetimetimestamptimestamp without time zonedatetimetimestampdatetime2decimalnumericdoubledecimal(10,0)number(10,0)numeric(10,0)numeric(10,0)float64numeric(10,0)decimal(p,s)numeric if p<=38 and s<=8string if p>38 or s>9double if p<=38 string if p>38decimal(p,s) if p<=65 and s<=30text if p>65 or s>30number(p,s) if p<=38 and s<=127varchar2(1000) if p>38 or s>127numeric(p,s)numeric(p,s) if p<=38 snf s<=37varchar if p>38 or s>37float64 if p<=308 and s<=15string(max) if p>308 pr s>15decimal(p,s) if p<=38 and s<=38varchar(8000) if p>38 or s>38decimal(p)numeric if p<=38string if p>38double if p<=38 string if p>38decimal(p) if p<=65text if p>65number(p) if p<=38varchar2(1000) if p>38numeric(p)numeric(p) if p<=38varchar if p>38float64 if p<=308string(max) if p>308decimal(p) if p<=38varchar(8000) if p>38doublefloat64doubledoubledouble precisiondouble precisiondouble precisionfloat64double precisiondouble(p,s)float64doubledoubledouble precisiondouble precisiondouble precisionfloat64 if p<=308 and s<=15string(max) if p>308 or s>15floatenumnot supportednot supportednot supportednot supportednot supportedvarcharnot supportednot supportedfloatfloat64doubledecimalfloatdouble precisionfloatfloat64floatfloat(p,s)float64floatfloat(p) if p<=38double if p>38float(p)double precisionfloat(p) if p<=38float if p>38float64float(p) if p<=53varchar(8000) if p>53int / int unsignedint64bigintint / int unsignedintbigIntintegerint64integerjsonstringstringjsonclobjsonvariantstring(max)varchar if <= 8000varchar(max) > 8000longblobbytesbinarylongblobblobbyteabinarybytes(max)varbinary(p) if <= 8000varbinary(max) for > 8000longtextstringstringtinytext / longtextclobtextvarcharstring(max)character(s) if s<=8000varchar(max) if 
s>8000mediumblobbytesbinarylongblobblobbyteabinarybytes(max)varbinary(p) if <= 8000varbinary(max) for > 8000mediumInt / mediumInt unsignedint64bigintmediumInt / mediumInt unsignedintintegerintegerint64integermediumtextstringstringtinytext / longtextclobtextvarcharstring(max)character(s) if s<=8000varchar(max) if s>8000smallint / smallint unsignedint64bigintsmallint / smallint unsignedintsmallintintegerint64smallinttextstringstringtextclobtextvarcharstring(s)varchar if s<=8000varchar(max) if s>8000timetimetimestamptimevarchar2(200)timetimestring(max)timetimestamptimestamptimestamptimestamptimestamp with time zonetimestamp without time zonetimestamp with timezonetimestampdatetimeOffsettinyblobbytesbinarylongblobblobbyteabinarybytes(255)bytes(max)varbinary(p) if <= 8000varbinary(max) for > 8000tinyint / tinyint unsignedint64biginttinyint / tinyint unsignedblobsmallintintegerint64tinyInttinyint(1)booleanbiginttinyint(1)not supportedbooleanbinarystringnot supportedtinytextstringstringtinytextclobtextvarcharstring(s)character(s) if s<=8000varchar(max) if s>8000varbinary(s)bytesbinaryvarbinary(s)blobcharacter varying(s)varbinarybytes(s)varbinary(s) if s<=8000varbinary(max) if s>8000varchar(s)stringstringvarchar(s)varchar2(s) if s<=4000clob if s>4000character varying(s)varchar(s)string(s)varchar(s) if s<=8000varchar(max) if s>8000yearnot supportednot supportednot supportednot supportednot supportednumeric(38,0)not supportednot supportedMySQL spatial types are not supported.See Using Data Types from Other Database Engines for a list of supported data type aliases (such as boolean and numeric).In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-06-02\n", "metadata": {"source": "https://www.striim.com/docs/en/data-type-support---mapping-for-mariadb-and-mysql-sources.html", "title": "Data type support & mapping for MariaDB and MySQL sources", "language": "en"}} {"page_content": "\n\nData type support & mapping for Oracle sourcesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideData type support & mapping for schema conversion & evolutionData type support & mapping for Oracle sourcesPrevNextData type support & mapping for Oracle sourcesOracle source typeBigQueryDatabricksMySQLOraclePostgreSQLSnowflakeSpannerSQL Server / Azure Synapsebinary_doublefloat64doubledoubledouble precisiondouble Precisionfloatfloat64float(s)binary_floatfloat64floatfloat(s)floatrealfloatfloat64float(s)blobbytesbinarylongblobblobbyteabinarybytes(p)bytes(max) if p > 10485760varbinary(p) if <= 8000varbinary(max) if > 8000characterstringstringcharacter(1)character(1)character(1)character(1)string(1)character(1)character(s)stringstringcharacter(s)longtext if (s)>255character(s)character(s)character(s)string(s)character(s)clobstringstringtextclobtextvarcharstring(4000)string(max) if (s) > 2621440varchar(s)varchar(max) if (s) > 8000datedatedatedatetimedatetimestamp without time zonedatedatedatetime2floatfloat64floatdoublefloatdouble precisionfloatfloat64varchar(8000)float(s)float64floatfloat(s)floatdouble precisionfloatfloat64float(s)intnumericdecimal(38)decimal(38,0)number(38,0)numeric(38,0)numeric(38,0)float64numeric(38,0)interval day to secondstringstringvarchar(100)interval day(2) to second(6)character varyingvarcharstring(max)varchar(100)interval year to Monthstringstringvarchar(100)interval year(2) to monthcharacter varyingvarcharstring(max)varchar(100)long*bytesbinarylongbloblongblobbyteabinarybytes(max)varbinary(max)long_raw*bytesbinarylongbloblongblobbyteabinarybytes(max)varbinary(max)nchar(s)stringstringnvarchar(s)nchar(s)character(s)character(s)string(s)nchar(s)nclobstringstringlongtextnclobtextvarcharstring(s)string(max) if > 2621440nvarchar if <= 4000nvarchar(max) if > 4000numbernumericdecimal(38)decimal(65)numbernumericnumericfloat64numeric(38)number(p,s)numericstring if (s)>9decimal(p,s) (if p<=38 and s<=37)string (if p>38 or s>37)decimal(p,s)numeric(p,s)numeric (p,s)numeric(p,s)numericstring(max) if (s)>15numeric(p,s)number(p)numericdecimal(p,0) (if p <= 38)string (if p>38)decimal(p,0)number(p,0)numeric(p,0)numeric(p,0)float64numeric(p,0)nvarchar2(s)stringstringnvarchar(s)nvarchar2(s)character varying(s)varchar(s)string(s)nvarchar(s)Object typenot supportedNot Supportednot supportednot supportednot supportednot supportednot supportednot supportedrawbytesbinarylongblobblobbyteabinarybytes(p)bytes(max) if > 10485760varbinary(p) if <= 8000varbinary(max) if > 8000timestampdatetimetimestampdatetimetimestamptimestamp without timezonetimestamptimestampdatetime2timestamp with local time zonetimestamptimestampdatetimetimestamp with time zonetimestamp with time zonetimestamp with time zonetimestampdatetimeOffsettimestamp with time zonetimestamptimestampdatetimetimestamp with 
time zonetimestamp with time zonetimestamp with time zonetimestampdatetimeoffsetvarcharstringstringvarchar(s)Mapped with varchar(s)character varying(s)Mapped with varchar(s)string(s)varchar(s)varchar2(s)stringstringvarchar(s)varchar2(s)character Varying(s)varchar(s)string(s)varchar(s)xmlstringstringlongtextxmltypexmlvarcharstring(max)xml*OJet supports these types, but Oracle Reader does not. Oracle suggests using CLOB in place of LONG or LONG RAW.Oracle source types ADT, ARRAY, BFILE, LONG, LONG RAW, NESTED TABLE, REF, ROWID, SD0_GEOMETRY, UDT, and UROWID are not supported.VARCHAR with a JSON constraint is not supported in this release (DEV-26815).In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-01-20\n", "metadata": {"source": "https://www.striim.com/docs/en/data-type-support---mapping-for-oracle-sources.html", "title": "Data type support & mapping for Oracle sources", "language": "en"}} {"page_content": "\n\nData type support & mapping for PostgreSQL sourcesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideData type support & mapping for schema conversion & evolutionData type support & mapping for PostgreSQL sourcesPrevNextData type support & mapping for PostgreSQL sourcesPostgreSQL source typeBigQueryDatabricksMySQLOraclePostgreSQLSnowflakeSpannerSQL Server / Azure Synapsebigint / int8int64bigintbigintintbingintintegerint64bigintbigserial / serial8int64bigintbigintintbigserialintegerint64bigintbit varying(s)not supportednot supportednot supportednot supportedbit varying(s)binarynot supportedvarbinary(10)bit(s)not supportednot supportedbit(s) bit(64)not supportedbit varying(s)binarynot supportedvarchar(s)booleanbooleanbooleanboolnot supportedbooleanbooleanboolbitbyteabytesbinarylongblobblobbyteabinarybytes(max)varbinary(max)character varying(s)stringstringvarchar(s)if (s) <65535 longtextvarchar2(s)character varying(s)if (s) <65535 longtextvarchar(s)string(s)varchar(s)character(s)stringstringcharacter(s)char(s)bpchar(s)character(s)string(s)character(s)cidrnot supportednot supportednot supportednot supportedcidrnot supportednot supportednot supporteddatedatedatedatedatedatedatedatedatedouble precisionnumericdoubledoubledouble precisiondouble precisiondouble precisionfloat64float(s)floatfloat64floatfloat(s)floatdouble precisionfloatfloat64float(17)inetnot supportednot supportednot supportednot supportedinetnot supportednot supportednot supportedinteger / int4int64bigintintegerintintegerintegerint64integerintervalstringstringvarchar(100)varchar2(100)intervalvarcharstring(max)varchar(100)jsonstringstringjsonclobjsonvariantstring(max)varchar(max)jsonbstringstringjsonclobjsonbvariantstring(max)varchar(max)macaddrnot supportednot supportednot supportednot supportedmacaddrnot supportednot supportednot supportednumeric / 
decimalnumericdecimal(38)decimal(65)numbernumericnumericfloat64numeric(38)numeric(p,s) / decimal(p,s)stringdecimal(p,s) if p<=38, s<=37string if p>38 or s>37decimal(p,s)number(p,s)numeric(p,s)numeric(p,s)float64numeric(p,s)numeric(p) / decimal(p)numericdecimal(p,0) if p<=38string if p>38decimal(p,0)numeric(p,0)numeric(p,0)numeric(p,0)float64numeric(p,0)pg_lsnnot supportednot supportednot supportednot supportedpg_lsnnot supportednot supportednot supportedrealfloat64floatfloat(s)floatrealfloatfloat64floatserial / serial4int64bigintintegerintserialintegerint64integersmallInt / int2int64bigintsmallintintsmallintintegerint64smallintsmallserial / serial2int64bigintsmallintintsmallserialintegerint64smallinttextstringstringlongtextclobtextvarcharstring(max)varchar(max)timetimetimestamptimevarchar2(100)time(s)timestring(max)timetime without Time zonetimetimestamptimevarchar2(100)time(s)timestring(max)timetimestampdatetimetimestampdatetimetimestamptimestamp(s) without timezonetimestamptimestampdatetime2timestamp With Timezonetimestamptimestamptimestamptimestamp with timezonetimestamp(s) with timezonetimestamp with time zonetimestampdatetimeoffsettxid_snapshotnot supportednot supportednot supportednot supportedtxid_snapshotnot supportednot supportednot supporteduuidnot supportednot supportednot supportednot supporteduuidnot supportednot supportednot supportedxmlstringstringlongtextxmltypexmlvarcharstring(max)xmlPostgreSQL source types box, cid, circle, daterange, int4range, int8range, line, lseg, money, numrange, oid, path, point, polygon, tsquery, tsrange, tstzrange, tsvector, and xid are not supported.See Data Types for a list of supported data type aliases (such as decimal and varchar).In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-04-07\n", "metadata": {"source": "https://www.striim.com/docs/en/data-type-support---mapping-for-postgresql-sources.html", "title": "Data type support & mapping for PostgreSQL sources", "language": "en"}} {"page_content": "\n\nData type support & mapping for SQL Server sourcesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideData type support & mapping for schema conversion & evolutionData type support & mapping for SQL Server sourcesPrevNextData type support & mapping for SQL Server sourcesSQL Server source typeBigQueryDatabricksMySQLOraclePostgreSQLSnowflakeSpannerSQL Server / Azure Synapsebigintint64bigintbigintintbigintintegerint64bigintbinarybytesbinarybinary(1)blobbyteabinarybytes(1)binary(1)binary(s)bytesbinarybinary(s)blobbyteabinarybytes(s)binary(s)bitnot supportednot supportedbit(1)not supportedbit varyingbooleannot supportedbitcharstringstringcharacter(1)character(1)character(1)character(1)string(1)character(1)char(s)stringstringcharacter(s)longtext if s>255character(s)clob if s>2000character(s)character(s)string(s)character(s)datedatedatedatedatedatedatedatedatedatetimedatetimetimestampdatetimetimestamptimestamp without timezonetimestamptimestampdatetime2datetime2datetimetimestampdatetimetimestamptimestamp without timezonetimestamptimestampdatetime2datetimeoffsettimestamptimestamptimestamptimestamp with time zonetimestamp with time zonetimestamp with time zonetimestampdatetimeoffsetdecimalstringdecimal(38)decimal(18,0)number(18,0)numeric(18,0)numeric(18,0)float64decimal(18,0)decimal(p,s)numeric(p) if p<=38 and s<=8string if p>38 or s>9decimal(p,s) if p<=38, s<=37string if p>38 or s>37decimal(p,s) if p<=65 and s<=30text if p>65 or s>30number(p,s) if p<=38 ands<=127varchar2(1000) if p>38 or s>127numeric(p,s)numeric(p,s) if p<=38 and s<=37varchar if p>38 or s>37float64 if p<=308 and s<=15string(max) if p>308 pr s>15decimal(p,s) if p<=38 and s<=38varchar(8000) if p>38 or s>38decimal(p)numeric(p) if p<=38string if p>38decimal(p,0) if p<=38string if p>38decimal(p) if p<=65text if p>65number(p,s) if p<=38varchar2(1000) if p>38numeric(p)numeric(p) if p<=38varchar if p>38float64 if p<=308string(max) if p>308decimal(p) if p<=38varchar(8000) if p>38floatfloat64floatdoublefloatdouble precisionfloatfloat64float(s)float(p)float64floatfloat(p) if p<=38double if p>38float(p)double precisionfloat(p) if p<=38float if p>38float64float(p) if p<=53varchar(8000) if p>53imagebytesbinarylongblobblobbyteabinarybytes(max)varbinary(max)intint64bigintintegerintintegerintegerint64integermoneynot supportednot supportednot supportednot supportednot supportednot supportednot supportednot supportednchar(s)stringstringnchar(s) if s<255nvarchar(s) if s>255nachar(s) if s<=1000nclob if s>1000character(s)character(s)string(s)nchar(s) if <= 4000nvarchar(max) > 4000ntextstringstringlongtextnclobtextvarcharstring(s)nchar(s) if <= 4000nvarchar(max) > 4000numericstringdecimal(38)decimal(18,0)number(18,0)numeric(18,0)numeric(18,0)float64numeric(18,0)numeric(p,s)numeric(p) if p<=38 and s<=8string if p>38 or s>9decimal(p,s) if p<=38, 
s<=37string if p>38 or s>37decimal(p,s) if p<=65 and s<=30text if p>65 or s>30number(p,s) if p<=38 and s<=127number if p>38 or s>127numeric(p,s)numeric(p) if p<=38 and s<=37varchar if p>38 or s>37float64 if p<=308 and s<=15string(max) if p>308 pr s>15numeric(p,s) if p<=38 and s<=38varchar(8000) if p>38 or s>38numeric(p)numeric(p) if p<=38string if p>38decimal(p,0) if p<=38string if p>38decimal(p) if p<=65decimal(65) if p>65number(p) if p<=38number(*,0) if p>38numeric(p)numeric(p) if p<=38varchar if p>38int64 if p<=20float64 if 20<p<=308string(max) if p>308numeric(p) if p<=38numeric(38) if p>38nvarchar(s)stringstringnvarchar(s) if s<65535longtext if s>65535nvarchar(s) if s<=4000nclob if s>4000character varyingvarcharstring(s)nvarchar(s) if <= 4000nvarchar(max) if > 4000realfloat64floatfloatfloatdouble precisionfloatnot supportedfloat(s)smalldatetimedatetimetimestampdatetimetimestamptimestamp without timezonetimestamptimestampdatetime2smallintint64bigintsmallintintsmallintintegerint64smallintsmallmoneynot supportednot supportednot supportednot supportednot supportednot supportednot supportedsmall moneytextstringstringlongtextclobtextvarcharstring(max)varchar(s) if <= 8000varchar(max) > 8000timetimetimestamptimevarchar2(100)timetimestring(max)timetinyintint64biginttinyint unsignedintsmallintintegerint64tinyintvarbinarybytesbinaryvarbinary(1)blobcharacter varying(1)varbinarybytes(1)varbinary(1)varbinary(s)bytesbinaryvarbinary(s)blobcharacter varying(s)varbinarybytes(s)varbinary(s)varchar(s)stringstringvarchar(s)varchar2(s) if s<=4000clob if s>4000character varying(s)varchar(s)string(s)varchar(s) if s<=8000varchar(max) if s>8000xmlstringstringlongtextxmltypexmlvarcharstring(max)xmlSQL Server source data types geography, geometry, rowversion, sql_variant, udt, and uniqueidentifier are not supported.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-01-20\n", "metadata": {"source": "https://www.striim.com/docs/en/data-type-support---mapping-for-sql-server-sources.html", "title": "Data type support & mapping for SQL Server sources", "language": "en"}} {"page_content": "\n\nHandling schema evolutionSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideHandling schema evolutionPrevNextHandling schema evolutionFor the CDC sources listed below, Striim can capture certain changes to the DDL in the source tables. When CDDL Capture is enabled in a source, you must choose which of the following actions you want to happen when it encounters a DDL change:Halt in source: DDL changes are never expected in the source. Striim will halt the application so you can investigate the issue. 
Use this option when you want to prohibit changes to the source schema.
Ignore in source: target is schemaless and does not have the CDDL Action property (for example, FileWriter with JSON Formatter, or MongoDB).
Process in source, Halt in target: the application has one or more targets that support the Process action and one or more that have the CDDL Action property but do not support the Process action. In this case, set the CDDL Action property to Halt for the targets that do not support Process. Schema changes in the source will be replicated to the targets that support Process and the application will halt for you to deal with the others manually. If recovery is enabled for the application, after restart the DDL operation will be sent again. DDL changes to tables specified in Excluded Tables will not trigger Halt.
Process in source, Ignore in target: the application has multiple targets that have the CDDL Action property and you do not want to replicate changes to this one.
Process in source, Process in target: replicate changes to the target, keeping the target schema in sync with the source automatically without interrupting operation of the application. This is supported only for the targets listed below. When an unsupported DDL operation or unsupported data type is encountered in a DDL change, the application will halt for you to troubleshoot the problem.
Quiesce in source: target does not support schema evolution, so when a DDL change is detected, Striim will write all the events received prior to the DDL operation to the target, then quiesce the application. Then you can update the target schema manually. If recovery is enabled for the application, after restart the DDL operation will not be sent again.
Always select Process in the source when Using the Confluent or Hortonworks schema registry.
Supported CDC sources
MariaDB
MySQL
Oracle Database 18c and earlier
Oracle GoldenGate 12.1 or later for Oracle Database only
PostgreSQL (see PostgreSQL setup for schema evolution)
SQL Server (with MSJet only)
Targets with the CDDL Action property
Azure Synapse
Databricks Writer: supports only CREATE TABLE, ADD COLUMN, DROP TABLE, and TRUNCATE
Google BigQuery
Google Cloud Spanner
MariaDB (via Database Writer)
MySQL (via Database Writer)
Oracle Database (via Database Writer)
PostgreSQL (via Database Writer)
SAP Hana (via Database Writer - does not support Process)
Snowflake
SQL Server (via Database Writer)
Sybase (via Database Writer - does not support Process)
Supported DDL operations
CREATE TABLE (default column values are not supported)
ALTER TABLE ... ADD COLUMN (default column values are not supported)
  with MySQL, AFTER and ALGORITHM=INSTANT are not supported (known issues DEV-35539 and DEV-35681)
  with Oracle Database, adding NOT NULL constraints is not supported (known issues DEV-24666, DEV-25424)
ALTER TABLE ... MODIFY COLUMN: The modification must be compatible with existing data, for example, you could change short to long, or varchar(20) to varchar(30). Default column values are not supported.
  not supported with BigQuery or Databricks targets
  If a ColumnMap is specified (see Mapping columns), the mapped target column will be modified.
  with Oracle Database, adding NOT NULL constraints is not supported (known issues DEV-24666, DEV-25424)
  Snowflake Writer: see limitations described in ALTER TABLE ... ALTER COLUMN
  not supported with SQL Server sources (known issue DEV-26386)
ALTER TABLE ... ADD PRIMARY KEY
  not supported with BigQuery or Databricks targets
  supported with GoldenGate sources only when table has no primary key (known issue DEV-26575)
  not supported with MySQL or SQL Server sources
ALTER TABLE ... ADD CONSTRAINT ... PRIMARY KEY
  not supported with BigQuery or Databricks targets
  With GoldenGate sources, use the syntax ALTER TABLE <name> ADD CONSTRAINT <name> PRIMARY KEY (id). (The syntax ALTER TABLE MODIFY ID NUMBER NOT NULL PRIMARY KEY will not work.)
  not supported with SQL Server sources
ALTER TABLE ... ADD CONSTRAINT ... UNIQUE
  not supported with BigQuery or Databricks targets
  supported with GoldenGate sources only when table has a primary key column and the constraint is not added to that column (known issue DEV-26575)
  not supported with SQL Server sources (known issue DEV-26386)
ALTER TABLE ... ADD UNIQUE
  supported with MariaDB and MySQL only
ALTER TABLE ... DROP COLUMN:
  not supported with BigQuery or Databricks targets
  If a ColumnMap is specified for the column (see Mapping columns), the application will halt. ALTER and RECOMPILE the application to remove the ColumnMap for the column, drop the column from the target table, and restart the application.
  with Oracle Database, adding NOT NULL constraints is not supported (known issues DEV-24666, DEV-25424)
DROP TABLE
TRUNCATE TABLE
  sources that support TRUNCATE TABLE: GG Trail Reader, MariaDB Xpand Reader, MySQL Reader, OJet, Oracle Reader (with Oracle version 18c or earlier only), PostgreSQL Reader (using wal2json version 2 only)
  targets that support TRUNCATE TABLE: Azure Synapse Writer, BigQuery Writer, Database Writer writing to MariaDB, MariaDB Xpand, MySQL, Oracle, PostgreSQL, or SQL Server, Databricks Writer, Snowflake Writer
  TRUNCATE TABLE ... CASCADE is not supported
  TRUNCATE cannot be supported with Spanner or SQL Server sources because TRUNCATE operations are not included in their CDC
Data type support and mappings
See Data type support & mapping for schema conversion & evolution. Process in the source is supported only when all source table data types are supported. Process in the target is supported only when all source types are mapped to target types.
Monitoring schema evolution
The MON command includes the following metrics for schema evolution:
number of DDL operations, by table
last captured / applied DDL statement
time of last captured / applied DDL
ignored DDL count
Notes and limitations
Schema evolution is not supported when using Bidirectional replication.
Striim can capture only those DDL changes made after schema evolution is enabled.
The first time you start an application with CDDL Capture enabled, the CDC reader will take a snapshot of the source database's table metadata. It is essential that there are no DDL changes made to the database until startup completes. Otherwise, the schema captured in the snapshot will be out of date, which will eventually cause the application to terminate.
When the Tables property in the reader uses a wildcard, the first time the application is started Striim must fetch the metadata for all tables in the schema.
If there are many tables in the schema, this may take a significant amount of time.After an application with both recovery and schema evolution enabled is restarted, Striim will automatically use the correct schema for the restart positionIf the application halts due to an unsupported DDL change, an unsupported column data type, or a Parser Exception, you may add the table causing the halt to the Excluded Tables list and restart the application.LimitationsRenaming tables is not supported.TRUNCATE TABLE is not supported. Use DELETE FROM <table name>; or some other method for deleting all rows from a table.See also Schema evolution known issues and limitations.Sample WAEvents for DDL operations when schema evolution is enabledDDL commandexampleresulting WAEventCREATE TABLECREATE TABLE PRODUCT.CUSTOMER\n(\n c_custkey BIGINT not null,\n c_name VARCHAR(25) not null,\n c_address VARCHAR(40) not null,\n c_nationkey INTEGER not null,\n c_phone CHAR(15) not null,\n c_acctbal DOUBLE PRECISION,,\n c_mktsegment CHAR(10) not null\n);WAEvent{\ndata: [\"CREATE TABLE PRODUCT.CUSTOMER \u2026\u201d]\nmetadata:{\n\"OperationName\": \"Create\",\n\"TableName\": \"PRODUCT.CUSTOMER\",\n\"SchemaName\": \"PRODUCT\",\n\"OperationType\": \"DDL\",\n\"CDDLMetadata\": \u201c<Info about DDL>\u201d\n}\n};ALTER TABLE ADD COLUMNALTER TABLE PRODUCT.CUSTOMER\n ADD c_comment VARCHAR(117) not null;WAEvent{\ndata: [\"ALTER TABLE PRODUCT.CUSTOMER\n ADD c_comment VARCHAR(117) not null;\"]\nmetadata:{\n\"OperationName\": \"AlterColumns\",\n\"OperationSubName\": \"AddColumn\",\n\"TableName\": \"PRODUCT.CUSTOMER\",\n\"SchemaName\": \"PRODUCT\",\n\"OperationType\": \"DDL\",\n\"CDDLMetadata\": \u201c<Info about DDL>\u201d\n}\n};ALTER TABLE MODIFY COLUMNALTER TABLE PRODUCT.CUSTOMER\nALTER COLUMN c_address TYPE VARCHAR(200);WAEvent{\ndata: [\"ALTER TABLE PRODUCT.CUSTOMER\nALTER COLUMN c_address TYPE VARCHAR(200);\"]\nmetadata:{\n\"OperationName\": \"AlterColumns\",\n\"OperationSubName\": \"AlterColumn\",\n\"TableName\": \" PRODUCT.CUSTOMER\",\n\"SchemaName\": \"PRODUCT\",\n\"OperationType\": \"DDL\",\n\"CDDLMetadata\": \u201c<Info about DDL>\u201d\n}\n};\nALTER TABLE DROP COLUMNALTER TABLE PRODUCT.CUSTOMER \nDROP COLUMN c_acctbal;WAEvent{\ndata: [\"ALTER TABLE PRODUCT.CUSTOMER \nDROP COLUMN c_acctbal;\"]\nmetadata:{\n\"OperationName\": \"AlterColumns\",\n\"OperationSubName\": \"DropColumn\",\n\"TableName\": \"PRODUCT.CUSTOMER\",\n\"SchemaName\": \"PRODUCT\",\n\"OperationType\": \"DDL\",\n\"CDDLMetadata\": \u201c<Info about DDL>\u201d\n}\n};DROP TABLEDrop Table PRODUCT.CUSTOMER;WAEvent{\ndata: [\"DROP TABLE PRODUCT.CUSTOMER\"]\nmetadata:{\n\"OperationName\": \"Drop\",\n\"TableName\": \"HR.EMP\", \n\"SchemaName\": \"HR\",\n\"OperationType\": \"DDL\",\n\"CDDLMetadata\": \u201c<Info about DDL>\u201d\n}\n};\nSchema evolution known issues and limitationsThe following are known issues in this release related to schema evolution. Additional known issues are flagged by \"DEV-#####\"in Handling schema evolution.All sourcesColumns with a data type that has a scale set to a negative value (for example, number(1, -17) ) are not supported.Azure Synapse WriterIf using Optimized Merge mode, CREATE TABLE will cause the application to halt (DEV-29689).Adding a NOT NULL constraint on a column that already has a UNIQUE constraint is not supported. 
Schema evolution known issues and limitations
The following are known issues in this release related to schema evolution. Additional known issues are flagged by "DEV-#####" in Handling schema evolution.
All sources
Columns with a data type that has a scale set to a negative value (for example, number(1, -17)) are not supported.
Azure Synapse Writer
If using Optimized Merge mode, CREATE TABLE will cause the application to halt. (DEV-29689)
Adding a NOT NULL constraint on a column that already has a UNIQUE constraint is not supported. (DEV-26158)
BigQuery Writer
If you are using the legacy streaming API to write to template tables, using the default setting of Process may cause the application to halt due to a limitation in BigQuery that does not allow writing for up to 90 minutes after a DDL change (see BigQuery > Documentation > Guides > Use the legacy streaming API > Creating tables automatically using template tables > Changing the template table schema). In this case, supporting schema evolution is impossible, so set CDDL Action to Ignore. This is not an issue if you are using partitioned tables.
MSJet
If a table is dropped and a table of the same name is created, the application may terminate. (DEV-26417)
The application will terminate if the database contains tables with names that vary only by case, for example, id and ID, even if those tables are not among those read by MSJet. (DEV-26872)
MySQL Reader
Adding a UNIQUE constraint with a system-generated constraint name (for example, ADD CONSTRAINT cs_01ec19f5f75caa91a1160eca1 UNIQUE (created_att)) is not supported. (DEV-26678)
OJet
Columns of type ROWID are not supported.
Invisible, virtual, and unused columns are not supported.
Oracle Reader
Columns of type INTERVAL DAY(x) TO SECOND(y) are not supported. (DEV-24624)
Invisible, virtual, and unused columns are not supported.
PostgreSQL Reader
To capture DDL changes when the command has more than 1024 characters (for example, a CREATE TABLE statement with many columns), you must raise PostgreSQL's track_activity_query_size parameter from its default value of 1024. (DEV-24650)
Creating a table with a column of type serial or adding a column of type serial is not supported.
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-02\n", "metadata": {"source": "https://www.striim.com/docs/en/handling-schema-evolution.html", "title": "Handling schema evolution", "language": "en"}} {"page_content": "\n\nCreating and modifying apps using the Flow Designer
The Flow Designer is a graphical programming tool that can do almost everything you can do with TQL.
Warning
Making changes to a component that affect the number or type of fields in its output stream (for example, removing a field from a CQ's SELECT statement) can cause downstream objects to become invalid.
Before attempting such changes in Flow Designer, make a TQL backup of the application by selecting Configuration > Export.
Flow Designer may become unresponsive when a type has a very large number of fields, such as 1,000.
If you encounter either of these problems, make your changes in TQL instead (see ALTER and RECOMPILE).
See Modifying an application using Flow Designer for a walkthrough.
The Base Components palette may be more efficient for experienced users. Any of the components in the Sources, Enrichers (caches), Processors (CQ and window), or Targets palettes may be created by setting the properties of a base component. See DDL and component reference for information on component properties. You may also search for a component by the label that is displayed in the palette (which is not always the same as the name in TQL; for example, Database Reader and Database Writer are both labeled Database).
Some readers have a Click here to configure using wizard link that will launch the "to Striim" App Wizard for that source (see Configuring an app template source). Some of these readers also have a Test connection button to validate the properties.
The components on the Transformers palettes provide a graphical alternative to writing CQs. See Using event transformers and Using database event transformers.
When you view a running application in Flow Designer, input and output total event counts, current input and output rates, and a graph of input and output rates appear at the top of the page.
Use Configuration > App Settings to configure recovery, data validation, encryption, exception handling, and auto-resume (see Recovering applications, Creating a data validation dashboard, CREATE APPLICATION ... END APPLICATION, and Automatically restarting an application).
Notes:
An application must be in the Created state to be editable. Applications in the Deployed or Running state are read-only (see the console sketch below for one way to return an application to the Created state).
Click Metadata Browser at the top right or Copy from in source and target editors to view and copy settings from other flows or applications.
Click and drag to select multiple components to cut and paste in another flow, or to copy and paste in another application.
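Returning to the note above about the Created state: the following console sketch shows one way a deployed application might be returned to an editable state and then redeployed after the changes. The application name PG2File is borrowed from the tutorial that follows, and the exact command forms should be verified against the console command reference for your release.

W (admin) > STOP APPLICATION PG2File;
W (admin) > UNDEPLOY APPLICATION PG2File;
-- The application is now in the Created state and can be edited
-- in Flow Designer or via TQL (ALTER and RECOMPILE).
W (admin) > DEPLOY APPLICATION PG2File;
W (admin) > START APPLICATION PG2File;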
Last modified: 2022-12-14\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-and-modifying-apps-using-the-flow-designer.html", "title": "Creating and modifying apps using the Flow Designer", "language": "en"}} {"page_content": "\n\nCreating an app using the Flow DesignerSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideCreating and modifying apps using the Flow DesignerCreating an app using the Flow DesignerPrevNextCreating an app using the Flow DesignerThis tutorial uses the sample applications described in Running the CDC demo apps. The Docker PostgreSQL instance must be running. Kafka is not required.Running the CDC demo appsOn the Apps page, if ValidatePostgres is not running, deploy and start it.Click Create App > Start from scratch, name the app PG2File, select your personal namespace or enter a name for a new namespace, and click Save.Click the Metadata Browser icon, search for ReadCDCFromPostgresDB (located in Source), and click Copy to App.For Name , enter ReadPostgreSQLCDC; for Password, enter striim; for New Output, enter rawCDCstream; and click Save. Except for the name and output stream, this new source will have the same properties as SamplesDB.ReadCDCfromPostgresDB.Click the stream icon, click +, and select Connect next Target component.In the New Target dialog, set the properties as follows (leave other properties at their defaults), then click Save.Name: WriteRawDataAdapter: FileWriterFile Name: RawDataDirectory: MyDirectory (click Show Advanced Settings)Rollover Policy: leave as isFlush Policy: EventCount 1 (delete Interval)Formatter: JSONFormatterDeploy and start the application, then return to the View All Apps page and start SamplesDB.Execute250Inserts.Open Striim\\MyDirectory\\RawData.00. This contains the raw WAEvent output of PostgreSQLReader formatted as JSON. It should look similar to the following:[\n {\n \"metadata\":{\"TableName\":\"public.customer\",\"TxnID\":\"2175\",\"OperationName\":\"INSERT\",\"LSN\":\"0\\/2F9803D8\",\"NEXT_LSN\":\"0\\/2F9942C8\",\"Sequence\":1,\"Timestamp\":\"2022-02-25 14:51:31.919978-06\"},\n \"data\":{\n\"c_custkey\":150001,\n\"c_name\":\"Customer#150001\",\n\"c_address\":\"IVhzIApeRb ot,c,E\",\n\"c_nationkey\":15,\n\"c_phone\":\"25-989-741-2988\",\n\"c_acctbal\":\"711.56\",\n\"c_mktsegment\":\"BUILDING \",\n\"c_comment\":\"to the even, regular platelets. regular, ironic epitaphs nag e\"\n},\n \"before\":null,\n \"userdata\":null\n },\n...This includes the column names and values for the row inserted into the public.customer table by transaction ID (TxnID).Modifying an app using the Flow DesignerTo parse this raw data, you must write a CQ as described in Parsing the data field of WAEvent. 
Reopen the PG2File app in Flow Designer, stop and undeploy it, click the stream icon, click +, and select Connect next CQ component.Name the CQ ParseData, name the new output stream ParsedDataStream, and copy and paste the following into the Query field:SELECT \n META(rawCDCstream,\"OperationName\").toString() AS OpType,\n TO_INT(data[0]) AS CustomerKey,\n TO_STRING(data[1]) AS CustomerName,\n TO_STRING(DATA[2]) AS CustomerAddress,\n TO_INT(DATA[3]) AS NationKey,\n TO_STRING(DATA[4]) AS CustomerPhone,\n TO_DOUBLE(data[5]) AS CustomerAccountBalance,\n TO_STRING(DATA[6]) AS MarketSegment,\n TO_STRING(data[7]) AS CustomerComment\nFROM rawCDCstream;Make sure the properties look as shown below, then click Save.Click the ParseData stream icon, click +, and select Connect next Target component.Set the properties as shown below and click Save.Name: WriteParsedDataAdapter: FileWriterFile Name: ParsedDataDirectory: MyDirectory (click Show Advanced Settings)Rollover Policy: leave as isFlush Policy: EventCount 1 (delete Interval)Formatter: JSONFormatterDeploy and run the application, then return to the Apps page and start SamplesDB.Execute200Inserts.Open Striim\\MyDirectory\\ParsedData.00. This contains the parsed data in the custom format defined by the CQ as well as the operation type from the metadata.\"OpType\":\"INSERT\",\n\"CustomerKey\":151501,\n\"CustomerName\":\"Customer#151501\",\n\"CustomerAddress\":\"IVhzIApeRb ot,c,E\",\n\"NationKey\":15,\n\"CustomerPhone\":\"25-989-741-2988\",\n\"CustomerAccountBalance\":711.56,\n\"MarketSegment\":\"BUILDING \",\n\"CustomerComment\":\"to the even, regular platelets. regular, ironic epitaphs nag e\"\nIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-02-28\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-an-app-using-the-flow-designer.html", "title": "Creating an app using the Flow Designer", "language": "en"}} {"page_content": "\n\nUsing event transformersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideCreating and modifying apps using the Flow DesignerUsing event transformersPrevNextUsing event transformersEvent transformers are a graphical alternative to CQs (see CREATE CQ (query)) for modifying data flows. For the transformers discussed in this section, the input stream must have a user-defined type.If you convert a transformer to a CQ, you cannot convert it back to a transformer.Field AdderAdds one or more fields and creates a corresponding type for the output stream. For each field, enter a name, select a data type, and enter an expression.Expressions may use the same Operators and Functions as CQ SELECT statements. Enclose literal string values in quotes.Field EnricherJoins fields from a stream with fields from a cache, event table, WActionStore, or window (see Joins). 
Select a stream from Input, another component from Enrichment Component, and the fields to join on from Input Field and Enrichment Component Field. All the fields from both components will appear in Output Fields (Input fields will start with i_ and Enrichment Component fields with e_). Delete any fields you don't want by clicking x and reorder them as you like by dragging ↕ up or down. Optionally, click Convert to CQ to specify a particular type of join (see CREATE CQ (query)).
Field Masker
See Masking functions.
Field Remover
Removes one or more fields and creates a corresponding type for the output stream. Click the X to remove a field. Click RESET to start over.
Field Renamer
Renames one or more fields and creates a corresponding type for the output stream.
Field Splitter
Splits a string field into multiple string fields using the specified delimiter. The delimiter may include any combination of alphanumeric characters, spaces, and punctuation.
Field Type Modifier
Changes the data type of one or more fields and creates a corresponding type for the output stream. The following conversions should work without problems; others may cause the application to terminate.
Input type: supported output types
Byte: Double, Integer, Float, Long, Short, or String
DateTime: String
Double: String
Float: Double or String
Integer: Double, Float, Long, or String
Long: Double, Float, or String
Short: Double, Integer, Float, Long, or String
String: Byte, Double, Float, Integer, Long, or Short, provided the values are compatible
Field Value Filter
Filters the output stream based on one or more specified conditions. Each additional condition narrows the criteria, so in the example above the output will contain only events with amounts from $5,000 to $10,000.
To DB Event
Converts events of a user-defined type to WAEvent format. The output stream's type is WAEvent and can be used as the input for a target using DatabaseWriter or another writer that accepts input from DatabaseReader, IncrementalBatchReader, or a SQL CDC source. In the output, the table name will be the value of the metadata TableName field. In this release, the value of the metadata OperationName field will always be INSERT.
For example, using the settings shown above, the following input event:
PosDataStream_Type_1_0{
 merchantId: "D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu"
 dateTime: 1363134730000
 amount: 2.2
 zip: 41363
};
would be output as:
WAEvent{
 data: ["D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",1363134730000,2.2,41363]
 metadata: {"TableName":"mydb.mytable","OperationName":"INSERT"}
 userdata: null
 before: null
 dataPresenceBitMap: "Dw=="
 beforePresenceBitMap: "AA=="
 typeUUID: {"uuidstring":"01e91cfe-b88a-8ef1-8e39-8cae4cf129d6"}
};
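For readers who prefer TQL, the following rough sketch shows the kind of CQ these field transformers correspond to, combining a rename (Field Renamer) with a type change (Field Type Modifier) on the PosDataStream fields used elsewhere in this guide. The output stream name and the specific renames are invented for the example; this is not literal output of Convert to CQ.

-- Rename amount to saleAmount and convert zip from String to Integer;
-- the output stream's type is created automatically from the SELECT list.
CREATE CQ RenameAndRetype
INSERT INTO TypedPosStream
SELECT p.merchantId AS merchantId,
  p.dateTime AS saleTime,
  p.amount AS saleAmount,
  TO_INT(p.zip) AS zipCode
FROM PosDataStream p;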
Last modified: 2022-12-14\n", "metadata": {"source": "https://www.striim.com/docs/en/using-event-transformers.html", "title": "Using event transformers", "language": "en"}} {"page_content": "\n\nUsing database event transformersSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideCreating and modifying apps using the Flow DesignerUsing database event transformersPrevNextUsing database event transformersDatabase event transformers are a graphical alternative to CQs for modifying the output of SQL CDC sources (see CREATE CQ (query) and Working with SQL CDC readers).You can use DB event transformers with sources.If you convert a transformer to a CQ, you cannot convert it back to a transformer.Expressions may use the same Operators and Functions as CQ SELECT statements. Enclose literal string values in quotes.Custom Data AdderAdds one or more fields to the USERDATA map (see Adding user-defined data to WAEvent streams). For each field, enter a name, select a data type, and enter an expression.Data ModifierModifies one field in the DATA array. Select the table, select the column, and enter an expression.This transformer can be used only with MSSQLReader, MySQLReader, or OracleReader, and Striim must be able to connect to the database to download table descriptions.Metadata FilterFilters the input events based on one or more metadata field values.Operation FilterPasses only events with the selected operation type to the output stream. Select DELETE, INSERT, or UPDATE.Table FilterPasses only the selected table to the output stream.This transformer can be used only with MSSQLReader, MySQLReader, or OracleReader, and Striim must be able to connect to the database to download table descriptions.To EventCreates a Striim Type for the output stream based on the column names and data types of the selected table and converts WAEvent input to typed output (see Parsing the data field of WAEvent). Only events from the selected table are included in the output stream.This transformer can be used only with MSSQLReader, MySQLReader, or OracleReader, and Striim must be able to connect to the database to download table descriptions.To StagingCopies the values of the OperationName field in the METADATA map to the OrigOperationName field in the USERDATA map (see Adding user-defined data to WAEvent streams). Typically you would use this in an application that writes to a data warehouse or other target where UPDATE and DELETE operations are handled as INSERTs (see How update and delete operations are handled in writers).In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
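Each of the database event transformers above has a CQ counterpart. As a minimal illustration, the sketch below is roughly what an Operation Filter set to INSERT corresponds to in TQL; the stream names are placeholders, and it assumes a WAEvent input stream from a SQL CDC source that populates the usual OperationName metadata key.

-- Pass only INSERT operations downstream, dropping UPDATE and DELETE events.
CREATE CQ InsertOnlyFilter
INSERT INTO InsertOnlyStream
SELECT *
FROM SourceCDCStream x
WHERE META(x,'OperationName').toString() = 'INSERT';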
Last modified: 2022-02-24\n", "metadata": {"source": "https://www.striim.com/docs/en/using-database-event-transformers.html", "title": "Using database event transformers", "language": "en"}} {"page_content": "\n\nCreating apps using templatesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideCreating apps using templatesPrevNextCreating apps using templatesTemplates can make creating Striim applications faster and easier. Apps created with templates may be modified using Flow Designer or by exporting TQL, editing it, and importing the modified TQL.Striim has templates for many source-target combinations. Which are available in your installation will vary depending on which licenses you have (contact\u00a0Striim support for more information).When you use a template, the app's exception store is enabled automatically. If you do not want an exception store, turn it off after you create the app (see CREATE EXCEPTIONSTORE).CREATE EXCEPTIONSTOREUse the Search Templates tool on the App Wizard page to see which templates are available for a particular source or target.Some initial load templates have Auto Schema Conversion, which allows you to automatically create schemas and tables in the target that correspond to those in the source. You must create the target database manually. In this release, tables can be created in the target only when Data type support & mapping for schema conversion & evolution includes mappings between all source and target column data types; other tables will be omitted.Sources includeTargets includeAmazon S3Azure Blob StorageCosmosDB ReaderGoogle Cloud StorageHDFSIncremental Batch ReaderMariaDB Initial Load (using Database Reader)MSJet (for initial load for MSJet, use SQL Server Initial Load)MariaDB Initial Load (using Database Reader)MongoDB CDC (using MongoDB Reader in Incremental mode)MongoDB Initial Load (using MongoDB Reader in Initial Load mode)MySQL CDCMySQL Initial Load (using Database Reader)Oracle CDCOracle Initial Load (using Database Reader)OJetPostgreSQL CDCPostgreSQL Initial Load (using Database Reader)SalesforceSalesforce PardotSQL Server CDCSQL Server Initial Load (using Database Reader)Amazon Kinesis Data StreamsAmazon RedshiftAmazon S3Apache CassandraApache Hive - ClouderaApache KuduAzure Blob StorageAzure Cosmos DB (after you configure the source and select schemas and tables, the wizard will prompt you to select the SQL, Cassandra, or MongoDB API)Azure Data Lake Store Gen1Azure Data Lake Store Gen2Azure Event HubsAzure PostgreSQLAzure SQL DatabaseAzure Synapse AnalyticsBigQueryCloud SQL for MySQLCloud SQL for PostgreSQLCloud SQL for SQL ServerDatabase (see Database Writer for supported databases)Databricks in AWS and AzureNote: Auto Schema Conversion is not supported when using Databricks' Unity Catalog.File (using File Writer)Google Cloud StorageHazelcast WriterHBaseHortonworks HiveJMSJPAWriterKafka 0.8.0, 0.9.0, 0.10.0, 0.11.0, 2.1.0MapR StreamsMariaDBMongoDBMQTTMySQL (using DatabaseWriter)Oracle Database (using DatabaseWriter)PostgreSQL (using DatabaseWriter)SAP HANA (using 
DatabaseWriter)ServiceNowSinglestore (MemSQL)SnowflakeSpannerSQL Server (using DatabaseWriter)Striim (app contains only the source)App template prerequisite checklistYou\u00a0will need the assistance of a database administrator for some of these tasks.Configure the source database as detailed in the relevant topic in Sources.\u00a0You will need to provide the user name and password created during database configuration when configuring the source. If using an initial load template with Auto Schema Conversion, also assign the privileges listed for the source in Database privileges required for Auto Schema Conversion.Configure the target as detailed in the relevant topic in Targets. If using an initial load template with Auto Schema Conversion, also assign the privileges listed for the target in Database privileges required for Auto Schema Conversion.If the source cannot be read directly by Striim\u00a0(for example, if Striim is running in AWS, Azure, or Google Cloud, and the source is on your premises):Install the Forwarding Agent and appropriate JDBC driver on the source host as detailed in\u00a0Striim Forwarding Agent installation and configuration.Before launching a template,\u00a0make sure the Forwarding Agent is running.If not using an initial load template with Auto Schema Conversion, create tables in the target that match the tables in the source using any tool you wish.Database privileges required for Auto Schema ConversionTo use Auto Schema Conversion, a database administrator must assign privileges to the users or service accounts to be specified for the source and target connections using the following commands or procedures:Azure Synapse - target onlyUSE <DATABASENAME>; EXEC sp_addrolemember 'db_owner', '<USERNAME>';BigQuery - target onlyFollow the instructions in BigQuery setup to assign roles or permissions to the service account to be specified in the target properties.Cloud Spanner - target onlyGrant the Cloud Spanner Database User or higher role (see Cloud Spanner Roles) for the database to the service account to be specified in the target properties.Databricks - target onlyGRANT CREATE ON SCHEMA <SCHEMANAME> TO <USERNAME>;MariaDB / MySQL - sourceCreate a user as described in MariaDB setup or MySQL setup.MySQL / MariaDB setupAlternatively, assign the SELECT privilege:GRANT SELECT ON *.* TO <USERNAME>@<HOSTNAME>;MariaDB / MySQL - targetGRANT CREATE ON *.* TO <USERNAME>@<HOSTNAME>;Oracle - sourceCreate a user as described in Create an Oracle user with LogMiner privileges. 
or Running the OJet setup script on Oracle.Alternatively, assign the following privileges:grant CREATE SESSION to <username>;\ngrant SELECT ANY TABLE to <username>;\ngrant SELECT ANY DICTIONARY to <username>;Oracle - targetGRANT CREATE SESSION TO <USERNAME>;\nGRANT CREATE USER TO <USERNAME>;\nGRANT CREATE ANY TABLE TO <USERNAME>;\nGRANT CREATE ANY INDEX TO <USERNAME>;\nGRANT UNLIMITED TABLESPACE TO <USERNAME>;PostgreSQL - sourceCreate a user as described in PostgreSQL setup.Alternatively, assign the following privileges:GRANT CONNECT ON DATABASE <DATABASENAME> TO <USERNAME>;\nGRANT SELECT ON <TABLENAMES> to <USERNAME>;PostgreSQL - targetGRANT CONNECT ON DATABASE <DATABASENAME> TO <USERNAME>;\nGRANT CREATE ON <TABLENAMES> to <USERNAME>;Snowflake - target onlyGRANT USAGE ON <DATABASE | WAREHOUSE | SCHEMA | TABLE> TO ROLE <USERNAME>;SQL Server - sourceCreate a user as described in SQL Server setup.Alternatively, give the user the db_owner role:USE <DATABASENAME>;\nEXEC sp_addrolemember 'db_owner', '<USERNAME>';SQL Server - targetUSE <DATABASENAME>; EXEC sp_addrolemember 'db_owner', '<USERNAME>';Creating an application using a templateComplete the App template prerequisite checklist.In the Striim web UI, select Apps > Create New.Find and click the template you want to use. If you don't see it, enter part of the source name in Search for templates, select the one you want, then do the same for the target.Enter a name for the app, select or enter the namespace in which you want to create it, and click Save.Continue with\u00a0Configuring an app template source.Configuring an app template sourceComplete the steps in\u00a0Creating an application using a template.Enter the host name, port, the user name and password for Striim, and any other properties required by the wizard. See the relevant reader properties reference in Sources or Change Data Capture (CDC) for more information.For an Incremental Batch Reader source, also specify the check column and start position as detailed in Incremental Batch Reader.If using a Forwarding Agent, set Deploy source on Agent on. (This option appears only when an agent is connected to Striim. (For file sources, this option does not appear, but the source components are put in their own flow which can be deployed on the agent.)Click Next. The wizard will connect to the database to validate the settings. If all the checks pass, click Next.If using a template with Auto Schema Conversion, continue with Using Auto Schema Conversion. Otherwise, select the tables to be read and click Next.If your template's target is not\u00a0to Striim, continue with\u00a0Configuring an app template target.If your template's target is to Striim, your new application will open in the Flow Designer.Using Auto Schema ConversionNoteKnown issue (DEV-31163): when the target is Snowflake, you must create the schemas manually.Select the schemas you want to copy to the target, then click Next.If validation is successful, click Next. Otherwise, click Back and resolve any issues.For each schema, select the tables to copy. Any tables with data types that can not be mapped to the target (see Data type support & mapping for schema conversion & evolution) will be greyed out and can not be copied. When done, click Next.Continue with\u00a0Configuring an app template target.Configuring an app template targetComplete the steps in Configuring an app template source.Configuring an app template sourceEnter the appropriate properties. 
For more information, see the\u00a0Targets entry for the selected target type.By default, the case of source table schema and table names will be preserved in the target. To avoid that, edit the Tables property and remove all double quotes.With a CDC source and a to Azure SQL Database, to Azure SQL Data Warehouse, or to Database\u00a0target, for the Tables property, specify the source and target table names, separated by commas, separating each source-target pair with a semicolon: for example, dbo.Jobs,mydb.Jobs;dbo.Region,mydb.Region. Alternatively, use wildcards (see the discussion of the Tables property in\u00a0Database Writer).If using a template with Auto Schema Conversion and you want to start initial load, click Migrate my schemas and move data. Otherwise, click Take me to Flow Designer.If using a template without Auto Schema Conversion, click Next. Your new application will open in the Flow Designer.Before enabling recovery for a template-created app with a\u00a0to Azure SQL DB or\u00a0to Database target, follow the instructions for the target DBMS in\u00a0Creating the checkpoint table.In this section: Creating apps using templatesApp template prerequisite checklistDatabase privileges required for Auto Schema ConversionCreating an application using a templateConfiguring an app template sourceUsing Auto Schema ConversionConfiguring an app template targetSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-24\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-apps-using-templates.html", "title": "Creating apps using templates", "language": "en"}} {"page_content": "\n\nCreating apps by importing TQLSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideCreating apps by importing TQLPrevNextCreating apps by importing TQLYou can import TQL using the console (see @), the web UI (see Apps page), or the REST API (see POST / tungsten).Apps pageIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
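As a small illustration of the console route mentioned above, a TQL file can be loaded with the @ command and the resulting application then deployed and started. The file path and namespace below are placeholders, and the USE command and deployment steps are assumptions to verify against the console command reference.

W (admin) > USE MyNamespace;
W (admin) > @Samples/PosApp/PosApp.tql;
-- After the file compiles, deploy and start the imported application.
W (admin) > DEPLOY APPLICATION PosApp;
W (admin) > START APPLICATION PosApp;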
Last modified: 2021-09-23\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-apps-by-importing-tql.html", "title": "Creating apps by importing TQL", "language": "en"}} {"page_content": "\n\nCreating or modifying apps using Source PreviewSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideCreating or modifying apps using Source PreviewPrevNextCreating or modifying apps using Source PreviewSource Preview can create\u00a0File Reader (click Browse) or\u00a0HDFS Reader (click Connect) sources and caches in a new or existing application. FileReader source files can be selected on the Striim server or uploaded from a directory readable from your local system.Source Preview requires you to select a single file, but after creating a source, you can modify its wildcard property to select multiple files.See\u00a0Creating sources and caches using Source Preview for a hands-on walkthrough.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-01-17\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-or-modifying-apps-using-source-preview.html", "title": "Creating or modifying apps using Source Preview", "language": "en"}} {"page_content": "\n\nCreating a data validation dashboardSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideCreating a data validation dashboardPrevNextCreating a data validation dashboardA data validation dashboard gives you a visual representation of data read and written by a Striim application. 
Supported sources and targets are:SourcesTargetsDatabase ReaderFile ReaderHP NonStop SQL/MX ReaderIncremental Batch ReaderJMS ReaderKafka Reader versions 0.8, 0.9, 0.10, and 0.11MariaDB ReaderMongoDB ReaderMS SQL Reader (SQL Server)MySQL ReaderOracle ReaderPostgreSQL ReaderSalesforce Platform Event ReaderSalesforce ReaderSalesforce Pardot ReaderSpanner Batch ReaderADLS Gen1 WriterADLS Gen2 WriterAzure Blob WriterAzure Event Hub WriterAzure SQL Data Warehouse WriterBigQuery WriterCassandra Cosmos DB WriterCassandra WriterCloudera Hive WriterCosmos DB WriterDatabase WriterDatabricks WriterFile WriterGCS WriterGoogle PubSub WriterHazelcast WriterHBase WriterHDFS WriterHortonworks Hive WriterKafka Writer versions 0.8, 0.9, 0.10, and 0.11Kinesis WriterKudu WriterMapR DB WriterMongoDB WriterS3 WriterServiceNow WriterSnowflake WriterSpanner WriterThe application may have multiple sources and/or targets so long as they are all of the same type. For example, you could have multiple sources all using DatabaseReader and multiple targets all using KafkaWriter.To create and view a data validation dashboard:Undeploy the application.Open the application in the Flow Designer, select Configuration > App Settings, enable data validation, and click Save.Deploy and start the application, then select Configuration > View Validation Dashboard.Hover the mouse pointer over a bar to see additional details for that source or target for the indicated time period. Click Pause to stop updating the dashboard temporarily so you can compare event counts and verify that all source data was written by the target.\u00a0Validation dashboards do not appear on the Dashboard page.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-05-31\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-a-data-validation-dashboard.html", "title": "Creating a data validation dashboard", "language": "en"}} {"page_content": "\n\nSwitching from initial load to continuous replicationSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideSwitching from initial load to continuous replicationPrevNextSwitching from initial load to continuous replicationIf there may be open transactions in the source database when the initial load completes, take the following steps to ensure that they are replicated to the target.Before running the initial load application, run the following command on the source database using an appropriate database client and record the value returned:MariaDB:select @@gtid_current_pos;MySQL:select current_timestamp;Oracle:select min(start_scn) from gv$transaction;PostgreSQL:For PostgreSQL version 10 or higher:SELECT pg_current_wal_lsn();For earlier versions:SELECT pg_current_xlog_location();SQL Server:SELECT sys.fn_cdc_get_max_lsn()AS max_lsn; When creating the continuous replication application:Enable recovery.Use the following setting for the CDC reader:MariaDB Reader: set Start Position to the GTID you recorded before performing the initial load.MySQL Reader, set Start Time to the timestamp you recorded before performing the initial load.Oracle Reader, set Start SCN to the SCN you recorded before performing the initial load.PostgreSQL Reader, set Start LSN to the LSN you recorded before performing the initial load.SQL Server (MSSQL Reader): set Start Position to the LSN you recorded before performing the initial load.Use the following setting for Database Writer's Ignorable Exception Code property:DUPLICATE_ROW_EXISTS, NO_OP_UPDATE, NO_OP_DELETEWhen you know all open transactions have completed and been written to the target, undeploy and stop the application, edit the target, clear the Ignorable Exception Code value, save, and deploy and start the application. Since recovery is enabled, writing will resume where it left off, and there should be no missing or duplicate rows.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
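To make the Oracle case above concrete, here is a hedged TQL fragment for the continuous replication application. The SCN value, connection details, table mappings, and exact property spellings (StartSCN, IgnorableExceptionCode) are illustrative assumptions; use the Oracle Reader and Database Writer property references for the authoritative names, and enable recovery on the application as described above.

-- CDC source starting from the SCN recorded before the initial load.
CREATE SOURCE OracleCDCIn USING OracleReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'localhost:1521:orcl',
  Tables: 'HR.%',
  StartSCN: '1234567'
)
OUTPUT TO OracleCDCStream;

-- Target with the ignorable exception codes listed above, so rows already
-- written during the initial load do not halt the application.
CREATE TARGET ReplicaOut USING DatabaseWriter (
  ConnectionURL: 'jdbc:postgresql://localhost:5432/replica',
  Username: 'striim',
  Password: '********',
  Tables: 'HR.%,public.%',
  IgnorableExceptionCode: 'DUPLICATE_ROW_EXISTS,NO_OP_UPDATE,NO_OP_DELETE'
)
INPUT FROM OracleCDCStream;

Once all open transactions have been written to the target, the Ignorable Exception Code value would be cleared as described in the steps above.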
Last modified: 2023-04-18\n", "metadata": {"source": "https://www.striim.com/docs/en/switching-from-initial-load-to-continuous-replication.html", "title": "Switching from initial load to continuous replication", "language": "en"}} {"page_content": "\n\nHanding apps off to QA or productionSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideHanding apps off to QA or productionPrevNextHanding apps off to QA or productionWhen handing applications off to QA, production, or anyone else not using the same instance of Striim that you use, give them the following:\u00a0an exported TQL file for the application (see\u00a0Exporting applications and dashboards)exported JSON file(s) for any dashboard(s)the SCM file(s) for any Open Processor components (see Creating an open processor component)When writing applications that will be handed off, consider creating vaults with the same name in the various namespaces used by the teams. If the vaults' entries have the same names but different values, the applications can use different connection URLs, user names, passwords, keys, and so on with no need to revise the TQL. See Using vaults for more details.See also\u00a0Managing the application lifecycle.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-08\n", "metadata": {"source": "https://www.striim.com/docs/en/handing-apps-off-to-qa-or-production.html", "title": "Handing apps off to QA or production", "language": "en"}} {"page_content": "\n\nIntermediate TQL programming: common patternsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideIntermediate TQL programming: common patternsPrevNextIntermediate TQL programming: common patternsThe topics in this section assume that you have read Fundamentals of TQL programming.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2018-09-06\n", "metadata": {"source": "https://www.striim.com/docs/en/intermediate-tql-programming--common-patterns.html", "title": "Intermediate TQL programming: common patterns", "language": "en"}} {"page_content": "\n\nGetting data from sourcesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideIntermediate TQL programming: common patternsGetting data from sourcesPrevNextGetting data from sourcesThe appropriate programming pattern for getting data from a source depends on the output type of the source's adapter.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/getting-data-from-sources.html", "title": "Getting data from sources", "language": "en"}} {"page_content": "\n\nSources with WAEvent outputSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideIntermediate TQL programming: common patternsGetting data from sourcesSources with WAEvent outputPrevNextSources with WAEvent outputThe basic programming pattern for a source using an adapter with the WAEvent output type is:source > stream > CQ > streamYou do not need to define the streams explicitly: instead, define them implicitly as part of the source and CQ, and they will be created automatically. 
For example, from PosApp:CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'posdata.csv',\n blocksize: 10240,\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n)\nOUTPUT TO CsvStream;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')) as hourValue,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM CsvStream;In addition to the explicitly defined CsvDataSource and CsvToPosData, this will create the \"implicit\" stream CsvStream (of type WAEvent), PosDataStream_Type, and PosDataStream:You can inspect these using the DESCRIBE command, for example:W (admin) > describe type Samples.PosDataStream_Type;\nProcessing - describe type Samples.PosDataStream_Type\nTYPE Samples.PosDataStream_Type CREATED 2016-01-22 12:35:34\nATTRIBUTES (\n merchantId java.lang.String\n dateTime org.joda.time.DateTime\n hourValue java.lang.Integer\n amount java.lang.Double\n zip java.lang.String\n)For more information, see Parsing the data field of WAEvent.The code for XML sources is slightly different. For example, from MultiLogApp:CREATE SOURCE Log4JSource USING FileReader (\n directory:'Samples/MultiLogApp/appData',\n wildcard:'log4jLog.xml',\n positionByEOF:false\n) \nPARSE USING XMLParser(\n rootnode:'/log4j:event',\n columnlist:'log4j:event/@timestamp,\n log4j:event/@level,\n log4j:event/log4j:message,\n log4j:event/log4j:throwable,\n log4j:event/log4j:locationInfo/@class,\n log4j:event/log4j:locationInfo/@method,\n log4j:event/log4j:locationInfo/@file,\n log4j:event/log4j:locationInfo/@line'\n)\nOUTPUT TO RawXMLStream;Here, fields 0-7 in the WAEvent data array are defined by columnlist. 
These are then parsed by the CQ ParseLog4J:CREATE TYPE Log4JEntry (\n logTime DateTime,\n level String,\n message String,\n api String,\n sessionId String,\n userId String,\n sobject String,\n xception String,\n className String,\n method String,\n fileName String,\n lineNum String\n);\nCREATE STREAM Log4JStream OF Log4JEntry;\n\nCREATE CQ ParseLog4J\nINSERT INTO Log4JStream\nSELECT TO_DATE(TO_LONG(data[0])),\n data[1],\n data[2], \n MATCH(data[2], '\\\\\\\\[api=([a-zA-Z0-9]*)\\\\\\\\]'),\n MATCH(data[2], '\\\\\\\\[session=([a-zA-Z0-9\\\\-]*)\\\\\\\\]'),\n MATCH(data[2], '\\\\\\\\[user=([a-zA-Z0-9\\\\-]*)\\\\\\\\]'),\n MATCH(data[2], '\\\\\\\\[sobject=([a-zA-Z0-9]*)\\\\\\\\]'),\n data[3],\n data[4],\n data[5],\n data[6],\n data[7]\nFROM RawXMLStream;See MultiLogApp for a more detailed discussion including explanation of the MATCH function.If you preferred, instead of separately defining Log4JStream as above, you could define it within the ParseLog4J CQ, as follows:CREATE CQ ParseLog4J\nINSERT INTO Log4JStream\nSELECT TO_DATE(TO_LONG(data[0])) as logTime,\n TO_STRING(data[1]) as level,\n TO_STRING(data[2]) as message,\n TO_STRING(MATCH(data[2], '\\\\\\\\[api=([a-zA-Z0-9]*)\\\\\\\\]')) as api,\n TO_STRING(MATCH(data[2], '\\\\\\\\[session=([a-zA-Z0-9\\\\-]*)\\\\\\\\]')) as sessionId,\n TO_STRING(MATCH(data[2], '\\\\\\\\[user=([a-zA-Z0-9\\\\-]*)\\\\\\\\]')) as userId,\n TO_STRING(MATCH(data[2], '\\\\\\\\[sobject=([a-zA-Z0-9]*)\\\\\\\\]')) as sobject,\n TO_STRING(data[3]) as xception,\n TO_STRING(data[4]) as className,\n TO_STRING(data[5]) as method,\n TO_STRING(data[6]) as fileName,\n TO_STRING(data[7]) as lineNum\nFROM RawXMLStream;With this approach, Log4JStream's automatically generated type would be Log4JStream_Type, so you would have to replace the four other references to Log4JEntry in the application with Log4JStream_Type.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2021-09-01\n", "metadata": {"source": "https://www.striim.com/docs/en/sources-with-waevent-output.html", "title": "Sources with WAEvent output", "language": "en"}} {"page_content": "\n\nJSON sources with custom output typesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideIntermediate TQL programming: common patternsGetting data from sourcesJSON sources with custom output typesPrevNextJSON sources with custom output typesThe basic programming pattern for sources using adapters with the JSONParser is:\ntype, stream, source\nFirst define the type for the output stream, then the stream, then the source. 
For example:CREATE TYPE ScanResultType (\n timestamp1 String,\n rssi String\n);\nCREATE STREAM ScanResultStream OF ScanResultType;\n\nCREATE SOURCE JSONSource USING FileReader (\n directory: 'Samples',\n WildCard: 'sample.json',\n positionByEOF: false\n)\nPARSE USING JSONParser (\n eventType: 'ScanResultType',\n fieldName: 'scanresult'\n)\nOUTPUT TO ScanResultStream;See JSONParser for a more detailed discussion.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/json-sources-with-custom-output-types.html", "title": "JSON sources with custom output types", "language": "en"}} {"page_content": "\n\nFiltering data in a sourceSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideIntermediate TQL programming: common patternsFiltering data in a sourcePrevNextFiltering data in a sourceThe following examples give an idea of various possibilities for filtering source data using OUTPUT TO and SELECT\u00a0in sources. This syntax is supported both by the server and the Forwarding Agent (see Using the Striim Forwarding Agent).In sources,\u00a0SELECT statements must use\u00a0DATA[#] function functions (see also\u00a0Parsing the data field of WAEvent. To select using\u00a0DATA(x) and DATAORDERED(x) functions\u00a0or\u00a0META() function\u00a0functions, you must create a CQ on the output of the source (see also\u00a0Using the DATA(), DATAORDERED(), BEFORE(), and BEFOREORDERED() functions).Select only events where the fifth column value is greater than 10,000:CREATE SOURCE ...\nOUTPUT TO CSVStream (a String, b String, c String, d String,e String)\n SELECT * WHERE TO_INT(data[4]) > 10000;Filter out second and third columns:... OUTPUT TO CSVStream (a String, b String, c String)\n SELECT data[0], data[3],data[4] ;Cast the first and third columns as integers:... OUTPUT TO CSVStream (a Integer, b String, c Integer, d String, e String)\n SELECT TO_INT(data[0]), data[1], TO_INT(data[2]), data[3], data[4];Cast the first and third columns as integers and select only events where the fifth column value is greater than 10,000:... OUTPUT TO CSVStream (a Integer, b String, c Integer, d String, e String)\nSELECT TO_INT(data[0]), data[1], TO_INT(data[2]), data[3], data[4]\n where TO_INT(data[4]) > 10000)Add the first and third columns and output as a single field a:... OUTPUT TO CSVStream (a Integer, b String, c String, d String) \n SELECT TO_INT(data[0])+TO_INT(data[2]), data[1], data[3],data[4] ;You can also use OUTPUT TO to split events among multiple streams based on their field values.When the fifth column value is over 10,000, output the event to to HighOrderStream, when the value is 10,000 or less, output it to LowOrderStream:... 
OUTPUT to HighOrderStream (a Integer, b String, c Integer, d String, e String)\n SELECT TO_INT(data[0]), data[1], TO_INT(data[2]), data[3], data[4]\n WHERE TO_INT(data[4]) > 10000),\nOUTPUT to LowOrderStream (a Integer, b String, c Integer, d String, e String)\n SELECT TO_INT(data[0]), data[1], TO_INT(data[2]), data[3], data[4]\n WHERE TO_INT(data[4]) <= 10000);Output all events to FullStream and only events where the fifth column value is 10,000 or less to LowOrderStream:... OUTPUT to FullStream,\nOUTPUT to LowOrderStream (a Integer, b String, c Integer, d String, e String)\nSELECT TO_INT(data[0]), data[1], TO_INT(data[2]), data[3], data[4]\n WHERE TO_INT(data[4]) <= 10000);In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2018-10-04\n", "metadata": {"source": "https://www.striim.com/docs/en/filtering-data-in-a-source.html", "title": "Filtering data in a source", "language": "en"}} {"page_content": "\n\nParsing sources with regular expressionsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideIntermediate TQL programming: common patternsParsing sources with regular expressionsPrevNextParsing sources with regular expressionsRegular expressions are frequently used to parse source data. See Using regular expressions (regex) for an introduction.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
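Tying the filtering fragments shown earlier back to a complete component, the sketch below fills in the elided CREATE SOURCE portion around the "fifth column greater than 10,000" example. The directory, file name, column names, and stream name are placeholders; only the OUTPUT TO ... SELECT ... WHERE pattern is taken from the examples above.

CREATE SOURCE FilteredCsvSource USING FileReader (
  directory:'Samples/appData',
  wildcard:'orders.csv',
  positionByEOF:false
)
PARSE USING DSVParser (
  header:Yes
)
-- Emit only rows whose fifth column exceeds 10,000.
OUTPUT TO HighOrderStream (a String, b String, c String, d String, e String)
  SELECT * WHERE TO_INT(data[4]) > 10000;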
Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/parsing-sources-with-regular-expressions.html", "title": "Parsing sources with regular expressions", "language": "en"}} {"page_content": "\n\nParsing HTTP log entriesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideIntermediate TQL programming: common patternsParsing sources with regular expressionsParsing HTTP log entriesPrevNextParsing HTTP log entriesThe following example uses the FreeFormTextParser regex property to match several patterns in the following log entry.log {start 1234567890.123456} {addr 123.456.789.012} {port 12345} {method POST} {url /abc/def.ghi} {agent {Mozilla/4.0 (compatible; MSIE 6.0; MS Web Services Client Protocol 1.1.12345.1234)}} {bytes 1234} {status 200} {end 1234567890.123456} {host 123.456.789.012}\n...In this case we use a positive lookbehind construct to match the start, addr, port, method, url, bytes, status, and end patterns, while excluding the log and agent patterns:regex:'((?<=start ).[^}]+)|((?<=addr ).[^}]+)|((?<=port ).[^}]+)|((?<=method ).[^}]+)|((?<=url ).[^}]+)|((?<=bytes ).[^}]+)|(?<=\\\\(\\\\#)[^\\\\)]+|((?<=status ).[^}]+)|((?<=end ).[^}]+)|((?<=host ).[^}]+)'Note that each capture group uses two sets of parentheses. For example,((?<=start ).[^}]+)The inner parentheses are used for the positive lookbehind syntax:(?<=start )The outer parentheses are used for the capture group. In this example, group[1]=1234567890.123456, which is used by the parser in its data array.Here is the TQL of the PARSE statement using the regex expression within a FreeFormTextParser:PARSE USING FreeFormTextParser (\n RecordBegin:'^log ',\n RecordEnd:'\\n',\n regex:'((?<=start ).[^}]+)|((?<=addr ).[^}]+)|((?<=port ).[^}]+)|((?<=method ).[^}]+)|((?<=url ).[^}]+)|((?<=bytes ).[^}]+)|(?<=\\\\(\\\\#)[^\\\\)]+|((?<=status ).[^}]+)|((?<=end ).[^}]+)|((?<=host ).[^}]+)',\n separator:'~'\n)In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
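The PARSE USING block above configures the parser only; downstream, the captured groups arrive in the WAEvent data array and can be mapped to a typed stream with a CQ, as in the sketch below. It assumes the source wrapping that parser writes to a stream named HttpLogRawStream, and the data[] index positions are assumptions based on the order of the capture groups in the regex; verify them against actual output before relying on them.

-- Map a few of the captured HTTP log fields to a typed stream.
CREATE CQ ParseHttpLog
INSERT INTO HttpLogStream
SELECT TO_STRING(data[0]) AS startTime,
  TO_STRING(data[1]) AS clientAddr,
  TO_STRING(data[3]) AS method,
  TO_STRING(data[4]) AS url
FROM HttpLogRawStream;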
\n", "metadata": {"source": "https://www.striim.com/docs/en/parsing-http-log-entries.html", "title": "Parsing HTTP log entries", "language": "en"}} {"page_content": "\n\nExtracting substrings from log entries\nThe MATCH function allows you to match a string using a regex expression (see Functions for details about this function). The third parameter indicates which capture group is used, which is useful when the input string can produce multiple capture groups with differing values. In the following TQL, session information is extracted from log data using basic regex expressions:MATCH(data[5], \"(?<=process: )([a-zA-Z0-9//$]*)\",1), /* process */ \nMATCH(data[5], \"(?<=pathway: )([a-zA-Z0-9//$]*)\",1), /* pathway */ \nMATCH(data[5], \"(?<=service code: )([a-zA-Z0-9//_]*)\",1), /* service code */ \nMATCH(data[5], \"(?<=model: )([a-zA-Z0-9]*)\",1), /* model */ \nMATCH(data[5], \"(?<=user id: )([0-9]*)\",1), /* userId */ \nMATCH(data[5], \"(?<=session IP: )([a-zA-Z0-9//.]*)\",1), /* session IP */ \nMATCH(data[5], \"(?<=source: )([a-zA-Z0-9//.//_///]*)\",1), /* source */ \nMATCH(data[5], \"(?<=detail: )(.*$)\",1) /* detail message */Here is an example of a typical log entry that may contain SEVERE or WARNING messages:Aug 22, 2014 11:17:19 AM org.apache.solr.common.SolrException log\nSEVERE: org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: Cannot parse '((suggest_title:(04) AND suggest_title:(gmc) AND suggest_title: AND suggest_title:(6.6l) AND suggest_title:(lb7) AND suggest_title:(p1094,p0234)) AND NOT (deleted:(true)))': Encountered \" <AND> \"AND \"\" at line 1, column 64.\nWas expecting one of:\n <BAREOPER> ...\n \"(\" ...\n \"*\" ...\n <QUOTED> ...\n <TERM> ...\n <PREFIXTERM> ...\n <WILDTERM> ...\n <REGEXPTERM> ...\n \"[\" ...\n \"{\" ...\n <LPARAMS> ...\n <NUMBER> ...\n \n at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:147)\n at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:187)\n at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) ...We would like to reduce this information to the following:fftpInfo: fftpOutType_1_0{\n msg: \"SEVERE: org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError: Cannot parse '((suggest_title:(04) AND suggest_title:(gmc) AND suggest_title: AND suggest_title:(6.6l) AND suggest_title:(lb7) AND suggest_title:(p1094,p0234)) AND NOT (deleted:(true)))': Encountered \\\" <AND> \\\"AND \\\"\\\" at line 1, column 64.\"\n origTs: 1408731439000\n};To do this, we will include the following regex in the FreeFormTextParser properties:regex:'(SEVERE:.*|WARNING:.*)'Here is the complete TQL:create source fftpSource using FileReader (\n directory:'Samples/',\n WildCard:'catalina*.log',\n 
charset:'UTF-8',\n positionByEOF:false\n)\nparse using FreeFormTextParser (\n-- Timestamp format in log is \"Aug 21, 2014 8:33:56 AM\"\n TimeStamp:'%mon %d, %yyyy %H:%M:%S %p',\n RecordBegin:'%mon %d, %yyyy %H:%M:%S %p',\n regex:'(SEVERE:.*|WARNING:.*)'\n)\nOUTPUT TO fftpInStream;\n\nCREATE TYPE fftpOutType (\n msg String,\n origTs long\n);\n\ncreate stream fftpOutStream of fftpOutType;\n\ncreate cq fftpOutCQ\ninsert into fftpOutStream\nselect data[0],\n TO_LONG(META(x,'OriginTimestamp'))\nfrom fftpInStream x;\n\ncreate Target fftpTarget using SysOut(name:fftpInfo) input from fftpOutStream;\nSee FreeFormTextParser for more information.\n", "metadata": {"source": "https://www.striim.com/docs/en/extracting-substrings-from-log-entries.html", "title": "Extracting substrings from log entries", "language": "en"}} {"page_content": "\n\nMatching IPv4 subnet octets\nAs noted previously, multiple escapes for [ and ] in regular expressions are required because they are reserved characters in both the Striim TQL compiler and Java.match(s.srcIp,'(^\\\\\\\\d{1,3}\\\\\\\\.)') - first octet\nmatch(s.srcIp,'(^\\\\\\\\d{1,3}\\\\\\\\.\\\\\\\\d{1,3})\\\\\\\\.'), - first and second octet\nmatch(s.srcIp,'(^\\\\\\\\d{1,3}\\\\\\\\.\\\\\\\\d{1,3}\\\\\\\\.\\\\\\\\d{1,3}\\\\\\\\.)') - first, second, and third octet
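As a sketch of how these expressions might be used in context (this is not from the Striim documentation; the stream, output, and alias names are hypothetical), a CQ could extract the subnet prefixes into a separate stream, using the same escaping convention shown above:

CREATE CQ ExtractSubnetsCQ
INSERT INTO SubnetStream
SELECT s.srcIp,
    match(s.srcIp,'(^\\\\\\\\d{1,3}\\\\\\\\.)') AS firstOctet,
    match(s.srcIp,'(^\\\\\\\\d{1,3}\\\\\\\\.\\\\\\\\d{1,3})\\\\\\\\.') AS firstTwoOctets,
    match(s.srcIp,'(^\\\\\\\\d{1,3}\\\\\\\\.\\\\\\\\d{1,3}\\\\\\\\.\\\\\\\\d{1,3}\\\\\\\\.)') AS firstThreeOctets
FROM AccessStream s;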
\n", "metadata": {"source": "https://www.striim.com/docs/en/matching-ipv4-subnet-octets.html", "title": "Matching IPv4 subnet octets", "language": "en"}} {"page_content": "\n\nParsing SOAP entries\nWe will use the FreeFormTextParser regex property to match several patterns in the following log entry:>> 2015/1/14 16:20:10: :<< Received request from remote address: 123.45.6.789\n>> 2015/1/14 16:20:10: :<< Path Name: $name1, Class Name: CLASS-1\n>> 2015/1/14 16:20:11: :<< Service Name: Service_1, Response Time: 123.456789 milliseconds\n<model>E</model>\n<userid>0000000103</userid>\n...In this case we also use a positive lookbehind construct to match the remote address, path name, service name, response time, model, and user ID:regex:'((?<=remote address: )[\\\\d\\\\.]+)|((?<=Path Name: )[^ ]+)|((?<=\\\\<\\\\< Service Name: )[^,]+)|\n ((?<=Response Time: )[^ ]+)|((?<=\\\\<model\\\\>)([a-zA-Z0-9]+))|((?<=\\\\<userid\\\\>)([0-9]+))',Here is the TQL of the PARSE statement using the regex expression within a FreeFormTextParser: PARSE USING FreeFormTextParser (\n RecordBegin:'Start>>> POST INPUT',\n TimeStamp:'>> %yyyy/%m/%d %H:%M:%S: :<<',\n linecontains:'>> %yyyy/%m/%d %H:%M:%S: :<<',\n RecordEnd:' milliseconds',\n regex:'((?<=remote address: )[\\\\d\\\\.]+)|((?<=Path Name: )[^ ]+)|\n ((?<=\\\\<\\\\< Service Name: )[^,]+)|\n ((?<=Response Time: )[^ ]+)|((?<=\\\\<model\\\\>)([a-zA-Z0-9]+))|\n ((?<=\\\\<userid\\\\>)([0-9]+))',\n separator:'~'\n )
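Once parsed, each matched group lands in the event's data array in the order the patterns appear in the regex. The following is only a sketch, not from the Striim documentation (the type, stream, and source names are hypothetical), of a CQ that converts that array into a typed stream:

CREATE TYPE SoapEntryType (
    remoteAddress String,
    pathName String,
    serviceName String,
    responseTime Double,
    model String,
    userId String
);
CREATE STREAM SoapEntryStream OF SoapEntryType;

CREATE CQ SoapEntryCQ
INSERT INTO SoapEntryStream
SELECT TO_STRING(data[0]),
    TO_STRING(data[1]),
    TO_STRING(data[2]),
    TO_DOUBLE(data[3]),
    TO_STRING(data[4]),
    TO_STRING(data[5])
FROM SoapRawStream;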
\n", "metadata": {"source": "https://www.striim.com/docs/en/parsing-soap-entries.html", "title": "Parsing SOAP entries", "language": "en"}} {"page_content": "\n\nBounding data with windows\nWhen a CQ's source is a stream, it is limited to acting on single events, as in this example from MultiLogApp:CREATE CQ GetErrors \nINSERT INTO ErrorStream \nSELECT log4j \nFROM Log4ErrorWarningStream log4j WHERE log4j.level = 'ERROR';This triggers an alert whenever an error appears in the log.To aggregate, join, or perform calculations on the data, you must create a bounded data set. The usual way to do this is with a window, which bounds the stream by a specified number of events or period of time. As discussed in the Concepts Guide (see Window), this may be a sliding window, which always contains the most recent set of events, or a jumping window, which breaks the stream up into successive chunks.The basic programming pattern for a window is:stream > window > CQWhat properties a window should have depends on the nature of the data in the input stream and what you want the CQ to do with it. There are six basic variations depending on whether you are bounding the data in batches (jumping) or continuously (sliding) and whether you are bounding by time, event count, or both.
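A minimal skeleton of that pattern, as a sketch only (the stream, window, and field names are hypothetical, not from the documentation): the window bounds OrderStream to its 100 most recent events, and the CQ aggregates whatever the window currently contains.

CREATE WINDOW RecentOrders
OVER OrderStream
KEEP 100 ROWS;

CREATE CQ SummarizeRecentOrders
INSERT INTO OrderSummaryStream
SELECT COUNT(*), SUM(o.amount)
FROM RecentOrders o;

The following topics walk through the six variations of this pattern.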
\n", "metadata": {"source": "https://www.striim.com/docs/en/bounding-data-with-windows.html", "title": "Bounding data with windows", "language": "en"}} {"page_content": "\n\nBound data in batches by time\nThe following uses aggregate functions to summarize events in a stream every fifteen minutes:CREATE TYPE OrderType(\n storeId String,\n orderId String,\n sku String,\n orderAmount double,\n dateTime DateTime\n);\nCREATE STREAM RetailOrders Of OrderType;\n...\nCREATE JUMPING WINDOW ProductData_15MIN\nOVER RetailOrders\nKEEP WITHIN 15 MINUTE ON dateTime;\n\nCREATE CQ GetProductActivity\nINSERT INTO ProductTrackingStream\nSELECT pd.sku, COUNT(*), SUM(pd.orderAmount), FIRST(pd.dateTime)\nFROM ProductData_15MIN pd;Every 15 minutes, the output stream receives one event per SKU for which there was at least one order. Each SKU's event includes the number of orders (COUNT(*)), the total amount of those orders (SUM(pd.orderAmount)), and the timestamp of the first order in the batch (FIRST(pd.dateTime)). You could use this data to graph changes over time or trigger alerts when the count or total amount vary significantly from what you expect.Say that orders are not received 24 hours a day, seven days a week, but instead drop to zero after closing hours. In that case, the code above could leave events in the window overnight, where they would be mistakenly reported as occurring in the morning. To avoid that, use RANGE to add a timeout:CREATE JUMPING WINDOW ProductData_15MIN\nOVER RetailOrders\nKEEP RANGE 15 MINUTE ON dateTime WITHIN 16 MINUTE;Now, so long as orders are being placed, the window will jump every 15 minutes, but if orders stop, the window will jump after 16 minutes.
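While developing, you might verify the aggregation by attaching a simple SysOut target to the output stream (a sketch, not part of the original example; the target name is hypothetical), then swap in the real target once the events look correct:

CREATE TARGET ProductTrackingOut
USING SysOut(name:ProductTracking)
INPUT FROM ProductTrackingStream;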
\n", "metadata": {"source": "https://www.striim.com/docs/en/bound-data-in-batches-by-time.html", "title": "Bound data in batches by time", "language": "en"}} {"page_content": "\n\nBound data in batches by event count\nAlternatively, you can aggregate data in batches of n number of events. For example:CREATE JUMPING WINDOW ProductData_100\nOVER RetailOrders\nKEEP 100 ROWS\nPARTITION BY storeId;\n\nCREATE CQ GetProductActivity\nINSERT INTO ProductTrackingStream\nSELECT pd.storeId, COUNT(*), SUM(pd.orderAmount), FIRST(pd.dateTime)\nFROM ProductData_100 pd;The output stream receives an event for each batch of 100 events from each store. You might use code like this to trigger a potential fraud alert when a store's order count or amount is anomalously high. (PARTITION BY storeId means the window will contain the data for the most recent 100 orders for each store. Without this clause, the window would contain 100 events total for all stores.)\n", "metadata": {"source": "https://www.striim.com/docs/en/bound-data-in-batches-by-event-count.html", "title": "Bound data in batches by event count", "language": "en"}} {"page_content": "\n\nBound data continuously by time\nThe following variation on ProductData_15MIN summarizes events continuously:CREATE WINDOW ProductData_15MIN\nOVER RetailOrders\nKEEP WITHIN 15 MINUTE ON dateTime;\n\nCREATE CQ GetProductActivity\nINSERT INTO ProductTrackingStream\nSELECT pd.sku, COUNT(*), SUM(pd.orderAmount)\nFROM ProductData_15MIN pd;Omitting JUMPING from the window definition creates a sliding window (see Window in the Concepts Guide). Every time a new order is received, or one is dropped because it has been in the window for 15 minutes, the output stream receives a new event updating the number of orders and total amount of those orders for the past 15 minutes.
FIRST(pd.dateTime) is unnecessary since you know the window always contains the most recent orders.\n", "metadata": {"source": "https://www.striim.com/docs/en/bound-data-continuously-by-time.html", "title": "Bound data continuously by time", "language": "en"}} {"page_content": "\n\nBound data continuously by event count\nAlternatively, you could bound the window by event count:CREATE WINDOW ProductData_100\nOVER RetailOrders\nKEEP 100 ROWS\nPARTITION BY sku;\n\nCREATE CQ GetProductActivity100\nINSERT INTO ProductTrackingStream\nSELECT pd.sku, SUM(pd.orderAmount)\nFROM ProductData_100 pd;Every time an order is received, the oldest order for that SKU is dropped and the output stream receives a new event updating the total amount of the most recent 100 orders for that SKU. COUNT(*) is unnecessary since the window always contains 100 events. (PARTITION BY sku means the window will contain the data for the most recent 100 orders for each SKU. Without this clause, the window would contain 100 events total for all SKUs.)\n", "metadata": {"source": "https://www.striim.com/docs/en/bound-data-continuously-by-event-count.html", "title": "Bound data continuously by event count", "language": "en"}} {"page_content": "\n\nBounding by both time and event count\nWhen appropriate, you may bound a window by both time and event count.
For example:CREATE WINDOW StoreTimeoutWindow\nOVER RetailOrders\nKEEP 1 ROWS WITHIN 5 MINUTE \nPARTITION BY storeId;\n\nCREATE TYPE StoreNameData(\n storeId String KEY,\n storeName String\n);\nCREATE CACHE StoreNameLookup using FileReader (\n directory: 'stores',\n wildcard: 'StoreNames.csv'\n)\nPARSE USING DSVParser(\n header: Yes\n) QUERY(keytomap:'storeId') OF StoreNameData;\n\nCREATE CQ StoreTimeoutCheck\nINSERT INTO StoreTimeoutStream\nSELECT c.storeId\nFROM StoreNameLookup c\nLEFT OUTER JOIN StoreTimeoutWindow w\nON w.storeId = c.storeId\nWHERE w.storeId IS NULL\nGROUP BY c.storeId;This five-minute sliding window contains the most recent order event (KEEP 1 ROWS) for each store. The output from the CQ could be used to generate an alert whenever five minutes have passed without a new order (indicating that the store is probably offline).\n", "metadata": {"source": "https://www.striim.com/docs/en/bounding-by-both-time-and-event-count.html", "title": "Bounding by both time and event count", "language": "en"}} {"page_content": "\n\nUsing a window to define an alert threshold\nThe following could be used to send an alert when the number of events in the window exceeds 15:CREATE TYPE myEventType (\n EventTime DateTime,\n KeyVal String,\n EventText String\n);\n...\nCREATE WINDOW oneHourSlidingWindow \n OVER eventStream\n KEEP WITHIN 1 HOUR ON EventTime\n PARTITION BY KeyVal;\n\nCREATE CQ WactionStore_q INSERT INTO WactionStore_ws\nSELECT ISTREAM\n KeyVal,\n EventTime,\n EventText,\n COUNT(KeyVal)\nFROM oneHourSlidingWindow\nGROUP BY KeyVal\nHAVING COUNT(KeyVal) > 15;The ISTREAM option stops the CQ from emitting events when COUNT(KeyVal) decreases due to events being removed from the window after the one-hour timeout.
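As a sketch (not from the documentation; the stream and target names are hypothetical), the same threshold query could instead write to a plain stream that a downstream target or alert flow consumes:

CREATE CQ ThresholdAlertCQ
INSERT INTO ThresholdAlertStream
SELECT ISTREAM KeyVal, COUNT(KeyVal)
FROM oneHourSlidingWindow
GROUP BY KeyVal
HAVING COUNT(KeyVal) > 15;

CREATE TARGET ThresholdAlertOut
USING SysOut(name:ThresholdAlerts)
INPUT FROM ThresholdAlertStream;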
\n", "metadata": {"source": "https://www.striim.com/docs/en/using-a-window-to-define-an-alert-threshold.html", "title": "Using a window to define an alert threshold", "language": "en"}} {"page_content": "\n\nJoining cache data with CQs\nThe standard programming pattern for getting data from a cache is:\ntype, cache > CQ\nCaches are similar to sources and use the same adapters. The type must correctly describe the data source, with the correct data type for each field or column.The following is typical TQL for a cache that is loaded from a file on disk:CREATE TYPE ZipCache_Type(\n Zip String, \n City String, \n State String, \n LatVal Double, \n LongVal Double \n);\nCREATE CACHE ZipCache USING FileReader ( \n directory: 'Samples/PosApp/appData/',\n wildcard: 'USAddressesPreview.txt',\n charset: 'UTF-8',\n blockSize: '64',\n positionbyeof: 'false'\n) \nPARSE USING DSVPARSER ( \n columndelimiter: '\\t',\n header: 'true'\n) \nQUERY (keytomap: 'Zip') OF ZipCache_Type;\n\nCREATE CQ GenerateWactionContext\nINSERT INTO PosSourceData\nSELECT p.MERCHANTID,\n p.DATETIME,\n p.AUTHAMOUNT,\n z.Zip,\n z.City,\n z.State,\n z.LatVal,\n z.LongVal\nFROM PosSource_TransformedStream p, ZipCache z\nWHERE p.ZIP = z.Zip;GenerateWactionContext enriches PosSource_TransformedStream with location information from ZipCache by matching values in the stream's ZIP field with values in the cache's Zip field. (Though the keyword JOIN does not appear, this is an inner join.) The PosSourceData WActionStore can then be used to populate maps. To track any un-joinable events, see TrapZipMatchingErrors in Handling nulls with CQs.When a cache's source is a database, use the refreshinterval property to control how often the cache is updated:CREATE CACHE ZipCache USING DatabaseReader (\n ConnectionURL:'jdbc:mysql://10.1.10.149/datacenter',\n Username:'username',\n Password:'passwd',\n Query: \"SELECT * FROM ZipData\"\n)\nPARSE USING DSVPARSER ( \n columndelimiter: '\\t',\n header: 'true'\n) \nQUERY (keytomap: 'Zip', refreshinterval: '60000000') OF ZipCache_Type;See CREATE CACHE for more details.
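If you want enriched events to keep flowing even when the cache has no matching entry, a left outer join with CASE expressions can substitute default values instead of dropping the event. This is a sketch only (not from PosApp; the CQ name and the 'UNKNOWN' defaults are hypothetical):

CREATE CQ GenerateWactionContextWithDefaults
INSERT INTO PosSourceData
SELECT p.MERCHANTID,
    p.DATETIME,
    p.AUTHAMOUNT,
    p.ZIP,
    CASE WHEN z.City IS NOT NULL THEN z.City ELSE 'UNKNOWN' END,
    CASE WHEN z.State IS NOT NULL THEN z.State ELSE 'UNKNOWN' END,
    z.LatVal,
    z.LongVal
FROM PosSource_TransformedStream p
LEFT OUTER JOIN ZipCache z
ON p.ZIP = z.Zip;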
\n", "metadata": {"source": "https://www.striim.com/docs/en/joining-cache-data-with-cqs.html", "title": "Joining cache data with CQs", "language": "en"}} {"page_content": "\n\nFiltering data with CQs\nThe basic pattern for a CQ that filters data is:\nstream / window / cache / WActionStore > CQ > stream / WActionStore\nA CQ may join data from multiple inputs.\n", "metadata": {"source": "https://www.striim.com/docs/en/filtering-data-with-cqs.html", "title": "Filtering data with CQs", "language": "en"}} {"page_content": "\n\nSimple filtering by a single criterion\nThe GetErrors query, from the MultiLogApp sample application, filters the log file data in Log4ErrorWarningStream to pass only error messages to ErrorStream:CREATE CQ GetErrors \nINSERT INTO ErrorStream \nSELECT log4j \nFROM Log4ErrorWarningStream log4j WHERE log4j.level = 'ERROR';Messages with any other error level (such as WARN or INFO) are discarded.
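Filters may also combine conditions. A sketch (not from MultiLogApp; it assumes the events carry a message field) that passes only errors with a non-null message:

CREATE CQ GetErrorsWithMessages
INSERT INTO ErrorWithMessageStream
SELECT log4j
FROM Log4ErrorWarningStream log4j
WHERE log4j.level = 'ERROR' AND log4j.message IS NOT NULL;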
\n", "metadata": {"source": "https://www.striim.com/docs/en/simple-filtering-by-a-single-criterion.html", "title": "Simple filtering by a single criterion", "language": "en"}} {"page_content": "\n\nFiltering fields\nA CQ can select desired fields from a stream, cache, or WActionStore and discard the rest. For example, this CQ from MultiLogApp selects only two of the fields (accessTime and srcIp) from its input stream:CREATE TYPE AccessLogEntry (\n srcIp String KEY,\n userId String,\n sessionId String,\n accessTime DateTime ...\n\nCREATE STREAM HackerStream OF AccessLogEntry;\n...\n\nCREATE CQ SendHackingAlerts \nINSERT INTO HackingAlertStream \nSELECT 'HackingAlert', ''+accessTime, 'warning', 'raise',\n 'Possible Hacking Attempt from ' + srcIp + ' in ' + IP_COUNTRY(srcIp)\nFROM HackerStream;\n", "metadata": {"source": "https://www.striim.com/docs/en/filtering-fields.html", "title": "Filtering fields", "language": "en"}} {"page_content": "\n\nSelecting events based on cache entries\nThis CQ, from MultiLogApp, selects only events where the IP address is found in a blacklist cache. Events with IP addresses that are not on the blacklist are discarded.CREATE CQ FindHackers\nINSERT INTO HackerStream\nSELECT ale \nFROM AccessStream ale, BlackListLookup bll\nWHERE ale.srcIp = bll.ip;In this context, SELECT ale selects all the fields from AccessStream (since its alias is ale) and none from BlackListLookup.
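The opposite filter, keeping only events whose IP address is not on the blacklist, can be written with a left outer join and a null check, following the pattern described in Handling nulls with CQs. A sketch (not from MultiLogApp; the CQ and output stream names are hypothetical):

CREATE CQ FindNonHackers
INSERT INTO CleanAccessStream
SELECT ale
FROM AccessStream ale
LEFT OUTER JOIN BlackListLookup bll
ON ale.srcIp = bll.ip
WHERE bll.ip IS NULL;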
\n", "metadata": {"source": "https://www.striim.com/docs/en/selecting-events-based-on-cache-entries.html", "title": "Selecting events based on cache entries", "language": "en"}} {"page_content": "\n\nUsing multiple CQs for complex criteria\nApplications often combine multiple CQs and windows to select events based on complex criteria. For example, from MultiLogApp:CREATE CQ GetLog4JErrorWarning\nINSERT INTO Log4ErrorWarningStream\nSELECT l FROM Log4JStream l\nWHERE l.level = 'ERROR' OR l.level = 'WARN';\n\nCREATE WINDOW Log4JErrorWarningActivity \nOVER Log4ErrorWarningStream KEEP 300 ROWS;\n...\n\nCREATE CQ FindLargeRT\nINSERT INTO LargeRTStream\nSELECT ale\nFROM AccessStream ale\nWHERE ale.responseTime > 2000;\n\nCREATE WINDOW LargeRTActivity \nOVER LargeRTStream KEEP 100 ROWS; \n...\n\nCREATE CQ MergeLargeRTAPI\nINSERT INTO LargeRTAPIStream\nSELECT lrt.accessTime, lrt.sessionId, lrt.srcIp, lrt.userId ...\nFROM LargeRTActivity lrt, Log4JErrorWarningActivity log4j\nWHERE lrt.sessionId = log4j.sessionId\n AND lrt.accessTime = log4j.logTime; The Log4JErrorWarningActivity window, populated by the GetLog4JErrorWarning CQ, contains the most recent 300 error and warning messages from the application log.The LargeRTActivity window, populated by the FindLargeRT CQ, contains the most recent 100 messages from the web server access log with response times over 2000 microseconds.The MergeLargeRTAPI CQ joins events from the two windows that have matching session IDs and access times and filters out unneeded fields. This filtered and joined data triggers alerts about the unusually long response times and is also used to populate dashboard displays.See MultiLogApp for more details. See TQL programming rules and best practices for discussion of why the windows are required for the join.
\n", "metadata": {"source": "https://www.striim.com/docs/en/using-multiple-cqs-for-complex-criteria.html", "title": "Using multiple CQs for complex criteria", "language": "en"}} {"page_content": "\n\nAggregating data with CQs\nTo aggregate data or perform any sort of calculation on it (see Functions), a CQ must select from a window (see Bounding data with windows). The basic pattern for a CQ that aggregates data is:\nwindow / cache / WActionStore > CQ > stream / WActionStore\nA CQ that aggregates data may have multiple inputs, including one or more streams, but at least one input must be a window, cache, or WActionStore. See TQL programming rules and best practices for more about this limitation.Here is a simple example, from PosApp:CREATE JUMPING WINDOW PosData5Minutes\nOVER PosDataStream KEEP WITHIN 5 MINUTE ON dateTime PARTITION BY merchantId;\n\nCREATE CQ GenerateMerchantTxRateOnly\nINSERT INTO MerchantTxRateOnlyStream\nSELECT p.merchantId,\n FIRST(p.zip),\n FIRST(p.dateTime),\n COUNT(p.merchantId),\n SUM(p.amount) ...\nFROM PosData5Minutes p ...\nGROUP BY p.merchantId;Every time the PosData5Minutes window jumps, the GenerateMerchantTxRateOnly CQ will generate a summary event for each merchant including the merchant ID, the first Zip code and timestamp of the set, the number of transactions, and the total amount of the transactions. For more details, see PosApp.Applications may use multiple windows and multiple CQs to perform more complex aggregation tasks. For example, from MultiLogApp:CREATE JUMPING WINDOW ApiWindow \nOVER ApiEnrichedStream KEEP WITHIN 1 HOUR ON logTime \nPARTITION BY api;\n\nCREATE CQ GetApiUsage \nINSERT INTO ApiUsageStream \nSELECT a.api, a.sobject, COUNT(a.userId), FIRST(a.logTime) \nFROM ApiWindow a ...\n\nCREATE JUMPING WINDOW ApiSummaryWindow \nOVER ApiUsageStream KEEP WITHIN 1 HOUR ON logTime \nPARTITION BY api;\n\nCREATE CQ GetApiSummaryUsage \nINSERT INTO ApiActivity \nSELECT a.api, sum(a.count), first(a.logTime)\nFROM ApiSummaryWindow a ...The jumping ApiWindow aggregates the application log events from ApiEnrichedStream into one-hour sets for each API call (PARTITION BY api) based on the events' log times.Once an hour, when ApiWindow emits its latest set of data, the GetApiUsage CQ sends a summary event for each sobject in each API call including the name of the API call, the name of the sobject, the count of the sobject, and the log time of the first occurrence of that sobject during that hour (SELECT a.api, a.sobject, COUNT(a.userId), FIRST(a.logTime)).The ApiSummaryWindow contains the summary events emitted by GetApiUsage. This window jumps in sync with ApiWindow since both use KEEP WITHIN 1 HOUR ON logTime.The GetApiSummaryUsage CQ discards the sobject details and generates summary events for the API call.
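To act only on unusually busy merchants, the same jumping-window aggregation could add a HAVING clause. A sketch (not from PosApp; the CQ name, output stream, and the threshold of 100 are arbitrary):

CREATE CQ FlagBusyMerchants
INSERT INTO BusyMerchantStream
SELECT p.merchantId, COUNT(p.merchantId), SUM(p.amount)
FROM PosData5Minutes p
GROUP BY p.merchantId
HAVING COUNT(p.merchantId) > 100;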
\n", "metadata": {"source": "https://www.striim.com/docs/en/aggregating-data-with-cqs.html", "title": "Aggregating data with CQs", "language": "en"}} {"page_content": "\n\nHandling nulls with CQs\nThe following CQ would insert events from PosSource_TransformedStream (see Joining cache data with CQs) for which there was no matching zip code in ZipCache into NoMatchingZipStream:CREATE CQ TrapZipMatchingErrors\nINSERT INTO NoMatchingZipStream\nSELECT p.MERCHANTID,\n p.DATETIME,\n p.AUTHAMOUNT\nFROM PosSource_TransformedStream p\nLEFT OUTER JOIN ZipCache z\nON p.ZIP = z.Zip WHERE z.Zip IS NULL;The following CQ joins events from two streams using the MATCH_PATTERN (see Using pattern matching) clause. If after 24 hours no matching event has been received, the event is output with <Timeout> and null in place of the matching event's data source name and record.CREATE CQ MatchedRecordsCQ \nINSERT INTO PatternMatchStream\nSELECT a.unique_id, a.data_source_name, a.data_source_record,\n CASE WHEN b IS NOT NULL THEN b.data_source_name ELSE \"<Timeout>\" END, \n CASE WHEN b IS NOT NULL THEN b.data_source_record ELSE null END\nFROM MergedDataSourcesStream \n MATCH_PATTERN (T a (b | W))\n DEFINE T = timer(interval 24 hour),\n a = MergedDataSourcesStream(),\n b = MergedDataSourcesStream(unique_id == a.unique_id),\n W = wait(T)\nPARTITION BY unique_id;\n", "metadata": {"source": "https://www.striim.com/docs/en/handling-nulls-with-cqs.html", "title": "Handling nulls with CQs", "language": "en"}} {"page_content": "\n\nHandling variable-length events with CQs\nThe following TQL shows an example of handling events with a varying number of fields.
The events may have 6, 8, 10, or 12 fields depending on whether they have zero, one, two, or three pairs of objectName and objectValue fields. This TQL discards the events without object fields and consolidates all the object fields from the other events into objectStream.CREATE TYPE objectStreamType(\n dateTime org.joda.time.DateTime KEY , \n eventName java.lang.String KEY , \n objectName java.lang.String KEY , \n objectValue java.lang.Long \n);\nCREATE STREAM objectStream OF objectStreamType;\n\nCREATE CQ RawParser1 \nINSERT INTO objectStream\n-- the first pair of object fields is present when there are at least 8 fields in the record\nSELECT \n TO_DATEF(data[0],'yyyy-MM-dd HH:mm:ss.SSSZ') AS dateTime,\n TO_STRING(data[1]) AS eventName,\n TO_STRING(data[7]) AS objectName,\n TO_LONG(data[8]) AS objectValue\nFROM rawData\nWHERE arlen(data) >= 8;\n\nCREATE CQ RawParser2 \nINSERT INTO objectStream\n-- the second pair of object fields is present when there are at least 10 fields in the record\nSELECT \n TO_DATEF(data[0],'yyyy-MM-dd HH:mm:ss.SSSZ') AS dateTime,\n TO_STRING(data[1]) AS eventName,\n TO_STRING(data[9]) AS objectName,\n TO_LONG(data[10]) AS objectValue\nFROM rawData\nWHERE arlen(data) >= 10;\n\nCREATE CQ RawParser3 \nINSERT INTO objectStream\n-- the third pair of object fields is present when there are at least 12 fields in the record\nSELECT \n TO_DATEF(data[0],'yyyy-MM-dd HH:mm:ss.SSSZ') AS dateTime,\n TO_STRING(data[1]) AS eventName,\n TO_STRING(data[11]) AS objectName,\n TO_LONG(data[12]) AS objectValue\nFROM rawData\nWHERE arlen(data) >= 12;\n", "metadata": {"source": "https://www.striim.com/docs/en/handling-variable-length-events-with-cqs.html", "title": "Handling variable-length events with CQs", "language": "en"}} {"page_content": "\n\nSending data to targets\nThe basic pattern for a target is:CQ > stream > targetFor example:CREATE CQ JoinDataCQ\nINSERT INTO JoinedDataStream ...\n \nCREATE TARGET JoinedDataTarget\nUSING SysOut(name:JoinedData)\nINPUT FROM JoinedDataStream;This writes the output from JoinedDataStream to SysOut.When writing an application with a more complex target, it may be most efficient to write an end-to-end application that simply gets the data from the source and writes it to the target, then add the \"intelligence\" to the application once you know that the data is being read and written correctly.
For example, this application will read PosApp's sample data, parse it (see Getting data from sources), and write it to Hadoop:CREATE SOURCE CSVSource USING FileReader (\n directory:'Samples/PosApp/AppData',\n WildCard:'posdata.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:'yes'\n)\nOUTPUT TO CsvStream;\n\nCREATE TYPE CSVType (\n merchantId String,\n dateTime DateTime,\n hourValue Integer,\n amount Double,\n zip String\n);\nCREATE STREAM TypedCSVStream OF CSVType;\n\nCREATE CQ CsvToPosData\nINSERT INTO TypedCSVStream\nSELECT data[1],\n TO_DATEF(data[4],'yyyyMMddHHmmss'),\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')),\n TO_DOUBLE(data[7]),\n data[9]\nFROM CsvStream;\n\nCREATE TARGET hdfsOutput USING HDFSWriter(\n filename:'hdfstestOut',\n hadoopurl:'hdfs://192.168.1.13:8020/output/',\n flushinterval: '1'\n)\nFORMAT USING DSVFormatter (\n)\nINPUT FROM TypedCSVStream;\nAfter verifying that the data is being written to the target correctly, you could then add additional components between CsvToPosData and hdfsOutput to filter, aggregate, or enrich the data.For additional end-to-end examples, see Database Writer, JMSWriter, and Kafka Writer.\n", "metadata": {"source": "https://www.striim.com/docs/en/sending-data-to-targets.html", "title": "Sending data to targets", "language": "en"}} {"page_content": "\n\nSending data to WActionStores\nThe development pattern for WActionStores is:CQ > WActionStoreMore than one CQ may output to the same WActionStore.Here is a simple example created by following the instructions in Modifying an application using the Flow Designer.
Note that the WActionStore's types must be created before the WActionStore, which must be created before the CQ that populates it.CREATE TYPE PosSourceContext (\n MerchantId String KEY , \n DateTime org.joda.time.DateTime , \n Amount Double , \n Zip String , \n City String , \n State String , \n LatVal Double , \n LongVal Double \n);\n\nCREATE WACTIONSTORE PosSourceData\nCONTEXT OF PosSourceContext\nEVENT TYPES (PosSourceContext) \nPERSIST NONE USING();\n\nCREATE CQ GenerateWactionContext \nINSERT INTO PosSourceData\nSELECT p.MERCHANTID,\n p.DATETIME,\n p.AUTHAMOUNT,\n z.Zip,\n z.City,\n z.State,\n z.LatVal,\n z.LongVal\nFROM PosSource_TransformedStream p, ZipCache z\nWHERE p.ZIP = z.Zip;This is used to populate the dashboard created by following the instructions in Creating a dashboard.In MultiLogApp, the CQs GenerateHackerContext, GenerateLargeRTContext, GenerateProxyContext, and GenerateZeroContentContext output to the WActionStore UnusualActivity:CREATE TYPE AccessLogEntry (\n srcIp String KEY,\n userId String,\n sessionId String,\n accessTime DateTime,\n request String,\n code integer,\n size integer,\n referrer String,\n userAgent String,\n responseTime integer\n);\n...\n\nCREATE TYPE UnusualContext (\n typeOfActivity String,\n accessTime DateTime,\n accessSessionId String,\n srcIp String KEY,\n userId String,\n country String,\n city String,\n lat double,\n lon double\n);\nCREATE TYPE MergedEntry (\n accessTime DateTime,\n accessSessionId String,\n srcIp String KEY,\n userId String,\n request String,\n code integer,\n size integer,\n referrer String,\n userAgent String,\n responseTime integer,\n logTime DateTime,\n logSessionId String,\n level String,\n message String,\n api String,\n sobject String,\n xception String,\n className String,\n method String,\n fileName String,\n lineNum String\n);\nCREATE WACTIONSTORE UnusualActivity \nCONTEXT OF UnusualContext \nEVENT TYPES (\n MergedEntry,\n AccessLogEntry\n);\n...\n\nCREATE CQ GenerateHackerContext\nINSERT INTO UnusualActivity\nSELECT 'HackAttempt', accessTime, sessionId, srcIp, userId,\n IP_COUNTRY(srcIp), IP_CITY(srcIP), IP_LAT(srcIP), IP_LON(srcIP)\nFROM HackerStream\nLINK SOURCE EVENT;\n...\n\nCREATE CQ GenerateLargeRTContext\nINSERT INTO UnusualActivity\nSELECT 'LargeResponseTime', accessTime, accessSessionId, srcIp, userId,\n IP_COUNTRY(srcIp), IP_CITY(srcIP), IP_LAT(srcIP), IP_LON(srcIP)\nFROM LargeRTAPIStream\nLINK SOURCE EVENT;\n...\n\nCREATE CQ GenerateProxyContext\nINSERT INTO UnusualActivity\nSELECT 'ProxyAccess', accessTime, sessionId, srcIp, userId,\n IP_COUNTRY(srcIp), IP_CITY(srcIP), IP_LAT(srcIP), IP_LON(srcIP)\nFROM ProxyStream\nLINK SOURCE EVENT;\n...\n\nCREATE CQ GenerateZeroContentContext\nINSERT INTO UnusualActivity\nSELECT 'ZeroContent', accessTime, accessSessionId, srcIp, userId,\n IP_COUNTRY(srcIp), IP_CITY(srcIP), IP_LAT(srcIP), IP_LON(srcIP)\nFROM ZeroContentAPIStream\nLINK SOURCE EVENT;This WActionStore stores not just events of the UnusualContext type used by these four CQs, but also the linked source events of MergedEntry and AccessLogEntry types. See Using EVENTLIST for details on querying the linked source events.
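Once the application is running, you can check what is landing in the WActionStore with an ad hoc query. A sketch only (this query and the filter value are assumptions, not from MultiLogApp):

SELECT typeOfActivity, srcIp, userId, accessTime
FROM UnusualActivity
WHERE typeOfActivity = 'HackAttempt';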
\n", "metadata": {"source": "https://www.striim.com/docs/en/sending-data-to-wactionstores.html", "title": "Sending data to WActionStores", "language": "en"}} {"page_content": "\n\nUsing FIRST and LAST\nThe FIRST and LAST functions return a java.lang.Object type no matter what type they operate on. They support all Supported data types except Byte.For example, in the following, suppose x is an integer:SELECT\nFIRST(x) - LAST(x) AS difference\nFROM\nMyWindow;This results in an \u201cinvalid type of operand\u201d error because you can\u2019t perform simple arithmetic on a java.lang.Object. To work around this, you must recast the object as an integer:SELECT\nTO_INT(FIRST(x)) - TO_INT(LAST(x)) AS difference\nFROM\nMyWindow;The following example uses the PosApp sample data:CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n blocksize: 10240,\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n)\nOUTPUT TO CsvStream;\n \nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream\nSELECT TO_INT(data[3]) AS PosData\nFROM CsvStream;\n\nCREATE WINDOW Window1 OVER PosDataStream KEEP 2 ROWS ;\n \nCREATE CQ CQ1\nINSERT INTO output_ST\nSELECT TO_INT(FIRST( Window1.PosData ))\n - TO_INT(LAST( Window1.PosData )) AS LastPosData\nFROM Window1;\n\nCREATE TARGET Target1 USING SysOut (\n name: 'SubFirstFromLast'\n)\nINPUT FROM output_ST;
\n", "metadata": {"source": "https://www.striim.com/docs/en/using-first-and-last.html", "title": "Using FIRST and LAST", "language": "en"}} {"page_content": "\n\nDetecting device status changes\nCode similar to the following can be used to detect when a device goes offline:CREATE TYPE MyUser ( \n\tname java.lang.String , \n myid java.lang.String \n ) ;\n\nCREATE CACHE LoadIDsCache USING FileReader ( \n wildcard: 'user.csv',\n directory: '.'\n) \nPARSE USING Global.DSVParser () \nQUERY ( keytomap: 'myid' ) OF MyUser;\n\nCREATE CQ PopulateMasterStream \nINSERT INTO MasterStream\nSELECT * FROM admin.LoadIDsCache c, heartbeat(interval 5 second) h;\n\nCREATE WINDOW MasterWindow OVER MasterStream KEEP 1 ROWS PARTITION BY myid;\n\nCREATE SOURCE LiveEventStream USING TCPReader ( \n IPAddress: 'localhost',\n portno: 1234\n ) \n PARSE USING DSVParser (\n headerlineno: 0\n ) \nOUTPUT TO liveEvStream ;\n\nCREATE OR REPLACE CQ CreateLiveEventsCQ \n INSERT INTO liveObjectStream\n SELECT TO_STRING(data[0]) as myId FROM liveEvStream l;\n\nCREATE OR REPLACE JUMPING WINDOW LiveObjWindow\n OVER liveObjectStream KEEP 1 ROWS WITHIN 10 second PARTITION BY myId;\n\n/* select * from admin.NonExistObjStream;\n[\n myID = 1\n cnt = 1\n status = existent\n]\n[\n myID = null \n cnt = 4\n status = non-existent\n]\n*/\n\nselect lw.myId as myID, count(*) as cnt, \nCASE WHEN lw.myId IS NULL \n THEN \"non-existent\" \n ELSE \"existent\" END as status\nfrom MasterWindow mw \nleft join LiveObjWindow lw on mw.myid = lw.myId\ngroup by lw.myId;\nThe cache file has the format:device_name_a,1\ndevice_name_b,2\n...Code similar to the following can be used to detect both when a device goes offline and when it comes back online:CREATE OR REPLACE CQ UserStateCQ\nINSERT INTO CurrentUserStateStream\nSELECT\n\tlw.myId as myID, \n\tcount(*) as cnt, \n\tDNOW() as StateTime,\n\tCASE WHEN lw.myId IS NULL THEN \"offline\" ELSE \"online\" \n\tEND as status\n\t\nFROM MasterWindow mw \nLEFT JOIN LiveObjWindow lw on mw.myid = lw.myId\nGROUP BY lw.myId;\n\nCREATE SLIDING WINDOW UserState OVER CurrentUserStateStream KEEP 2 ROWS PARTITION BY myId;\n\nCREATE OR REPLACE CQ WatchUserStateCQ\nINSERT INTO ChangedUserStateStream\nSELECT\n\tw.myID as myID, \n\tTO_LONG(LAST(w.StateTime) - FIRST(w.StateTime)) as StateChangeInterval,\n\tCASE \n\t\tWHEN LAST(w.status) == 'online' AND FIRST(w.status) == 'online' THEN \"up\"\n\t\tWHEN LAST(w.status) == 'online' AND FIRST(w.status) == 'offline' THEN \"online\"\n\t\tWHEN LAST(w.status) == 'offline' AND FIRST(w.status) == 'online' THEN \"down\"\n\t\tWHEN LAST(w.status) == 'offline' AND FIRST(w.status) == 'offline' THEN \"offline\"\n\tEND as ChangeLabel\nFROM UserState w \nGROUP BY w.myID\nHAVING ChangeLabel = 'down' OR ChangeLabel ='online';The two-row window acts as a two-node state machine.
The addition of a CQ can generate events that meet business requirements, such as:\n- alerts on state changes\n- accumulating statistics on device uptime, downtime, frequency of change, and so on\n\nLast modified: 2017-10-04\n", "metadata": {"source": "https://www.striim.com/docs/en/detecting-device-status-changes.html", "title": "Detecting device status changes", "language": "en"}} {"page_content": "\n\nAdvanced TQL programming\n\nThe topics in this section assume that you are familiar with material covered in Fundamentals of TQL programming and Intermediate TQL programming: common patterns.\n\nLast modified: 2021-10-27\n", "metadata": {"source": "https://www.striim.com/docs/en/advanced-tql-programming.html", "title": "Advanced TQL programming", "language": "en"}} {"page_content": "\n\nWriting exceptions to a WActionStore\n\nExceptions (see Handling exceptions) are sent to Global.exceptionsStream and written to Striim's server log. If useful, you may store exceptions in a WActionStore using an application such as this:\n\nCREATE APPLICATION StoreExceptions;\n\nCREATE TYPE myExceptionType (\n    entityName java.lang.String,\n    className java.lang.String,\n    message java.lang.String);\n\nCREATE STREAM myExceptionsStream OF myExceptionType;\n\nCREATE CQ selectExceptions \nINSERT INTO myExceptionsStream\nSELECT entityName,\n    className,\n    message\nFROM Global.exceptionsStream;\n\nCREATE WACTIONSTORE ExceptionsWAS\nCONTEXT OF myExceptionType\nEVENT TYPES (myExceptionType);\n\nCREATE CQ stream2WActionStore\n    INSERT INTO ExceptionsWAS\n    SELECT * from myExceptionsStream;\n\nEND APPLICATION StoreExceptions;\n\nTo store only certain events, add an appropriate WHERE clause to the CQ.
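For example, to store only the exceptions raised by a particular component, the selectExceptions CQ above could filter on entityName. This is a minimal sketch; the entity name shown is a hypothetical component name, not part of the original application:

CREATE CQ selectExceptions 
INSERT INTO myExceptionsStream
SELECT entityName,
    className,
    message
FROM Global.exceptionsStream
WHERE entityName == 'MyOracleSource';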
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-09\n", "metadata": {"source": "https://www.striim.com/docs/en/writing-exceptions-to-a-wactionstore.html", "title": "Writing exceptions to a WActionStore", "language": "en"}} {"page_content": "\n\nUsing the Forwarding AgentSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing the Forwarding AgentPrevNextUsing the Forwarding AgentThe Forwarding Agent is a stripped-down version of a Striim server that can be used to run sources and CQs locally on a remote host. Windows, caches, and other components are not supported.To use the Agent, first follow the instructions in Striim Forwarding Agent installation and configuration that are appropriate for your environment. Then, in your application, create a flow for the source that will run on the agent, and deploy it to the agent's deployment group.Here is a simple example that reads from a file on the remote host and writes to a file on the Striim server:CREATE APPLICATION agentTest;\n\nCREATE FLOW AgentFlow WITH ENCRYPTION;\nCREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'posdata.csv',\n blocksize: 10240,\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO CsvStream;\nEND FLOW AgentFlow;\n \nCREATE FLOW ServerFlow;\nCREATE TARGET t USING FileWriter( filename:'AgentOut')\nFORMAT USING DSVFormatter ()\nINPUT FROM CsvStream\nEND FLOW ServerFlow;\n\nEND APPLICATION agentTest;\n\nDEPLOY APPLICATION agentTest ON default\nWITH AgentFlow ON ALL IN agent;Be sure the Agent is running when you load the application. If there are multiple agents in the deployment group the source will automatically combine their data. The WITH ENCRYPTION option will encrypt the stream connecting the agent to the server (see CREATE APPLICATION ... END APPLICATION).CREATE APPLICATION ... END APPLICATIONNoteCQs running on an Agent may not select from Kafka streams or include AS <field name>, GROUP BY, HAVING, ITERATOR, or MATCH_PATTERN.The following variation on the beginning of the PosApp sample application filters out unneeded columns and partitions the stream:CREATE FLOW AgentFlow; \n CREATE SOURCE CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'posdata.csv',\n blocksize: 10240,\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n) OUTPUT TO CsvStream;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream partition by merchantId\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')) as hourValue,\n TO_DOUBLE(data[7]) as amount,\n TO_STRING(data[9]) as zip\nFROM CsvStream;\n\nEND FLOW AgentFlow; ...PARTITION BY merchantId specifies that events with the same merchantId values will be processed on the same server. 
This is required for ServerFlow to\u00a0be deployed to multiple servers. An unpartitioned source is automatically deployed to a single server.See Filtering events using OUTPUT TO and WHERE for additional examples of filter syntax that are compatible with the Forwarding Agent.To support recovery when a source running on the Forwarding Agent is in one application and the server-side components that consume its output are in one or more other applications, persist the source's output to Kafka (see Persisting a stream to Kafka). For example:Persisting a stream to KafkaCREATE APPLICATION agentApp;\nCREATE FLOW AgentFlow WITH ENCRYPTION;\nCREATE STREAM PersistedStream OF Global.WAEvent PERSIST; \nCREATE SOURCE OracleCDCIn USING OracleReader (\n Username:'striim',\n Password:'passwd',\n ConnectionURL:'203.0.113.49:1521:orcl',\n Tables:'myschema.%'\n) \nOUTPUT TO PersistedStream;\nEND FLOW AgentFlow;\nEND APPLICATION agentApp;\nDEPLOY APPLICATION agentApp ON default\nWITH AgentFlow ON ALL IN agent;\n \n \nCREATE APPLICATION ServerApp;\nCREATE TARGET KafkaTarget USING KafkaWriter VERSION '0.8.0' (\n Mode: 'Sync',\n Topic: 'OracleData',\n brokerAddress: '198.51.100.55:9092'\n)\nFORMAT USING AvroFormatter ( schemaFileName: 'OracleData.avro' )\nINPUT FROM PersistedStream;\nEND APPLICATION ServerApp;\nDEPLOY APPLICATION serverApp ON default;In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-13\n", "metadata": {"source": "https://www.striim.com/docs/en/using-the-forwarding-agent.html", "title": "Using the Forwarding Agent", "language": "en"}} {"page_content": "\n\nCreating a custom Kafka partitionerSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingCreating a custom Kafka partitionerPrevNextCreating a custom Kafka partitionerThe following simple example sends KafkaWriter output to partition 0 or 1 based on the PARTITION BY field of the target's input stream or the PartitionKey property value. 
This example assumes you are using Eclipse, but it may be adapted to any Java development environment.\n\n1. Create a new Java project.\n2. After naming the project and making any other necessary changes, click Next > Libraries > Add External JARs, navigate to Striim/lib, and double-click Common-4.2.0.jar.\n3. Finish creating the new Java project.\n4. Add a new class with package name com.custompartitioner and class name KafkaCityPartitioner.\n5. Replace the default contents of the new class with the following:\n\npackage com.custompartitioner;\nimport com.webaction.kafka.PartitionerIntf;\nimport java.util.ArrayList;\n\npublic class KafkaCityPartitioner implements PartitionerIntf {\n\n\t@Override\n\tpublic void close() {\n\t\t// no resources to release\n\t}\n\n\t@Override\n\tpublic int partition(String topic, Object keylist, Object event, int noOfPartitions) {\n\t\tif(noOfPartitions < 2) {\n\t\t\tthrow new RuntimeException(\"Number of partitions is less than 2\");\n\t\t}\n\t\tif(keylist != null) {\n\t\t\tArrayList<String> partitionKeyList = (ArrayList<String>) keylist;\n\t\t\tString partitionKey = partitionKeyList.get(0);\n\t\t\tif(partitionKey.equalsIgnoreCase(\"Amsterdam\")) {\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t\treturn 1;\n\t}\n\n}\n\nIf the partition key field value is Amsterdam, the event will be written to Kafka partition 0; otherwise it will be written to partition 1. This logic may be extended to handle additional values and partitions by adding else clauses. To guarantee that there will be no duplicate events after recovery, for each partition key the partitioning logic must always write to the same partition number.\n\n6. If Eclipse's Build Automatically option is disabled, build the project.\n7. Click File > Export > Java > JAR File > Next, select the class, and click Finish.\n8. Save the JAR file in Striim/lib.\n9. Restart Striim.\n\nTo test the custom partitioner, create an application using this TQL. If you are not using Striim's internal Kafka instance, change the broker address to that of your Kafka instance. Note that the partitioner.class value in KafkaConfig must match the fully qualified name of the class declared above (com.custompartitioner.KafkaCityPartitioner).\n\nCREATE SOURCE CSVSource USING FileReader ( \n positionbyeof: false,\n directory: 'Samples',\n wildcard: 'city.csv'\n) \nPARSE USING DSVParser () \nOUTPUT TO CSVStream;\n\nCREATE CQ CQ1\nINSERT INTO CQStream\nSELECT data[0] as data0 java.lang.Object,\n data[1] as data1 java.lang.Object,\n data[2] as data2 java.lang.Object,\n data[3] as data3 java.lang.Object\nFROM CSVStream;\n\nCREATE TARGET KafkaTarget USING KafkaWriter VERSION '0.9.0' ( \n Topic: 'test01',\n brokerAddress: 'localhost:9092',\n KafkaConfig: 'partitioner.class=com.custompartitioner.KafkaCityPartitioner'\n) \nFORMAT USING DSVFormatter()\nINPUT FROM CQStream;\n\nSave the following as Striim/Samples/city.csv:\n\nAmsterdam,0,0,0\nLos Angeles,1,1,1\nAmsterdam,2,2,0\nLos Angeles,3,3,1\nAmsterdam,4,4,0\nLos Angeles,5,5,1\nAmsterdam,6,6,0\nLos Angeles,7,7,1\nAmsterdam,8,8,0\nLos Angeles,9,9,1\n\nDeploy and run the application.
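For the custom partitioner to receive a non-null key list, the target's input stream needs a partition key (as noted in the opening paragraph, the partitioner works from the PARTITION BY field of the target's input stream or the PartitionKey property value). One way to supply it in the test application, shown here as a sketch rather than as part of the original sample, is to add PARTITION BY on the city field when populating CQStream, using the same syntax as the PosApp variation in Using the Forwarding Agent:

CREATE CQ CQ1
INSERT INTO CQStream PARTITION BY data0
SELECT data[0] as data0 java.lang.Object,
       data[1] as data1 java.lang.Object,
       data[2] as data2 java.lang.Object,
       data[3] as data3 java.lang.Object
FROM CSVStream;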
Last modified: 2022-07-13\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-a-custom-kafka-partitioner.html", "title": "Creating a custom Kafka partitioner", "language": "en"}} {"page_content": "\n\nReading a Kafka stream with an external Kafka consumerSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingReading a Kafka stream with an external Kafka consumerPrevNextReading a Kafka stream with an external Kafka consumerTo read a Kafka stream's persisted data in an external application: Include\u00a0dataformat:'avro' in the stream's property set.Generate an Avro schema file for the stream using the console command\u00a0EXPORT <namespace>.<stream name>. This will create a schema file\u00a0<namespace>_<stream name>_schema.avsc in the Striim program directory. Optionally, you may specify a path or file name using\u00a0EXPORT <namespace>.<stream name> '<path>' '<file name>'. The .avsc extension will be added automatically.Copy the schema file to a location accessible by the external application.In the external application, use the Zookeeper and broker addresses in the stream's property set, and reference the stream using <namespace>_<stream name>.NoteRecovery (see\u00a0Recovering applications) is not supported for Avro-formatted Kafka streams.For example, the following program would read from Samples.PosDataStream:import java.io.File;\nimport java.nio.ByteBuffer;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport org.apache.avro.Schema;\nimport org.apache.avro.generic.GenericData;\nimport org.apache.avro.io.BinaryDecoder;\nimport org.apache.avro.io.DecoderFactory;\nimport org.apache.avro.specific.SpecificDatumReader;\n\nimport kafka.api.FetchRequest;\nimport kafka.api.FetchRequestBuilder;\nimport kafka.api.PartitionOffsetRequestInfo;\nimport kafka.common.TopicAndPartition;\nimport kafka.javaapi.FetchResponse;\nimport kafka.javaapi.OffsetResponse;\nimport kafka.javaapi.consumer.SimpleConsumer;\nimport kafka.javaapi.message.ByteBufferMessageSet;\nimport kafka.message.MessageAndOffset;\n\npublic class SimpleKafkaConsumer\n{\n /**\n * This method issues exactly one fetch request to a kafka topic/partition and prints out\n * all the data from the response of the fetch request.\n * @param topic_name Topic to fetch messages from\n * @param schema_filename Avro schema file used for deserializing the messages\n * @param host_name Host where the kafka broker is running\n * @param port Port on which the kafka broker is listening\n * @param clientId Unique id of a client doing the fetch request\n * @throws Exception\n */\n public void read(String topic_name, String schema_filename, String host_name, int port, String clientId) throws Exception\n {\n SimpleConsumer simpleConsumer = new SimpleConsumer(host_name, port, 100000, 64 * 1024, clientId);\n // This is just an example to read from partition 1 of a topic.\n int partitionId = 1;\n\n // Finds the first offset in the logs and starts fetching messages from that offset.\n long offset = getOffset(simpleConsumer, topic_name, partitionId, 
kafka.api.OffsetRequest.EarliestTime(), clientId);\n\n // Builds a fetch request.\n FetchRequestBuilder builder = new FetchRequestBuilder();\n builder.clientId(clientId);\n builder.addFetch(topic_name, partitionId, offset, 43264200);\n FetchRequest fetchRequest = builder.build();\n\n // Get the response of the fetch request.\n FetchResponse fetchResponse = simpleConsumer.fetch(fetchRequest);\n\n // Instantiates an avro deserializer based on the schema file.\n SpecificDatumReader datumReader = getAvroReader(schema_filename);\n\n if (fetchResponse.hasError())\n {\n System.out.println(\"Error processing fetch request: Reason -> \"+fetchResponse.errorCode(topic_name, 1));\n }\n else\n {\n ByteBufferMessageSet bbms = fetchResponse.messageSet(topic_name, partitionId);\n int count = 0;\n for (MessageAndOffset messageAndOffset : bbms)\n {\n ByteBuffer payload = messageAndOffset.message().payload();\n {\n // The message format is Striim specific and it looks like :\n // 1. First 4 bytes represent an integer(k) which tells the size of the actual message in the byte buffer.\n // 2. Next 'k' bytes stores the actual message.\n // 3. This logic runs in a while loop until the byte buffer limit.\n while (payload.hasRemaining())\n {\n int size = payload.getInt();\n\n // if the size is invalid, or if there aren't enough bytes to process this chunk, then just bail.\n if ((payload.position() + size > payload.limit()) || size == 0)\n {\n break;\n }\n else\n {\n byte[] current_bytes = new byte[size];\n payload.get(current_bytes, 0, size);\n BinaryDecoder recordDecoder = DecoderFactory.get().binaryDecoder(current_bytes, null);\n GenericData.Record record = (GenericData.Record) datumReader.read(null, recordDecoder);\n System.out.println(count++ +\":\"+record);\n }\n }\n }\n }\n }\n }\n\n private SpecificDatumReader getAvroReader(String schema_filename) throws Exception\n {\n File schemaFile = new File(schema_filename);\n if(schemaFile.exists() && !schemaFile.isDirectory())\n {\n Schema schema = new Schema.Parser().parse(schemaFile);\n SpecificDatumReader avroReader = new SpecificDatumReader(schema);\n return avroReader;\n }\n return null;\n }\n\n public long getOffset(SimpleConsumer consumer, String topic, int partition, long whichTime, String clientName)\n {\n TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);\n Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap();\n requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));\n kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo,\n kafka.api.OffsetRequest.CurrentVersion(), clientName);\n OffsetResponse response = consumer.getOffsetsBefore(request);\n if (response.hasError())\n {\n System.out.println(\"Error fetching data Offset Data the Broker. Reason: \" + response.errorCode(topic, partition) );\n return 0;\n }\n long[] offsets = response.offsets(topic, partition);\n return offsets[0];\n }\n\n public static void main(String[] args)\n {\n SimpleKafkaConsumer readKafka = new SimpleKafkaConsumer();\n String topic_name = \"Samples_PosDataStream\";\n String schema_filename = \"./Platform/conf/Samples_PosDataStream_schema.avsc\";\n try\n {\n readKafka.read(topic_name, schema_filename, \"localhost\", 9092, \"ItsAUniqueClient\");\n } catch (Exception e)\n {\n System.out.println(e);\n }\n }\n}In this section: Search resultsNo results foundWould you like to provide feedback? 
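To make the schema-export step described in Reading a Kafka stream with an external Kafka consumer concrete, the console commands below show both forms for the Samples.PosDataStream stream used in this example (the ./schemas path and PosDataSchema file name are arbitrary examples):

EXPORT Samples.PosDataStream;
/* creates Samples_PosDataStream_schema.avsc in the Striim program directory */

EXPORT Samples.PosDataStream './schemas' 'PosDataSchema';
/* creates PosDataSchema.avsc under ./schemas (the .avsc extension is added automatically) */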
Last modified: 2018-02-28\n", "metadata": {"source": "https://www.striim.com/docs/en/reading-a-kafka-stream-with-an-external-kafka-consumer.html", "title": "Reading a Kafka stream with an external Kafka consumer", "language": "en"}} {"page_content": "\n\nChanging and masking field values using MODIFY\n\nThis section covers the use of MODIFY on streams of user-defined types. For streams of type WAEvent, see Modifying and masking values in the WAEvent data array using MODIFY.\n\nUsing MODIFY in a CQ's SELECT statement allows you to change or mask the contents of specified fields. When a stream has many fields, this can be a much simpler and more efficient approach than writing a SELECT statement that handles each of the fields.\n\nThe syntax is:\n\nSELECT <field name> FROM <stream name> MODIFY (<field name> = <expression>)\n\nThe expression can use the same operators and functions as SELECT. The MODIFY clause may include CASE statements.\n\nThe following simple example would convert a monetary amount in the Amount field using an exchange rate of 1.09:\n\nCREATE CQ ConvertAmount \nINSERT INTO ConvertedStream\nSELECT * FROM UnconvertedStream\nMODIFY(Amount = TO_FLOAT(Amount) * 1.09);\n\nThe next example illustrates the use of CASE statements. It uses the maskPhoneNumber function (see Masking functions) to mask individually identifiable information from US and India telephone numbers (as dialed from the US) while preserving the country and area codes. The US numbers have the format ###-###-####, where the first three digits are the area code. India numbers have the format 91-###-###-####, where 91 is the country code and the third through fifth digits are the subscriber trunk dialing (STD) code. The telephone numbers are in the PhoneNum field and the country codes are in the Country field.\n\nCREATE CQ maskData \nINSERT INTO maskedDataStream\nSELECT * FROM unmaskedDataStream\nMODIFY(\nPhoneNum = CASE\n     WHEN Country == \"US\" THEN maskPhoneNumber(PhoneNum, \"###-xxx-xxxx\")\n     ELSE maskPhoneNumber(PhoneNum, \"#####x#xxx#xxxx\")\n     END\n);\n\nThis could be extended with additional WHEN statements to mask numbers from additional countries, or with additional masking functions to mask individually identifiable information such as credit card, Social Security, and national identification numbers.\n\nSee Masking functions for additional examples.\n\nSee also Modifying output using ColumnMap. In some cases that may be a more straightforward and efficient solution than using MODIFY.
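As noted above, the CASE may be extended with additional WHEN branches. The sketch below adds an explicit branch for India so that the ELSE handles only unrecognized country codes; the "IN" country code value is an assumption for illustration, and numbers from other countries are simply passed through unmasked in this sketch:

CREATE CQ maskData 
INSERT INTO maskedDataStream
SELECT * FROM unmaskedDataStream
MODIFY(
PhoneNum = CASE
     WHEN Country == "US" THEN maskPhoneNumber(PhoneNum, "###-xxx-xxxx")
     WHEN Country == "IN" THEN maskPhoneNumber(PhoneNum, "#####x#xxx#xxxx")
     ELSE PhoneNum
     END
);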
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2018-11-06\n", "metadata": {"source": "https://www.striim.com/docs/en/changing-and-masking-field-values-using-modify.html", "title": "Changing and masking field values using MODIFY", "language": "en"}} {"page_content": "\n\nUsing namespacesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing namespacesPrevNextUsing namespacesEvery TQL component exists in a namespace, which is used to assign privileges and control access. Every user account has a personal namespace with the same name. For example, the admin user has an admin namespace.When you create an application in the console, by default it will be created in the current namespace, which is shown in the prompt (for example, W (admin) >). You can override that default by preceding CREATE APPLICATION with USE <namespace name>; to change the current namespace.When you create an application in the UI, by default it will be created in your personal namespace. The \"Create new application\" dialog allows you to override the default by selecting a different namespace or creating a new one.NoteWhen you import a TQL file in the UI, any CREATE NAMESPACE <name>; and USE <namespace name>; statements in the file will override the choice in the UI.If useful, you may create an empty namespace:CREATE NAMESPACE <name>;One use for this is to keep applications that use components with the same names from interfering with each other. For example, you might have the current version of an application in one namespace and the next version in another.Another use for a namespace is to hold a library of common types to be shared by various applications. For example:CREATE NAMESPACE CommonTypes;\nUSE CommonTypes;\nCREATE TYPE MerchantName(\n merchantId String KEY,\n companyName String\n);\nUSE my_user_name;\nCREATE APPLICATION PosMonitor;\nCREATE STREAM MerchantNames OF CommonTypes.MerchantName;\n...The MerchantName type is now available in any application by specifying CommonTypes.MerchantName. The USE my_user_name command stops creation of components in the CommonTypes namespace. If you left that out, the PosMonitor application would be created in the CommonTypes namespace rather than in your personal namespace.See Managing users, permissions, and roles for information on the role namespaces play in the Striim security model.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
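Continuing the versioning example above, here is a minimal sketch (the namespace and application names are hypothetical) that keeps two versions of the same application from interfering with each other:

CREATE NAMESPACE PosMonitorV1;
USE PosMonitorV1;
CREATE APPLICATION PosMonitor;
...
END APPLICATION PosMonitor;

CREATE NAMESPACE PosMonitorV2;
USE PosMonitorV2;
CREATE APPLICATION PosMonitor;
...
END APPLICATION PosMonitor;

Because each version lives in its own namespace, components in the two applications can share names without conflict, and each version can be deployed and managed independently.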
Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/using-namespaces.html", "title": "Using namespaces", "language": "en"}} {"page_content": "\n\nUsing EVENTLISTSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing EVENTLISTPrevNextUsing EVENTLISTWhen LINK SOURCE EVENT is specified in the CQ that populates the WActionStore, and the type for the linked events is specified in the EVENT TYPES clause of the CREATE WACTIONSTORE statement, you can use EVENTLIST in a subquery and ITERATOR over the subquery to return values from fields in the linked events.For example, in MultiLogApp, the WActionStore ApiActivity includes linked events of the type ApiUsage:CREATE TYPE ApiUsage (\n api String key, \n sobject String, \n count Integer, \n logTime DateTime\n);\nCREATE TYPE ApiContext (\n api String key, \n count Integer, \n logTime DateTime\n);\nCREATE WACTIONSTORE ApiActivity \nCONTEXT OF ApiContext \nEVENT TYPES (ApiUsage) ...\n\nCREATE CQ GetApiSummaryUsage \nINSERT INTO ApiActivity \nSELECT a.api, \n sum(a.count),\n first(a.logTime)\nFROM ApiSummaryWindow a \nGROUP BY a.api\nLINK SOURCE EVENT;The following query will return all fields from all linked events:SELECT it.api, it.count, it.logTime, it.sobject FROM\n (SELECT EVENTLIST(t) AS list_of_events FROM Samples.ApiActivity t) z,\n ITERATOR(z.list_of_events) it;In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-09\n", "metadata": {"source": "https://www.striim.com/docs/en/using-eventlist.html", "title": "Using EVENTLIST", "language": "en"}} {"page_content": "\n\nUsing ITERATORSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing ITERATORPrevNextUsing ITERATORSELECT [DISTINCT] { <field name>, ... 
}\n[ ISTREAM ]\nFROM ITERATOR\n (<nested collection name>.<member name>[, <type>])\n <iterator name>, \n <nested collection name>UsageDescriptionITERATOR(<var>)<var> is of any supported type (see below)ITERATOR(<var>,<type>)<var> is bound to the specified <type>The ITERATOR function allows you to access the members of nested collections in a dataset, so long as the collection implements the java.lang.Iterable interface or is a JsonNode.For example, suppose you have the following statement:create stream s1 (\n id string, \n json_array JsonNode,\n list_of_objects java.util.List\n);The json_array and list_of_objects are both nested data collections.Since java.util.List implements the java.lang.Iterable interface, you can create an iterator and use it to access the members of list_of_objects :SELECT lst from s1, ITERATOR(s1.list_of_objects) lst;Suppose the list_of_objects contains objects of this Java type:package my.package;\nclass MyType {\n int attrA;\n int attrB;\n}You would access its members using this statement:SELECT attrA, attrB FROM s1, ITERATOR(s1.list_of_objects, my.package.MyType) lst;The stream also includes a JsonNode member. Suppose the following events are added to the stream:('a', [1,2,3], null)\n('b', [1,3,4], null)Here is how you can iterate through json_array:SELECT id, a FROM s1, iterator(s1.json_array) a;\n \nOUTPUT:\n=======\na 1\na 2\na 3\nb 1\nb 3\nb 4This statement illustrates a cross join between json_array and list_of_objects:SELECT a, b FROM ITERATOR(s1.json_array) a, ITERATOR(s1.list_of_objects) b, s1;You can iterate through multiple levels of nesting. For example, this statement iterates through 2 levels, where json_array contains another_nested_collection:SELECT x FROM s1, ITERATOR(s1.json_array) a, ITERATOR(a.another_nested_collection) x;WarningArbitrary data types, such as a Java List, cannot be used with a WActionStore that is persisted to Elasticsearch. Also, it is not possible to convert a JSON List into a Java List.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-02-04\n", "metadata": {"source": "https://www.striim.com/docs/en/using-iterator.html", "title": "Using ITERATOR", "language": "en"}} {"page_content": "\n\nUsing the META() functionSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing the META() functionPrevNextUsing the META() functionYou can use the META() function to query the metadata map of the WAEvent type, which is used by the output streams of several Striim readers. 
For example, the following creates a stream containing invalid records:CREATE STREAM ExceptionStream of ExceptionRecord;\nCREATE CQ CQExceptionRecord\nINSERT INTO ExceptionStream\nSELECT data[0]\nFROM CsvStream\nWHERE META(CsvStream, \u2018RecordStatus\u2019).toString() == \u2018INVALID_RECORD\u2019;The elements of the metadata map vary depending on the reader and parser used.readermetadata elementsDatabase ReaderTableName (only if using the Tables property, omitted if using Query): the fully qualified name of the table to which the record belongs, either in the form <CATALOG>.<SCHEMA>.<TABLE> or <SCHEMA>.<TABLE>, depending on the databaseOperationName: always SELECTColumnCount: number of columns in this recordFile ReaderFileName: fully qualified file nameFileOffset: offset in bytes from the beginning of the file to the start of the current eventGG Trail ReaderSee GG Trail Reader WAEvent fields.HDFS ReaderFileName: Hadoop URL including the fully qualified file nameFileOffset: offset in bytes from the beginning of the file to the start of the current eventHP NonStop readersSee HP NonStop reader WAEvent fields.HTTP ReaderClientIPAddress: IP address of the clientClientProtocolVersion: name and version of the protocol the request uses in the form protocol/majorVersion.minorVersionClientContentLength: length of the request body in bytes, -1 if length is not known.ClientURL: URL the client used to make the request along with the query stringReferrer: HTTP header field specified by client that contains the address of the webpage linked to the resource being requestedJMS Readerno metadata returnedKafka ReaderTopicName: Kafka topic from which the current event was readPartitionID: Kafka partition from which the current event was readRecordOffset: offset of the current event within the partitionMS SQL Reader / MSJetSee SQL Server readers WAEvent fields.MariaDB Reader / MySQL ReaderSee MySQL Reader WAEvent fields.MySQL Reader WAEvent fieldsMultiFile ReaderFileName: fully qualified file nameFileOffset: offset in bytes from the beginning of the file to the start of the current eventOracle Reader / OJetSee Oracle Reader and OJet WAEvent fields.PostgreSQL ReaderSee PostgreSQL Reader WAEvent fields.The following parsers append metadata elements to those of the associated reader:parsermetadata elementsDSV ParserRecordStatus: value is always VALID_RECORDRecordOffset: offset in characters from the beginning of the record to the start of the current eventOriginTimeStamp: event origin timestampRecordEnd: ending character offset of this record in the sourceFreeForm Text ParserRecordOffset: starting character offset of this record in the sourceRecordStatus: value is always VALID_RECORDOriginTimeStamp: event origin timestampRecordEnd: ending character offset of this record in the sourceNetflow Parser (version 5)version: 5count: number of flows (data records) exported in this packet.sys_uptime: current time in milliseconds since the export device bootedunix_secs: current count of seconds since 0000 UTC 1970unix_nsec: residual nanoseconds since 0000 UTC 1970flow_sequence: sequence counter of total flows seenengine_type: type of flow-switching engineengine_id: slot number of the flow-switching enginesampling_interval: first two bits hold the sampling mode, remaining 14 bits hold value of sampling intervalNetflow Parser (version 9)Count: total number of records in the Export Packet, which is the sum of Options FlowSet records, Template FlowSet records, and Data FlowSet recordsErrorMsg (invalid record only): can be 
used for debuggingpackage_sequence: incremental sequence counter of all export packets sent from the current observation domain by the exporterReason (invalid record only): can be used for debuggingRecordType: Data (Netflow data record), Template (template details) or Options Template\u00a0 (information about the Netflow process running in the export device)source_id: 32-bit value that identifies the exporter observation domainSourceIP: source address for packetSourcePort: source port for packetStatus: Valid for\u00a0valid record and template, INVALID for invalid recordsys_uptime: time in milliseconds since this export device was first bootedTemplate_ID: ID of the template record (template record is nothing but metdata for a data record) used to decode the data recordTemplateStructure (invalid record only): can be used for debuggingunix_secs: time in seconds since 0000 UTC 1970, at which the export packet leaves the exporterversion: 9SNMP ParserType: SNMP typeVersion: SNMP versionCommunity: Community nameAgentIp: SNMP agent IP addressAgentPort: SNMP agent port noTrapTime: time in milliseconds since the trap was startedEnterprise: the management enterprise under whose registration authority the trap was definedXML ParserRecordOffset: starting character offset of this record in the sourceRecordStatus: value is always VALID_RECORDOriginTimeStamp: event origin timestampRecordEnd: ending character offset of this record in the sourceIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-10-31\n", "metadata": {"source": "https://www.striim.com/docs/en/using-the-meta---function.html", "title": "Using the META() function", "language": "en"}} {"page_content": "\n\nReading from and writing to Kafka using AvroSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingReading from and writing to Kafka using AvroPrevNextReading from and writing to Kafka using AvroThis section provides examples of using KafkaWriter with AvroFormatter and KafkaReader with AvroParser and the various configurations which are supported. We will discuss about the wire format (with or without schema registry) of the Kafka Message when the data is produced using Striim\u2019s KafkaWriter with AvroFormatter and how the external Kafka based consumers or any other applications can deserialise the Striim\u2019s wire format.And then on the reader side, we will see how it can read Avro records in different wire formats. KafkaReader will just need in change in a couple of configurations if the KafkaMessage has Avro records written by some other application than Striim.NoteNote : In this document we have used Confluent\u2019s schema registry for schema evolution and samples showing the configurations for the same. 
But other schema registry can also be used.KafkaWriter + AvroFormatter with schema evolutionThe Kafka Messages produced by KafkaWriter can contain 1..n Avro records if the mode is Sync and 1 Avro Record if the mode was Async. In either of the mode the wire format of the Kafka Message would be 4 bytes of length of the payload and then the payload, in which the first four bytes would be schema registry id and then the Avro record bytes.Sample Striim application loading data from Oracle to Kafka in Avro formatThe schema evolution of the tables read from a OLTP source will be effective only if Avro formatter is configured with \u201cFormatAs : Table or Native\u201d. These modes are discussed in detail in Using the Confluent or Hortonworks schema registry. The wire format of the Kafka Message will be the same when KafkaWriter is configured with Sync or Async mode with AvroFormatter using \u201cFormatAs:Native/Table\u201d.The following sample code uses OracleReader to capture the continuous changes happening on an Oracle Database and used AvroFormatter to convert the DMLs in Avro records (close to source table\u2019s schema) and the respective tables\u2019s schema is registered to schema registry and the id is added to every Avro record. These Avro records are written to the Kafka topic KafkaTest created in Confluent cloud Kafka. The user must create the topic before running the application. The confluent cloud authentication properties are specified in the KafkaConfig property in KafkaWriter (these are not required in case of apache kafka running locally) and the schema registry authentication credentials are specified as a part of the schema registry configuration in the AvroFormatter.CREATE APPLICATION KafkaConfluentProducer RECOVERY 5 SECOND INTERVAL;\n\nCREATE SOURCE OrcaleReader1 USING Global.OracleReader ( \n TransactionBufferDiskLocation: '.striim/LargeBuffer', \n Password: 'TxGrqYn+1TjUdQXwkEQ2UQ==', \n DDLCaptureMode: 'All', \n Compression: false, \n ReaderType: 'LogMiner', \n connectionRetryPolicy: 'timeOut=30, retryInterval=30, maxRetries=3', \n FetchSize: 1, \n Password_encrypted: 'true', \n SupportPDB: false, \n QuiesceMarkerTable: 'QUIESCEMARKER', \n DictionaryMode: 'OnlineCatalog', \n QueueSize: 2048, \n CommittedTransactions: true, \n TransactionBufferSpilloverSize: '1MB', \n Tables: 'QATEST.POSAUTHORIZATION', \n Username: 'qatest', \n TransactionBufferType: 'Memory', \n ConnectionURL: 'localhost:1521:xe', \n FilterTransactionBoundaries: true, \n SendBeforeImage: true ) \nOUTPUT TO outputStream12;\n\nCREATE OR REPLACE TARGET KafkaWriter1 USING Global.KafkaWriter VERSION '2.1.0'( \n KafkaConfigValueSeparator: '++', \n Topic: 'kafkaTest', \n KafkaConfig: 'max.request.size++10485760|batch.size++10000120|security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";', \n KafkaConfigPropertySeparator: '|', \n adapterName: 'KafkaWriter', \n brokerAddress: 'pkc-4yyd6.us-east1.gcp.confluent.cloud:9092', \n Mode: 'ASync' ) \nFORMAT USING Global.AvroFormatter ( \n formatAs: 'Table', \n handler: 'com.webaction.proc.AvroFormatter', \n formatterName: 'AvroFormatter', \n schemaregistryurl: 'https://psrc-4rw99.us-central1.gcp.confluent.cloud',\nschemaregistryconfiguration:'basic.auth.user.info=<SR_API_KEY>:<SR_API_PASSWORD>,basic.auth.credentials.source=USER_INFO') \nINPUT FROM outputStream12;\n\nEND 
APPLICATION KafkaConfluentProducer;\nDeploy and start the above application and refer to application monitoring metrics to check the progress.Reading Avro Records in Striim wire format with schema registryNow that the data is in Kafka topic it can be read by KafkaReader or any other external application. Striim\u2019s KafkaReader doesn\u2019t require the deserialiser to be specified in the KafkaConfig. Just setting the \u201cSchemaRegistryURL\u201d in AvroParser will do.Following is a sample java KafkaConsumer which takes input from a config file having configurations (same as provided in Striim application), like broker address, topic name, topic.name, schemaregistry.url, basic.auth.user.info, sasl.jaas.config, security.protocol , basic.auth.credentials.source, sasl.mechanism and ssl.endpoint.identification.algorithm.\u201cvalue.deserializer\u201d will be com.striim.kafka.deserializer.KafkaAvroDeserializer.\nimport org.apache.avro.generic.GenericRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecords;\nimport org.apache.kafka.clients.consumer.KafkaConsumer;\nimport org.apache.kafka.common.TopicPartition;\nimport java.io.FileInputStream;\nimport java.io.InputStream;\nimport java.util.ArrayList;\nimport java.util.Properties;\nimport java.util.List;\n\npublic class KafkaAvroConsumerUtilWithStriimDeserializer {\n\n private KafkaConsumer<byte[], Object> consumer;\n\n public KafkaAvroConsumerUtilWithStriimDeserializer(String configFileName) throws Exception {\n\n Properties props = new Properties();\n InputStream in = new FileInputStream(configFileName);\n props.load(in);\n\n this.consumer = new KafkaConsumer<byte[], Object>(props);\n\n TopicPartition tp = new TopicPartition(props.getProperty(\"topic.name\"), 0);\n List<TopicPartition> tpList = new ArrayList<TopicPartition>();\n tpList.add(tp);\n this.consumer.assign(tpList);\n this.consumer.seekToBeginning(tpList);\n }\n\n public void consume() throws Exception {\n while(true) {\n ConsumerRecords<byte[], Object> records = consumer.poll(1000);\n for(ConsumerRecord<byte[], Object> record : records) {\n System.out.println(\"Topic \" + record.topic() + \" partition \" + record.partition()\n + \" offset \" + record.offset() + \" timestamp \" + record.timestamp());\n List<GenericRecord> avroRecordList = (List<GenericRecord>) record.value();\n for(GenericRecord avroRecord : avroRecordList) {\n System.out.println(avroRecord);\n }\n }\n }\n }\n\n public void close() throws Exception {\n if(this.consumer != null) {\n this.consumer.close();\n this.consumer = null;\n }\n }\n\n public static void help() {\n System.out.println(\"Usage :\\n x.sh {path_to_config_file}\");\n }\n\n public static void main(String[] args) throws Exception {\n if(args.length != 1) {\n help();\n System.exit(-1);\n }\n String configFileName = args[0];\n System.out.println(\"KafkaConsumer config file : \" + configFileName);\n KafkaAvroConsumerUtilWithStriimDeserializer consumerutil = null;\n try {\n consumerutil = new KafkaAvroConsumerUtilWithStriimDeserializer(configFileName);\n consumerutil.consume();\n } finally {\n if(consumerutil != null) {\n consumerutil.close();\n consumerutil = null;\n }\n }\n }Include or edit the following properties in the KafkaConfig 
file:bootstrap.servers=pkc-4yyd6.us-east1.gcp.confluent.cloud:9092\ntopic.name=kafkaTest\nschemaregistry.url=https://psrc-4rw99.us-central1.gcp.confluent.cloud\nkey.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer\nvalue.deserializer=com.striim.kafka.deserializer.KafkaAvroDeserializer\ngroup.id=group1\nbasic.auth.user.info=<SR_API_KEY>:<SR_API_SECRET>\nsasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";\nsecurity.protocol=SASL_SSL\nbasic.auth.credentials.source=USER_INFO\nsasl.mechanism=PLAIN\nssl.endpoint.identification.algorithm=https\nCommand to run KafkaAvroConsumerUtilWithStriimDeserializer:java -jar KafkaAvroConsumerUtilWithDeserializer.jar <path-to-config-file>Sample Output of \u201cKafkaAvroConsumerUtil\u201d with Deserialiser (reading the content written by KafkaWriter (Async mode) and AvroFormatter (FormatAs : Table and SchemaRegistryURL) whose input stream was from a OracleReader).Topic kafkaTest partition 0 offset 0 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 1\", \"MERCHANT_ID\": \"1\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 1 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 2\", \"MERCHANT_ID\": \"2\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 2 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 3\", \"MERCHANT_ID\": \"3\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 3 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 4\", \"MERCHANT_ID\": \"27\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 4 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 14\", \"MERCHANT_ID\": \"30\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 5 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 15\", \"MERCHANT_ID\": \"1\", \"CITY\": \"CBE\"}Reading via Kafka Reader with Avro Parser, schema registry URL specifiedThe sample application KafkaConfluentConsumer reads the data written to confluent Kafka cloud by Striim\u2019s Kafka Writer. There won\u2019t any change required to \u201cvalue.deserializer\u201d config in KafkaConfig (default is \u201ccom.striim.avro.deserializer.LengthDelimitedAvroRecordDeserializer\u201d). Just add required SASL authentication credentials to KafkaConfig. 
The schema registry authentication credentials are specified in the AvroParser.CREATE APPLICATION KafkaConfluentConsumer;\n\nCREATE OR REPLACE SOURCE KafkaAvroConsumer USING Global.KafkaReader VERSION '2.1.0' ( \n AutoMapPartition: true, \n KafkaConfig: 'value.deserializer++com.striim.avro.deserializer.LengthDelimitedAvroRecordDeserializer|security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";', \n KafkaConfigValueSeparator: '++', \n Topic: 'kafkaTest', \n KafkaConfigPropertySeparator: '|', \n brokerAddress: 'pkc-4yyd6.us-east1.gcp.confluent.cloud:9092', \n adapterName: 'KafkaReader', \n startOffset: 0 ) \nPARSE USING Global.AvroParser ( \n handler: 'com.webaction.proc.AvroParser_1_0', \n schemaregistryurl: 'https://psrc-4rw99.us-central1.gcp.confluent.cloud',\nschemaregistryconfiguration:'basic.auth.user.info=<SR_API_KEY>:<SR_API_PASSWORD>,basic.auth.credentials.source=USER_INFO', \n parserName: 'AvroParser' ) \nOUTPUT TO outputStream1;\n\nCREATE OR REPLACE CQ CQ1 \nINSERT INTO outputStream2 \nSELECT AvroToJson(o.data),\n o.metadata\n FROM outputStream1 o;\n\nCREATE OR REPLACE TARGET FileWriter1 USING Global.FileWriter ( \n filename: 'confluentOutput1', \n rolloveronddl: 'false', \n flushpolicy: 'EventCount:10000,Interval:30s', \n encryptionpolicy: '', \n adapterName: 'FileWriter', \n rolloverpolicy: 'EventCount:10000,Interval:30s' ) \nFORMAT USING Global.JSONFormatter ( \n handler: 'com.webaction.proc.JSONFormatter', \n jsonMemberDelimiter: '\\n', \n EventsAsArrayOfJsonObjects: 'true', \n formatterName: 'JSONFormatter', \n jsonobjectdelimiter: '\\n' ) \nINPUT FROM outputStream2;\n\nEND APPLICATION KafkaConfluentConsumer;\nSample Output of KafkaAvroConsumer (read the content written by KafkaWriter inAsync mode with AvroFormatter with FormatAs : Table and SchemaRegistryURL set):Topic kafkaTest partition 0 offset 0 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 1\", \"MERCHANT_ID\": \"1\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 1 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 2\", \"MERCHANT_ID\": \"2\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 2 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 3\", \"MERCHANT_ID\": \"3\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 3 timestamp 1599572818966\n{\"BUSINESS_NAME\": \"COMPANY 4\", \"MERCHANT_ID\": \"27\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 4 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 14\", \"MERCHANT_ID\": \"30\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 5 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 15\", \"MERCHANT_ID\": \"1\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 6 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 16\", \"MERCHANT_ID\": \"2\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 7 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 17\", \"MERCHANT_ID\": \"3\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 8 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 18\", \"MERCHANT_ID\": \"27\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 9 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 19\", \"MERCHANT_ID\": \"28\", \"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 10 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 20\", \"MERCHANT_ID\": \"29\", 
\"CITY\": \"CBE\"}\nTopic kafkaTest partition 0 offset 11 timestamp 1599572818967\n{\"BUSINESS_NAME\": \"COMPANY 21\", \"MERCHANT_ID\": \"30\", \"CITY\": \"CBE\"}\nKafkaWriter + AvroFormatter without schema evolutionThe data from non OLTP sources whose schema might not evolve during the lifetime of the Striim\u2019s application can use KafkaWriter in Sync or Async mode with AvroFormatter with just \u201cSchemaFileName\u201d property specified (the schema of the Avro records will be stored in this file). The same schema file has to be referred by consumers while its trying to deserialize the Avro records in the Kafka Message.The Kafka Messages produced by KafkaWriter can contain 1..n Avro records if the mode is Sync and 1 Avro Record if the mode was Async. In either of the mode the wire format of the Kafka Message would be 4 bytes of length of the payload and then the payload having only the Avro record bytes.Sample Striim application loading data from file to Kafka in Avro format with schema file name specifiedThe following sample application writes the CSV records from file \u201cposdata.csv\u201d to a Kafka topic kafkaDSVTest in Avro Format using Kafka Writer with Avro Formatter (Schema file name configured). The respective schema will be written to a schema file.\nCREATE APPLICATION KafkaWriterApplication;\n\nCREATE SOURCE FileReader1 USING Global.FileReader ( \n rolloverstyle: 'Default', \n wildcard: 'posdata.csv', \n blocksize: 64, \n skipbom: true, \n directory: '/Users/priyankasundararajan/Product/Samples/AppData', \n includesubdirectories: false, \n positionbyeof: false ) \nPARSE USING Global.DSVParser ( \n trimwhitespace: false, \n commentcharacter: '', \n linenumber: '-1', \n columndelimiter: ',', \n trimquote: true, \n columndelimittill: '-1', \n ignoreemptycolumn: false, \n separator: ':', \n quoteset: '\\\"', \n charset: 'UTF-8', \n ignoremultiplerecordbegin: 'true', \n ignorerowdelimiterinquote: false, \n header: false, \n blockascompleterecord: false, \n rowdelimiter: '\\n', \n nocolumndelimiter: false, \n headerlineno: 0 ) \nOUTPUT TO Stream1;\n\nCREATE TARGET kafkawriter1 USING Global.KafkaWriter VERSION '2.1.0'( \n brokerAddress: 'pkc-4yyd6.us-east1.gcp.confluent.cloud:9092', \n KafkaConfigValueSeparator: '++', \n MessageKey: '', \n MessageHeader: '', \n ParallelThreads: '', \n Topic: 'kafkaDSVTest', \n KafkaConfig: 'max.request.size++10485760|batch.size++10000120|security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";', \n KafkaConfigPropertySeparator: '|', \n Mode: 'ASync' ) \nFORMAT USING Global.AvroFormatter ( \n schemaFileName: 'schema1', \n formatAs: 'default' ) \nINPUT FROM Stream1;\n\nEND APPLICATION KafkaWriterApplication;\nDeploy and start the above application and refer to application monitoring metrics to check the progress.Reading Avro Records in Striim wire format with schema fileNow that the data is in Kafka topic it can be read by KafkaReader or any other external application. Striim\u2019s KafkaReader doesn\u2019t require the deserialiser to be specified in the KafkaConfig. 
Just setting the \u201cSchemaFileName\u201d in AvroParser will do.Following is a sample java KafkaConsumer which takes input from a config file (having configurations like broker address, topic name, value.deserializer (this is provided by Striim), topic.name, schemaFileName).\nimport org.apache.avro.generic.GenericRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecords;\nimport org.apache.kafka.clients.consumer.KafkaConsumer;\nimport org.apache.kafka.common.TopicPartition;\nimport java.io.FileInputStream;\nimport java.io.InputStream;\nimport java.util.ArrayList;\nimport java.util.Properties;\nimport java.util.List;\n\npublic class KafkaAvroConsumerUtilWithSchemaFile {\n\n private KafkaConsumer<byte[], Object> consumer;\n\n public KafkaAvroConsumerUtilWithSchemaFile(String configFileName) throws Exception {\n\n\n Properties props = new Properties();\n InputStream in = new FileInputStream(configFileName);\n props.load(in);\n\n this.consumer = new KafkaConsumer<byte[], Object>(props);\n\n TopicPartition tp = new TopicPartition(props.getProperty(\"topic.name\"), 0);\n List<TopicPartition> tpList = new ArrayList<TopicPartition>();\n tpList.add(tp);\n this.consumer.assign(tpList);\n this.consumer.seekToBeginning(tpList);\n }\n\n public void consume() throws Exception {\n while(true) {\n ConsumerRecords<byte[], Object> records = consumer.poll(1000);\n for(ConsumerRecord<byte[], Object> record : records) {\n System.out.println(\"Topic \" + record.topic() + \" partition \" + record.partition()\n + \" offset \" + record.offset() + \" timestamp \" + record.timestamp());\n List<GenericRecord> avroRecordList = (List<GenericRecord>) record.value();\n for(GenericRecord avroRecord : avroRecordList) {\n System.out.println(avroRecord);\n }\n }\n }\n }\n\n public void close() throws Exception {\n if(this.consumer != null) {\n this.consumer.close();\n this.consumer = null;\n }\n }\n\n public static void help() {\n System.out.println(\"Usage :\\n x.sh {path_to_config_file}\");\n }\n\n public static void main(String[] args) throws Exception {\n if(args.length != 1) {\n help();\n System.exit(-1);\n }\n String configFileName = args[0];\n System.out.println(\"KafkaConsumer config file : \" + configFileName);\n KafkaAvroConsumerUtilWithSchemaFile consumerutil = null;\n try {\n consumerutil = new KafkaAvroConsumerUtilWithSchemaFile(configFileName);\n consumerutil.consume();\n } finally {\n if(consumerutil != null) {\n consumerutil.close();\n consumerutil = null;\n }\n }\n }\nInclude the following properties in the KafkaConfig file:\nbootstrap.servers=localhost:pkc-4yyd6.us-east1.gcp.confluent.cloud:9092\ntopic.name=kafkaDSVTest\nschemaFileName=./schema1.avsc\nkey.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer\nvalue.deserializer=com.striim.kafka.deserializer.StriimAvroLengthDelimitedDeserializer\ngroup.id=KafkaAvroDemoConsumer\nsasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";\nsecurity.protocol=SASL_SSL\nsasl.mechanism=PLAIN\nssl.endpoint.identification.algorithm=https\nCommand to run KafkaAvroConsumerUtilWithSchemaFile:java -jar KafkaAvroConsumerUtilWithSchemaFile.jar <path-to-config-file>Sample output of KafkaAvroConsumerUtilWithSchemaFile:Sample data read by KafkaAvroConsumerUtilWithSchemaFilean external consumer, from the Kafka topic \u201ckafkaDSVTest\u201d.\nTopic kafkaTest1 partition 0 offset 0 timestamp 
1599564126433\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"138\", \"RecordOffset\": \"0\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"BUSINESS NAME\", \"1\": \" MERCHANT ID\", \"2\": \" PRIMARY ACCOUNT NUMBER\", \"3\": \" POS DATA CODE\", \"4\": \" DATETIME\", \"5\": \" EXP DATE\", \"6\": \" CURRENCY CODE\", \"7\": \" AUTH AMOUNT\", \"8\": \" TERMINAL ID\", \"9\": \" ZIP\", \"10\": \" CITY\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 1 timestamp 1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"268\", \"RecordOffset\": \"138\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 1\", \"1\": \"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\", \"2\": \"6705362103919221351\", \"3\": \"0\", \"4\": \"20130312173210\", \"5\": \"0916\", \"6\": \"USD\", \"7\": \"2.20\", \"8\": \"5150279519809946\", \"9\": \"41363\", \"10\": \"Quicksand\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 2 timestamp 1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"399\", \"RecordOffset\": \"268\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 2\", \"1\": \"OFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1\", \"2\": \"4710011837121304048\", \"3\": \"4\", \"4\": \"20130312173210\", \"5\": \"0815\", \"6\": \"USD\", \"7\": \"22.78\", \"8\": \"5985180438915120\", \"9\": \"16950\", \"10\": \"Westfield\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 3 timestamp 1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"530\", \"RecordOffset\": \"399\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 3\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"2553303262790204445\", \"3\": \"6\", \"4\": \"20130312173210\", \"5\": \"0316\", \"6\": \"USD\", \"7\": \"218.57\", \"8\": \"0663011190577329\", \"9\": \"18224\", \"10\": \"Freeland\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 4 timestamp 1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"663\", \"RecordOffset\": \"530\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 4\", \"1\": \"FZXC0wg0LvaJ6atJJx2a9vnfSFj4QhlOgbU\", \"2\": \"2345502971501633006\", \"3\": \"3\", \"4\": \"20130312173210\", \"5\": \"0813\", \"6\": \"USD\", \"7\": \"18.31\", \"8\": \"4959093407575064\", \"9\": \"55470\", \"10\": \"Minneapolis\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 5 timestamp 1599564126442\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"796\", \"RecordOffset\": \"663\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 5\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"6388500771470313223\", \"3\": \"2\", \"4\": \"20130312173210\", \"5\": \"0415\", \"6\": \"USD\", \"7\": \"314.94\", \"8\": \"7116826188355220\", \"9\": \"39194\", \"10\": \"Yazoo City\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 6 timestamp 1599564126442\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"925\", \"RecordOffset\": \"796\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 6\", \"1\": \"g6vtmaWp0CdIPEaeWfAeeu576BE7IuDk9H5\", \"2\": \"5202363682168656195\", \"3\": \"3\", \"4\": \"20130312173210\", \"5\": \"0215\", 
\"6\": \"USD\", \"7\": \"328.52\", \"8\": \"0497135571326680\", \"9\": \"85739\", \"10\": \"Tucson\"}, \"before\": null, \"userdata\": null}\nTopic kafkaLocalTest123 partition 0 offset 7 timestamp 1599564126442\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"1056\", \"RecordOffset\": \"925\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 7\", \"1\": \"FYYQLlKwnmIor4nxrpKu0EnYXFC3aBy8oWl\", \"2\": \"8704922945605006285\", \"3\": \"0\", \"4\": \"20130312173210\", \"5\": \"0814\", \"6\": \"USD\", \"7\": \"261.11\", \"8\": \"1861218392021391\", \"9\": \"97423\", \"10\": \"Coquille\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 8 timestamp 1599564126445\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"1183\", \"RecordOffset\": \"1056\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 8\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"1241441620952009753\", \"3\": \"1\", \"4\": \"20130312173210\", \"5\": \"0816\", \"6\": \"USD\", \"7\": \"34.29\", \"8\": \"3594534131211228\", \"9\": \"40017\", \"10\": \"Defoe\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 9 timestamp 1599564126445\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"1313\", \"RecordOffset\": \"1183\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 9\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"2049824339216248859\", \"3\": \"2\", \"4\": \"20130312173210\", \"5\": \"0714\", \"6\": \"USD\", \"7\": \"31.51\", \"8\": \"6871027833256174\", \"9\": \"71334\", \"10\": \"Ferriday\"}, \"before\": null, \"userdata\": null}\n Kafka Reader + Avro Parser, schema file name specifiedThe sample application KafkaConfluentConsumer reads the data written to Confluent cloud Kafka by Striim\u2019s Kafka Writer. There won\u2019t any change required to \u201cvalue.deserializer\u201d config in KafkaConfig (default is \u201ccom.striim.avro.deserializer.LengthDelimitedAvroRecordDeserializer\u201d). 
Just add required SASL authentication credentials to KafkaConfig and specify the \u201cSchemaFileName\u201d property in AvroParser.CREATE APPLICATION kafkaConfluentConsumer;\n\nCREATE OR REPLACE SOURCE AvroKafkaConsumer USING Global.KafkaReader VERSION '2.1.0' ( \n AutoMapPartition: true, \n KafkaConfig: 'value.deserializer++com.striim.avro.deserializer.LengthDelimitedAvroRecordDeserializer|security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";', \n KafkaConfigValueSeparator: '++', \n Topic: 'kafkaDSVTest', \n KafkaConfigPropertySeparator: '|', \n brokerAddress: 'pkc-4yyd6.us-east1.gcp.confluent.cloud:9092', \n adapterName: 'KafkaReader', \n startOffset: 0 ) \nPARSE USING Global.AvroParser ( \n handler: 'com.webaction.proc.AvroParser_1_0', \n schemaFileName: './Product/Samples/AppData/AvroSchema/schema1.avsc', \n parserName: 'AvroParser' ) \nOUTPUT TO outputStream1;\n\nCREATE OR REPLACE CQ CQ1 \nINSERT INTO outputStream2 \nSELECT AvroToJson(o.data),\n o.metadata\n FROM outputStream1 o;\n\nCREATE OR REPLACE TARGET FileWriter1 USING Global.FileWriter ( \n filename: 'confluentOutput1', \n rolloveronddl: 'false', \n flushpolicy: 'EventCount:10000,Interval:30s', \n encryptionpolicy: '', \n adapterName: 'FileWriter', \n rolloverpolicy: 'EventCount:10000,Interval:30s' ) \nFORMAT USING Global.JSONFormatter ( \n handler: 'com.webaction.proc.JSONFormatter', \n jsonMemberDelimiter: '\\n', \n EventsAsArrayOfJsonObjects: 'true', \n formatterName: 'JSONFormatter', \n jsonobjectdelimiter: '\\n' ) \nINPUT FROM outputStream2;\n\nEND APPLICATION kafkaConfluentConsumer;\nSample output of Striim application, sample data read by an external consumer from the Kafka topic \u201ckafkaDSVTest\u201d.Topic kafkaTest1 partition 0 offset 0 timestamp 1599564126433\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"138\", \"RecordOffset\": \"0\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"BUSINESS NAME\", \"1\": \" MERCHANT ID\", \"2\": \" PRIMARY ACCOUNT NUMBER\", \"3\": \" POS DATA CODE\", \"4\": \" DATETIME\", \"5\": \" EXP DATE\", \"6\": \" CURRENCY CODE\", \"7\": \" AUTH AMOUNT\", \"8\": \" TERMINAL ID\", \"9\": \" ZIP\", \"10\": \" CITY\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 1 timestamp 1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"268\", \"RecordOffset\": \"138\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 1\", \"1\": \"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu\", \"2\": \"6705362103919221351\", \"3\": \"0\", \"4\": \"20130312173210\", \"5\": \"0916\", \"6\": \"USD\", \"7\": \"2.20\", \"8\": \"5150279519809946\", \"9\": \"41363\", \"10\": \"Quicksand\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 2 timestamp 1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"399\", \"RecordOffset\": \"268\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 2\", \"1\": \"OFp6pKTMg26n1iiFY00M9uSqh9ZfMxMBRf1\", \"2\": \"4710011837121304048\", \"3\": \"4\", \"4\": \"20130312173210\", \"5\": \"0815\", \"6\": \"USD\", \"7\": \"22.78\", \"8\": \"5985180438915120\", \"9\": \"16950\", \"10\": \"Westfield\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 3 timestamp 
1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"530\", \"RecordOffset\": \"399\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 3\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"2553303262790204445\", \"3\": \"6\", \"4\": \"20130312173210\", \"5\": \"0316\", \"6\": \"USD\", \"7\": \"218.57\", \"8\": \"0663011190577329\", \"9\": \"18224\", \"10\": \"Freeland\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 4 timestamp 1599564126441\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"663\", \"RecordOffset\": \"530\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 4\", \"1\": \"FZXC0wg0LvaJ6atJJx2a9vnfSFj4QhlOgbU\", \"2\": \"2345502971501633006\", \"3\": \"3\", \"4\": \"20130312173210\", \"5\": \"0813\", \"6\": \"USD\", \"7\": \"18.31\", \"8\": \"4959093407575064\", \"9\": \"55470\", \"10\": \"Minneapolis\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 5 timestamp 1599564126442\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"796\", \"RecordOffset\": \"663\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 5\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"6388500771470313223\", \"3\": \"2\", \"4\": \"20130312173210\", \"5\": \"0415\", \"6\": \"USD\", \"7\": \"314.94\", \"8\": \"7116826188355220\", \"9\": \"39194\", \"10\": \"Yazoo City\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 6 timestamp 1599564126442\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"925\", \"RecordOffset\": \"796\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 6\", \"1\": \"g6vtmaWp0CdIPEaeWfAeeu576BE7IuDk9H5\", \"2\": \"5202363682168656195\", \"3\": \"3\", \"4\": \"20130312173210\", \"5\": \"0215\", \"6\": \"USD\", \"7\": \"328.52\", \"8\": \"0497135571326680\", \"9\": \"85739\", \"10\": \"Tucson\"}, \"before\": null, \"userdata\": null}\nTopic kafkaLocalTest123 partition 0 offset 7 timestamp 1599564126442\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"1056\", \"RecordOffset\": \"925\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 7\", \"1\": \"FYYQLlKwnmIor4nxrpKu0EnYXFC3aBy8oWl\", \"2\": \"8704922945605006285\", \"3\": \"0\", \"4\": \"20130312173210\", \"5\": \"0814\", \"6\": \"USD\", \"7\": \"261.11\", \"8\": \"1861218392021391\", \"9\": \"97423\", \"10\": \"Coquille\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 8 timestamp 1599564126445\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"1183\", \"RecordOffset\": \"1056\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 8\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"1241441620952009753\", \"3\": \"1\", \"4\": \"20130312173210\", \"5\": \"0816\", \"6\": \"USD\", \"7\": \"34.29\", \"8\": \"3594534131211228\", \"9\": \"40017\", \"10\": \"Defoe\"}, \"before\": null, \"userdata\": null}\nTopic kafkaTest1 partition 0 offset 9 timestamp 1599564126445\n{\"metadata\": {\"FileOffset\": \"0\", \"RecordEnd\": \"1313\", \"RecordOffset\": \"1183\", \"FileName\": \"pos10.csv\", \"RecordStatus\": \"VALID_RECORD\"}, \"data\": {\"0\": \"COMPANY 9\", \"1\": \"ljh71ujKshzWNfXMdQyN8O7vaNHlmPCCnAx\", \"2\": \"2049824339216248859\", \"3\": \"2\", \"4\": \"20130312173210\", \"5\": 
\"0714\", \"6\": \"USD\", \"7\": \"31.51\", \"8\": \"6871027833256174\", \"9\": \"71334\", \"10\": \"Ferriday\"}, \"before\": null, \"userdata\": null}\nKafkaWriter + AvroFormatter with schema evolution - Confluent wire format with schema registryConfluent Cloud is a resilient, scalable streaming data service based on Apache Kafka, delivered as a fully managed service. The only difference between local and cloud schema registry is the SASL setup required in the clients.The confluent serializer writes data in Confluent wire format.Sample Striim application loading data from Oracle to Kafka in Confluent wire formatThe following sample code writes data from OracleDatabase to the kafkaTest topic in Confluent Cloud. Kafka Writer uses Confluent\u2019s Avro serializer which takes care of registering the schema of the Avro record in the confluent cloud schema registry and adds the schema registry id with the respective Kafka messages. The avro records can be formatted in confluent wire format only in Async mode and Sync mode with batch disabled. The user must create the topic before running the application. The confluent cloud authentication properties are specified in the KafkaConfig property in KafkaWriter (these are not required in case of apache kafka running locally) and the schema registry authentication credentials are specified as a part of the schema registry configuration in the AvroFormatter.Add the following configuration to Kafka Config in KafkaWritervalue.serializer=io.confluent.kafka.serializers.KafkaAvroSerializerNote: If Sync mode is enabled add the following configuration to Kafka Configbatch.size=-1Set \u201cschemaregistryurl and schemaregistryconfiguration\u201d in Avro Formatterschemaregistryurl: 'https://psrc-2225o.us-central1.gcp.confluent.cloud',\nschemaregistryconfiguration:'basic.auth.user.info=<SR_API_KEY>:<SR_API_PASSWORD>,basic.auth.credentials.source=USER_INFO') and other required KafkaConfig for cloud based schema registry.KafkaConfig: 'security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";'The following application writes Kafka messages in Confluent\u2019s Wire format with schema registry enabled.create application confluentWireFormatTest;\nCreate Source oracleSource\nUsing OracleReader\n(\nUsername: '<user-name>',\nPassword: '<password>',\nConnectionURL: '<connection-url>',\nTables:'<table-name>',\nFetchSize:1,\nCompression:true\n)\nOutput To DataStream;\ncreate Target t using KafkaWriter VERSION '0.11.0'(\nbrokerAddress:'pkc-4yyd6.us-east1.gcp.confluent.cloud:9092',\nmode :'ASync',\nTopic:'kafkaTest',\nKafkaConfig: 'value.serializer++io.confluent.kafka.serializers.KafkaAvroSerializer|security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";', \n KafkaConfigValueSeparator: '++', \n KafkaConfigPropertySeparator: '|' \n)\nformat using AvroFormatter (\nformatAS:'Table',\nschemaregistryurl: 'https://psrc-4rw99.us-central1.gcp.confluent.cloud',\nschemaregistryconfiguration:'basic.auth.user.info=:,basic.auth.credentials.source=USER_INFO'\n)\ninput from DataStream;\nend application confluentWireFormatTest;Deploy and start the above application and refer to application monitoring metrics 
to check the progress.Reading Avro Records in Confluent wire format with schema registryNow that the data is in the Kafka topic, it can be read by KafkaReader or any other external application. Striim\u2019s KafkaReader and external applications require the io.confluent.kafka.serializers.KafkaAvroDeserializer deserialiser to be specified in the KafkaConfig. The schema registry URL and schema registry authentication credentials must also be specified.The following is a sample Java KafkaConsumer that takes its input from a config file (with configurations such as broker address, topic.name, value.deserializer (the Confluent deserializer), schemaregistry.url, sasl.jaas.config, security.protocol, sasl.mechanism, ssl.endpoint.identification.algorithm, and basic.auth.user.info).\nimport org.apache.avro.generic.GenericRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecord;\nimport org.apache.kafka.clients.consumer.ConsumerRecords;\nimport org.apache.kafka.clients.consumer.KafkaConsumer;\nimport org.apache.kafka.common.TopicPartition;\nimport java.io.FileInputStream;\nimport java.io.InputStream;\nimport java.util.ArrayList;\nimport java.util.Properties;\nimport java.util.List;\n\npublic class KafkaAvroConsumerUtilConfluentWireFormat {\n\n private KafkaConsumer<byte[], Object> consumer;\n\n public KafkaAvroConsumerUtilConfluentWireFormat(String configFileName) throws Exception {\n\n\n Properties props = new Properties();\n InputStream in = new FileInputStream(configFileName);\n props.load(in);\n\n this.consumer = new KafkaConsumer<byte[], Object>(props);\n\n TopicPartition tp = new TopicPartition(props.getProperty(\"topic.name\"), 0);\n List<TopicPartition> tpList = new ArrayList<TopicPartition>();\n tpList.add(tp);\n this.consumer.assign(tpList);\n this.consumer.seekToBeginning(tpList);\n }\n\n public void consume() throws Exception {\n while(true) {\n ConsumerRecords<byte[], Object> records = consumer.poll(1000);\n for(ConsumerRecord<byte[], Object> record : records) {\n System.out.println(\"Topic \" + record.topic() + \" partition \" + record.partition()\n + \" offset \" + record.offset() + \" timestamp \" + record.timestamp());\n // KafkaAvroDeserializer returns one GenericRecord per Kafka message\n GenericRecord avroRecord = (GenericRecord) record.value();\n System.out.println(avroRecord);\n }\n }\n }\n\n public void close() throws Exception {\n if(this.consumer != null) {\n this.consumer.close();\n this.consumer = null;\n }\n }\n\n public static void main(String[] args) throws Exception {\n String configFileName = args[0];\n System.out.println(\"KafkaConsumer config file : \" + configFileName);\n KafkaAvroConsumerUtilConfluentWireFormat consumerutil = null;\n try {\n consumerutil = new KafkaAvroConsumerUtilConfluentWireFormat(configFileName);\n consumerutil.consume();\n } finally {\n if(consumerutil != null) {\n consumerutil.close();\n consumerutil = null;\n }\n }\n }\nInclude the following properties in the KafkaConfig file:\nbootstrap.servers=pkc-4yyd6.us-east1.gcp.confluent.cloud:9092\ntopic.name=kafkaTest\nschemaregistry.url=https://psrc-4rw99.us-central1.gcp.confluent.cloud\nkey.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer\nvalue.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer\ngroup.id=group1\nbasic.auth.user.info=<SR_API_KEY>:<SR_API_SECRET>\nsasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" 
password=\\\"<CLUSTER_API_SECRET>\\\";\nsecurity.protocol=SASL_SSL\nbasic.auth.credentials.source=USER_INFO\nsasl.mechanism=PLAIN\nssl.endpoint.identification.algorithm=https\nCommand to run KafkaAvroConsumerUtilConfluentWireFormat:java -jar KafkaAvroConsumerUtilConfluentWireFormat.jar <path-to-config-file>Sample output of KafkaAvroConsumerUtilConfluentWireFormat, sample data read by KafkaAvroConsumerUtilConfluentWireFormat an external consumer, from the Kafka topic \u201ckafkaTest\u201d.offset = 0, key = null, value = {\"ID\": 301, \"STUDENT\": \"jack301\", \"AGE\": 12} \noffset = 1, key = null, value = {\"ID\": 302, \"STUDENT\": \"jack302\", \"AGE\": 12} \noffset = 2, key = null, value = {\"ID\": 303, \"STUDENT\": \"jack303\", \"AGE\": 12} \noffset = 3, key = null, value = {\"ID\": 304, \"STUDENT\": \"jack304\", \"AGE\": 12} \noffset = 4, key = null, value = {\"ID\": 305, \"STUDENT\": \"jack305\", \"AGE\": 12} Reading via Kafka Reader with Avro Parser, schema registry specifiedThe sample application KafkaConfluentConsumer reads the data written to Confluent cloud Kafka by Striim\u2019s Kafka Writer in confluent wire format. The \u201cvalue.deserializer\u201d config in KafkaConfig has to changed to \u201cio.confluent.kafka.serializers.KafkaAvroDeserializer\u201d. Add the required SASL authentication credentials to KafkaConfig and specify the schema registry url and schema registry authentication credentials in Avro Parser.Add the following configuration to Kafka Config in KafkaReader:value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializerSet \u201cschemaregistryurl and schemaregistryconfiguration\u201d in Avro Parser:schemaregistryurl: 'https://psrc-4rw99.us-central1.gcp.confluent.cloud',\nschemaregistryconfiguration:'basic.auth.user.info=<SR_API_KEY>:<SR_API_PASSWORD>,basic.auth.credentials.source=USER_INFO') for cloud based schema registry, also set KafkaConfig:KafkaConfig: 'max.request.size==10485760:batch.size==10000120:sasl.mechanism==PLAIN:security.protocol==SASL_SSL:sasl.jaas.config==org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<kafka-cluster-api-key>\\\" password=\\\"<kafka-cluster-api-secret>\\\";'The following application will be able to read Kafka Messages in Confluent\u2019s Wire format with schema registry enabled.\nCREATE APPLICATION KafkaConfluentConsumer;\n\nCREATE OR REPLACE SOURCE ConfluentKafkaAvroConsumer USING Global.KafkaReader VERSION '2.1.0' ( \n AutoMapPartition: true, \n KafkaConfigValueSeparator: '++', \n KafkaConfig: 'value.deserializer++io.confluent.kafka.serializers.KafkaAvroDeserializer|security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";', \n Topic: 'kafkaTest', \n KafkaConfigPropertySeparator: '|', \n brokerAddress: 'pkc-4yyd6.us-east1.gcp.confluent.cloud:9092', \n adapterName: 'KafkaReader', \n startOffset: 0 ) \nPARSE USING Global.AvroParser ( \n handler: 'com.webaction.proc.AvroParser_1_0', \n schemaregistryurl: 'https://psrc-4rw99.us-central1.gcp.confluent.cloud', \nschemaregistryconfiguration:'basic.auth.user.info=<SR_API_KEY>:<SR_API_PASSWORD>,basic.auth.credentials.source=USER_INFO',\n parserName: 'AvroParser' ) \nOUTPUT TO outputStream1;\n\nCREATE OR REPLACE CQ CQ1 \nINSERT INTO outputStream2 \nSELECT AvroToJson(o.data),\n o.metadata\n FROM outputStream1 o;\n\nCREATE OR REPLACE TARGET FileWriter1 USING 
Global.FileWriter ( \n filename: 'confluentOutput1', \n rolloveronddl: 'false', \n flushpolicy: 'EventCount:10000,Interval:30s', \n encryptionpolicy: '', \n adapterName: 'FileWriter', \n rolloverpolicy: 'EventCount:10000,Interval:30s' ) \nFORMAT USING Global.JSONFormatter ( \n handler: 'com.webaction.proc.JSONFormatter', \n jsonMemberDelimiter: '\\n', \n EventsAsArrayOfJsonObjects: 'true', \n formatterName: 'JSONFormatter', \n jsonobjectdelimiter: '\\n' ) \nINPUT FROM outputStream2;\n\nEND APPLICATION KafkaConfluentConsumer;\nSample data read by Kafka Reader with Avro Parser from the kafka topic \u201ckafkaTest\u201d.[\n {\n \"ID\":301,\"STUDENT\":\"jack301\",\"AGE\":12\n },\n {\n \"ID\":302,\"STUDENT\":\"jack302\",\"AGE\":12\n },\n {\n \"ID\":303,\"STUDENT\":\"jack303\",\"AGE\":12\n }\n ]KafkaReader + AvroParser reading Confluent wire formatThis producer uses Confluent\u2019s Avro serializer which takes care of registering the schema in the confluent cloud schema registry and adds the schema registry id with the respective Avro records in Kafka messages. The confluent cloud configuration properties (server url, sasl config, schema registry url, schema registry authentication credentials) are specified as a part of the KafkaProducer properties. It could be any other application which produces the data using \u201cio.confluent.kafka.serializers.KafkaAvroSerializer.class\u201d as value.deserilzierimport org.apache.avro.Schema;\nimport org.apache.avro.generic.GenericData;\nimport org.apache.avro.generic.GenericRecord;\nimport org.apache.kafka.clients.producer.KafkaProducer;\nimport org.apache.kafka.clients.producer.ProducerConfig;\nimport org.apache.kafka.clients.producer.ProducerRecord;\nimport org.apache.kafka.common.errors.SerializationException;\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.Properties;\n\npublic class ConfluentKafkaProducer {\n public static void main(String[] args) throws IOException {\n if(args.length != 1) {\n help();\n System.exit(-1);\n }\n String schemaFilename = args[0];\n Properties props = new Properties();\n props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,\n org.apache.kafka.common.serialization.StringSerializer.class);\n props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,\n io.confluent.kafka.serializers.KafkaAvroSerializer.class); \n props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,\"pkc-4yyd6.us-east1.gcp.confluent.cloud:9092\");\n props.put(\"schema.registry.url\", \"https://psrc-4rw99.us-central1.gcp.confluent.cloud\");\n props.put(\"security.protocol\",\"SASL_SSL\");\n props.put(\"sasl.jaas.config\",\"org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";\");\n props.put(\"ssl.endpoint.identification.algorithm\",\"https\");\n props.put(\"sasl.mechanism\",\"PLAIN\");\n props.put(\"basic.auth.credentials.source\",\"USER_INFO\");\n props.put(\"basic.auth.user.info\",\"<SR_API_KEY>:<SR_API_SECRET>\");\n \n Schema.Parser parser = new Schema.Parser();\n \n /* Schema used in this sample\n * {\n * \"namespace\": \"mytype.avro\",\n * \"type\" : \"record\",\n * \"name\": \"Array_Record\",\n * \"fields\": [\n * {\"name\" : \"ID\", \"type\" : [ \"null\" , \"int\" ] },\n * {\"name\" : \"Name\", \"type\" : [ \"null\" , \"string\" ] }\n * ]\n * }\n */\n Schema schema = parser.parse(new File(schemaFilename));\n for (int i = 0; i < 10; i++) {\n KafkaProducer producer = new KafkaProducer(props);\n GenericRecord avroRecord;\n avroRecord = new 
GenericData.Record(schema);\n avroRecord.put(\"ID\", i);\n avroRecord.put(\"Name\", \"xxx\");\n ProducerRecord<String, Object> record = new ProducerRecord<String, Object>(\u201c<topic>\u201d, \"null\", avroRecord);\n try {\n producer.send(record);\n } catch (SerializationException e) {\n e.printStackTrace();\n }\n finally {\n producer.flush();\n producer.close();\n }\n }\n }\n \n public static void help() {\n System.out.println(\"Usage :\\n x.sh {path_to_schema_file}\");\n }\n \n}\nCommand to run ConfluentKafkaProducer:java -jar KafkaAvroConsumerUtilWithSchemaFile.jar <path-to-schemafile>KafkaReader +_ AvroParser configuration to read Confluent wire formatStriim\u2019s KafkaReader-AvroParser (from v3.10.3.4) has the ability to parse the Avro records written in Confluent\u2019s wire format. Avro record in Confluent\u2019s Wire format will always have schema registered to schema registry.Add the following configuration to Kafka Config in KafkaReadervalue.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializerSet \u201cschemaregistryurl and schemaregistryconfiguration\u201d in Avro Parserschemaregistryurl: 'https://psrc-4rw99.us-central1.gcp.confluent.cloud',\nschemaregistryconfiguration:'basic.auth.user.info=<SR_API_KEY>:<SR_API_PASSWORD>,basic.auth.credentials.source=USER_INFO') and other required KafkaConfig for cloud based schema registry.KafkaConfig: 'max.request.size==10485760:batch.size==10000120:sasl.mechanism==PLAIN:security.protocol==SASL_SSL:sasl.jaas.config==org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<kafka-cluster-api-key>\\\" password=\\\"<kafka-cluster-api-secret>\\\";'The following application will be able to read Kafka Messages in Confluent\u2019s Wire format with schema registry enabled.\nCREATE APPLICATION KafkaConfluentConsumer;\n\nCREATE OR REPLACE SOURCE ConfluentKafkaAvroConsumer USING Global.KafkaReader VERSION '2.1.0' ( \n AutoMapPartition: true, \n KafkaConfigValueSeparator: '++', \n KafkaConfig: 'value.deserializer++io.confluent.kafka.serializers.KafkaAvroDeserializer|security.protocol++SASL_SSL|ssl.endpoint.identification.algorithm++https|sasl.mechanism++PLAIN|sasl.jaas.config++org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";', \n Topic: 'kafkaTest', \n KafkaConfigPropertySeparator: '|', \n brokerAddress: 'pkc-4yyd6.us-east1.gcp.confluent.cloud:9092', \n adapterName: 'KafkaReader', \n startOffset: 0 ) \nPARSE USING Global.AvroParser ( \n handler: 'com.webaction.proc.AvroParser_1_0', \n schemaregistryurl: 'https://psrc-4rw99.us-central1.gcp.confluent.cloud', \nschemaregistryconfiguration:'basic.auth.user.info=<SR_API_KEY>:<SR_API_PASSWORD>,basic.auth.credentials.source=USER_INFO',\n parserName: 'AvroParser' ) \nOUTPUT TO outputStream1;\n\nCREATE OR REPLACE CQ CQ1 \nINSERT INTO outputStream2 \nSELECT AvroToJson(o.data),\n o.metadata\n FROM outputStream1 o;\n\nCREATE OR REPLACE TARGET FileWriter1 USING Global.FileWriter ( \n filename: 'confluentOutput1', \n rolloveronddl: 'false', \n flushpolicy: 'EventCount:10000,Interval:30s', \n encryptionpolicy: '', \n adapterName: 'FileWriter', \n rolloverpolicy: 'EventCount:10000,Interval:30s' ) \nFORMAT USING Global.JSONFormatter ( \n handler: 'com.webaction.proc.JSONFormatter', \n jsonMemberDelimiter: '\\n', \n EventsAsArrayOfJsonObjects: 'true', \n formatterName: 'JSONFormatter', \n jsonobjectdelimiter: '\\n' ) \nINPUT FROM outputStream2;\n\nEND APPLICATION KafkaConfluentConsumer;\nSample data 
read by Kafka Reader with Avro Parser from the kafka topic \u201ckafkaTest\u201d:offset = 0, key = null, value = {\"ID\": 0, \"Name\": \"xxx\"} \noffset = 1, key = null, value = {\"ID\": 1, \"Name\": \"xxx\"} \noffset = 2, key = null, value = {\"ID\": 2, \"Name\": \"xxx\"} \noffset = 3, key = null, value = {\"ID\": 3, \"Name\": \"xxx\"} \noffset = 4, key = null, value = {\"ID\": 4, \"Name\": \"xxx\"} \noffset = 5, key = null, value = {\"ID\": 5, \"Name\": \"xxx\"} \noffset = 6, key = null, value = {\"ID\": 6, \"Name\": \"xxx\"} \noffset = 7, key = null, value = {\"ID\": 7, \"Name\": \"xxx\"} \noffset = 8, key = null, value = {\"ID\": 8, \"Name\": \"xxx\"} \noffset = 9, key = null, value = {\"ID\": 9, \"Name\": \"xxx\"} KafkaReader + AvroParser reading Avro records with no delimitersThere can be producers which will write one Avro record in a Kafka Message and will have not have delimiters (like a magic byte or length delimiter). All bytes in a KafkaMessage read should be used to construct the Avro record. In this variation since there is no way to add schema registry ID, a schema file name is required for the Striim\u2019s Avro Parser to parse the records.The producer produces one Avro record per Kafka message that is not length delimited.import org.apache.avro.Schema;\nimport org.apache.avro.generic.GenericData;\nimport org.apache.avro.generic.GenericRecord;\nimport org.apache.avro.io.DatumWriter;\nimport org.apache.avro.io.DirectBinaryEncoder;\nimport org.apache.avro.io.EncoderFactory;\nimport org.apache.avro.specific.SpecificDatumWriter;\nimport org.apache.kafka.clients.producer.KafkaProducer;\nimport org.apache.kafka.clients.producer.ProducerConfig;\nimport org.apache.kafka.clients.producer.ProducerRecord;\nimport java.io.ByteArrayOutputStream;\nimport java.io.File;\nimport java.util.Properties;\n\npublic class SingleAvroRecordKafkaProducer {\n\n public static void main(String[] args) {\n if(args.length != 1) {\n help();\n System.exit(-1);\n }\n String schemaFilename = args[0];\n /* Schema used in this sample\n * {\n * \"namespace\": \"mytype.avro\",\n * \"type\" : \"record\",\n * \"name\": \"Array_Record\",\n * \"fields\": [\n * {\"name\" : \"ID\", \"type\" : [ \"null\" , \"int\" ] },\n * {\"name\" : \"Name\", \"type\" : [ \"null\" , \"string\" ] }\n * ]\n * }\n */\n Schema schema = new Schema.Parser().parse(schemaFilename);\n\n Properties props = new Properties();\n props.put(\"bootstrap.servers\", \"pkc-4yyd6.us-east1.gcp.confluent.cloud:9092\");\n props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,\"org.apache.kafka.common.serialization.ByteArraySerializer\");\n props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,\"org.apache.kafka.common.serialization.ByteArraySerializer\");\n props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,\"pkc-4yyd6.us-east1.gcp.confluent.cloud:9092\");\n props.put(\"security.protocol\",\"SASL_SSL\");\n props.put(\"sasl.jaas.config\",\"org.apache.kafka.common.security.plain.PlainLoginModule required username=\\\"<CLUSTER_API_KEY>\\\" password=\\\"<CLUSTER_API_SECRET>\\\";\");\n props.put(\"ssl.endpoint.identification.algorithm\",\"https\");\n props.put(\"sasl.mechanism\",\"PLAIN\");\n\n KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);\n \n for (int nEvents = 0; nEvents < 10; nEvents++) {\n byte[] serializedBytes = null;\n GenericRecord avrorecord;\n try {\n avrorecord = new GenericData.Record(schema);\n avrorecord.put(\"ID\", (int) nEvents);\n avrorecord.put(\"Name\", \"xxx\");\n DatumWriter<GenericRecord> datumWriter = new 
SpecificDatumWriter(schema);//<GenericRecord>(schema);\n ByteArrayOutputStream out = new ByteArrayOutputStream();\n DirectBinaryEncoder binaryEncoder = (DirectBinaryEncoder) EncoderFactory.get().directBinaryEncoder(out, null);\n datumWriter.write(avrorecord, binaryEncoder);\n binaryEncoder.flush();\n out.close();\n serializedBytes = out.toByteArray();\n ProducerRecord<byte[], byte[]> record = new ProducerRecord<byte[], byte[]>(<topicName>, nEvents, null, serializedBytes);\n producer.send(record);\n } catch (Exception e) {\n e.printStackTrace();\n } finally {\n producer.flush();\n producer.close();\n }\n }\n }\n \n public static void help() {\n System.out.println(\"Usage :\\n x.sh {path_to_schema_file}\");\n }\n}Command to run SingleAvroRecordKafkaProducer:java -jar SingleAvroRecordKafkaProducer.jar <path-to-schemafile>Kafka Reader with Avro Parser - one Avro record in Kafka messageKafka Reader with an Avro parser can read Single Avro record in a Kafka Message, which is not length delimited. Add the following configuration \u201cvalue.deserializer= com.striim.avro.deserializer.SingleRecordAvroRecordDeserializer\u201d to Kafka Config.CREATE APPLICATION SingleAvroRecordKafkaConsumerTest;\n\nCREATE OR REPLACE SOURCE SingleAvroRecordKafkaConsumer USING Global.KafkaReader VERSION '2.1.0' ( \n AutoMapPartition: true, \n brokerAddress: 'localhost:9092', \n KafkaConfigValueSeparator: '++', \n KafkaConfigPropertySeparator: '|', \n KafkaConfig: 'value.deserializer++com.striim.avro.deserializer.SingleRecordAvroRecordDeserializer', \n adapterName: 'KafkaReader', \n Topic: 'avroTest1', \n startOffset: 0 ) \nPARSE USING Global.AvroParser ( \n schemaFileName: './Product/Samples/AppData/AvroSchema/arrayschema.avsc', \n handler: 'com.webaction.proc.AvroParser_1_0', \n parserName: 'AvroParser' ) \nOUTPUT TO outputStream1;\n\nCREATE OR REPLACE CQ CQ1 INSERT INTO outputStream2 SELECT \n-- conversion from org.apache.avro.util.Utf8 to String is required here\n o.data, o.metadata From outputStream1 o;;\n\nCREATE OR REPLACE TARGET FileWriter1 USING Global.FileWriter ( \n filename: 'confluentOutput1', \n rolloveronddl: 'false', \n flushpolicy: 'EventCount:10000,Interval:30s', \n encryptionpolicy: '', \n adapterName: 'FileWriter', \n rolloverpolicy: 'EventCount:10000,Interval:30s' ) \nFORMAT USING Global.JSONFormatter ( \n handler: 'com.webaction.proc.JSONFormatter', \n jsonMemberDelimiter: '\\n', \n EventsAsArrayOfJsonObjects: 'true', \n formatterName: 'JSONFormatter', \n jsonobjectdelimiter: '\\n' ) \nINPUT FROM outputStream2;\n\nEND APPLICATION SingleAvroRecordKafkaConsumerTest;\nSample Output of SingleAvroRecordKafkaConsumerTest:\"data\":{\"ID\": 386, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":386,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 387, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":387,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 388, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":388,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 389, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":389,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 390, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":390,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 391, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":391,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 392, \"Name\": 
\"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":392,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 393, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":393,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 394, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":394,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 395, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":395,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n },\n {\n \"data\":{\"ID\": 396, \"Name\": \"xxx\"},\n \"metadata\":{\"KafkaRecordOffset\":396,\"PartitionID\":0,\"TopicName\":\"avroTest1\"}\n }\nIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-03-30\n", "metadata": {"source": "https://www.striim.com/docs/en/reading-from-and-writing-to-kafka-using-avro.html", "title": "Reading from and writing to Kafka using Avro", "language": "en"}} {"page_content": "\n\nUsing pattern matchingSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing pattern matchingPrevNextUsing pattern matchingYou can use pattern matching in your queries to conveniently specify multiple combinations of events via a MATCH_PATTERN expression. Pattern expressions consist of references to stream and timer events according to this syntax : CREATE CQ <CQ name>\nINSERT INTO <output stream>\nSELECT <list of expressions>\nFROM <data source> [ <data source alias>] [, <data source> [ <data source alias> ], ...]\nMATCH_PATTERN <pattern>\nDEFINE <pattern variable> = [<data source or alias name>] \u2018(\u2018 <predicate> \u2018)\u2019 \n[ PARTITION BY <data source field name> ]; In this syntax, dataSource is a stream or timer event. The MATCH_PATTERN clause represents groups of events along with condition variables. New events are referenced in the predicates, where event variables are defined. Condition variables are expressions used to check additional conditions, such as timer events, before continuing to match the pattern. The pattern matching expression syntax uses the following notation:NotationDescription*0 or more quantifiers?0 or 1 quantifiers+1 or more quantifiers{m,n}m to n quantifiers|logical OR (alternative)&logical AND (permutation)( )grouping#restart matching markers for overlapping patterns For example, the pattern A B* C+ would contain exactly 1 new event matching A, 0 or more new events matching B, and 1 or more new events matching C. NOTE: You cannot refer to new events in conditions or expressions contained in the SELECT clause. The DEFINE clause defines a list of event variables and conditions. 
The conditions are described in the predicates associated with the event variables.For example, A = S(sensor < 100) matches events on stream S where the sensor attribute is less than 100.The optional PARTITION BY clause partitions the stream by key on sub-streams, and pattern matching is performed independently for every partition.Let's examine this example:CREATE CQ \nSELECT LIST(B.id) as eventValues, count(B.id) as eventCount, A.id as eventTime \nFROM eventStream S\nMATCH_PATTERN T A W B* P C\nDEFINE\n A = S(sensor < 100), \n B = S(sensor >= 100),\n C = S(sensor < 100),\n T = TIMER(interval 60 second),\n W = STOP(T), \n P = (count(B) > 5)\nPARTITION BY eventId;The pattern of event variables in the MATCH_PATTERN clause (T A W B* P C) represents the following:T A W: Event A is matched within 60 seconds. Since this pattern ends with W, subsequent events (B, P, C) can only be matched after the first 60 seconds have elapsed.B* P C: After the first 60 seconds, the match pattern consists of 0 or more instances of event B, followed by event P, followed by event C.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-02-11\n", "metadata": {"source": "https://www.striim.com/docs/en/using-pattern-matching.html", "title": "Using pattern matching", "language": "en"}} {"page_content": "\n\nEvent VariablesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing pattern matchingEvent VariablesPrevNextEvent VariablesOnce you have established event variables, you can refer to groups of matched events using 0-based array index notation.For example, B[3].sensor refers to the 4th event in group B. If you omit the index (for example, B.sensor), the last event in the group is returned.If you pass event variables to built-in or user-defined functions, the compiler will generate code differently depending on whether a scalar or aggregating function is used.In this example, the expression aggregates all sensor attributes for all matched events in event variable B.avg(B.sensor)In this example, the expression evaluates to the absolute value of the sensor attribute of the last matched event in event variable B:abs(B.sensor)In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2017-02-11\n", "metadata": {"source": "https://www.striim.com/docs/en/event-variables.html", "title": "Event Variables", "language": "en"}} {"page_content": "\n\nReferring to Past EventsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing pattern matchingReferring to Past EventsPrevNextReferring to Past EventsIf a match on the next event depends on the value of an attribute in a previous event, use the PREV() built-in function. The only parameter is an integer constant indicating how far back to match in the event pattern. For example, PREV() or PREV(1) refers to the immediately preceding event, but PREV(2) refers to the event that occurred before the immediately preceding event. For example:DEFINE\n-- Compare the new event's sensor with the previous event's sensor.\n-- The default index value of 1 is used.\nA=streamx(sensor < PREV().sensor) \n-- Compare the new event's sensor with the sensor of the event before the previous sensor.\nB=streamx(sensor < PREV(2).sensor) In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-02-11\n", "metadata": {"source": "https://www.striim.com/docs/en/referring-to-past-events.html", "title": "Referring to Past Events", "language": "en"}} {"page_content": "\n\nTimer EventsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing pattern matchingTimer EventsPrevNextTimer EventsYou can set timer events using the following functions:FunctionDescriptiontimer(interval <int> {second|minute|hour})A condition that does not wait for a new event from any stream.When the timer expires, it send a signal as an event from the timer stream.signal( variable )The next event is expected from the timer defined by the specified variable.stop( variable )Stops the timer and cancels sending the timer event.For example, this pattern matches no events for a period of 50 seconds:PATTERN T W\nDEFINE\nT=timer(interval 50 second),\nW=signal(T)This pattern matches all events from streamA for 30 seconds (until event W is received from the timer):PATTERN T A* W -- matching all events from streamA for 30 seconds\nDEFINE\nT timer(interval 30 second),\nA=streamA,\nW=signal(T)This pattern matches an event from streamA within 30 seconds. 
If an event from streamA does not occur within 30 seconds, an event from the timer will be received causing the pattern matching to fail:PATTERN T A -- matching event from streamA within 30 seconds\nDEFINE\nT=timer(interval 30 second),\nA=streamAThis pattern matches events from streamA for 30 seconds. It subsequently matches events from streamB for 30 seconds:PATTERN T A C T2 B -- matching event A for 30 seconds and then event B within 30 seconds\nDEFINE\nT=timer(interval 30 second),\nA=streamA,\nC=stop(T),\nT2=timer(interval 30 second),\nB=streamBIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/timer-events.html", "title": "Timer Events", "language": "en"}} {"page_content": "\n\nAlternation ( | )Skip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing pattern matchingAlternation ( | )PrevNextAlternation ( | )If you would like to specify several variations of an event sequence, use the alternation operator ( | ). In case there are equivalent variations in the alternation expression, the first (leftmost) variation will be matched first. For example, the following pattern matches A but not AA since it is equivalent to A:PATTERN (A|AA|B|C)\nDEFINE\nA=streamA(sensor between 10 and 20),\nAA=streamA(sensor between 10 and 20), -- it is same as A, and will never be matched\nB=streamA(sensor > 20),\nC=streamA(sensor < 10)In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-02-11\n", "metadata": {"source": "https://www.striim.com/docs/en/alternation------.html", "title": "Alternation ( | )", "language": "en"}} {"page_content": "\n\nMatching overlapping patterns ( # )Skip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing pattern matchingMatching overlapping patterns ( # )PrevNextMatching overlapping patterns ( # )Suppose you would like to match the sequence ABA and the stream contains events ABABA. Normally the first instance (ABA)BA will be matched, but the second instance AB(ABA) will not be matched. 
If you would like both instances to be matched, include a ( # ) operator in the pattern wherever you would like the engine to restart its matching (for example, AB#ABA). If the pattern contains multiple # operators, the matching restarts from the earliest occurrence of the last successful match.For example, consider this pattern:PATTERN A (B # A | C D # A)If the actual event sequence is ABACDABACDA, the following subsequences will be matched:ABAACDAABAACDAIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/matching-overlapping-patterns------.html", "title": "Matching overlapping patterns ( # )", "language": "en"}} {"page_content": "\n\nUsing analytics and regression functionsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideAdvanced TQL programmingUsing analytics and regression functionsPrevNextUsing analytics and regression functionsThe com.webaction.analytics classes provide you with several regression functions that enable you to make predictions based on the regression model you have chosen. To get started, begin with the following IMPORT statements in your TQL:IMPORT STATIC com.webaction.analytics.regressions.LeastSquares.*;\nIMPORT STATIC com.webaction.analytics.regressions.LeastSquaresPredictor.*;\nIMPORT STATIC com.webaction.analytics.regressions.PredictionResult.*;\nIMPORT STATIC com.webaction.analytics.regressions.Helpers.*;These imports provide you with the following regression functions, along with supporting helper methods. Use these aggregate functions in a CQ that selects from a window, which is used as the training set of the regression algorithm.Regression ClassDescriptionSIMPLE_LINEAR_REGRESSION(Object yArg, Object xArg)Performs simple linear regression using a single independent variable and a single dependent variable.CONSTRAINED_SIMPLE_LINEAR_REGRESSION(Object yArg, Object xArg)Performs simple linear regression, constrained so that the fitted line passes through the \"rightmost\" (newest) point in the window.MULTIPLE_LINEAR_REGRESSION(Object yArg, Object... xArgs)Performs multiple linear regression using multiple independent variables and a single dependent variable.CONSTRAINED_MULTIPLE_LINEAR_REGRESSION(int numFixedPoints, Object yArg, Object... 
xArgs)Performs multiple linear regression, constrained so that the fitted hyperplane passes through the desired number of \"rightmost\" (newest) points in the window.POLYNOMIAL_REGRESSION(int degree, Object yArg, Object xArg)Performs polynomial regression, creating a nonlinear model using a single independent variable and a single dependent variable.CONSTRAINED_POLYNOMIAL_REGRESSION(int numFixedPoints, int degree, Object yArg, Object xArg)Performs polynomial regression, constrained so that the fitted polynomial passes through the desired number of \"rightmost\" (newest) points in the window.The InventoryPredictor example illustrates the usage of these regression analytics functions through the use of a prediction algorithm, which sends alerts when the inventory is predicted to be low. The general approach to using regression in time-series predictions is as follows:Create a window.Specify the regression model.Specify the variable on which predictions are made and the independent variable (in the following example, the timestamp).Determine whether there are enough points to make a valid prediction.The syntax is:SELECT [ISTREAM] [CONSTRAINED_]\n {SIMPLE_LINEAR_REGRESSION | MULTIPLE_LINEAR_REGRESSION | POLYNOMIAL_REGRESSION}\n (properties)\n AS pred,\n CASE IS_PREDICTOR_READY(pred)\n WHEN TRUE THEN PREDICT[_MULTIPLE](properties)\n ELSE ZERO_PREDICTION_RESULT([properties])\n END AS result,\n output1,\n output2,\n [COMPUTE_BOUNDS(properties),]\n [{GREATER|LESS}_THAN_PROBABILITY()], ...In this first section of the TQL, we import the required com.webaction.analytics classes, create the application (InventoryPredictor), create the inventory stream (InventoryStream), and finally create a two-hour window over that inventory stream (InventoryChangesWindow):IMPORT STATIC com.webaction.analytics.regressions.LeastSquares.*;\nIMPORT STATIC com.webaction.analytics.regressions.LeastSquaresPredictor.*;\nIMPORT STATIC com.webaction.analytics.regressions.PredictionResult.*;\nIMPORT STATIC com.webaction.analytics.regressions.Helpers.*;\n\nCREATE APPLICATION InventoryPredictor;\n\nCREATE TYPE InventoryType(\n ts org.joda.time.DateTime,\n SKU java.lang.String,\n inventory java.lang.Double,\n location_id java.lang.Integer\n);\nCREATE STREAM InventoryStream OF InventoryType;\n \nCREATE SOURCE JSONSource USING FileReader (\n directory: 'Samples',\n WildCard: 'inventory.json',\n positionByEOF: false\n )\n PARSE USING JSONParser (\n eventType: 'InventoryType'\n )\nOUTPUT TO InventoryStream;\n\nCREATE WINDOW InventoryChangesWindow OVER InventoryStream KEEP WITHIN 2 HOUR PARTITION BY SKU;In this example, simple linear regression will be chosen to create a model in which the inventory is predicted according to the current system time. We will begin by setting up an AlertStream based on inventory predictions:CREATE STREAM AlertStream OF Global.AlertEvent;\n\nCREATE CQ AlertOnInventoryPredictions\nINSERT INTO AlertStream\n. . 
.The SELECT statement that follows sets up inventory alert messages based on boolean variables indicating whether the inventory is low (isInventoryLow) or ok (isInventoryOK) based on a 90% confidence interval used in the prediction model:SELECT \n \"Low Inventory Alert\",\n SKU,\n CASE WHEN isInventoryLow THEN \"warning\" WHEN isInventoryOK THEN \"info\" END,\n CASE WHEN isInventoryLow THEN \"raise\" WHEN isInventoryOK THEN \"cancel\" END,\n CASE WHEN isInventoryLow THEN \"Inventory for SKU \" + SKU + \n \" is expected to run low within 2 hours (90% confidence).\"\n WHEN isInventoryOK THEN \"Inventory status for SKU \" + SKU +\n \" looks OK for the next 2 hours (90% confidence).\"\n END ...\nThe next part of the SELECT statement sets up the simple linear regression model (pred) based on the inventory variable (inventory) and a timestamp variable (ts):FROM (SELECT ISTREAM\n SKU,\n DNOW() AS ts,\n SIMPLE_LINEAR_REGRESSION(inventory, ts) AS pred, ...Next we determine whether there are enough points to make a valid prediction by calling the IS_PREDICTOR_READY function. If there are, we compute the probability of the event that the inventory has fallen below (or exceeded) a given critical threshold by calling the LESS_THAN_PROBABILITY (or GREATER_THAN_PROBABILITY) function. If the computed probabilities are higher than 90%, we raise the appropriate flag (isInventoryLow or isInventoryOK).IS_PREDICTOR_READY(pred) AND LESS_THAN_PROBABILITY(PREDICT(pred,\n DADD(ts, DHOURS(2))), 5) > 0.9 AS isInventoryLow,\nIS_PREDICTOR_READY(pred) AND GREATER_THAN_PROBABILITY(PREDICT(pred, \n DADD(ts, DHOURS(2))), 5) > 0.9 AS isInventoryOK, ...Note that the PREDICT function feeds the data into the prediction model, which returns a PredictionResult object that is subsequently passed to the probability utility functions.Here is the entire CREATE CQ statement:CREATE CQ AlertOnInventoryPredictions\nINSERT INTO AlertStream\nSELECT \"Low Inventory Alert\",\n SKU,\n CASE WHEN isInventoryLow THEN \"warning\" WHEN isInventoryOK THEN \"info\" END,\n CASE WHEN isInventoryLow THEN \"raise\" WHEN isInventoryOK THEN \"cancel\" END,\n CASE\n WHEN isInventoryLow THEN \"Inventory for SKU \" + SKU + \n \" is expected to run low within 2 hours (90% confidence).\"\n WHEN isInventoryOK THEN \"Inventory status for SKU \" + SKU + \n \" looks OK for the next 2 hours (90% confidence).\"\n END \nFROM\n (SELECT ISTREAM\n SKU,\n DNOW() AS ts,\n SIMPLE_LINEAR_REGRESSION(inventory, ts) AS pred,\n IS_PREDICTOR_READY(pred) AND LESS_THAN_PROBABILITY(PREDICT(pred,\n DADD(ts, DHOURS(2))), 5) > 0.9 AS isInventoryLow,\n IS_PREDICTOR_READY(pred) AND GREATER_THAN_PROBABILITY(PREDICT(pred,\n DADD(ts, DHOURS(2))), 5) > 0.9 AS isInventoryOK,\n FROM InventoryChangesWindow\n GROUP BY SKU \n HAVING isInventoryLow OR isInventoryOK)\n AS subQuery;Having created the prediction model, we can send email alerts whenever there is at least 90% confidence that the inventory is low:CREATE SUBSCRIPTION InventoryEmailAlert USING EmailAdapter (\n smtp_auth: false,\n smtpurl: \"smtp.company.com\",\n subject: \"Low Inventory Alert\",\n emailList: \"sysadmin@company.com\",\n senderEmail:\"striim@company.com\" \n)\nINPUT FROM AlertStream;\nEND APPLICATION InventoryPredictor;In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2020-01-16\n", "metadata": {"source": "https://www.striim.com/docs/en/using-analytics-and-regression-functions.html", "title": "Using analytics and regression functions", "language": "en"}} {"page_content": "\n\nUsing Apache Flume\n\nStriim can receive data from Apache Flume using the WebActionSink (see Apache Flume integration) as a source. Its properties are defined in configuration files on the Flume server rather than in TQL.\n\nThe WebActionSink properties are:\n\nproperty: agent.sinks.webactionSink.type\nvalue: com.webaction.flume.WebActionSink\n\nproperty: agent.sinks.webactionSink.serverUri\nvalue: watp://<ip_address>:9080\nnotes: the IP address and port of the Striim server (adjust the port number if you are not using the default)\n\nproperty: agent.sinks.webactionSink.username\nvalue: the Striim login to be used by the WebActionSink\n\nproperty: agent.sinks.webactionSink.password\nvalue: the password for that login\n\nproperty: agent.sinks.webactionSink.stream\nvalue: flume:<stream name>\nnotes: specify the stream name to be used in TQL (see example below)\n\nproperty: agent.sinks.webactionSink.parser.handler\nvalue: DSVParser\nnotes: in this release, only DSVParser is supported\n\nYou must also specify the properties for the specified parser. See the example below.\n\nThe following example application assumes that Flume is running on the same system as Striim.\n\n1. Perform the first two steps described in Apache Flume integration.\n\n2. Save the following as a TQL file, then load, deploy, and start it:\n\nCREATE APPLICATION flumeTest;\nCREATE STREAM flumeStream of Global.WAEvent;\nCREATE TARGET flumeOut USING SysOut(name:flumeTest) INPUT FROM flumeStream;\nEND APPLICATION flumeTest;\n\nThis application does not need a CREATE SOURCE statement because the data is being collected and parsed by Flume. The stream name must match the one specified in the WebActionSink properties and the type must be Global.WAEvent.\n\n3. 
Save the following as waflume.conf in the flume/conf directory, replacing the two IP addresses with the test system's IP address and the username and password with the credentials you used to load the application:\n\n# Sources, channels and sinks are defined per agent, \n# in this case called 'agent'\n\nagent.sources = netcatSrc\nagent.channels = memoryChannel\nagent.sinks = webactionSink\n\n# For each one of the sources, the type is defined\nagent.sources.netcatSrc.type = netcat\nagent.sources.netcatSrc.bind = 192.168.1.2\nagent.sources.netcatSrc.port = 41414\n\n# The channel can be defined as follows.\nagent.sources.netcatSrc.channels = memoryChannel\n\n# Each sink's type must be defined\nagent.sinks.webactionSink.type = com.webaction.flume.WebActionSink\nagent.sinks.webactionSink.serverUri = watp://192.168.1.2:9080\nagent.sinks.webactionSink.username = flumeusr\nagent.sinks.webactionSink.password = passwd\nagent.sinks.webactionSink.stream = flume:flumeStream\nagent.sinks.webactionSink.parser.handler = DSVParser\nagent.sinks.webactionSink.parser.blocksize = 256\nagent.sinks.webactionSink.parser.columndelimiter = \",\"\nagent.sinks.webactionSink.parser.rowdelimiter = \"\\n\" \nagent.sinks.webactionSink.parser.charset = \"UTF-8\"\nagent.sinks.webactionSink.parser.blockAsCompleteRecord = \"True\"\n\n# Specify the channel the sink should use\nagent.sinks.webactionSink.channel = memoryChannel\n\n# Each channel's type is defined.\nagent.channels.memoryChannel.type = memory\n\n# Other config values specific to each type of channel (sink or source)\n# can be defined as well. In this case, it specifies\n# the capacity of the memory channel.\nagent.channels.memoryChannel.capacity = 100\n\n4. Start Flume, specifying the configuration file:\n\nbin/flume-ng agent --conf conf --conf-file conf/waflume.conf --name agent -Dflume.root.logger=INFO,console\n\n5. Save the following as flumetestdata.csv:\n\n100,first\n200,second\n300,third\n\n6. Open a terminal, change to the directory where you saved flumetestdata.csv, and enter the following command, replacing the IP address with the test system's:\n\ncat flumetestdata.csv | nc 192.168.1.2 41414\n\nThe following output should appear in striim-node.log:\n\nflumeTest: WAEvent{\n data: [\"ID\",\"Name\"]\n metadata: {\"RecordStatus\":\"VALID_RECORD\",\"FileName\":\"\",\"FileOffset\":0}\n before: null\n dataPresenceBitMap: \"AA==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: null\n};\nflumeTest: WAEvent{\n data: [\"100\",\"first\"]\n metadata: {\"RecordStatus\":\"VALID_RECORD\",\"FileName\":\"\",\"FileOffset\":0}\n before: null\n dataPresenceBitMap: \"AA==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: null\n};\n...\n\nSee Parsing the data field of WAEvent for more information about this data format.
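If you want to work with the Flume records as typed fields rather than raw WAEvent records, you could add a CQ to the flumeTest application that converts the data array. The following is a minimal sketch, not part of the original example: it assumes the two-column ID,Name layout shown above (note that the header row also arrives as a record) and uses only the data[] array access and TO_INT / TO_STRING conversion functions described in the TQL reference:

CREATE TYPE FlumeRecordType (
    id integer,
    name String
);
CREATE STREAM TypedFlumeStream OF FlumeRecordType;

CREATE CQ ParseFlumeStream
INSERT INTO TypedFlumeStream
SELECT TO_INT(data[0]) AS id,
       TO_STRING(data[1]) AS name
FROM flumeStream;

Because the WebActionSink uses DSVParser, every field arrives as a string in the WAEvent data array, so TO_INT handles the numeric conversion.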
Last modified: 2023-03-15\n", "metadata": {"source": "https://www.striim.com/docs/en/using-apache-flume.html", "title": "Using Apache Flume", "language": "en"}} {"page_content": "\n\nUsing Striim Open ProcessorsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideExtending Striim with custom Java codeUsing Striim Open ProcessorsPrevNextUsing Striim Open ProcessorsCreating an open processor componentA Striim open processor contains a custom Java application that reads data from a window or stream, processes it, optionally enriching it with data from a cache, and writes to an output stream.The SDK, which you may download from github.com/striim/doc-downloads includes the following:StriimOpenProcessor-SDK.jar, which contains classes to be included in the Java applicationStriimOpenProcessor-SDKdocs.zip, a Javadoc API reference for methods you may use in your Java applicationThe component must be built with Maven, since it requires the Maven Shade Plugin.An open processor can be used only in the Striim namespace from which the types are exported.The following simple example shows all the steps required to create an open processor and use it in a Striim application.Step 1: define the input and output streams in StriimThe following TQL defines the input and output streams for the example open processor you will add later. 
It includes a FileWriter source, a cache that will be specified in the open processor's ENRICH option, and a FileWriter target.CREATE NAMESPACE ns1;\nUSE ns1;\nCREATE APPLICATION OPExample;\n\nCREATE source CsvDataSource USING FileReader (\n directory:'Samples/PosApp/appData',\n wildcard:'PosDataPreview.csv',\n positionByEOF:false\n)\nPARSE USING DSVParser (\n header:Yes,\n trimquote:false\n)\nOUTPUT TO CsvStream;\n \nCREATE TYPE MerchantHourlyAve(\n merchantId String,\n hourValue integer,\n hourlyAve integer\n);\n\nCREATE CACHE HourlyAveLookup using FileReader (\n directory: 'Samples/PosApp/appData',\n wildcard: 'hourlyData.txt'\n)\nPARSE USING DSVParser (\n header: Yes,\n trimquote:false,\n trimwhitespace:true\n) \nQUERY (keytomap:'merchantId') \nOF MerchantHourlyAve;\n\nCREATE CQ CsvToPosData\nINSERT INTO PosDataStream partition by merchantId\nSELECT TO_STRING(data[1]) as merchantId,\n TO_DATEF(data[4],'yyyyMMddHHmmss') as dateTime,\n DHOURS(TO_DATEF(data[4],'yyyyMMddHHmmss')) as hourValue,\n TO_DOUBLE(data[7]) as amount,\n TO_INT(data[9]) as zip\nFROM CsvStream;\n \nCREATE CQ cq2\nINSERT INTO SendToOPStream\nSELECT makeList(dateTime) as dateTime,\n makeList(zip) as zip\nFROM PosDataStream;\n \nCREATE TYPE ReturnFromOPStream_Type ( time DateTime , val Integer );\nCREATE STREAM ReturnFromOPStream OF ReturnFromOPStream_Type;\n\nCREATE TARGET OPExampleTarget \nUSING FileWriter (filename: 'OPExampleOut') \nFORMAT USING JSONFormatter() \nINPUT FROM ReturnFromOPStream;\n \nEND APPLICATION OPExample;Step 2: export the input and output stream typesIf you create OPExample in the ns1 workspace, the following Striim console command will export the types from the application to\u00a0UploadedFiles/OpExampleTypes.jar:EXPORT TYPES OF ns1.OPExample TO \"UploadedFiles/OpExampleTypes.jar\";The EXPORT TYPES command requires read permission on the namespace. 
See Manage Striim - Files for instructions on downloading OpExampleTypes.jar.Step 3: set up MavenInstall\u00a0the SDK and exported types .jar files:mvn install:install-file -DgroupId=com.example -DartifactId=OpenProcessorSDK \\\n -Dversion=1.0.0-SNAPSHOT -Dpackaging=jar -Dfile=/opt/striim/StriimSDK/StriimOpenProcessor-SDK.jar\nmvn install:install-file -DgroupId=com.example -DartifactId=OPExample -Dversion=1.0.0-SNAPSHOT \\\n -Dpackaging=jar -Dfile=/home/myhome/OpExampleTypes.jarCreate a Maven project in which you will create your custom Java application:mvn archetype:generate -DgroupId=com.example.opexample -DartifactId=opexample \\\n -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=falseReplace the default pom.xml created by Maven with the following, adjusting as necessary for your environment:<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n <modelVersion>4.0.0</modelVersion>\n <groupId>com.example.opexample</groupId>\n <artifactId>opexample</artifactId>\n <packaging>jar</packaging>\n <version>1.0-SNAPSHOT</version>\n <name>opexample</name>\n <url>http://maven.apache.org</url>\n <dependencies>\n <dependency>\n <groupId>junit</groupId>\n <artifactId>junit</artifactId>\n <version>3.8.1</version>\n <scope>test</scope>\n </dependency>\n<!-- OpenProcessorSDK jar -->\n <dependency>\n <groupId>com.example</groupId>\n <artifactId>OpenProcessorSDK</artifactId>\n <version>1.0.0-SNAPSHOT</version>\n <scope>provided</scope>\n </dependency>\n<!-- exported types jar -->\n <dependency>\n <groupId>com.example</groupId>\n <artifactId>OPExample</artifactId>\n <version>1.0.0-SNAPSHOT</version>\n <scope>provided</scope>\n </dependency>\n </dependencies>\n <build>\n <plugins>\n <plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-release-plugin</artifactId>\n <version>2.2.2</version>\n </plugin>\n <plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-compiler-plugin</artifactId>\n <version>2.3.2</version>\n <configuration>\n <source>1.8</source>\n <target>1.8</target>\n </configuration>\n </plugin>\n <plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-shade-plugin</artifactId>\n <version>2.4.3</version>\n <configuration>\n <createDependencyReducedPom>false</createDependencyReducedPom>\n<!-- \nThe output SCM filename is defined here.\n-->\n <finalName>OPExample.scm</finalName>\n <transformers>\n<transformer implementation=\"org.apache.maven.plugins.shade.resource.ManifestResourceTransformer\">\n <manifestEntries>\n <Striim-Module-Name>OPExample</Striim-Module-Name>\n <Striim-Service-Interface>\n com.webaction.runtime.components.openprocessor.StriimOpenProcessor\n </Striim-Service-Interface>\n <Striim-Service-Implementation>\n com.example.opexample.App\n </Striim-Service-Implementation>\n </manifestEntries>\n </transformer>\n </transformers>\n <artifactSet>\n <excludes>\n <exclude>org.slf4j:*</exclude>\n <exlcude>log4j:*</exlcude>\n </excludes>\n </artifactSet>\n </configuration>\n <executions>\n <execution>\n <phase>package</phase>\n <goals>\n <goal>shade</goal>\n </goals>\n </execution>\n </executions>\n </plugin>\n <plugin>\n <groupId>com.coderplus.maven.plugins</groupId>\n <artifactId>copy-rename-maven-plugin</artifactId>\n <version>1.0</version>\n <executions>\n <execution>\n <id>copy-file</id>\n <phase>package</phase>\n <goals>\n <goal>copy</goal>\n 
</goals>\n<!--\nThe location and name for the .scm file to be imported into Striim is defined here.\nPreferred location is module/modules folder under the Maven project main folder.\n-->\n <configuration>\n <sourceFile>/home/myhome/opexample/target/OpExample.scm.jar</sourceFile>\n <destinationFile>/home/myhome/opexample/modules/OpExample.scm</destinationFile>\n </configuration>\n </execution>\n </executions>\n </plugin>\n </plugins>\n </build>\n</project>\nStep 4: write your Java application and build the .scmReplace the default App.java with the following:package com.example.opexample;\n \nimport wa.ns1.SendToOPStream_Type_1_0;\nimport wa.ns1.ReturnFromOPStream_Type_1_0;\n \nimport com.webaction.anno.PropertyTemplateProperty;\nimport com.webaction.runtime.components.openprocessor.StriimOpenProcessor;\nimport org.joda.time.DateTime;\n \nimport com.webaction.anno.AdapterType;\nimport com.webaction.anno.PropertyTemplate;\nimport com.webaction.runtime.containers.WAEvent;\n \nimport com.webaction.runtime.containers.IBatch;\nimport java.util.*;\n \n@PropertyTemplate(name = \"TupleConverter\", type = AdapterType.process,\nproperties = {\n@PropertyTemplateProperty(name=\"ahead\", type=Integer.class, required=true, defaultValue=\"0\"),\n@PropertyTemplateProperty(name=\"lastItemSeen\", type=Boolean.class, required=true, defaultValue=\"0\")\n},\n// The names of outputType and inputType are relative to Striim: output from a native Striim\n// code to your custom component, and input from your custom component to a native component.\noutputType = SendToOPStream_Type_1_0.class,\ninputType = ReturnFromOPStream_Type_1_0.class\n)\npublic class App extends StriimOpenProcessor\n{\n \npublic void run() {\n\t\tIBatch<WAEvent> event = getAdded();\n\t\tIterator<WAEvent> it = event.iterator();\n\t\twhile (it.hasNext()) {\n\t\t\tSendToOPStream_Type_1_0 type = (SendToOPStream_Type_1_0 ) it.next().data;\n\t\t\t// ... Additional operations\n\t\t}\n\n\t\tReturnFromOPStream_Type_1_0 ReturnFromOPStream_Type_1_0 = new ReturnFromOPStream_Type_1_0 ();\n\t\tReturnFromOPStream_Type_1_0.time = DateTime.now();\n\t\tRandom rand = new Random(System.currentTimeMillis());\n\n\t\tReturnFromOPStream_Type_1_0.val= rand.nextInt(50) + 1;\n\t\tsend(ReturnFromOPStream_Type_1_0 );\n\n\t}\n\npublic void close() throws Exception {\n // TODO Auto-generated method stub\n \n }\n \n public Map getAggVec() {\n // TODO Auto-generated method stub\n return null;\n }\n \n public void setAggVec(Map aggVec) {\n // TODO Auto-generated method stub\n \n }\n}\nChange to the\u00a0opexample directory created by Maven and enter\u00a0mvn package.Step 5: import the .scm into Striim\u02dbLoading an open process\u02dbor requires the Global.admin permission (see Permissions).Copy\u00a0opexample/modules/OpExample.scm to a directory accessible by the Striim server, then use the following console command to load it:LOAD OPEN PROCESSOR \"<path>/OpExample.scm\";Alternatively, you may load it in Flow Designer at Configuration > App settings > Load / unload open processor.In either case, when the application is restarted, Striim will reload the open processor from the same location.Step 6: add the open processor to your applicationReturn to the application you created in step 1, open the B\u02dbase Components section of the component palette, drag a Striim Open Processor into the workspace, set its options as follows, and click Save. Note that Ahead and Last Item Seen are defined by the Java class. 
The other properties will appear in all open processor components.\n\nIf you run the application, it will create output files in the striim directory.\n\nModifying an open processor\n\nTo modify your open processor, unload it in Flow Designer at Configuration > App settings > Load / unload open processor or using the command UNLOAD OPEN PROCESSOR \"<path>/<file name>.scm\";. Then make changes to the Java application, compile, and load the new .scm.\n\nSupporting recovery of open processors\n\nTo ensure that your open processors are recoverable:\nUse a send() method that includes event positions.\nDo not mutate the event data.\n\nUsing send() functions\n\nOpen processors publish their results to downstream components using send() functions. There are three versions and each has effects on the way the platform handles the output, especially for recovery.\n\nsend(List<WAEvent> added, List<WAEvent> removed)\nThis method publishes data to a downstream component as a list of added and removed elements. It is typically used to send a single event as the lone element in the added list with an empty removed list. It may also be used when processing aggregations, indicating the events added to and removed from the aggregation.\nThe WAEvents in the lists have positions which indicate event ordering. The position is null when recovery is not in use but must have a valid value when using recovery. The position of each event must be strictly greater than the event which comes before it -- either before it in the batch, or before it in a previous batch.\n\nsend(ITaskEvent batch)\nUse this method to publish data to a downstream component as a batch. This method is commonly used by customers familiar with the Striim batching interface (added, removed, and snapshot). The ITaskEvent parameter will be directly passed to the subscribing downstream components.\nThe ITaskEvent contains WAEvents having positions which indicate event ordering. The position is null when recovery is not in use but must have a valid value when using recovery. The position of each event must be strictly greater than the event which comes before it, either before it in the batch, or before it in a previous batch.\n\nsend(Object o)\nUse this method to publish raw Object data to a downstream component. This method is commonly used when the Open Processor emits simple values and recovery is not used. The Object parameter will be packaged as the payload in a stream event and delivered to downstream subscribing components.\nThe Object will be assigned the batch position of the input batch. When a batch is delivered to the OP, the batch position is calculated and saved. 
Subsequent uses of this method will assign the previously calculated batch position to output events.If recovery is not in use, the positions are null.If each input batch comprises a single event, then the batch position is equal to that event position, which is unique, therefore output events will have unique positions which are fully compatible with recovery.If each input batch comprises multiple events, but this method is called only once per batch, then the batch position will be applied once so it will be unique and fully compatible with recovery.WarningIf the input batch comprises multiple events, and if this method is called multiple times while processing that plural batch, then each of the output events will have the same position, the input batch position. The events which share a position may not be detected by recovery, which can lead to a failure of exactly once processing.Loading and unloading open processorsThe LOAD and UNLOAD commands require the Global.admin role.CautionIf you unload an open processor, revise it, and load it again, do not change the name of the .scm file.To load an open processor using the console, enter: LOAD OPEN PROCESSOR \"<path>/<file name>.scm\";To load an open processor in the Flow Designer, select App Settings, enter <path>/<file name>.scm in the Load/Unload Open Processor field, and click Load.To unload an open processor using the console, enter: UNLOAD \"<path>/<file name>.scm\";To unload an open processor in the Flow Designer, select App Settings, enter <path>/<file name>.scm in the Load/Unload Open Processor field, and click Unload.In this section: Using Striim Open ProcessorsCreating an open processor componentStep 1: define the input and output streams in StriimStep 2: export the input and output stream typesStep 3: set up MavenStep 4: write your Java application and build the .scmStep 5: import the .scm into StriimStep 6: add the open processor to your applicationModifying an open processorSupporting recovery of open processorsUsing send() functionsLoading and unloading open processorsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-14\n", "metadata": {"source": "https://www.striim.com/docs/en/using-striim-open-processors.html", "title": "Using Striim Open Processors", "language": "en"}} {"page_content": "\n\nTQL referenceSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideTQL referencePrevNextTQL referenceThis section covers TQL data types, operators, functions, and reserved keywords. For data definition statements, see DDL and component reference.Supported data typesSQL data types are not supported in TQL. 
The following Java data types are supported:java.lang.Byte, java.lang.Byte[]java.lang.Doublejava.lang.Floatjava.lang.Integerjava.lang.Longjava.lang.Shortjava.lang.StringFor convenience, you may specify these in TQL as byte, byte[], double, float, integer, long, short, and string, and they will be converted to the above types on import.org.joda.time.DateTime\u00a0is imported automatically and may be specified as DateTime.All these types support nulls.OperatorsTQL supports the following operators.Arithmetic operatorsSee https://www.w3resource.com/sql/arithmetic-operators/sql-arithmetic-operators.php for an introduction.+: add-: subtract*: multiply/: divide%: moduloComparison operatorsSee http://www.sqltutorial.org/sql-comparison-operators for an introduction.=: equal to!=: not equal to<>: not equal to>: greater than<: less than>=: greater than or equal to<=: less than or equal to!<: not less than!>: not greater thanLogical operatorsSee http://www.sqltutorial.org/sql-logical-operators for an introduction.ALLANDANYBETWEENEXISTSINIS NULLLIKE (see Using regular expressions (regex))NOTORUNIQUEFunctionsTQL supports the following functions.Functions for supported data typesStriim supports all native functions for these supported data types:java.lang.Bytejava.lang.Doublejava.lang.Floatjava.lang.Integerjava.lang.Longjava.lang.Shortjava.lang.Stringorg.joda.time.DateTimeAggregate functionsTo avoid unexpected results from a SELECT statement\u00a0containing an aggregate function :Always include a\u00a0GROUP BY clause.If\u00a0selecting from a window, all fields other than the one in the GROUP BY\u00a0clause\u00a0should use an aggregate function. For example, instead of\u00a0SELECT a, b, SUM(c) FROM WINDOW10s GROUP BY a, you should use SELECT a, LAST(b), SUM(c) FROM WINDOW10s GROUP BY a.If a field on a CQ with a GROUP BY clause lacks an aggregate function, the output value comes from the last event in the batch. For example, in SELECT a, b, SUM(c) FROM WINDOW10s GROUP BY a, the value of b will equal the value of b found in the final event in its batch.functionnotesAVGWorks only with Double and Float. To calculate an average for an Integer or Long field, cast it as Double or Float. For example:SELECT\n AVG(TO_FLOAT(MyWindow.PosData)) AS AvgPosData\nFROM MyWindow;COUNT [DISTINCT]FIRSTreturns Java.lang.Object: see Using FIRST and LASTLASTreturns Java.lang.Object: see Using FIRST and LASTLIST(Object,...)returns a collection of events: see\u00a0Using pattern matching for an exampleMAXMINSUMApplication metadata functionsUse these functions in a CQ to add metadata to its output. For example:CREATE CQ addAppInvokerName\nINSERT INTO addAppInvokerNameOutput\nSELECT getApplicationInvoker(),\n merchantId,\n dateTime\nFROM DSVTransformed_Stream;functionnotesgetApplicationInvoker()Returns the name of the Striim user that started the application as a String.getApplicationName()Returns the name of the application containing the function as a String.getApplicationUUID()Returns the UUID of the application containing the function as a String.getCQName()Returns the name of the CQ containing the function as a String.getOpenProcessorName()Returns the name of the open processor containing the function as a String.Date functionsSee http://joda-time.sourceforge.net/apidocs/org/joda/time/Period.html for an explanation of the Period object. 
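As a quick illustration of how the functions in the table below compose in a CQ, here is a hedged sketch; the type, stream, and field names (OrderType, OrderStream, RecentOrderStream, orderId, ts) are illustrative only, not from the sample applications:

CREATE TYPE OrderType (
    orderId String,
    ts DateTime
);
CREATE STREAM OrderStream OF OrderType;

CREATE CQ FlagRecentOrders
INSERT INTO RecentOrderStream
SELECT orderId,
       ts,
       DADD(ts, DHOURS(2)) AS dueBy,
       DBETWEEN(ts, DSUBTRACT(DNOW(), DMINS(30)), DNOW()) AS receivedInLast30Min
FROM OrderStream;

Here DHOURS(2) and DMINS(30) return Period objects that DADD and DSUBTRACT apply to DateTime values, and DBETWEEN checks whether ts falls within the last 30 minutes.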
A P in printed results represents a Period object.functiondescriptionnotesDADD(DateTime, Period)add a Period to a DateTime valuefor example, DADD(ts, DHOURS(2)) adds two hours to the value of tsDAFTER(DateTime, DateTime)true if the second date is after the firstDBEFORE(DateTime, DateTime)true if the second date is before the firstDBETWEEN(DateTime, DateTime, DateTime)true if the first date is after the second and before the thirdDBETWEEN( origTs, DSUBTRACT(ts, DSECS(1)), DADD(ts, DSECS(1)) ) == trueDDAYS(DateTime)return the day of the month of the DateTimeDDAYS(Integer)return Integer days as a Period for DADD or DSUBSTRACTDDIFF(DateTime, DateTime)return a Period in which the difference in milliseconds between the two dates is storedDDIFF(LocalDate, LocalDate)return the number of whole days between the two partial datetimes as an IntegerDHOURS(DateTime)return the hour of the day of the DateTimeDHOURS(Integer)return Integer hours as a Period for DADD or DSUBSTRACTDMILLIS(DateTime)return the milliseconds of the DateTimeDMILLIS(Integer)return Integer milliseconds as a Period for DADD or DSUBSTRACTDMINS(DateTime)return the minutes of the hour of the DateTimeDMINS(Integer)return Integer minutes as a Period for DADD or DSUBSTRACTDMONTHS(DateTime)return the month of the year of the DateTimeDMONTHS(Integer)return Integer months as a Period for DADD or DSUBSTRACTDNOW()return the current system time as DateTimeDSECS(DateTime)return the seconds of the DateTimeDSECS(Integer)return Integer seconds as a Period for DADD or DSUBSTRACTDSUBTRACT(DateTime, Period)subtract a Period from a datefor example, DSUBTRACT(ts, DHOURS(2)) subtracts two hours from the value of tsDYEARS(DateTime)return the year of the DateTimeDYEARS(Integer)return Integer years as a Period for DADD or DSUBSTRACTTO_DATE(Long)convert an epoch time value to a DateTimeSee MultiLogApp for an example.TO_DATE(Object)convert a Date, sql.Date. sql.Timestamp, Long, or String to a DateTimeFor String input, recommended only for patterns not supported by TO_DATEF.Depending on the format of the input value, output format may be an ISO-formatted date,\u00a0yyyy/MM/dd, yyyy/MM/dd with time, yyyy-MMM-dd, yyyy-MMM-dd with time, or\u00a0yyyy/MM/dd HH:mm:ss.SSS. Use TO_Date(Object, String) for other patterns.When using an aggregate function on a DateTime field, use TO_DATE to convert the returned object to a DateTime, for example,\u00a0select TO_DATE(last(dateTime)) as dateTime.TO_DATE(Object, String)convert a String to a DateTime using any org.joda.time.format.DateTimeFormat patternRecommended only for patterns not supported by TO_DATEF. 
See\u00a0MultiLogApp for an example.TO_DATEF(Object, String)convert a String to a DateTime using an\u00a0org.joda.time.format.DateTimeFormat pattern containing only y, M, d, h, m, s, and STO_DATEF is over ten times faster than the TO_DATE functions, so is preferred for supported formats.\u00a0See the\u00a0joda-time API reference for information on writing pattern strings; see PosApp for an example.TO_STRING(DateTime, String)convert a DateTime to a String with specific formatTO_ZONEDDATETIME (Long)convert an epoch time value to a java.time.ZonedDateTimeTO_ZONEDDATETIME (Object)convert a String to a java.time.ZonedDateTime using the yyyy-MM-dd HH:mm:ss.SSSSSSSSS z patternIf the String does not match the yyyy-MM-dd HH:mm:ss.SSSSSSSSS z pattern, use TO_ZONEDDATETIME(Object,String).TO_ZONEDDATETIME (Object, String)convert a String to a java.time.ZonedDateTime using any org.joda.time.format.DateTimeFormat patternSee the\u00a0joda-time API reference for information on writing pattern strings.Striim supports all date functions natively associated with Joda-Time. See http://joda-time.sourceforge.net/apidocs for more information.JSONNode functionsUse the following functions in CQs with an input and/or output stream of type JSONNodeEvent, or to create or manipulate any other JSONNode objects.When the JSONNode objects are supplied by the CQ's input stream, JsonNode node is the DATA element of JSONNodeEvent. If there is more than one JSONNodeEvent input stream, choose one by using an alias for the stream, for example, s.data.functiondescriptionnotesAVROTOJSON(Object datum, Boolean IgnoreNulls)convert an Avro node to a JSON nodeObject datum must be\u00a0an Avro GenericRecord present in an AvroEvent output by a source using an AvroParser.If Boolean IgnoreNulls is true, any Avro fields with null values will be omitted from the JSON, so, for example,\u00a0{a: 100, b: null, c: 'test'} will return{a:100, c:'test'}.clearUserData()\u00a0See Adding user-defined data to JSONNodeEvent streams.JSONArrayAdd(JsonNode node, Object value)add object value at the end of array nodeUse .get() to select the array. For example, JSONArrayAdd(data.get(\"PhoneNumbers\"),\"987\") will add 987 to the end of the PhoneNumbers array node.JSONArrayInsert(JsonNode node, int index, Object value)add object value as an element at position index in array nodeUse .get to select the array. For example, JSONArrayInsert(data.get(\"PhoneNumbers\"),0,\"987\") will insert 987 at the beginning of the PhoneNumbers array. Object value must be deserialized as per Jackson ObjectMapper.readTree.JSONFrom(Object value)create a JSONNode from object valueFor example, JSONFrom('{ \"name\":\"John\", \"age\":30, \"city\":\"New York\"}'). Object value must be deserialized as per Jackson ObjectMapper.readTree.JSONGetBoolean(JsonNode node, String field)get a Boolean value from specified field of JSONNode nodeIf the field is a Boolean, returns true or false. For other types, returns false.JSONGetDouble(JsonNode node, String field)get a double value from specified field of JSONNode nodeIf the field is numeric (that is, isNumber() returns true), returns a 64-bit floating point (double) value. For other types, returns 0.0. For integer values, conversion is done using default Java type coercion. With BigInteger values, this may result in overflows.JSONGetInteger(JsonNode node, String field)get an integer value from specified field of JSONNode nodeIf the field is numeric (that is, isNumber() returns true), returns an integer value. For other types, returns 0. 
For floating-point numbers, the value is truncated using default Java type coercion.JSONGetString(JsonNode node, String field)get a string value from specified field of JSONNode nodeNon-string values (that is, ones for which isTextual() returns false) are returned as null. Empty string values are returned as empty strings.JSONNew()create an empty JSONNode objectJSONRemove(JsonNode node, Collection< String >fieldNames)remove specified fields from of JSONNode nodeFor example, SELECT JSONRemove(data, \"ID\").JSONSet(JsonNode node, String field, Object value)set the value specified field in specified JSONNode to object valueOverwrites any existing value. Object value must be deserialized as per Jackson ObjectMapper.readTree.makeJSON(String jsonText)create a JSONNode\u00a0putUserdata()\u00a0See Adding user-defined data to JSONNodeEvent streams.removeUserData()\u00a0See Adding user-defined data to JSONNodeEvent streams.TO_JSON_NODE(Object obj)convert object to a JSON nodeObject must be in ObjectMapper.readTree format.USERDATA()\u00a0See Adding user-defined data to JSONNodeEvent streams.Masking functionsThe primary use for these functions is to anonymize personally identifiable information, for example, as required by\u00a0 the European Union's General Data Protection Regulation.The\u00a0String value argument is the name of the field containing the values to be masked.The String functionType argument is ANONYMIZE_COMPLETELY, ANONYMIZE_PARTIALLY, or a custom mask:ANONYMIZE_COMPLETELY will replace all characters in the field with x.ANONYMIZE_PARTIALLY will use a default mask specific to each function, as detailed below.A custom mask lets you define which characters to pass and which to mask. A custom mask may include any characters you wish. For example, with maskPhoneNumber, the mask ###-abc-defg would mask 123-456-7890 as 123-abc-defg. See\u00a0Changing and masking field values using MODIFY\u00a0and Modifying and masking values in the WAEvent data array using MODIFY for examples.functionnotesmaskCreditCardNumber(String value, String functionType)Input must be of the format ####-####-####-#### or ################. For the value 1234-5678-9012-3456, partially anonymized output would be xxxx-xxxx-xxxx-3456 and fully anonymized would be xxxx-xxxx-xxxx-xxxx. For the value 1234567890123456, partially anonymized output would be xxxxxxxxxxxx3456 and fully anonymized would be xxxxxxxxxxxxxxxx.\u00a0maskEmailAddress(String value, String functionType)Input must be a valid email address. For the value msmith@example.com, partially anonymized output would be mxxxxx@example.com and fully anonymized would be xxxxxxxxxxxxxxxxxx.maskGeneric(String value, String functionType)Input may be of any length. Partially anonymized output masks all but the last four characters, fully anonymized masks all characters.maskPhoneNumber(String value, String functionType)The input field format must be a ten-digit telephone number in the format ###-###-####, (###)-###-####, ##########, +1-###-###-####, +1(###)###-####, +1##########, or +1(###)#######.For the value 123-456-7890 or +1-123-456-7890, partially anonymized output would be xxx-xxx-7890 and fully anonymized would be xxx-xxx-xxxx.If you use a custom mask and the input field values are of varying lengths, use ELSE functions to handle each length. 
See\u00a0Changing and masking field values using MODIFY for an example.maskPhoneNumber(String value, String regex, Integer group)The\u00a0String regex parameter is a regular expression that matches the phone number pattern and splits it into regex groups. The Integer group\u00a0parameter specifies a group within that expression to be exposed. The other groups will be masked. See the example below and this\u00a0tutorial for more information.maskSSN(String value, String functionType)The input field format must be ###-##-#### (US Social Security number format).For the value 123-45-6789, partially anonymized output would be xxx-xx-6789 and fully anonymized would be xxx-xx-xxxx.The following example shows how to mask telephone numbers from various countries that have different lengths:CREATE SOURCE PhoneNumbers USING FileReader ( \n positionbyeof: false,\n directory: 'Samples',\n wildcard: 'EUPhoneNumbers.csv'\n ) \n PARSE USING DSVParser ( \n header: true,\n trimquote: false\n ) \nOUTPUT TO phoneNumberStream ;\n\nCREATE CQ FilterNameAndPhone \nINSERT INTO TypedStream\nSELECT TO_STRING(data[0]) as country,\n TO_STRING(data[1]) as phoneNumber\nFROM phoneNumberStream p;\n\nCREATE CQ MaskPhoneNumberBasedOnPattern \nINSERT INTO MaskedPhoneNumber\nSELECT country,\n maskPhoneNumber(phoneNumber, \"(\\\\\\\\d{0,4}\\\\\\\\s)(\\\\\\\\d{0,4}\\\\\\\\s)([0-9 ]+)\", 1, 2) \nFROM TypedStream;\n\nCREATE TARGET MaskedPhoneNumberOut USING FileWriter ( \n filename: 'MaskedData'\n) \nFORMAT USING DSVFormatter() \nINPUT FROM MaskedPhoneNumber;Within the regular expression, groups 1 and 2 (exposed) are \\\\\\\\d{0,4}\\\\\\\\s, which represents zero to four digits followed by a space, and group 3 (masked) is\u00a0([0-9 ]+), which represents zero to 9 digits.If\u00a0Striim/Samples/EUPhoneNumbers.csv contains the following:country,phoneNumber\nAT,43 5 1766 1001\nUK,44 844 493 0787\nUK,44 20 7730 1234\nDE,49 69 86 799 799\nDE,49 211 42168340\nIE,353 818 365000\nthe output file will contain:AT,435xxxxxxxx\nUK,44844xxxxxxx\nUK,4420xxxxxxxx\nDE,4969xxxxxxxx\nDE,49211xxxxxxxx\nIE,353818xxxxxxCreating a masking CQ in the web UIYou can use the Field Masker event transformer to create masking CQs.Drag Field Masker into the workspace and drop it.Name the CQ.Select the input stream.Click ADD COLUMN and select a column to include in the output.\u00a0To pass the field unmasked, do not select a masking function. To mask it, select the\u00a0appropriate masking function.Optionally, change the alias.Repeat steps 4-6 for each field to be included in the output.Select or specify the output, then click Save.With the masking CQ above, using FileWriter with JSONFormatter, if the input was:\"Stuart, Mary\",1234-5678-9012-3456the masked output would be: {\n \"name\":\"Stuart, Mary\",\n \"cc\":\"xxxxxxxxxxxxxxx3456\"\n }If you wish to edit the SELECT statement, click Convert to CQ. 
When you click Save, the component will be converted to a regular CQ, and if you edit it again the masking UI will no longer be available.Numeric functionsfunctiondescriptionNVL(Object, Object)return the first object if it is not null, otherwise return the second object, for example:NVL(COUNT(*),0)\nNVL(ROUND_DOUBLE(SUM(Duration/60),1),0)ROUND_DOUBLE(Object, Object)round a double to the specified number of placesROUND_FLOAT(Object, Object)round a float to the specified number of placesTO_DOUBLE(Object)convert a byte, float, integer, long, short, or string to a doubleTO_FLOAT(Object)convert a byte, double, integer, long, short, or string to a floatTO_INT(Object):convert a byte, double, float, long, short, or string to an integer. To convert a JSON object to an integer, use this syntax instead: obj.TO_INT()TO_LONG(Object)convert a byte, double, float, integer, short, or string to a longTO_SHORT(Object)convert a byte, double, float, integer, long, or string to a shortParquetEvent functionsParquetEvent events are generated by the Parquet parser. See the Parquet Parser for details. Use the following functions in CQs that use a ParquetEvent input or output stream, or to create or manipulate any other ParquetEvent objects.functiondescriptionnotesputUserData()Works exactly like the putUserData() function for JSONNodeEvent.See Adding user-defined data to JSONNodeEvent streams for details.USERDATA(ParquetEvent event, String key)Works exactly like the USERDATA() function for JSONNodeEvent.See Adding user-defined data to JSONNodeEvent streams for details.String functionsfunctiondescriptionnotesARLEN(String)returns the number of fields in the specified arraysee Handling variable-length events with CQs for an exampleIP_CITY(String)get the city for an IP addressuses MaxMind GeoIPIP_COUNTRY(String)get the country for an IP addressuses MaxMind GeoIPIP_LAT(String)get the latitude for an IP addressuses MaxMind GeoIPIP_LON(String)get the longitude for an IP addressuses MaxMind GeoIPmatch(String s, String regex)match(String s, String regex, Integer groupNumber)match the string using the specified regex expression. 
You can optionally specify the capture group number (the default is 0).supports only single return value )see Using regular expressions (regex))maxOccurs(String)value that had the maximum occurrences in the Stringsee MultiLogApp for examplesreplaceString(Event s, String findString, String newString)for input stream s, replaces all occurrences of findString (in all fields) with newStringFor example, SELECT replaceString(s,'MyCompany','PartnerCompany') replace all occurrences of MyCompany with PartnerCompany.Use only with events of user-defined types.replaceStringRegex(Event s, String regex, String newString)for input stream s, replaces all strings (in all fields) that match the specified regex expression with newStringFor example, SELECT replaceStringRegex(s,\u2019\\\\s\u2019,\u2019\u2019) would remove all whitespace, and SELECT replaceStringRegex(s,\u2019\\\\d\u2019,\u2019x\u2019) would replace all numerals with x.Use only with events of user-defined types.SLEFT(Object, Integer)returns only the characters to the left of position Integer from the objectSRIGHT(Object, Integer)returns only the characters to the right of position Integer from the objectfor example, SRIGHT(orderAmount,1) would remove a dollar, Euro, or other currency sign from the beginning of a stringTO_BOOLEAN(Object)convert a string to a BooleanTO_STRING(Object)convert any object to a stringWAEvent functionsUse the following functions in CQs with an input stream of type WAEvent.functiondescriptionnotesBEFORE(String) / BEFOREORDERED(String)returns the values in the WAEvent\u00a0before array of the specified stream as a java.util.HashMap, with column names as the keyssee Using the DATA(), DATAORDERED(), BEFORE(), and BEFOREORDERED() functionschangeOperationToInsertSee \"To Staging\" in Using database event transformers.clearUserdataSee Adding user-defined data to WAEvent streams.DATA[Integer]returns the value from field number Integer\u00a0in a WAEvent data arraysee Parsing the fields of WAEvent for CDC readersDATA(String) / DATAORDERED(String)returns the values in the WAEvent\u00a0data array of the specified stream as a java.util.HashMap, with column names as the keyssee Parsing the fields of WAEvent for CDC readers , Using the DATA() function, and Using the DATA(), DATAORDERED(), BEFORE(), and BEFOREORDERED() functionsIS_PRESENT()see Parsing the fields of WAEvent for CDC readersmaxOccurs(String)value that had the maximum occurrences in the Stringsee MultiLogApp for examplesMETA(<stream name>, key)extracts a value from a WAEvent METADATA mapsee Using the META() functionMODIFY()See Changing and masking field values using MODIFY and Modifying and masking values in the WAEvent data array using MODIFY.putUserdataSee Adding user-defined data to WAEvent streams.replaceData()See Modifying the WAEvent data array using replace functions.\u00a0replaceString()See Modifying the WAEvent data array using replace functions.replaceStringRegex()See Modifying the WAEvent data array using replace functions.send()See Using send() functionsUSERDATA(stream name,key)extracts a value from a WAEvent USERDATA mapSee Adding user-defined data to WAEvent streams.VALUE(stream name,key)returns the value in the specified WAEvent stream that matches the specified keySee NetFlow Parser, NVP (name-value pair) Parser, or SNMP Parser for examples of use.Miscellaneous functionsfunctionnotesCONSTRAINED_MULTIPLE_LINEAR_REGRESSION()See Using analytics and regression functions.CONSTRAINED_POLYNOMIAL_REGRESSION()See Using analytics and regression 
functions.CONSTRAINED_SIMPLE_LINEAR_REGRESSION()See Using analytics and regression functions.eventList()See Using EVENTLIST.getAppInvoker(String applicationName)Returns the name of the Striim user that started the application.ITERATOR()See Using ITERATOR.MULTIPLE_LINEAR_REGRESSION()See Using analytics and regression functions.NSK_CNVT_TXNID_TO_UNSIGNED()See Functions for HP NonStop transaction IDs.NSK_TXN_STRING()See Functions for HP NonStop transaction IDs.NSK_TXNS_ARE_SAME()See Functions for HP NonStop transaction IDs.POLYNOMIAL_REGRESSION()See Using analytics and regression functions.PREV()See Referring to Past Events.SIMPLE_LINEAR_REGRESSION()See Using analytics and regression functions.List of reserved keywordsThe following reserved keywords may not be used as identifiers in TQL applications or queries. The entries in lowercase are Java keywords that may be used as identifiers provided they are in uppercase or have initial capitals.abstract\nADD\nADDONS\nADMIN_UI\nALIAS\nALL\nALTER\nAND\nANY\nAPPLICATION\nAPPLICATIONCOUNT\nAPPLICATIONS\nAPPS_UI\nAS\nASC\nassert\nAUTORESUME\nBACKUP\nBETWEEN\nboolean\nbreak\nBY\nbyte\nCACHE\nCACHES\nCASCADE\ncase\nCASE\nCAST\ncatch\nCDUMP\nchar\nCHECKPOINT\nCID\nclass\nCLASS\nCLUSTER\nCONFIG\nCONNECT\nconst\nCONTEXT\ncontinue\nCPUUSAGE\nCQ\nCQS\nCREATE\nCROSS\nDASHBOARD\nDASHBOARD_UI\nDASHBOARDS\nDATA\nDATE\nDATETIME\nDAY\nDAYS\ndefault\nDEFAULT\nDEFINE\nDELETE\nDEPLOY\nDEPLOYMENTGROUP\nDEPLOYMENTGROUPS\nDESC\nDESCRIBE\nDETAILS\nDG\nDGS\nDISABLE\nDISCARD\nDISTINCT\ndo\ndouble\nDROP\nDUMP\nelse\nELSE\nENABLE\nENCRYPTION\nEND\nENRICH\nenum\nERROR\nERRORS\nEVENT\nEVENTSIZE\nEVENTTABLE\nEVENTTABLES\nEVERY\nEXCACHEBUFFERSIZE\nEXCEPTIONHANDLER\nEXCEPTIONSTORE\nEXCEPTIONSTORES\nEXCLUDE\nEXEC\nEXIT\nEXPORT\nextends\nEXTERNAL\nfalse\nFALSE\nfinal\nfinally\nfloat\nFLOW\nFLOWS\nfor\nFOR\nFORCE\nFORMAT\nFROM\nFULL\ngoto\nGRACE\nGRANT\nGROUP\nGROUPS\nHAVING\nHELPHISTORY\nHOUR\nHOURS\nIDENTIFIED\nIDLE\nif\nIMMEDIATE\nimplements\nimport\nIMPORT\nIN\nINCLUDE\nINITIALIZER\nINNER\nINPUT\nINPUTOUTPUT\nINSERT\ninstanceof\nINSTANCEOF\nint\ninterface\nINTERVAL\nINTO\nIS\nISTREAM\nITERATOR\nJAR\nJOIN\nJUMPING\nKEEP\nKEY\nLAST\nLATENCY\nLDAP\nLEE\nLEFT\nLIBRARIES\nLICENSE\nLIKE\nLIMIT\nLINEAGE\nLINK\nLIST\nLOAD\nlong\nMAP\nMATCH_PATTERN\nMAXIMUM\nMAXLIMIT\nMAXRETRIES\nMDUMP\nMEMORY\nMEMSIZE\nMERGE\nMETER\nMGET\nMINIMUM\nMINUTE\nMINUTESMODIFY\nMON\nMONITOR\nMONITOR_UI\nMONTH\nNAMEDQUERIES\nNAMEDQUERY\nNAMESPACE\nNAMESPACES\nnative\nnew\nNEW\nNODE\nNONE\nNOT\nnull\nNULL\nOBJECTS\nOF\nOFF\nOFFSET\nON\nONE\nOPEN\nOPENPROCESSORS\nOPENTRANSACTIONS\nOR\nORDER\nOUTER\nOUTPUT\nOVER\npackage\nPAGE\nPAGES\nPARALLELIZE\nPARSE\nPARTITION\nPASSPHRASE\nPATHS\nPC\nPERIOD\nPERMISSION\nPERSIST\nPLAN\nPOLICY\nPOLICYCONFIG\nPREVIEW\nprivate\nPROCESSOR\nPROPERTIES\nPROPERTYSET\nPROPERTYSETS\nPROPERTYTEMPLATE\nPROPERTYTEMPLATES\nPROPERTYVARIABLE\nPROPERTYVARIABLES\nprotected\npublic\nPUSHQUERY\nQUERYVISUALIZATION\nQUERYVISUALIZATIONS\nQUIESCE\nQUIT\nRANGE\nREAD\nREALTIME\nREBALANCE\nRECOMPILE\nRECOVERY\nREMOVE\nREPLACE\nREPORT\nRESET\nRESUME\nRETRYINTERVAL\nreturn\nREVOKE\nRIGHT\nROLE\nROLES\nROUTE\nROUTER\nROW\nROWS\nRSTREAM\nSAMPLE\nSCHEDULE\nSCHEMA\nSECOND\nSECONDS\nSELECT\nSELECTIVITY\nSERVER\nSERVERS\nSESSION\nSESSIONS\nSET\nshort\nSHOW\nSLEEP\nSLIDE\nSMARTALERT\nSMART\nSORTER\nSORTERS\nSOURCE\nSOURCEPREVIEW_UI\nSOURCES\nSPOOL\nSTART\nstatic\nSTATIC\nSTATS\nSTATUS\nSTOP\nSTREAM\nSTREAM_GENERATOR\nSTREAMS\nstrictfp\nSUBSCRIPTION\nSUBSCRIPTIONS\nsuper\nswitch\nsynchronized\nTARGET\nTARGETS\nTEST\
nTHEN\nthis\nthrow\nthrows\nTIMEOUT\nTIMESTAMP\nTO\nTRANSACTION\nTRANSACTIONID\ntransient\ntrue\nTRUE\ntry\nTTL\nTYPE\nTYPES\nUNDEPLOY\nUNKNOWN\nUNLOAD\nUPDATE\nUSE\nUSER\nUSERS\nUSING\nVALIDATION\nVALUETYPE\nVAULT\nVAULTKEY\nVAULTS\nVAULTSPEC\nVAULTVALUE\nVERSION\nVISUALIZATION\nvoid\nvolatile\nWACACHE\nWACACHES\nWACTIONSTORE\nWACTIONSTORES\nWAIT\nWHEN\nWHERE\nwhile\nWI\nWINDOW\nWINDOWS\nWITH\nWITHIN\nWRITE\nYEARIf you mistakenly use a reserved keyword as an identifier, you will receive a syntax error. For example:Syntax error at:\nCreate Type Order\n ^^^^^In this section: TQL referenceSupported data typesOperatorsArithmetic operatorsComparison operatorsLogical operatorsFunctionsFunctions for supported data typesAggregate functionsApplication metadata functionsDate functionsJSONNode functionsMasking functionsNumeric functionsParquetEvent functionsString functionsWAEvent functionsMiscellaneous functionsList of reserved keywordsSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/tql-reference.html", "title": "TQL reference", "language": "en"}} {"page_content": "\n\nDDL and component referenceSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referencePrevNextDDL and component referenceThis section covers the property editors in the Flow Designer and the corresponding syntax and usage of TQL data definition (DDL) statements.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-04-07\n", "metadata": {"source": "https://www.striim.com/docs/en/ddl-and-component-reference.html", "title": "DDL and component reference", "language": "en"}} {"page_content": "\n\nALTER and RECOMPILESkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceALTER and RECOMPILEPrevNextALTER and RECOMPILEUse these commands to modify applications that are loaded but not deployed or running (in other words, applications with status Stopped or Quiesced). Using ALTER instead of dropping and reloading retains persisted WActionStore and Kafka stream data. 
This could be useful, for example, when there is a DDL change to a source database table.Note that some changes to the application can be made safely but others may be incompatible with persisted data, causing errors. For example:should be compatible with persisted datamay be incompatible with persisted dataadd components or flowsremove components or flowsadd a field to a typeremove a field from a typechange a type from:byte to integer, short, or longinteger to short or longshort to longfloat to doublechange a type from:long to short, integer, or byteshort to integer or byteinteger to bytedouble to floatUsing ALTER when recovery is enabled (STOP vs. QUIESCE)When you stop, alter, and restart an application with recovery enabled, it will resume operation with no missing or duplicate events (\"exactly-once processing,\" also known as E1P), except as noted in Recovering applications and with the following possible excerptions:should not interfere with exactly-once processingmay result in duplicate or missing eventsadding components to an unbranched data flow (that is, to a series of components in which the output of each component is the input of only one other component)removing components from an unbranched data flowadding a branch to a data flowremoving a branch from a data flowsimple modifications to a CQmodifying sourceschanging a window's KEEP clausechanging a CQ's GROUP BY clausechanging the number of fields in a CQ's FROM clausechanging the size of a CQ's MATCH_PATTERN selectionchanging the value of a CQ's LIMIT clausechanging a KafkaWriter's mode from sync to asyncWhen you quiesce an application with recovery enabled, altering it in any way other than changing its recoverable sources will not interfere with exactly-once processing. However, there may be anomalous results (see QUIESCE for details).ExampleThe workflow for ALTER and RECOMPILE is:If the application is running, STOP or QUIESCE it (see\u00a0Console commands).Undeploy the application.Alter the application as described below.Recompile, deploy, and start the application.To begin altering an application, use:USE <application's namespace>;\nALTER APPLICATION <application name>;At this point, enter CREATE, DROP, or CREATE OR REPLACE statements to modify the application, then complete the alteration with:ALTER APPLICATION <application name> RECOMPILE;For example, to add the email subscription described in Sending alerts from applications to the PosApp sample application:USE Samples;\nALTER APPLICATION PosApp;\nCREATE SUBSCRIPTION PosAppEmailAlert\nUSING EmailAdapter (\n SMTPUSER:'sender@example.com',\n SMTPPASSWORD:'password', \n smtpurl:'smtp.gmail.com',\n starttls_enable:'true',\n subject:\"test subject\",\n emailList:\"recipient@example.com,recipient2.example.com\",\n senderEmail:\"alertsender@example.com\" \n)\nINPUT FROM AlertStream;\nALTER APPLICATION PosApp RECOMPILE;\nAt this point you may deploy and start the modified application. If recovery was enabled for the application when it was loaded, when it is restarted, it will pick up source data (subject to the usual limitations detailed in Recovering applications) back to the time it went offline.Keep in mind that a change made to one component may require changes to multiple downstream components and their types. 
For example, to add the event_url JSON source property to the following app you would need to modify both the MeetupJSONType and the ParseJSON CQ:CREATE TYPE MeetupJSONType (\n venue_id integer KEY,\n group_name string,\n event_name string,\n event_url string,\n time DateTime,\n venue_name string,\n group_city string,\n group_country string,\n lat double,\n lon double\n);\nCREATE STREAM ParsedJSONStream OF MeetupJSONType;\n\nCREATE CQ ParseJSON\nINSERT INTO ParsedJSONStream\nSELECT\n CASE\n WHEN data.has(\"venue\") and data.get(\"venue\").has(\"venue_id\")\n THEN data.get(\"venue\").get(\"venue_id\").intValue()\n ELSE 0\n END,\n CASE\n WHEN data.has(\"group\") and data.get(\"group\").has(\"group_name\")\n THEN data.get(\"group\").get(\"group_name\").textValue()\n ELSE \"NA\"\n END,\n CASE\n WHEN data.has(\"event\") and data.get(\"event\").has(\"event_name\")\n THEN data.get(\"event\").get(\"event_name\").textValue()\n ELSE \"NA\"\n END,\n CASE\n WHEN data.has(\"event\") and data.get(\"event\").has(\"event_url\")\n THEN data.get(\"event\").get(\"event_url\").textValue()\n ELSE \"NA\"\n END, ...\nIf the application has a dashboard, you might also need to edit its properties to add event_url there:In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-13\n", "metadata": {"source": "https://www.striim.com/docs/en/alter-and-recompile.html", "title": "ALTER and RECOMPILE", "language": "en"}} {"page_content": "\n\nALTER PROPERTYSETSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceALTER PROPERTYSETPrevNextALTER PROPERTYSETTo create a property set, see \"Email Adapter Properties\" in Sending alerts from applications.NoteBefore altering an SMTP property set, you must stop all applications that use it.To change a property in an existing property set, use:ALTER PROPERTYSET <namespace>.<property set name> UPDATE(<property name>:<new value>);For example, to change the SMTP server used to send email alerts from applications:ALTER PROPERTYSET admin.smtpprop UPDATE(smtpurl:smtp2.example.com);\u00a0To add an additional property:ALTER PROPERTYSET <namespace>.<property set name> ADD(<property name>:<value>);To remove a property (you may specify multiple property names separated by commas):ALTER PROPERTYSET <namespace>.<property set name> REMOVE(<property name>);In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-01\n", "metadata": {"source": "https://www.striim.com/docs/en/alter-propertyset.html", "title": "ALTER PROPERTYSET", "language": "en"}} {"page_content": "\n\nCREATE APPLICATION ... 
CREATE APPLICATION ... END APPLICATION

CREATE APPLICATION <application name>
[ WITH ENCRYPTION ]
[ RECOVERY <##> SECOND INTERVAL ]
[ EXCEPTIONHANDLER () ]
[ USE EXCEPTIONSTORE [ TTL <interval> ] ]
[ AUTORESUME [ MAXRETRIES <#> ] [ RETRYINTERVAL <##> ] ];

END APPLICATION <application name>;

CREATE APPLICATION <application name>; creates an application in the current namespace. All subsequent CREATE statements until the END APPLICATION statement create components in that application.

The following illustrates typical usage in an application:

CREATE APPLICATION simple;

CREATE SOURCE SimpleSource USING FileReader (
  directory:'Samples',
  wildcard:'simple.csv',
  positionByEOF:false
)
PARSE USING DSVParser (
  header:Yes,
  trimquote:false
) OUTPUT TO RawDataStream;

CREATE TARGET SimpleOutput
USING SysOut(name:simple)
INPUT FROM RawDataStream;

END APPLICATION simple;

For a more complete example, see the PosApp sample application.

See Recovering applications for discussion of the RECOVERY option.

When the WITH ENCRYPTION option is specified, Striim will encrypt all streams that move data between Striim servers, or from a Forwarding Agent or HP NonStop source to a Striim server, to make them less vulnerable to network sniffers. Common use cases for this option are when a source resides outside the Striim cluster or outside your private network. This option may also be specified at the flow level, which may be useful to avoid the performance impact of encryption on streams not carrying sensitive data. If you are using Oracle JDK 8 or OpenJDK 8 version 1.8.0_161 or later, encryption will be AES-256. With earlier versions, encryption will be AES-128. In this release, encryption between Striim and HP NonStop sources is always AES-128.

See Handling exceptions for discussion of the EXCEPTIONHANDLER option.

See CREATE EXCEPTIONSTORE for discussion of the USE EXCEPTIONSTORE option.

See Automatically restarting an application for discussion of the AUTORESUME option.
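The optional clauses can be combined in a single CREATE APPLICATION statement. The following is a minimal sketch based on the syntax template and the simple example above; the application name, recovery interval, and retry values are illustrative, not recommendations:

CREATE APPLICATION SimpleWithOptions
RECOVERY 10 SECOND INTERVAL
USE EXCEPTIONSTORE
AUTORESUME MAXRETRIES 3 RETRYINTERVAL 60;

CREATE SOURCE SimpleSource USING FileReader (
  directory:'Samples',
  wildcard:'simple.csv',
  positionByEOF:false
)
PARSE USING DSVParser ( header:Yes )
OUTPUT TO RawDataStream;

CREATE TARGET SimpleOutput USING SysOut(name:simple)
INPUT FROM RawDataStream;

END APPLICATION SimpleWithOptions;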
END APPLICATION", "language": "en"}} {"page_content": "\n\nCREATE CACHESkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceCREATE CACHEPrevNextCREATE CACHENoteA cache is loaded into memory when it is deployed, so deployment of an application or flow with a large cache may take some time. If your cache is too large to fit in memory, see CREATE EXTERNAL CACHE.CREATE CACHE <name>\nUSING <reader name> { <properties> } \n[ PARSE USING <parser name> ( <properties> ) ]\nQUERY ( keytomap:'<key field for type name>'\n [, refreshinterval:'<microseconds>' ]\n [, refreshstarttime:'<hh:mm:ss>' ]\n [, replicas:{<integer>|all} ]\n [, skipinvalid: 'true' )\nOF <type name>;The required and optional properties vary according to the reader selected (typically FileReader or DatabaseReader). See the Sources topic for the reader (and parser, if any) you are using. If a required property is not specified, the cache will use its default value. Required properties without default values must be specified.The keytomap value (in the example below, zip) is the name of field that will be used to index the cache and, in multi-server environments, to distribute it. For best performance, make the keytomap field the one used to join the cache data with stream data. Joins on other fields will be much slower.The refreshinterval\u00a0and refreshstarttime\u00a0values specify when the cache is updated. See the discussion of these properties after the sample code below.When skipinvalid has its default value of false, if the data in a cache does not match the defined format (for example, if it has fewer fields that are in the type, or the column delimiter specified in the PARSE USING clause is a comma but the data file is tab-delimited),\u00a0deployment will fail with an error similar to:Deploy failed! Error invoking method CreateDeployFlowStatement: \njava.lang.RuntimeException: com.webaction.exception.Warning: \njava.util.concurrent.ExecutionException: \ncom.webaction.errorhandling.StriimRuntimeException: \nError in: Cache , error is: STRM-CACHE-1011 : \nThe size of this record is invalid for \n{\"class\":\"com.webaction.runtime.QueryValidator\",\"method\":\"CreateDeployFlowStatement\",\n\"params\":[\"01e6d1e9-3e67-bd31-adda-685b3587069e\",\"APPLICATION\",\n{\"strategy\":\"any\",\"flow\":\"dev9003\",\"group\":\"default\"},[]],\"callbackIndex\":5}To skip invalid records, set\u00a0skipinvalid to\u00a0true.The OF type (in the example below, ZipCacheType) must correctly describe the data source.The following illustrates typical usage in an application. 
In this example, the key field for ZipCache is zip, which is used in the join with FilteredDataStream:

CREATE TYPE ZipCacheType(
  zip String KEY,
  city String,
  state String,
  latVal double,
  longVal double
);

CREATE CACHE ZipCache
USING FileReader (
  directory: 'Samples',
  wildcard: 'zipdata.txt')
PARSE USING DSVParser (
  header: Yes,
  columndelimiter: '\t',
  trimquote:false
) QUERY (keytomap:'zip') OF ZipCacheType;

CREATE TYPE JoinedDataType(
  merchantId String KEY,
  zip String,
  city String,
  state String,
  latVal double,
  longVal double
);
CREATE STREAM JoinedDataStream OF JoinedDataType;

CREATE CQ JoinDataCQ
INSERT INTO JoinedDataStream
SELECT f.merchantId,
  f.zip,
  z.city,
  z.state,
  z.latVal,
  z.longVal
FROM FilteredDataStream f, ZipCache z
WHERE f.zip = z.zip;

Using the sample code above, the cache data will be updated only when the cache is started or restarted. To refresh the cache at a set interval, add the refreshinterval option:

... QUERY (keytomap:'zip', refreshinterval:'360000000') OF ZipCacheType;

With the above setting, the cache will be refreshed hourly. To refresh the cache at a specific time, add the refreshstarttime option:

... QUERY (keytomap:'zip', refreshstarttime:'13:00:00') OF ZipCacheType;

With the above setting, the cache will be refreshed daily at 1:00 pm. You may combine the two options:

... QUERY (keytomap:'zip', refreshinterval:'360000000', refreshstarttime:'13:00:00') OF ZipCacheType;

With the above setting, the cache will be refreshed daily at 1:00 pm, and then hourly after that. This ensures that the cache is refreshed at a specific time rather than relative to when it was started.

To see when a cache was last updated, use the console command MON <namespace>.<cache name>.

See Database Reader for examples of caches populated by querying databases.
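As a rough sketch of that variant, the FileReader in the ZipCache example above could be replaced with DatabaseReader so the cache is loaded by querying a table. The connection URL, credentials, and table name below are placeholders, not values from the documentation:

CREATE CACHE ZipDbCache
USING DatabaseReader (
  Username:'striim',
  Password:'******',
  ConnectionURL:'jdbc:mysql://10.1.10.149/geo',
  Tables:'geo.ZIPCODES'
)
QUERY (keytomap:'zip') OF ZipCacheType;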
CREATE CQ (query)

CREATE CQ <name> 
INSERT INTO {
  <output stream name> | 
  <WActionStore name> [ ( <field name>, ... ) ] 
}
SELECT [DISTINCT] { <expression or field name> [ AS <output stream field name> ], ... } 
[ ISTREAM ]
FROM {
  <input stream name> | 
  <cache name> |
  <window name> | 
  <WActionStore name>, ... |
  ITERATOR ( <nested collection name>.<member name> ) 
}
[ { INNER | CROSS | LEFT | LEFT OUTER | RIGHT | RIGHT OUTER | FULL | FULL OUTER } JOIN ]
[ [ JUMPING <integer> { SECOND | MINUTE | HOUR | DAY } ] ]
[ ON { <expression> } ]
[ WHERE { <expression> } ]
[ GROUP BY { <field name> } ]
[ HAVING { <expression> } ]
[ ORDER BY { <expression> } [ ASC | DESC ] ]
[ LIMIT { <expression> } ]
[ SAMPLE BY <field names>,... [ SELECTIVITY <#.#> | MAXLIMIT <number> ] ]
[ LINK SOURCE EVENT ] 
[ EXCACHEBUFFERSIZE <number of events> ];

See Operators and Functions for more information about writing expressions.

INSERT INTO: When a CQ's INSERT INTO clause specifies a stream that does not exist, the stream and its type will be created automatically based on the SELECT statement. For an example, see Parsing the data field of WAEvent. This clause may include any of the options described in CREATE STREAM.

DISTINCT: When a CQ includes the DISTINCT option, at least one of the components in the FROM clause must be a cache or window.

SELECT timeStamp: When selecting from the output stream of a source, you may use SELECT timeStamp to get the system time in milliseconds (as a long) when the source processed the event. The timeStamp field exists only in the source's output stream and is dropped by any window, CQ, or target using that stream as a source.

ISTREAM: By default, a CQ will update calculated values when events are added to or removed from its input window. If you specify the ISTREAM option, the CQ will update calculated values only when new events are added to the window.

SELECT ... FROM: When a CQ's FROM clause includes multiple components, at least one must be a cache or window. See Joins.

SELECT ... FROM <WActionStore name>: supported only when the WActionStore is persisted. By default, this will run once, when the application is deployed. Add the [JUMPING ...] clause to re-run the query periodically. (Note that the square brackets are required.) For example, [JUMPING 5 MINUTE] will run the query every five minutes, each time (including the first) returning the most recent five minutes of data.

ITERATOR: see Using ITERATOR.

INNER JOIN: When a CQ includes the INNER JOIN option, two and only two components must be specified in the FROM clause.

GROUP BY: See Aggregate functions.

SAMPLE BY: When a CQ's FROM clause includes only a single WActionStore or jumping window, you may use the SAMPLE BY clause to reduce the number of events in the CQ's output, for example to avoid overloading a dashboard (see Defining dashboard queries). For WActionStores, the CQ must include an ORDER BY clause to order the events by time.

<field name>,... specifies one or more fields from the WActionStore or window. The data will be sampled so as to preserve roughly the same distribution of those fields' values as in the total data set. The field(s) must be of type double, float, integer, long, or short.

SELECTIVITY sets the sample size as a decimal fraction between 0 and 1. For example, SELECTIVITY 0.05 would select approximately 5% of the events from the source.

MAXLIMIT sets the sample size as a maximum number of events. For example, MAXLIMIT 100 would select 100 events every time the CQ is run.

You may not specify both SELECTIVITY and MAXLIMIT in the same SELECT statement. If you specify neither, the default is SELECTIVITY 0.01.

When selecting from a very large WActionStore, you may want to reduce the amount of data returned before sampling. For example, SELECT DateTime, Temp FROM TemperatureWS ORDER BY DateTime DESC LIMIT 100000 SAMPLE BY Temp MAXLIMIT 500 would return a sample of 500 of the 100,000 most recent events.

The smaller the SELECTIVITY or MAXLIMIT value, the less representative of the full data set the sample will be. It may be helpful to experiment with various values.

LINK SOURCE EVENT: When a CQ's INSERT INTO clause specifies a WActionStore, the optional LINK SOURCE EVENT clause makes the details of the events in the FROM clause available in the WActionStore.

EXCACHEBUFFERSIZE: See CREATE EXTERNAL CACHE.

NESTING: Queries may be nested by enclosing the subquery in parentheses: SELECT ... FROM (SELECT ... FROM ...) ....

See Continuous query (CQ) and Intermediate TQL programming: common patterns for some examples of common query types.
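As a minimal sketch of an aggregating CQ with GROUP BY (the window, output stream, and field names here are illustrative, not from the documentation), a query might count and total transactions per merchant over a window:

CREATE CQ MerchantTotalsCQ
INSERT INTO MerchantTotalsStream
SELECT merchantId,
  COUNT(merchantId) AS txCount,
  SUM(amount) AS totalAmount
FROM PosDataWindow
GROUP BY merchantId;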
CREATE DASHBOARD

CREATE DASHBOARD USING "<path>/<filename>";

Imports a dashboard from the specified JSON file. The dashboard will be created in the current namespace. If not specified from root, the path is relative to the Striim directory. Example from the PosApp sample application:

CREATE DASHBOARD USING "Samples/PosApp/PosAppDashboard.json";
CREATE EVENTTABLE

CREATE EVENTTABLE <name>
USING STREAM ( NAME: '<stream name>' ) 
[ DELETE USING STREAM ( NAME: '<stream name>' ) ]
QUERY ( 
  keytomap:'<key field for type name>'
  [, persistPolicy:'True' ]
)
OF <type name>;

An event table is similar to a cache, except it is populated by an input stream instead of by an external file or database. CQs can both INSERT INTO and SELECT FROM an event table.

Event tables retain only the most recent event for each value of the key field defined by keytomap. In other words, the key field is similar to a database table's primary key. The first event received for each key value adds an event to the table, and subsequent events for that key value update it.

Optionally, an event table may have a second input stream, defined by DELETE USING STREAM. Events inserted into this stream will delete the event with the corresponding key field value. The other values in the event are ignored, though they must be valid for the delete stream's type.

If persistPolicy is True, events will be persisted to Elasticsearch (while still retained in memory) and retained (including through terminations and restarts) until the application is dropped.
If persistPolicy is False, event table data will be lost when the application is undeployed or terminated.

Inserting events into an event table

Use a CQ to insert events into an event table. To demonstrate this, save the following as striim/Samples/EventTable1.csv:

id,color,value
1,green,10
2,blue,20

Then create EventTable1 in namespace ns1 and run it:

CREATE NAMESPACE ns1;
USE NS1;
CREATE APPLICATION EventTable1;

CREATE SOURCE EventTableSource USING FileReader ( 
  Wildcard: 'eventtable1.csv',
  Directory: 'Samples',
  PositionByEof: false
) 
PARSE USING DSVParser ( 
  header: true
) 
OUTPUT TO EventTableSource_Stream;

CREATE CQ EventTableSource_Stream_CQ 
INSERT INTO EventTableSource_TransformedStream
SELECT TO_INT(data[0]) as id,
  TO_STRING(data[1]) as color,
  TO_INT(data[2]) as value
FROM EventTableSource_Stream;

CREATE EVENTTABLE EventTableDemo USING STREAM ( 
  name: 'EventTableSource_TransformedStream'
) 
DELETE USING STREAM ( 
  name: 'ETTestDeleteStream'
) 
QUERY ( 
  keytomap: 'id'
) 
OF EventTableSource_TransformedStream_Type;

END APPLICATION EventTable1;
DEPLOY APPLICATION EventTable1;
START APPLICATION EventTable1;

Once the application has started, query the event table, and you will see that the contents match EventTable1.csv:

W (ns1) > select * from EventTableDemo;
Processing - select * from EventTableDemo
[
  id = 1
  color = green
  value = 10
]
[
  id = 2
  color = blue
  value = 20
]

-> SUCCESS

Updating an event table

When an event table receives a new event with the same key field value as an existing event in the table, it updates its values. To demonstrate this, save the following as striim/Samples/EventTable2.csv:

id,color,value
1,purple,25

Then create EventTable2, run it, and query the event table again. This application does not need a CQ since it uses the one from EventTable1.

CREATE APPLICATION EventTable2;

CREATE SOURCE EventTableSource2 USING FileReader ( 
  Wildcard: 'eventtable2.csv',
  Directory: 'Samples',
  PositionByEof: false
) 
PARSE USING DSVParser ( 
  header: true
) 
OUTPUT TO EventTableSource_Stream;

END APPLICATION EventTable2;
DEPLOY APPLICATION EventTable2;
START APPLICATION EventTable2;

select * from EventTableDemo;

The event with id 1 is updated to match the data in EventTable2.csv:

W (ns1) > select * from EventTableDemo;
Processing - select * from EventTableDemo
[
  id = 1
  color = purple
  value = 25
]
[
  id = 2
  color = blue
  value = 20
]

Deleting events from an event table using the delete stream

To delete an event, send an event with the same key field value to the event table's delete stream. To demonstrate this, you can reuse EventTable2.csv.
Create EventTable3, run it, and query the event table again:

STOP APPLICATION EventTable2;
CREATE APPLICATION EventTable3;

CREATE SOURCE EventTableDeleteSource USING FileReader ( 
  Wildcard: 'eventtable2.csv',
  Directory: 'Samples',
  PositionByEof: false
) 
PARSE USING DSVParser ( 
  header: true
) 
OUTPUT TO EventTableDelete_Stream;

CREATE CQ EventTableDelete_Stream_CQ 
INSERT INTO ETTestDeleteStream
SELECT TO_INT(data[0]) as id,
  TO_STRING(data[1]) as color,
  TO_INT(data[2]) as value
FROM EventTableDelete_Stream;

END APPLICATION EventTable3;
DEPLOY APPLICATION EventTable3;
START APPLICATION EventTable3;

The event with id 1 was deleted from the event table:

W (ns1) > select * from EventTableDemo;
Processing - select * from EventTableDemo
[
  id = 2
  color = blue
  value = 20
]

Deleting events from an event table using convertToDeleteEvent()

When deleting events using the delete stream, there could be a race condition between the input stream and the delete stream when they both receive an event with the same key at almost the same time. To avoid that, use convertToDeleteEvent(), as follows:

CREATE CQ <name>
INSERT INTO <event table input stream>
SELECT et.convertToDeleteEvent()
FROM <event table> et
WHERE <criteria selecting events to delete>;

The alias (et) is required by convertToDeleteEvent(). You may use any alias you wish.

For example, to delete the remaining event from the EventTableDemo event table:

STOP APPLICATION EventTable3;
CREATE APPLICATION EventTable4;

CREATE CQ EventTableDelete_Stream_CQ 
INSERT INTO EventTableSource_Stream
SELECT et.convertToDeleteEvent()
FROM EventTableDemo et
WHERE id=2;

END APPLICATION EventTable4;
DEPLOY APPLICATION EventTable4;
START APPLICATION EventTable4;

CREATE EXCEPTIONSTORE

CREATE EXCEPTIONSTORE FOR APPLICATION <name> [ TTL: '<interval>' ];

Alternatively, to create an exception store at the same time you create an application:

CREATE APPLICATION <name> USE EXCEPTIONSTORE [ TTL: '<interval>' ];

You may also create and browse an exception store in the Flow Designer. When you create an application using templates, this is enabled automatically.
Turn it off if you do not want an exception store for the application.

An exception store collects exceptions for a single application along with events related to the exception and persists them to Elasticsearch. The exception store's name is the application name with _ExceptionStore appended.

By default, the time to live (TTL) is 7d, which means events are discarded after seven days. Optionally, you may specify a different time to live as m (milliseconds), s (seconds), h (hours), d (days), or w (weeks). To change the time to live, use ALTER EXCEPTIONSTORE <name> TTL: '<interval>'; (recompile is not necessary).

Dropping the application does not drop the exception store. This allows you to drop the application, modify the TQL, and reload it without losing the existing data in its exception store.

The type for exception store events is Global.ExceptionEvent. Its fields are:

- exceptionType (java.lang.String): one of CRAAdapterException, ArithmeticException, ClassCastException, ConnectionException, InvalidDataException, NullPointerException, NumberFormatException, SystemException, UnexpectedDDLException, or UnknownException
- action (java.lang.String): IGNORE or STOP (see Handling exceptions)
- appName (java.lang.String): the name of the associated application
- entityType (java.lang.String): the type of component that threw the exception (source, CQ, target, etc.)
- entityName (java.lang.String): the component's name
- className (java.lang.String): the Java class name related to the exception
- message (java.lang.String): the error message from the target DBMS
- exceptionTime (org.joda.time.DateTime): the time of the exception
- epochNumber (java.lang.Long): the epoch time of the exception
- relatedActivity (java.lang.String): the Striim activity that caused the exception
- relatedObjects (java.lang.String): the Striim event(s) affected by the exception

For a simple example, say you have the following Oracle table in SOURCEDB and TARGETDB schemas:

CREATE TABLE MYTABLE(
ID int PRIMARY KEY,
NAME varchar2(100),
CITY varchar2(100));

Replicate it from one Oracle instance to another using the following application:

CREATE APPLICATION ExceptionstoreDemo USE EXCEPTIONSTORE;
CREATE SOURCE OracleCDC USING OracleReader (
  Username:'striim',
  Password:'******',
  ConnectionURL:'10.211.55.3:1521:orcl1',
  Tables:'SOURCEDB.MYTABLE'
)
OUTPUT TO OracleCDCStream;
CREATE TARGET WriteToOracle USING DatabaseWriter (
  ConnectionURL:'jdbc:oracle:thin:@10.211.55.3:1521:orcl1',
  Username:'striim',
  Password:'******',
  Tables:'SOURCEDB.MYTABLE,TARGETDB.MYTABLE',
  IgnorableExceptionCode: 'NO_OP_UPDATE'
)
INPUT FROM OracleCDCStream;
END APPLICATION ExceptionstoreDemo;

Insert the identical row twice:

INSERT INTO MYTABLE VALUES (1,'name1','city1');
INSERT INTO MYTABLE VALUES (1,'name1','city1');

Since the primary key already exists, the second insert will throw an exception. Since NO_OP_UPDATE is specified as an ignorable exception (see the discussion of the Ignorable Exception Code property in Database Writer), the exception will be written to the application's exception store.
You can query the exception store using the same syntax you would use to query a WActionStore:

W (ns1) > select * from ExceptionstoreDemo_exceptionstore;
Processing - select * from ExceptionstoreDemo_exceptionstore
[
  exceptionType = AdapterException
  action = STOP
  appName = ns1.NSDemo
  appid = 01ea1e42-8e77-7651-b9fa-52b54b45818e
  entityType = TARGET
  entityName = t
  className = java.sql.SQLIntegrityConstraintViolationException
  message = ORA-00001: unique constraint (MYUSERID.SYS_C007357) violated
  exceptionTime = 2019-12-14T12:57:32.067+05:30
  epochNumber = 1576308181383
  relatedActivity = target notify exception
  relatedObjects = {"_id":null,"timeStamp":1576308452050,"originTimeStamp":0,"key":null,
"sourceUUID":{"uuidstring":"01ea1e42-8ea5-3d11-b9fa-52b54b45818e"},"data":["1","name1","city1"],
"metadata":{"RbaSqn":"3","AuditSessionId":"30106","TableSpace":"USERS","CURRENTSCN":"1579373",
"SQLRedoLength":"78","BytesProcessed":null,"ParentTxnID":"1.27.567","SessionInfo":"UNKNOWN",
"RecordSetID":" 0x000003.000454d0.0010 ","DBCommitTimestamp":"1576308452000","COMMITSCN":1579374,
"SEQUENCE":"1","Rollback":"0","STARTSCN":"1579373","SegmentName":"MYTABLE","OperationName":"INSERT",
"TimeStamp":1576337252000,"TxnUserID":"MYUSERID","RbaBlk":"283856","SegmentType":"TABLE",
"TableName":"SOURCEDB.MYTABLE","TxnID":"1.27.567","Serial":"28518","ThreadID":"1",
"COMMIT_TIMESTAMP":1576337252000,"OperationType":"DML","ROWID":"AAAR+CAAHAAAADcAAB",
"DBTimeStamp":"1576308452000","TransactionName":"","SCN":"157937300000008444435329188000001",
"Session":"344"},"userdata":null,"before":null,"dataPresenceBitMap":"Bw==",
"beforePresenceBitMap":"AA==","typeUUID":{"uuidstring":"01ea1e42-9082-3a71-8672-52b54b45818e"}}
]

The following application demonstrates how you might query data from the exception store and write it to a file you could use as a starting point for inserting some or all of the logged events into the target. Contact Striim support if you need assistance in developing such an application.

CREATE APPLICATION ProcessExceptionstoreDemo;
CREATE TYPE ExceptionList_Type (
  evtlist java.util.List
);
CREATE STREAM ExceptionListStream OF ExceptionList_Type;
CREATE CQ ReadFromExceppStore 
INSERT INTO ExceptionListStream
SELECT to_waevent(s.relatedObjects) AS evtlist 
FROM ExceptionstoreDemo_exceptionstore [JUMPING 5 SECOND] s;

CREATE STREAM RelatedEventStream OF Global.WAEvent;
CREATE CQ GetRelatedEvents 
INSERT INTO RelatedEventStream
SELECT com.webaction.proc.events.WAEvent.makecopy(cdcevent) 
FROM ExceptionListStream a, iterator(a.evtlist) cdcevent;

CREATE TARGET WriteToFileAsJSON USING FileWriter ( 
  filename: 'expEvent',
  directory: 'ExpStore_logs'
) 
FORMAT USING JSONFormatter()
INPUT FROM RelatedEventStream;
END APPLICATION ProcessExceptionstoreDemo;
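The standalone statement at the top of this section can also be used to add an exception store to an application that already exists, and its TTL can be changed later. This sketch assumes the PosApp sample application and an illustrative three-day TTL; the ALTER statement assumes <name> is the generated exception store name (the application name with _ExceptionStore appended):

CREATE EXCEPTIONSTORE FOR APPLICATION PosApp TTL: '3d';
ALTER EXCEPTIONSTORE PosApp_ExceptionStore TTL: '1d';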
CREATE EXTERNAL CACHE

Note: In this release, this is not a cache. Striim queries the external database as data is needed. If the same data is needed again, Striim queries it again.

CREATE EXTERNAL CACHE <name> (
  AdapterName:'DatabaseReader',
  Username: '<username>',
  Password: '<password>',
  ConnectionURL: '<database connection string>',
  ConnectionRetry: <parameters>,
  FetchSize: <count>,
  Table: '<table name>',
  Columns: '<columns to read>',
  KeyToMap: '<column name>',
  SkipInvalid: <True / False>
) 
OF <type name>;

For better performance, lookups are buffered in memory and queried in batches. The size of the buffer is set automatically. When an external cache is joined with a window, by default the buffer will hold the same number of events as the window. You may increase the size of the buffer by adding [EXCACHEBUFFERSIZE <number of events>] to the properties of the CQ that performs the join, or disable buffering entirely by adding [EXCACHEBUFFERSIZE 0].

AdapterName must be DatabaseReader.

For Username, Password, ConnectionURL, FetchSize, and discussion of whether you need to install a JDBC driver, see Database Reader.

For ConnectionRetry, with the default settings, if a connection attempt is unsuccessful, the adapter will try again in 30 seconds (retryInterval). If the second attempt is unsuccessful, in 30 seconds it will try a third time (maxRetries). If that is unsuccessful, the adapter will fail and log an exception. Negative values are not supported.

For Table, specify a single table. See the discussion of the Tables property in Database Reader for additional information.

For Columns, specify the names of the columns you wish to retrieve, separated by commas.

For KeyToMap, specify the table's primary key column. If the table has no primary key, specify any column.

When SkipInvalid has its default value of False, if the data in a cache does not match the defined format (for example, if it has fewer fields than are in the type), the application will terminate.
To skip invalid records, set SkipInvalid to True.

You may omit ConnectionRetry, FetchSize, and SkipInvalid from TQL if their default values are appropriate.

The OF type must match the order, number, and data types of the specified columns. For example:

CREATE TYPE RackType(
  rack_id String KEY,
  datacenter_id String,
  rack_aisle java.lang.Integer,
  rack_row java.lang.Integer,
  slot_count java.lang.Integer
);
CREATE EXTERNAL CACHE ConfiguredRacks (
  AdapterName:'DatabaseReader',
  ConnectionURL:'jdbc:mysql://10.1.10.149/datacenter',
  Username:'username',
  Password:'passwd',
  Table:'RackList',
  Columns: "rack_id,datacenter_id,rack_aisle,rack_row,slot_count",
  KeyToMap: 'rack_id'
)
OF RackType;

CREATE FLOW ... END FLOW

CREATE FLOW <flow name> [ WITH ENCRYPTION ];
...
END FLOW <flow name>;

CREATE FLOW <flow name> creates a flow. All subsequent CREATE statements until the END FLOW statement will create components in that flow. See CREATE APPLICATION ... END APPLICATION for discussion of WITH ENCRYPTION.

See Flow for more information and MultiLogApp for a detailed discussion of a multi-flow application.
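The following is a minimal sketch of how flows are typically used inside an application, reusing the simple FileReader-to-SysOut components from CREATE APPLICATION ... END APPLICATION; the application and flow names are illustrative:

CREATE APPLICATION MultiFlowDemo;

CREATE FLOW IngestFlow;
CREATE SOURCE SimpleSource USING FileReader (
  directory:'Samples',
  wildcard:'simple.csv',
  positionByEOF:false
)
PARSE USING DSVParser ( header:Yes )
OUTPUT TO RawDataStream;
END FLOW IngestFlow;

CREATE FLOW DeliverFlow;
CREATE TARGET SimpleOutput USING SysOut(name:simple)
INPUT FROM RawDataStream;
END FLOW DeliverFlow;

END APPLICATION MultiFlowDemo;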
END FLOW", "language": "en"}} {"page_content": "\n\nCREATE PROPERTYVARIABLESkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceCREATE PROPERTYVARIABLEPrevNextCREATE PROPERTYVARIABLECREATE PROPERTYVARIABLE [<namespace>.]<name>='<value>';Property variables allow you to store values for adapter properties in an encrypted form, so they may be used as passwords and tokens applications without sharing the cleartext with users.The following will create a property variable common.dbpass:USE common;\nCREATE PROPERTYVARIABLE dbpass='12345678';You could then use that in an application as follows:CREATE CACHE ConfiguredRacks USING DatabaseReader (\n Username:'striim',\n Password:'$common.dbpass'...You may omit the namespace if the property variable is in the same namespace as the application:CREATE CACHE ConfiguredRacks USING DatabaseReader (\n Username:'striim',\n Password:'$dbpass'...NoteIf a property variable has the same name as an environment variable, Striim will use the value of the property variable.To change the value of an existing property variable, use CREATE OR REPLACE:CREATE OR REPLACE PROPERTYVARIABLE dbpass='abcdefgh';NoteAfter changing the value of a property variable, any Striim applications that use it must be restarted to update the value.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-04-12\n", "metadata": {"source": "https://www.striim.com/docs/en/create-propertyvariable.html", "title": "CREATE PROPERTYVARIABLE", "language": "en"}} {"page_content": "\n\nCREATE OR REPLACESkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceCREATE OR REPLACEPrevNextCREATE OR REPLACEIf the component exists, replaces it; if it does not exist, creates it. Every CREATE statement has a corresponding CREATE OR REPLACE statement. The syntax is the same, just add OR REPLACE after CREATE. For an example, see ALTER and RECOMPILE.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
CREATE ROUTER

CREATE ROUTER <name> INPUT FROM <stream name> [ AS <alias> ]
CASE
  WHEN <expression> THEN ROUTE TO <stream name>,...
[ ELSE ROUTE TO <stream name> ] 
;

Distributes events from an input stream among two or more output streams based on user-defined criteria.

If an event matches more than one WHEN expression, a copy will be output to each of the corresponding streams. If an event matches none of the WHEN expressions, it will be output to the ELSE stream. If no ELSE clause is specified, events that do not match any of the WHEN expressions are discarded.

See CREATE CQ (query), Operators, and Functions for information about writing expressions. For example:

CREATE ROUTER myRouter INPUT FROM mySourceStream AS src 
CASE
  WHEN TO_INT(src.data[1]) < 150 THEN ROUTE TO stream_one,
  WHEN TO_INT(src.data[1]) >= 150 THEN ROUTE TO stream_two,
  WHEN meta(src,"TableName").toString() like 'QATEST.TABLE_%' THEN ROUTE TO stream_three,
ELSE ROUTE TO stream_else;

Routers may be created in the Flow Designer. Known issue (DEV-36792): if your WHEN expression uses a function that requires an alias for the input stream, you cannot create the router in the Flow Designer; instead, create the router in TQL and import it.

When you add a new router to an application and select the input stream, you will see a set of fields that vary depending on the input stream's type. Click Edit using TQL if you prefer to enter the expression as code.

Attribute: for an input stream of a user-defined type, select a field. For an input stream of type WAEvent, enter an appropriate expression, typically using a DATA() or META() function.

Condition: for numeric fields or dates, select Less than, Less than or equal, Equal, Greater than or equal, or Greater than; for strings, select Like or Not like. If you require a more complex expression, click Edit using TQL and write the expression manually.

Value: the value to be compared with the specified Attribute using the selected Condition.

Data type: specify the data type of the selected Attribute.

Output stream: select the output stream or enter a name to create a new stream. There should be separate output streams for each WHEN and the ELSE.

Click Add condition to specify as many additional WHEN clauses as you need.

When you view a saved router, your WHEN expressions will be displayed as a read-only text summary. To edit an expression, click the ⌄ to the left of When. To edit the expression as TQL, click Edit using TQL > Continue. Note that once you do this you will not be able to return to editing using the UI.
CREATE SORTER

CREATE SORTER <name> OVER 
<input stream 1 name> ON <timestamp field name> OUTPUT TO <output stream 1 name>,
<input stream 2 name> ON <timestamp field name> OUTPUT TO <output stream 2 name>
[, <input stream 3 name> ON <timestamp field name> OUTPUT TO <output stream 3 name> ... ] 
WITHIN <integer> { SECOND | MINUTE | HOUR | DAY } 
OUTPUT ERRORS TO <error output stream name>;

This ensures that events from multiple streams are processed in sync with each other based on timestamps in the streams. For example:

CREATE SORTER MySorter OVER
Stream1 ON logTime OUTPUT TO Stream1Sorted,
Stream2 ON logTime OUTPUT TO Stream2Sorted
WITHIN 2 second
OUTPUT ERRORS TO fooErrorStream;

If the events in Stream1 and Stream2 fall out of sync by more than two seconds, the stream with the more recent timestamps is buffered until the events from the other stream with matching timestamps arrive.

CREATE SOURCE

CREATE SOURCE <name>
USING <reader name> ( <properties> ) 
[ PARSE USING <parser name> ( <properties> ) ] 
OUTPUT TO <stream name> 
  [ SELECT ( <data field>, ... [ WHERE <expression>, ... ] ), ... ];

Whether the properties and PARSE USING clause are required depends on the adapter. See the Sources topic for the reader you are using. If an optional property is not specified, the source will use its default value.
Values for required properties must always be specified in TQL, even if they have default values (which are displayed automatically in the web UI).

For information about the SELECT clause, see Filtering data in a source.

Here is a complete example of a CREATE SOURCE statement:

CREATE SOURCE AALSource USING AAlReader (
  directory:'$ACCESSLOGPATH',
  wildcard:'$ACCESSLOGFILE:access_log')
OUTPUT TO AccessEntryStream;

If the OUTPUT TO stream does not exist, it will be created automatically, using the type associated with the adapter. When there is only one output stream, it is not mapped, and there is no SELECT clause, you may follow the stream name with any of the options described in CREATE STREAM. See also Using OUTPUT TO ... MAP.

Example from the PosApp sample application:

CREATE SOURCE CsvDataSource USING FileReader (
  directory:'Samples/PosApp/appData',
  wildcard:'posdata.csv',
  blocksize: 10240,
  positionByEOF:false
)
PARSE USING DSVParser (
  header:Yes,
  trimquote:false
) OUTPUT TO CsvStream;

CREATE STREAM

CREATE STREAM <name> OF <type name> 
[ PARTITION BY { <field name>, ... | <expression> } ]
[ PERSIST USING <property set> ]
[ GRACE PERIOD <integer> { SECOND | MINUTE | HOUR | DAY } ON <field name> ];

Creates a stream using a previously defined type.

Note: A stream and its type may also be created as part of a CREATE CQ declaration. See Parsing the data field of WAEvent for an example.

For example, PosApp defines the MerchantTxRate type and creates a stream based on that type:

CREATE TYPE MerchantTxRate(
  merchantId String KEY,
  zip String,
  startTime DateTime,
  count integer,
  totalAmount double,
  hourlyAve integer,
  upperLimit double,
  lowerLimit double,
  category String,
  status String
);
CREATE STREAM MerchantTxRateOnlyStream OF MerchantTxRate PARTITION BY merchantId;

GRACE PERIOD should be used when data may be received out of order. For example, if you were collecting log data from servers all over the world, network latency might result in events with a logTime value of 1427890320 arriving from a nearby server before events with a timestamp of 1427890319 (one second earlier) from a server on another continent. If you knew that the maximum latency was two seconds, you could use the clause GRACE PERIOD 2 SECOND ON logTime to ensure that all events are processed in order. The events of the stream would then be buffered for two seconds, continuously sorted based on the logTime value, and passed to the next component in the application.
If the time interval is too short, any out-of-order events received too late to be sorted into the correct order are discarded. Without the GRACE PERIOD option, out-of-order events are processed as they arrive, which may result in incorrect calculations.

For example, this sample sorter for out-of-order events is paired with a stream created with a two-second grace period on the log time:

CREATE SORTER MySorter OVER
Stream1 ON logTime OUTPUT TO Stream1Sorted,
Stream2 ON logTime OUTPUT TO Stream2Sorted
WITHIN 2 second
OUTPUT ERRORS TO fooErrorStream;

CREATE STREAM EventStream1 OF EventType1 GRACE PERIOD 2 SECOND ON logTime;

Persisting a stream to Kafka

For an overview of this feature, see Kafka streams.

Note: Kafka and Zookeeper must be running when you create a Kafka-persisted stream, persist an existing stream, or import an application containing one.

CREATE STREAM <name> OF <type> PERSIST [ USING <property set> ];

To enable replay of a stream by persisting it to Kafka (see Replaying events using Kafka streams), use the syntax CREATE STREAM <name> OF <type> PERSIST USING <namespace>.<property set>, where <property set> is the name of a set of Kafka server properties. To persist to a Striim cluster's integrated Kafka broker, use the property set Global.DefaultKafkaProperties, for example:

CREATE STREAM MyStream of MyStreamType PERSIST USING Global.DefaultKafkaProperties;

This memory-resident stream may be used in the usual way in a window or CQ. Alternatively, the persisted data may be read by KafkaReader using topic name <namespace>_<stream name> (see Reading a Kafka stream with KafkaReader). To use persisted stream data from the integrated Kafka broker outside of Striim, see Reading a Kafka stream with an external Kafka consumer.

If a persisted stream is created in an application or flow with encryption enabled (see CREATE APPLICATION ... END APPLICATION) it will be encrypted. It may be read by another application without encryption enabled.

Limitations:
- Kafka streams may be used only on the output of a source or the output of a CQ that parses a source.
- Implicit streams may not be persisted to Kafka.
- In an application or flow running in a Forwarding Agent, a source or CQ may output to a Kafka stream, but any further processing of that stream must take place on the Striim server.
- If the Kafka broker configuration delete.topic.enable is false (the default for Kafka 0.11 and all other releases prior to 1.0.0), then after a Striim application containing a Kafka stream has been terminated and dropped, creating the stream will fail when you reload the application. To avoid this, set delete.topic.enable=true.

Thus the Kafka stream must be explicitly created before the source or CQ that populates it.
Using MultiLogApp for example, to persist the raw output of the access log source:

CREATE STREAM RawAccessStream OF Global.WAEvent
  PERSIST USING Global.DefaultKafkaProperties;

CREATE SOURCE AccessLogSource USING FileReader (
  directory:'Samples/MultiLogApp/appData',
  wildcard:'access_log',
  positionByEOF:false
)
PARSE USING DSVParser (
  ignoreemptycolumn:'Yes',
  quoteset:'[]~"',
  separator:'~'
)
OUTPUT TO RawAccessStream;

Alternatively, to persist the output of the CQ that parses that raw output:

CREATE TYPE AccessLogEntry (
  srcIp String KEY ...
);
CREATE STREAM AccessStream OF AccessLogEntry
  PERSIST USING Global.DefaultKafkaProperties;

CREATE CQ ParseAccessLog 
INSERT INTO AccessStream
SELECT data[0] ...
FROM RawAccessStream;

To distribute events among multiple Kafka partitions, use PARTITION BY <field>:

CREATE STREAM AccessStream OF AccessLogEntry
  PERSIST USING Global.DefaultKafkaProperties
  PARTITION BY srcIp;

All events with the same value in <field> will be written to the same randomly selected Kafka partition. Striim will distribute the data evenly among the partitions to the extent allowed by the frequency of the <field> values. For example, if 80% of the events have the same <field> value, then one of the Kafka partitions will contain 80% of the events. By default, events may be distributed among up to 200 Kafka partitions.

Dropping a persisted stream will automatically delete the associated Kafka topics.

If recovery (see Recovering applications) is enabled for an application containing a Kafka stream, the persisted data will include "CheckPoint" events used by the recovery process.

CREATE SUBSCRIPTION

See Sending alerts from applications.
CREATE TARGET

CREATE TARGET <name>
USING <writer name> ( <properties> )
[ FORMAT USING <formatter name> ( <properties> ) ]
INPUT FROM <stream name>;

Whether the properties and the FORMAT USING clause are required varies according to the adapter. See the Targets topic for the writer you are using. If an optional property is not specified, the target will use its default value. Values for required properties must always be specified in TQL, even if they have default values (which are displayed automatically in the web UI).

Here is an example of a target that writes server data to a file:

CREATE TARGET DSVFormatterOut USING FileWriter(
  filename:'DSVFormatterOutput')
FORMAT USING DSVFormatter ()
INPUT FROM DSVTransformed_Stream;

CREATE TYPE

CREATE TYPE <name> ( <field name> <data type> [KEY], ... );

KEY defines the field that will be used to relate events in WActions when the type is specified as the CONTEXT OF type in the CREATE WACTIONSTORE statement.

Example from the PosApp sample application:

CREATE TYPE ProductTrackingType (
  sku String KEY, 
  salesAmount double, 
  Count Integer, 
  startTime DateTime 
);

Unlike other components, types do not need to be deployed. They are available for use in running applications as soon as they are created.

You cannot specify a composite key directly. Instead, create a CQ that concatenates two or more fields into a new field, then make that field the key.
For example, if inputStream has this type:CREATE TYPE inputType(\n SubKey1 String,\n SubKey2 String,\n Name String,\n UniqueID String KEY\n );The following will create a composite key on the first two fields:CREATE TYPE outputType(\n SubKey1 String,\n SubKey2 String,\n Name String,\n CompKey String KEY\n );\nCREATE STREAM outputStream OF outputType;\nCREATE CQ makeCompositeKey\n INSERT INTO outputStream\n SELECT SubKey1,\n SubKey2,\n Name,\n SubKey1 + SubKey2 KEY\n FROM inputStream;In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/create-type.html", "title": "CREATE TYPE", "language": "en"}} {"page_content": "\n\nCREATE VAULTSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceCREATE VAULTPrevNextCREATE VAULTSee Using vaults.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2021-08-18\n", "metadata": {"source": "https://www.striim.com/docs/en/create-vault.html", "title": "CREATE VAULT", "language": "en"}} {"page_content": "\n\nCREATE WACTIONSTORESkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceCREATE WACTIONSTOREPrevNextCREATE WACTIONSTORECREATE WACTIONSTORE <name> \nCONTEXT OF { <type name> } \nEVENT TYPES ( <type name> ) \n[ USING { MEMORY | ( <properties> ) } ];The CONTEXT OF type defines the fields that may be stored in the WActionStore.\u00a0Two WActionStores may not use the same\u00a0CONTEXT OF\u00a0type. If necessary, define multiple identical types to work around this limitation.If\u00a0LINK SOURCE EVENT\u00a0is not specified in the CQ that populates the WActionStore, specify the\u00a0CONTEXT OF\u00a0type as the sole event type. If LINK SOURCE EVENT is specified, specify the type of each component specified in the CQ's FROM clause in EVENT TYPES.USING MEMORY disables persistence.If you omit the USING clause, the WActionStore will persist to Elasticsearch with its default properties. This is functionally equivalent to USING (storageProvider:'elasticsearch'). Data is persisted to Striim/data/<cluster name>/nodes/<node number>/indices/<namespace>.<WActionStore name>. 
See https://www.elastic.co/blog/found-dive-into-elasticsearch-storage for more information about these paths.For example, from MultiLogApp:CREATE TYPE ZeroContentEventListType (\n srcIp String KEY,\n code Integer,\n size Integer,\n level String,\n message String,\n xception String);\n \nCREATE WACTIONSTORE ZeroContentEventList\nCONTEXT OF ZeroContentEventListType \nEVENT TYPES (\n ZeroContentEventListType ...\n\nCREATE CQ GenerateZeroContentEventList\nINSERT INTO ZeroContentEventList\nSELECT srcIp, code, size, level, message, xception ...This stores the information used to populate the five-column table on the Zero Content details dashboard page.See PosApp for a detailed discussion of how the queries, types, and WActionStore interact.The following will persist to Elasticsearch:CREATE WACTIONSTORE MerchantActivity \nCONTEXT OF MerchantActivityContext\nEVENT TYPES (MerchantTxRate);The following will retain data in Elasticsearch for a minimum of one day, after which it will be expunged.\u00a0The exact time the data will be expunged is unpredictable.CREATE WACTIONSTORE MerchantActivity \nCONTEXT OF MerchantActivityContext\nEVENT TYPES (MerchantTxRate) \n USING (storageProvider:'elasticsearch', elasticsearch.time_to_live: '1d');You may specify the time to live as m (milliseconds), s (seconds), h (hours), d (days), or w (weeks).\u00a0The following does not persist the data to disk:CREATE WACTIONSTORE MerchantActivity\n CONTEXT OF MerchantActivityContext\n EVENT TYPES (MerchantTxRate)\n USING MEMORY;Striim also supports persistence to MySQL and Oracle. To use one of those options, specify USING (<properties>) with the appropriate properties for the DBMS as detailed below.WarningWActionStores with more than one event type cannot be persisted to MySQL or Oracle.The properties for MySQL are:storageProvider:'jdbc',\npersistence_interval: '10 sec',\nJDBC_DRIVER:'com.mysql.jdbc.Driver',\nJDBC_URL:'jdbc:mysql://<host>/<database name>',\nJDBC_USER:'<user name>',\nJDBC_PASSWORD:'<password>',\nDDL_GENERATION:'create-or-extend-tables'The properties for Oracle are:storageProvider:'jdbc',\npersistence_interval: '10 sec',\nJDBC_DRIVER:'oracle.jdbc.driver.OracleDriver',\nJDBC_URL:'jdbc:oracle:thin:@<host IP address>:<port>:<host SID>',\nJDBC_USER:'<user name>',\nJDBC_PASSWORD:'<password>',\nDDL_GENERATION:'create-or-extend-tables',\nCONTEXT_TABLE:'<context table name>',\nEVENT_TABLE:'<event table name>'WarningWhen persisting a WActionStore to Oracle, the context and event table names must be unique within the application and not exceed Oracle's 30-character limit, and the number of characters in the namespace and the WActionStore name must total no more than 24.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
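To make the MySQL option above concrete, here is a minimal sketch that plugs the MySQL property list into a complete CREATE WACTIONSTORE statement. The context and event type names are reused from the Elasticsearch examples above; the host, database name, user, and password are placeholders to replace with your own values.
CREATE WACTIONSTORE MerchantActivity
CONTEXT OF MerchantActivityContext
EVENT TYPES (MerchantTxRate)
USING (
  storageProvider:'jdbc',
  persistence_interval: '10 sec',
  JDBC_DRIVER:'com.mysql.jdbc.Driver',
  -- placeholder host and database name
  JDBC_URL:'jdbc:mysql://mysqlhost/striimrepo',
  JDBC_USER:'striim',
  JDBC_PASSWORD:'********',
  DDL_GENERATION:'create-or-extend-tables'
);
Because this WActionStore has a single event type, it satisfies the restriction in the warning above that WActionStores with more than one event type cannot be persisted to MySQL or Oracle.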
Last modified: 2022-12-08\n", "metadata": {"source": "https://www.striim.com/docs/en/create-wactionstore.html", "title": "CREATE WACTIONSTORE", "language": "en"}} {"page_content": "\n\nCREATE WINDOWSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceCREATE WINDOWPrevNextCREATE WINDOWTo aggregate, join, or perform calculations on the data, you must create a bounded data set. The usual way to do this is with a window, which bounds the stream by a specified number of events, a period of time, or both. As discussed in the Concepts Guide (see Window), this may be a sliding window, which always contains the most recent set of events a jumping window, which breaks the stream up into successive chunks, or a session window, which breaks the stream up into chunks when there are gaps in the flow of events, that is, when no new event has been received for a specified period of time (the idle timeout).The syntax for sliding and jumping windows is:CREATE [ JUMPING ] WINDOW <name> \nOVER <stream name> \nKEEP {\n <int> ROWS |\n WITHIN <int> { SECOND | MINUTE | HOUR | DAY } |\n <int> ROWS WITHIN <int> { SECOND | MINUTE | HOUR | DAY } }\n[ PARTITION BY <field name> ];The syntax for session windows is:CREATE SESSION WINDOW <name>\nOVER <stream name>\nIDLE TIMEOUT <int> { SECOND | MINUTE | HOUR | DAY }\n[ PARTITION BY <field name> ];If JUMPING\u00a0or SESSION\u00a0is not specified, the window uses the sliding mode (see the Concepts Guide for an explanation of the difference).If PARTITION BY is specified, the criteria\u00a0will be applied separately for each value of the specified field name.With a count-based window,\u00a0PARTITION BY\u00a0will keep the specified number of events for each field value. For example, a window with KEEP 100 ROWS PARTITION BY merchantID would contain 100 events for each merchant.With a time-based window,\u00a0PARTITION BY will start the timer separately for each field value. This could be used, for example, to raise an alert when a device has multiple errors in a certain period of time.With a session window, PARTITION BY will apply the idle timeout separately for each value of the specified field, which might be a user ID, session ID, or IP address.This example from MultiLogApp creates a one-hour jumping window for each company's API usage stream:TQL:CREATE JUMPING WINDOW CompanyWindow \nOVER CompanyApiUsageStream \nKEEP WITHIN 1 HOUR ON logTime \nPARTITION BY company;UI:Mode: JumpingSize of Window: TimeTime: 1 hourEvent TimeOn: logTimeThe hour will be timed separately for each company, starting when the first event for that company is received.The following is a detailed guide to the syntax for various types of windows. 
For more detailed discussion of windows in the context of applications, see Bounding data with windows.Bounding data in batches (jumping) by system timeCREATE JUMPING WINDOW <name> OVER <stream name>\nKEEP WITHIN <int> { SECOND | MINUTE | HOUR | DAY }\n[ PARTITION BY <field name> ];With this syntax, the window will output a batch of data each time the specified time interval has elapsed. For example, this window will emit a set of data every five minutes:TQL:CREATE JUMPING WINDOW P2J_ST\nOVER PosSource_TransformedStream\nKEEP WITHIN 5 MINUTE;UI:Mode: JumpingSize of Window: TimeTime: 5 minuteSystem TimeBounding data in batches (jumping) by event timeCREATE JUMPING WINDOW <name> OVER <stream name>\nKEEP WITHIN <int> { SECOND | MINUTE | HOUR | DAY } ON <timestamp field name>\n[ PARTITION BY <field name> ];With this syntax, the window will output a batch of data each time it receives an event in which the specified timestamp field's value exceeds oldest event in the window by the specified amount of time. For example, assuming data is received continuously, this window will emit a set of data every five minutes:TQL:CREATE JUMPING WINDOW P5J_ET\nOVER PosSource_TransformedStream\nKEEP WITHIN 5 MINUTE \nON dateTime;UI:Mode: JumpingSize of Window: TimeTime: 5 minuteEvent TimeOn: dateTimeBounding data in batches (jumping) by event countCREATE JUMPING WINDOW <name>\nOVER <stream name>\nKEEP <int> ROWS\nWITHIN <int>\n[ PARTITION BY <field name> ];With this syntax, the window will output a batch of data each time it contains the specified number of events. For example, the following will break the stream up into batches of 100 events:TQL:CREATE JUMPING WINDOW P11J_ROWS\nOVER PosSource_TransformedStream\nKEEP 100 ROWS;UI:Mode: JumpingSize of Window: CountEvents: 100Bounding data continuously (sliding) by timeCREATE WINDOW <name> OVER <stream name>\nKEEP WITHIN <int> { SECOND | MINUTE | HOUR | DAY }\n[ ON <timestamp field name> ] ]\n[ SLIDE <int> { SECOND | MINUTE | HOUR | DAY } ]\n[ PARTITION BY <field name> ];With this syntax, the window emits data every time an event is added to the window, first removing any events that exceed the specified time interval from the window. For example, the following will emit the events received in the past five minutes each time it receives a new event:TQL:CREATE WINDOW P1S_ST\nOVER PosSource_TransformedStream\nKEEP WITHIN 5 MINUTE;UI:Mode: SlidingSize of Window: TimeTime: 5 minuteSystem TimeThe following is similar but uses event time:TQL:CREATE WINDOW P4S_ET\u00a0\nOVER PosSource_TransformedStream\u00a0\nKEEP WITHIN 5 MINUTE\u00a0\nON dateTime;UI:Mode: SlidingSize of Window: TimeTime: 5 minuteEvent TimeOn: dateTimeIf you want to get events less often, use the Output interval (UI) / SLIDE (TQL) option. For example, the following will emit the past five minutes of events once a minute:TQL:CREATE WINDOW P3S_ST_OI\nOVER PosSource_TransformedStream\nKEEP WITHIN 5 MINUTE\nSLIDE 1 MINUTE;UI:Mode: SlidingSize of Window: TimeTime: 5 minuteSystem TimeOutput interval: 1 minuteCREATE WINDOW P6S_ET_OI\nOVER PosSource_TransformedStream\nKEEP WITHIN 5 MINUTE\nON dateTime\nSLIDE 1 MINUTE;UI:Mode: SlidingSize of Window: TimeTime: 5 minuteEvent TimeOn: dateTimeOutput interval: 1 minuteBounding data continuously (sliding) by event countCREATE WINDOW <name> OVER <stream name>\nKEEP <int> ROWS \n[ SLIDE <int> ]\n[ PARTITION BY <field name> ];With this syntax, when the window has received the specified number of events, it will emit its contents. 
From then on, every time it receives a new event, it will remove the event that has been in the window the longest, then emit the remaining contents. For example, the following will send the first 100 events it receives, and from then on send the most recent 100 events every time it receives a new one:TQL:CREATE WINDOW P10S_ROWS\nOVER PosSource_TransformedStream\nKEEP 100 ROWS;UI:Mode: SlidingSize of Window: CountEvents: 100If you want to get events less often, use the Output interval (UI) / SLIDE (TQL) option. For example, the following will emit the most recent 100 events every tenth event:TQL:CREATE WINDOW P12S_ROWS_OI\nOVER PosSource_TransformedStream\nKEEP 100 ROWS\nSLIDE 10;UI:Mode: SlidingSize of Window: CountEvents: 100Output interval: 10Advanced window settings (RANGE / Timeout)When a window's size is based on events (KEEP ROWS) or event time (KEEP WITHIN <int> <time unit> ON <timestamp field name>), the window may not jump for far longer than is desired. Use the RANGE or Timeout property to force the window to jump within a set period. For example:CREATE JUMPING WINDOW MyWindow OVER MyStream KEEP WITHIN 5 MINUTE ON DateTime;If there is an hour gap between events, the window could be open for over an hour without sending any data. To prevent that, use the Timeout (UI) or RANGE (TQL) option. Use the following when the window size is based on event time:CREATE [ JUMPING ] WINDOW <name> OVER <stream name>\nKEEP RANGE <int> { SECOND | MINUTE | HOUR | DAY } ON <timestamp field name>\nWITHIN <int> { SECOND | MINUTE | HOUR | DAY }\n[ PARTITION BY <field name> ];NoteTimeout / RANGE is always based on system time.Use the following when the window size is based on event count:CREATE [ JUMPING ] WINDOW <name> OVER <stream name>\nKEEP <int> ROWS\nWITHIN <int> { SECOND | MINUTE | HOUR | DAY }\n[ PARTITION BY <field name> ];NoteWhen Timeout is used with Time it maps to RANGE, but when used with Events it maps to WITHIN.Jumping by event timeWith the following settings, when events enter the window steadily, the window will jump every five minutes, but if there is a gap in events, the window will always jump within six minutes.TQL:CREATE JUMPING WINDOW P8J_ET_TO\nOVER PosSource_TransformedStream\nKEEP RANGE 6 MINUTE\nON dateTime\nWITHIN 5 MINUTE;UI:Mode: JumpingSize of Window: AdvancedTime: 5 minuteTimeout: 6 minuteOn: dateTimeJumping by event countThe following will emit a batch of events every time the count reaches 100 or ten seconds has elapsed from the last time it emitted data:TQL:CREATE JUMPING WINDOW P14J_ROWS_TO\nOVER PosSource_TransformedStream\nKEEP 100 ROWS\nWITHIN 10 SECOND;UI:Mode: JumpingSize of Window: AdvancedEvents: 100Timeout: 10 secondSliding by event timeWith the following settings, when there is a gap in events, the window will always emit its contents within six minutes.TQL:CREATE WINDOW P7S_ET_TO\nOVER PosSource_TransformedStream\nKEEP RANGE 6 MINUTE\nON dateTime\nWITHIN 5 MINUTE;UI:Mode: SlidingSize of Window: AdvancedTime: 5 minuteTimeout: 6 minuteOn: dateTimeSliding by event countWith these settings, when there is a gap in events, the window will always emit its contents within ten seconds.TQL:CREATE WINDOW P13S_ROWS_TO\nOVER PosSource_TransformedStream\nKEEP 100 ROWS\nWITHIN 10 SECOND;UI:Mode: SlidingSize of Window: AdvancedEvents: 100Timeout: 10 secondBounding data in batches by session timeoutSession windows 

bound events based on gaps in the data flow, that is, when no new event has been received for a specified period of time. For example, the following window would emit a set of events every time a minute passes between one event and the next.\u00a0TQL:CREATE SESSION WINDOW MySessionWindow \nOVER MyStream\nIDLE TIMEOUT 1 MINUTE;UI:Mode: AdvancedTimeout: 1 minuteEach set could contain any number of events, accumulated over any length of time, and the gap between the last event in one session and the first event in the next session could be any duration of a minute or longer.If a session window is partitioned, the idle timeout will be applied separately to each value of the field it is partitioned by, and each set emitted will contain only events with that value. The partitioning field might be a session ID or a user ID.Supported combinations of window mode and size propertiesThe following table lists all supported combinations of window properties. Note that in the UI the available Size of Window options change depending on whether the Mode is Sliding or Jumping.sliding / jumpingtime / event countoutput intervaltimeoutFlow DesignerTQLslidingsystem timenonoMode: SlidingSize of Window: TimeTime: 5 minuteSystem TimeCREATE WINDOW P1S_ST OVER PosSource_TransformedStream KEEP WITHIN 5 MINUTE;jumpingsystem timenonoMode: JumpingSize of Window: TimeTime: 5 minuteSystem TimeCREATE JUMPING WINDOW P2J_ST OVER PosSource_TransformedStream KEEP WITHIN 5 MINUTE;slidingsystem timeyesnoMode: SlidingSize of Window: TimeTime: 5 miinuteSystem TimeOutput interval: 1 minuteCREATE WINDOW P3S_ST_OI OVER PosSource_TransformedStream KEEP WITHIN 5 MINUTE SLIDE 1 MINUTE;slidingevent timenonoMode: SlidingSize of Window: TimeTime: 5 minuteEvent TimeOn: dateTimeCREATE WINDOW P4S_ET OVER PosSource_TransformedStream KEEP WITHIN 5 MINUTE ON dateTime;jumpingevent timenonoMode: JumpingSize of Window: TimeTime: 5 minuteEvent TimeOn: dateTimeCREATE JUMPING WINDOW P5J_ET OVER PosSource_TransformedStream KEEP WITHIN 5 MINUTE ON dateTime;slidingevent timeyesnoMode: SlidingSize of Window: TimeTime: 5 miinuteEvent TimeOn: dateTimeOutput interval: 1 minuteCREATE WINDOW P6S_ET_OI OVER PosSource_TransformedStream KEEP WITHIN 5 MINUTE ON dateTime SLIDE 1 MINUTE;slidingevent timenoyesMode: SlidingSize of Window: AdvancedTime: 5 miinuteTimeout: 6 minuteOn: dateTimeCREATE WINDOW P7S_ET_TO OVER PosSource_TransformedStream KEEP RANGE 6 MINUTE ON dateTime WITHIN 5 MINUTE;jumpingevent timenoyesMode: JumpingSize of Window: AdvancedTime: 5 minuteTimeout: 6 minuteOn: logTimeCREATE JUMPING WINDOW P8J_ET_TO OVER PosSource_TransformedStream KEEP RANGE 6 MINUTE ON dateTime WITHIN 5 MINUTE;slidingevent timeyesyesMode: SlidingSize of Window: AdvancedTime: 5 miinuteTimeout: 6 minuteOn: dateTimeOutput interval: 1 minuteCREATE WINDOW P9SL_ET_TO_OI OVER PosSource_TransformedStream KEEP RANGE 6 MINUTE ON dateTime WITHIN 5 MINUTE SLIDE 1 MINUTE;slidingevent countnonoMode: SlidingSize of Window: CountEvents: 100CREATE WINDOW P10S_ROWS OVER PosSource_TransformedStream KEEP 100 ROWS;jumpingevent countnonoMode: JumpingSize of Window: CountEvents: 100CREATE JUMPING WINDOW P11J_ROWS OVER PosSource_TransformedStream KEEP 100 ROWS;slidingevent countyesnoMode: SlidingSize of Window: CountEvents: 100Output interval: 10CREATE WINDOW P12S_ROWS_OI OVER PosSource_TransformedStream KEEP 100 ROWS SLIDE 10;slidingevent countnoyesMode: SlidingSize of Window: AdvancedEvents: 100Timeout: 10CREATE WINDOW P13S_ROWS_TO OVER PosSource_TransformedStream KEEP 100 ROWS WITHIN 10 
SECOND;jumpingevent countnoyesMode: JumpingSize of Window: AdvancedEvents: 100Timeout: 10 secondCREATE JUMPING WINDOW P14J_ROWS_TO OVER PosSource_TransformedStream KEEP 100 ROWS WITHIN 10 SECOND;slidingevent countyesyesMode: SlidingSize of Window: AdvancedEvents: 100Timeout: 10Output interval: 2CREATE WINDOW P15S_ROWS_TO_OI OVER PosSource_TransformedStream KEEP 100 ROWS WITHIN 10 SECOND SLIDE 2;In this section: CREATE WINDOWAdvanced window settings (RANGE / Timeout)Bounding data in batches by session timeoutSupported combinations of window mode and size propertiesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-04-10\n", "metadata": {"source": "https://www.striim.com/docs/en/create-window.html", "title": "CREATE WINDOW", "language": "en"}} {"page_content": "\n\nDROPSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Programmer's GuideDDL and component referenceDROPPrevNextDROPDROP { APPLICATION | FLOW | <component type> } <namespace>.<component name> [ CASCADE | FORCE ];Removes a previously created component. For example:DROP WACTIONSTORE Samples.MerchantActivity;For an application or flow, if the CASCADE option is specified, all the components it contains are also removed. For a source that implicitly creates its output stream, if the CASCADE option is specified, the stream is also removed. For example:DROP APPLICATION Samples.PosApp CASCADE;The FORCE option works like CASCADE\u00a0but will override any warnings that cause the DROP command to fail, such as components created in one application being used by another application. This may be used to drop applications that cannot be undeployed or to drop namespaces when\u00a0DROP NAMESPACE ... CASCADE\u00a0fails. Using FORCE may result in an invalid application. Using\u00a0FORCE will remove all components from the metadata repository, but some components may remain in memory until the Striim cluster is restarted.Note: DROP USER <user name> does not drop the corresponding <user name> namespace. If you wish to drop that as well, use DROP NAMESPACE <user name> CASCADE. (See Using namespaces for more about namespaces.)In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
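As a brief sketch of the FORCE option and the DROP USER note above (the application and user names are reused from examples elsewhere in this guide and are illustrative only):
-- forcibly remove an application that cannot be undeployed, overriding warnings
DROP APPLICATION Samples.PosApp FORCE;
-- DROP USER leaves the user's personal namespace behind; remove it separately
DROP USER jsmith;
DROP NAMESPACE jsmith CASCADE;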
Last modified: 2019-12-18\n", "metadata": {"source": "https://www.striim.com/docs/en/drop.html", "title": "DROP", "language": "en"}} {"page_content": "\n\nAdministrator's GuideSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuidePrevNextAdministrator's GuideThis section documents tasks to be performed by the Striim administrator or other privileged users.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-12-19\n", "metadata": {"source": "https://www.striim.com/docs/en/administrator-s-guide.html", "title": "Administrator's Guide", "language": "en"}} {"page_content": "\n\nStarting and stopping Striim CloudSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideStarting and stopping Striim CloudPrevNextStarting and stopping Striim CloudTo start a Striim Cloud serviceIn Striim Cloud Console, on the Services page, select ... > Start for the service you want to start.To stop a Striim Cloud serviceIn Striim Cloud Console, on the Services page, select ... > Stop for the service you want to stop.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-09\n", "metadata": {"source": "https://www.striim.com/docs/en/starting-and-stopping-striim-cloud.html", "title": "Starting and stopping Striim Cloud", "language": "en"}} {"page_content": "\n\nApplication statesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideApplication statesPrevNextApplication statesAPPROVING QUIESCE: waiting for sources to approve or reject the Quiesce command; if rejected, the state will return to RUNNINGCOMPLETED: Database Reader's \"Quiesce on IL Completion\" property was set to True and initial load has completed. 
This state is identical to QUIESCED except that quiesce is initiated by the application rather than by command of the user.CREATED: ready to deployDEPLOY FAILED: an error caused deploy to fail (previous state was DEPLOYING). This state is identical to CREATED except that it displays the error.DEPLOYED: ready to start or undeploy (previous state was DEPLOYING)DEPLOYING: transitional state between CREATED and DEPLOYED. Some adapters validate their properties on deployment, and if validation fails, the application will return to the CREATED state.FLUSHING: transitional state between QUIESCING and QUIESCED; see QUIESCEHALT: identical to TERMINATED except that the cause is an external issue, such as a source or target database being offline or not configured to accept the connection from Striim as specified in the adapter propertiesNOT ENOUGH SERVERS: see CREATE DG (deployment group)QUIESCED: ready to start or undeploy; see QUIESCE (previous state was QUIESCING)QUIESCING: transitional state between RUNNING and QUIESCEDRECOVERING SOURCES: see Recovering applicationsRecovering applicationsRUNNING: ready to stop or quiesceSTARTING: transitional state between DEPLOYED and RUNNING. Some adapters validate their properties when started, and if validation fails, the application will return to the DEPLOYED state.STARTING SOURCES: application has started but sources are not running yetSTOPPED: ready to start or undeploy (previous state was STOPPING)STOPPING: transitional state between RUNNING and STOPPEDTERMINATED: application stopped unexpectedly, undeploy to continueUNKNOWN: the application's state cannot be determinedVERIFYING STARTING: Striim is validating the application; if valid, the next state is STARTING SOURCES, if invalid, the next state is TERMINATEDIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-04\n", "metadata": {"source": "https://www.striim.com/docs/en/application-states.html", "title": "Application states", "language": "en"}} {"page_content": "\n\nManaging users, permissions, and rolesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideManaging users, permissions, and rolesPrevNextManaging users, permissions, and rolesBefore a person can access Striim, an administrator must create a user account for them.Understanding namespacesNamespaces are logical domains within the Striim environment that contain applications, flows, and their components such as sources, streams, and so on. Namespace-level roles and permissions control which users can do what in Striim.By default, in a new Striim installation, there are two or four namespaces depending on which sample apps were installed:Global contains system-created objects including system-level roles, default types such as WAEvent and AlertEvent, and the DefaultKafkaProperties property set. 
Users cannot create objects in this namespace.admin is empty and may be used by administrators for any purpose.SamplesDB, SamplesDB2File, and SamplesDB2Kafka contain the CDC demo apps discussed in Running the CDC demo apps.Samples contains sample applications including those discussed in Sample applications for programmers.When you create a new user account, a personal namespace with the same name is created automatically. The user has admin privileges for that namespace so may create applications and dashboards in it.It is possible for an application in one namespace to use components in another namespace. See Using namespaces for more information.PermissionsPermissions determine which actions each user can perform in Striim. Permissions are assigned to users through roles.A permission defines one or more actions that may be performed on one or more component types. A permission's domain may be global (granted in all namespaces) or limited to one or more specified namespaces. Optionally, permission may be restricted to one or more objects (components, flows, and/or applications) specified by name.The syntax is:GRANT <action(s)> ON <component types(s)> <namespace>.<object>For example,\u00a0GRANT READ,SELECT ON type Global.*\u00a0 means permission to read and select all types in the Global namespace. Since many basic Striim operations use those types, by default all users have this permission through the\u00a0Global.systemuser role.ALL\u00a0(for actions) and\u00a0*\u00a0(for the other elements) are wildcards. For example, GRANT ALL ON * *.* means permission to perform all actions on all components in all objects in all namespaces. The admin user has this permission through the Global.admin role.Actions:NoteThe READ action is a prerequisite for all other actions. For example, to select from a stream, you must have both READ and SELECT permissions. If you have only SELECT permission, select will fail with a \"no such object\" error.CREATEDEPLOYDROPGRANT (also allows use of REVOKE)QUIESCEREAD (allows user to see that objects exist, for example, when using the LIST command)RESUMESELECT (allows user to query objects and to preview stream contents in the UI)STARTSTATUSSTOPUNDEPLOYUPDATE (also allows use of ALTER and RECOMPILE)Components:alertsubscriberapplicationcacheclustercqdashboarddeploymentgroupflowinitializernamedquerynamespacenodepermissionpropertysetpropertytemplatequeryvisualizationroleserversourcestreamsubscriptiontargettypeuserwactionstorewindowRolesRoles associate permissions with users and namespaces. A role may contain multiple permissions and multiple roles.The default system-level roles that exist in a new Striim installation are:roleA user with this role:Global.adminis a superuser, that is,\u00a0may perform any action, including managing users, roles, applications, and clusters. By default, only users with this role have access to the\u00a0admin namespace.Global.agentroleis assigned to the internal sys user for use in authenticating Forwarding Agents when they connect to the clusterGlobal.appadminis automatically granted the admin role for every namespace created.Global.appdevis automatically granted the dev role for every namespace created.Global.appuseris automatically granted the enduser role for every namespace created.Global.serverroleis assigned to the internal sys user for use in authenticating servers when they connect to the clusterGlobal.systemuserThis role is automatically granted to new users. 
By default, it allows use of types, property templates, and deployment groups in the Global namespace.Global.uiuserThis role is automatically granted to new users. By default, it allows access to the Apps, Dashboard, Flow Designer, Monitor, and Source Preview pages.The following roles are automatically created for each namespace:roleA user with this role:<namespace>.admincan perform any action within the namespace.<namespace>.devcan perform any action within the namespace except DROP, GRANT, and REVOKE.<namespace>.enduserhas read-only access to the namespace.<namespace>.useradmincan alter their own user account properties such as the password. This role is created only by CREATE USER, not by CREATE NAMESPACE.System users and keystoreStriim has two system user accounts that are created during installation:admin has all privileges on all namespaces.sys authenticates servers and Forwarding Agents when they connect to the Striim cluster. Its only privileges are Global.serverrole and Global.agentrole. It does not have a namespace and cannot log in.The admin and sys passwords, as well as the metadata repository password, are stored in a Java KeyStore, striim/conf/sks.jks, using AES-256 and BCrypt.If you prefer, you may create a user similar to sys that can authenticate only Forwarding Agents (replace ******** with a strong password):CREATE USER agentauth IDENTIFIED BY ********;\nDROP NAMESPACE agentauth CASCADE;\nREVOKE Global.systemuser FROM USER agentauth;\nREVOKE Global.uiuser FROM USER agentauth;\nGRANT Global.agentrole TO USER agentauth;TQL commands for usersCREATE USER <name>\n IDENTIFIED BY <password> \n [ DEFAULT ROLE <namespace>.<role name> ];Creates a new user and a personal namespace of the same name.User names:must contain only alphanumeric characters and underscoresmay not start with a numeric charactermust be uniqueIf you do not specify a default role, the user will have the following role and permissions:rolenotes<username>.adminhas full control over their personal namespace (all other namespaces will be hidden and inaccessible until the user is granted additional roles)<username>.useradmincan change their password and other account detailsGlobal.systemusercan use use types, property templates, and deployment groups in the Global namespace (unless the administrator has modified this role)Global.uiusercan access the Apps, Dashboard, Flow Designer, Monitor, and Source Preview pages in the UI (unless the administrator has modified this role)WarningPasswords may contain only uppercase and lowercase letters, numbers, _, and $. Passwords are case-sensitive.For example, the following command creates a new user jsmith with the ability to view, edit, deploy, and run the sample applications:CREATE USER jsmith IDENTIFIED BY secureps DEFAULT ROLE Samples.dev;If you do not include the optional DEFAULT ROLE clause, the user will have access only to their personal namespace until granted additional roles as described in TQL commands for roles.To change a user's password (requires UPDATE permission on the user), use:ALTER USER <user name> SET ( password:\"<password>\" );For example, ALTER USER jsmith SET (password:\"newpass\"); will change jsmith's password to newpass.Optionally, you may specify a time zone to be used to convert DateTime values in dashboard visualizations and query output to the user's local time. For example:ALTER USER jsmith SET (timezone:\"America/Los_Angeles\");This can be useful when the user is in a different time zone than the Striim cluster. 
See http://joda-time.sourceforge.net/timezones.html for a full list of supported values.Optionally, you may add additional fields that will be included in DESCRIBE USER output:ALTER USER <user name> SET ( { firstname | lastname | email }:\"<value>\",... );For example, ALTER USER jsmith SET (email:\"jsmith@example.com\", firstname:\"James\",lastname:\"Smith\", email:\"jsmith@example.com\"); will result in this DESCRIBE output:USER jsmith CREATED 2017-10-02 16:49:32\nUSERID jsmith\nFIRSTNAME James\nLASTNAME Smith\nTIMEZONE America/Los_Angeles\nCONTACT THROUGH [type : email value : jsmith@example.com]\nROLES {samples.dev, jsmith.admin, jsmith.useradmin, Global.systemuser, Global.uiuser}\nPERMISSIONS []\nINTERNAL user.\n\nNAMESPACE jsmith CREATED 2017-10-02 16:49:32\nCONTAINS OBJECTS (\n\tROLE DEV, \n\tROLE USERADMIN, \n\tROLE ENDUSER, \n\tROLE ADMIN, \n)If a user has been deactivated due to too many failed login attempts, you can reactivate the user account with this command:ALTER USER <user name> SET (ACTIVE:\"true\");TQL commands for rolesGRANT <namespace>.<role name> TO USER <user name>;Grants a user a role.GRANT Samples.appdev TO USER <user name>;Gives a user the ability to view, edit, deploy, and run the sample applications.CREATE ROLE <namespace>.<role name>;Creates a role in the specified namespace. See Using namespaces for discussion of sharing roles among applications.GRANT <action(s)> ON [<component type(s)>] <namespace>[.<application_name>] TO ROLE <namespace>.<role_name>;Grants a role permission to perform one or more actions in the specified namespace or application. Optionally, you may specify one or more component types (see Permissions).GRANT <namespace>.<role name> TO ROLE <namespace>.<role name>;Grants one role to another. Effectively, this grants all the first role's permissions to the second role.REVOKE <action(s)> ON [<component type(s)>] <namespace>[.<application_name>] FROM ROLE <namespace>.<role_name>;Revokes a previously granted permission from a role.REVOKE '<namespace>.<role name>' FROM ROLE <namespace>.<role name>;Revokes a previously granted role from another role.REVOKE <namespace>.<role name> FROM USER <user name>;Revokes a previously granted role from a user.Web UI permissionsThe following permissions control access within the web UI:*:*:apps_ui:**:*:dashboard_ui:**:*:monitor_ui:**:*:sourcepreview_ui:*The apps_ui permission allows access to the Apps, Flow Designer, and Metadata Manager pages.You may allow access to other pages by granting one or more of the above permissions to a role. For example, the following would give users with the Samples:dev role access to the Source Preview page:GRANT ALL ON sourcepreview_ui *.* TO ROLE Samples.dev;UI permissions must always be granted to *.*. You cannot limit them to a particular namespace or object.The following would revoke the permission granted in the previous command:REVOKE ALL ON sourcepreview_ui *.* FROM ROLE Samples.dev;The Global:admin role provides access to all pages of the web UI. Other users' access is controlled by the Global.uiuser role, which by default allows access to all pages. To change that, modify the role. For example, to restrict the Monitor page to admins:revoke all on monitor_ui *.* from role Global.uiuser;Inspecting users and roles with LIST and DESCRIBEUse the LIST command to see what users or roles exist. 
For example, in a default installation:W (admin) > list roles;\nProcessing - list roles\nROLE 1 =>\u00a0 Global.uiuser\nROLE 2 =>\u00a0 Global.admin\nROLE 3 =>\u00a0 admin.admin\nROLE 4 =>\u00a0 admin.dev\nROLE 5 =>\u00a0 Global.appadmin\nROLE 6 =>\u00a0 Global.appuser\nROLE 7 =>\u00a0 Global.systemuser\nROLE 8 =>\u00a0 admin.enduser\nROLE 9 =>\u00a0 Global.appdevUse the DESCRIBE command to see which roles and privileges are associated with a user or role. For example, for the default admin user and default Global.admin role (which as noted above has all privileges):W (admin) > describe user admin;\nProcessing - describe user admin\nUSER admin CREATED 2017-09-28 12:08:59\nUSERID admin\nCONTACT THROUGH []\nROLES {Global.admin}\nPERMISSIONS []\nINTERNAL user.\nSee what happens when we add a user:W (admin) > CREATE USER newuser IDENTIFIED BY passwd;\nProcessing - CREATE USER newuser IDENTIFIED BY passwd\n-> SUCCESS\u00a0\nElapsed time: 131 ms\nW (admin) > describe user newuser;\nProcessing - describe user newuser\n\nUSER newuser CREATED 2017-10-02 17:19:00\nUSERID newuser\nCONTACT THROUGH []\nROLES {newuser.admin, newuser.useradmin, Global.systemuser, Global.uiuser}\nPERMISSIONS []\nINTERNAL user.The DESCRIBE output shows us:CONTACT THROUGH []: no email address for the user has been specified yetROLES {newuser.admin, newuser.useradmin, Global.systemuser, Global.uiuser}: the user has the roles discussed aboveINTERNAL user: not authenticated via LDAPIn this section: Managing users, permissions, and rolesUnderstanding namespacesPermissionsRolesSystem users and keystoreTQL commands for usersTQL commands for rolesWeb UI permissionsInspecting users and roles with LIST and DESCRIBESearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-22\n", "metadata": {"source": "https://www.striim.com/docs/en/managing-users,-permissions,-and-roles.html", "title": "Managing users, permissions, and roles", "language": "en"}} {"page_content": "\n\nManaging deployment groupsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideManaging deployment groupsPrevNextManaging deployment groupsCreating deployment groups allows you to control which servers and Forwarding Agents in a Striim cluster will run specific applications and flows. For example, you could use this to ensure that a source is run on the same server as the file or process from which it reads data. 
The web UI includes Deploy and Undeploy commands in the flow editor.The default deployment group, which is created automatically, contains all servers and Forwarding Agents in the cluster.The agent deployment group may be created automatically as discussed in Configuring the Forwarding Agent.CREATE DG (deployment group)CREATE DG <name> (\"<node name>\",...)\n [ MINIMUM SERVERS <number of servers> ]\n [ LIMIT APPLICATIONS <maximum number of applications> ];Node names begin with S or A to indicate server or Forwarding Agent, followed by the node's IP address with underscores instead of periods. For example, a server with the IP address 192.168.1.12 would be named S192_168_1_12. Use the command LIST SERVERS; to return a list of nodes in the current cluster.NoteDo not put servers and agents in the same group. See Using the Striim Forwarding Agent for more information.The following would create a two-server deployment group named SourceData:CREATE DG SourceData (\"S192_168_1_12\",\"S192_168_1_13\");Individual servers within a multi-server deployment group may be stopped and restarted without stopping applications. Striim will automatically reallocate resources as necessary.To ensure that applications are not deployed to the group when some of its servers are offline, use MINIMUM SERVERS. For example, MINIMUM SERVERS 2 will allow deployment only when at least two servers are available, ensuring that failover is possible. If only one server is available, deployment will fail with a \"not enough servers\" error.To prevent servers from being overloaded by applications after failover, use LIMIT APPLICATIONS. For example, if you had a four-server group, LIMIT APPLICATIONS 4 would ensure that no server would ever run more than four applications. If the group was running ten applications deployed ON ONE and one server failed, all ten applications would keep running. If a second server failed, two of the applications would terminate.ALTER DG (deployment group)ALTER DG <name>\n [ { ADD | REMOVE } (\"<node name>\",...) ]\n [ MINIMUM SERVERS <number of servers> ]\n [ LIMIT APPLICATIONS <maximum number of applications> ];\nUse ALTER DG to change the members or properties of an existing deployment group. For example, to add a third server to the group created by the example discussed in CREATE DG (deployment group):ALTER DG SourceData ADD (\"S192_168_1_14\");To make these changes take effect, redeploy the application(s). Until you redeploy:Applications deployed ON ALL will not be deployed on a newly added server.Applications deployed to a removed server, deployed on fewer servers than a new MINIMUM SERVERS value, or that exceed a new LIMIT APPLICATIONS value will not be stopped or undeployed.DEPLOY APPLICATIONDEPLOY APPLICATION <namespace>.<application name>\nON { ONE | ALL } IN <deployment group>\n[ WITH <flow name> ON { ONE | ALL } IN <deployment group>,... ];NoteA cache is loaded into memory when it is deployed, so deployment of an application or flow with a large cache may take some time.With a single-server deployment group, you may use DEPLOY APPLICATION <application name>; without further options.The following examples assume that you are currently using the application's namespace so it is not necessary to specify it:DEPLOY APPLICATION <application name> ON ONE IN <deployment group>; will deploy the application on one server in the specified deployment group. 
Use\u00a0ON ONE\u00a0in a multi-server environment to deploy an application that has not been written to run on multiple servers. Striim will automatically deploy the application to the server with the fewest applications.DEPLOY APPLICATION MyApp ON ALL IN Group1 WITH FlowX ON ALL IN Group2, FlowY ON ALL IN Group3; will deploy FlowX on all servers in deployment group Group2, FlowY on all servers in\u00a0Group3, and any other flow in the application on all servers of Group1.DEPLOY FLOWDEPLOY FLOW <namespace>.<flow name> ON { ONE | ALL } IN <deployment group>;Deploys a single flow. Typically you would use this after undeploying and dropping a flow to make changes without stopping the entire application.UNDEPLOYUNDEPLOY { APPLICATION | FLOW } { <namespace>.<application name> | <namespace>.<flow name> };Undeploys a previously deployed application or flow.In this section: Managing deployment groupsCREATE DG (deployment group)ALTER DG (deployment group)DEPLOY APPLICATIONDEPLOY FLOWUNDEPLOYSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/managing-deployment-groups.html", "title": "Managing deployment groups", "language": "en"}} {"page_content": "\n\nUsing vaultsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideUsing vaultsPrevNextUsing vaultsVaults provide secure storage for any sensitive information, such as passwords, tokens, or keys. You can use a vault to secure the value of any property in Striim. Vaults store sensitive information as encrypted key-value pairs. TQL can use the keys as variables, keeping the cleartext value secured even from the developer.NoteStriim automatically encrypts values when the property type is com.webaction.security.Password (see Encrypted passwords), but if desired you may specify vault keys for those values.Striim's native vault stores key-value pairs in Striim's metadata repository.Striim will encrypt the values using AES-256.Alternatively, you may store key-value pairs in an Azure Key Vault, Hashicorp Vault's KV Secrets Engine Version 2, or the Google Secrets Manager.TipWhen handing off applications from development to QA, or from QA to production, create vaults with the same name in different namespaces. If vaults' entries have the same names but different values, the applications can use different connection URLs, user names, passwords, keys, and so on with no need to revise the TQL.NoteIn this release, vault-related commands are available only in the console. 
There is no web UI counterpart.Striim native vaultsTo create a Striim native vault:CREATE VAULT <vault_name>;To add an entry to a Striim native vault, the syntax is:WRITE INTO <vaultName> (\n vaultKey: \"<key>\",\n vaultValue : \"<value>\",\n [ valueType: \"FILE\" ]\n);If valueType: \"FILE\" is specified, value must be the fully-qualified name of a file accessible by Striim. (The file can be deleted after the vault entry is created). For example:WRITE INTO MyVault (\n vaultKey: \"MyKey\",\n vaultValue: \"/opt/striim/UploadedFiles/myfile.txt\",\n valueType: \"FILE\"\n);Otherwise, value must be a string. For example:WRITE INTO MyVault (\n vaultKey: \"MyKey\",\n vaultValue: \"12345678\"\n);Azure Key VaultsTo create a vault component that makes an existing Azure Key Vault available for use in Striim:CREATE VAULT <vaultName> USING VAULTSPEC (\n VaultType: \"AZUREKEYVAULT\", \n ConnectionURL: \"<connection_url>\",\n ClientID: \"<Application (client) ID>\",\n ClientSecret: \"<Secret ID>\",\n TenantID: \"<Directory (tenant) ID>\"\n);The values to specify are:ConnectionURL: from the Overview page for your Key VaultClientID: the Application (client) ID from the Overview page for the Azure Active Directory application with read permission on the vault (applications are listed on the Active Directory \"App registrations\" page)ClientSecret: The Value from the \"Certificates & secrets\" page for the Active Directory application with read permission on the vault.TenantID: the Directory (tenant) ID from the Overview page for the Azure Active Directory application with read permission on the vaultYou cannot add an entry to an Azure Key Vault in Striim. See Microsoft's Add a secret to Key Vault for instructions on adding entries.Using vault keys as variables in TQLSpecify vault entries in TQL adapter properties with double square brackets. 
For example:Username: '[[myvault.myusername]]',\nPassword: '[[myvault.mypassword]]',If you are using an Azure Key Vault or Hashicorp vault and the property expects a value that specifies a file, indicate that as follows:ServiceAccountKey: '[[myvault.my-sa-key, \"FILE\"]]',\n\"FILE\" is not required in TQL when using Striim's native vault.Hashicorp vaultsTo create a vault component that makes an existing Hashicorp vault available for use in Striim:CREATE VAULT <vaultName> USING VAULTSPEC (\n VaultType: \"HASHICORPVAULT\",\n AccessToken: \"<rootToken>\",\n ConnectionURL: \"<connection_url>\",\n Port: \"<port>\",\n EngineName: \"<name>\",\n PathToSecret: \"<path>\",\n AutoRenew: \"{true|false}\", -- default value is false\n AutoRenewIncrement: \"<interval>\",\n AutoRenewCheckPeriod: \"<interval>\"\n);For example, without auto-renew:CREATE VAULT myvault USING VAULTSPEC (\n VaultType: \"HASHICORPVAULT\",\n AccessToken: \"**************************\",\n ConnectionURL: \"https://198.51.100.20\",\n Port: \"8200\",\n EngineName: \"secret\",\n PathToSecret: \"my-secret\"\n);Alternatively, to enable auto-renew:CREATE VAULT myvault USING VAULTSPEC (\n VaultType: \"HASHICORPVAULT\",\n AccessToken: \"**************************\",\n ConnectionURL: \"https://198.51.100.20\",\n Port: \"8200\",\n EngineName: \"secret\",\n PathToSecret: \"my-secret\",\n AutoRenew: \"true\",\n AutoRenewIncrement: \"7d\",\n AutoRenewCheckPeriod: \"1d\"\n);AutoRenewIncrement specifies the time-to-live (expiration) of the tokens (see Token Management).AutoRenewCheckPeriod controls how often Striim will check to see if the current token should be renewed.To ensure that your token is always valid, the AutoRenewCheckPeriod interval must be shorter than the AutoRenewIncrement interval.Valid interval unit indicators are ms for milliseconds, s for seconds, m for minutes, h for hours, and d for days.You cannot add an entry to a Hashicorp vault in Striim. See Hashicorp's Vault Documentation for instructions on adding entries to KV Secrets Engine Version 2.Google Secrets ManagerTo connect with Google Secrets Manager, use Google's provided API. Authenticate to the API using the service account key. The account must have the Secret Manager Secret Accessor IAM role.To create a vault component that uses Google Secrets Manager:CREATE VAULT <vault_name> USING VAULTSPEC (\n VaultType: \"GOOGLESECRETMANAGER\",\n ProjectID: \"projectIDExample\",\n serviceAccountKeyPath: \"serviceAccountKeyJsonFormat\"\n);Other vault commandsNoteAfter entering an ALTER VAULT command, any Striim applications that use that vault must be restarted to update the value.ALTER VAULT <vault_name> SET (\n vaultKey: \"<key name>\",\n vaultValue: \"<new value>\");For a Striim native vault, changes the value of the specified key. 
(You cannot change key values for Azure or Hashicorp vaults in Striim.)ALTER VAULT <vault name> (connectionURL: \"https://<new connection URL>\");For an Azure or Hashicorp vault, changes the connection URL.ALTER VAULT <vault name> (ClientSecret: \"<new client secret>\");For an Azure vault, changes the Secret ID.ALTER VAULT <vault name> (AccessToken: \"**************************\");For a Hashicorp vault, updates the access token.DESCRIBE <vault_name>;Returns a description of the specified vault component.DROP VAULT [<namespace>].<vault_name>;For a Striim native vault, deletes the vault and all its entries.For an Azure Key Vault or Hashicorp vault, makes the vault inaccessible from Striim, but does not affect the underlying Azure Key Vault or Hashicorp vault.LIST VAULTS;Returns a list of vaults usable by the current user.READ ALL FROM <vault_name>;Returns the encrypted values for all keys in the vault.READ FROM <vault_name> WHERE vaultKey=\"<key>\";Returns the encrypted value for the specified key.\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-02\n", "metadata": {"source": "https://www.striim.com/docs/en/using-vaults.html", "title": "Using vaults", "language": "en"}} {"page_content": "\n\nLoading standalone sources, caches, and WActionStoresSources, caches, and WActionStores may be loaded outside of applications. This may be appropriate when they are shared by multiple applications and you want to make sure that they are not stopped accidentally. The LOAD and UNLOAD commands require the Global.admin role.The following example would create, load, and start the cache ZipLookup in the SharedCaches namespace.CREATE NAMESPACE SharedCaches;\nUSE SharedCaches;\nCREATE TYPE ZipData(\n zip String KEY,\n latVal double,\n longVal double\n);\nCREATE CACHE ZipLookup using FileReader (\n directory: 'shared/caches',\n wildcard: 'zip_data.txt',\n positionByEOF:false\n)\nPARSE USING DSVParser () QUERY (keytomap:'zip') OF ZipData;\nLOAD CACHE ZipLookup;Since no options are specified for the LOAD command, this would deploy the cache in the default deployment group. Optionally, you may specify deployment options as for the DEPLOY command. For example, LOAD CACHE ZipLookup ON ALL IN DG1 would load the cache on all servers in deployment group DG1. For more information see Managing deployment groups.Standalone components appear on the Monitor page in the UI and in MONITOR output in the console as <namespace>.<component name>_app. 
For example, the cache above would be SharedCaches.ZipLookup_app. To stop and undeploy the cache, you would enter:\nUNLOAD CACHE SharedCaches.ZipLookup;\nNote: Standalone sources, caches, and WActionStores are not recoverable (see Recovering applications).\nLast modified: 2019-08-06\n", "metadata": {"source": "https://www.striim.com/docs/en/loading-standalone-sources,-caches,-and-wactionstores.html", "title": "Loading standalone sources, caches, and WActionStores", "language": "en"}} {"page_content": "\n\nSending alerts about servers and applications\nStriim has three kinds of alerts:\nSmart alerts warn you about problems and potential problems for all servers, Forwarding Agents, applications, sources, and targets. See Managing Smart Alerts.\nCustom alerts warn you about a specific condition for a particular server, Forwarding Agent, application, or component. See Creating and managing custom alerts.\nApplication-level alerts are triggered by TQL and may be used for any purpose. See Sending alerts from applications.\nLast modified: 2023-05-18\n", "metadata": {"source": "https://www.striim.com/docs/en/sending-alerts-about-servers-and-applications.html", "title": "Sending alerts about servers and applications", "language": "en"}} {"page_content": "\n\nManaging Smart Alerts\nThe following alerts are enabled by default and are sent for every server, Forwarding Agent, application, source, and target. By default these alerts are visible only to administrators (members of the Global.admin group) in the alerts drop-down in the top right corner of the Striim web UI and in the Message Log at the bottom of the web UI.
You may modify them to be sent by email or to Slack or Microsoft Teams.\nAlert name | Alert condition (default) | Notes\nServer_HighCpuUsage | the server average per core CPU time used by its Java process is over 90% | By default, an alert will be sent every four hours until the condition is resolved.\nServer_HighMemoryUsage | the server's JVM free heap size is below 10% of the maximum heap size (Xmx) | By default, an alert will be sent every four hours until the condition is resolved.\nServer_NodeUnavailable | the server is no longer connected to the cluster |\nAgent_HighCpuUsage | the Forwarding Agent average per core CPU time used by its Java process is over 90% | By default, an alert will be sent every four hours until the condition is resolved.\nAgent_HighMemoryUsage | the Forwarding Agent's JVM free heap size is below 10% of the maximum heap size (Xmx) | By default, an alert will be sent every four hours until the condition is resolved.\nAgent_NodeUnavailable | the Forwarding Agent is no longer connected to the cluster |\nApplication_AutoResumed | the application resumed automatically (see Automatically restarting an application) |\nApplication_Backpressured | one or more streams in the application have been backpressured for over ten minutes (see Understanding and managing backpressure) | By default, an alert will be sent every four hours until the condition is resolved.\nApplication_CheckpointNotProgressing | it has been over 30 minutes since the recovery checkpoint advanced and during that time at least one new event was received from a source (see Recovering applications) | By default, an alert will be sent every four hours until the condition is resolved.\nApplication_Halted | the application has halted (see Application states) |\nApplication_Rebalanced | not applicable to Striim Cloud |\nApplication_RebalanceFailed | not applicable to Striim Cloud |\nApplication_Terminated | the application has terminated (see Application states) |\nSource_Idle | it has been over 10 minutes since the source read an event | By default, an alert will be sent every four hours until the condition is resolved.\nTarget_HighLee | one or more events received by the target had an end-to-end lag of over ten minutes (see Monitoring end-to-end lag (LEE)) | By default, an alert will be sent every four hours until the condition is resolved.\nTarget_Idle | it has been over 10 minutes since the target wrote an event | By default, an alert will be sent every four hours until the condition is resolved.\nModifying a Smart Alert\nThe properties (which vary depending on the alert) are:\nalertMessage: defines the text of the alert. This can be edited in the console but not in the web UI. The following replacement variables can be used in alert messages; actual values will be substituted for the variables when an alert is issued. The values are taken from the alert definition and the monitor event being evaluated for the alert.\nadapterName: Adapter name in the alert definition (e.g. FileReader)\naddress: Address to which the alert will be sent (e.g. 
somebody@example.com)\nalertName: Name of the alert (e.g. Application_CheckpointNotProgressing)\nalertValue: Metrics value defined in the alert condition (e.g. 300)\ncomparator: Alert condition comparator (GT, LT, EQ, LIKE)\nentityName: Actual component name in the mon event (e.g. admin.PosApp)\nentityType: Component type (e.g. APPLICATION)\nmedium: Alerting medium (WEB, EMAIL, SLACK, TEAMS)\nmetricName: Metrics name in the alert condition (e.g. LAST_CHECKPOINT_AGE)\nmetricUnit: Unit of metrics (e.g. seconds)\nmetricValue: Actual metrics value in the mon event (e.g. 543)\nobjectName: Component name pattern in the alert definition (e.g. .*\\.APPLICATION\\..*)\nalertType: EMAIL, SLACK, TEAMS, or WEB (default); except for WEB, you must also specify the toAddress. Before modifying an alert to send via Slack, follow the setup instructions in Sending alerts about servers and applications and Configure Slack to receive alerts from Striim. Before modifying an alert to send via Teams, follow the setup instructions in Sending alerts about servers and applications and Configure Teams to receive alerts from Striim.\nalertValue:\nfor integer values: the time in seconds before the alert is triggered; for example, for Source_Idle, the number of seconds with no events that need to pass before an alert is sent\nfor string values: the string to search for in the error message; for example, for Application_Terminated, Application terminated\ncomparator:\nfor integer values: EQ (equals), GT (greater than), LT (less than)\nfor string values: EQ (equals), LIKE (matches if the specified string occurs anywhere in the value)\nintervalSec: the number of seconds between alerts (the snooze interval)\nisEnabled: true (default) or false\ntoAddress: for email, the recipient's address; for Slack or Teams, the channel\nSome of these properties are displayed and editable in the web UI. To see all of an alert's properties, use the DESCRIBE command. For example:DESCRIBE Application_Terminated;\nProcessing - describe Application_Terminated\n\nSysAlertRule Application_Terminated \n on .*\\.APPLICATION\\..*: \n for LOG_ERROR \n comparator LIKE \n with value Application terminated \n alert type WEB \n snooze 0 SECOND \n system-defined and enabled \n message: Application {{entityName}}: {{metricValue}}.\n-> SUCCESS\nThe property names in the DESCRIBE output correspond to the following keywords in ALTER SMARTALERT commands:\nDESCRIBE output | keyword for ALTER SMARTALERT\non | can't be modified\nfor | can't be modified\ncomparator | can't be modified; the comparators are, for integer values, EQ (equals), GT (greater than), LT (less than), and for string values, EQ (equals) or LIKE (matches if the specified string occurs anywhere in the value)\nwith value | alertValue\nalert type | alertType\nsending to | toAddress\nsnooze | intervalSec\nmessage | alertMessage\nenabled | isEnabled\nExamples of modifying Smart Alert properties\nTo modify a Smart Alert, go to the Alert Manager and select the alert you want to modify. Which properties are available varies depending on the alert selected. To change the alert type for Application_Terminated from In App to Email, change the Alert Type and specify the email address of the person to receive the alert. To change the alert interval (snooze) for Source_Idle to an hour, set Snooze After Alert.
This means alerts on this condition will be sent no more often than once an hour.To disable Source_Idle alerts, set Enable alert off:In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-02\n", "metadata": {"source": "https://www.striim.com/docs/en/managing-smart-alerts.html", "title": "Managing Smart Alerts", "language": "en"}} {"page_content": "\n\nCreating and managing custom alertsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideSending alerts about servers and applicationsCreating and managing custom alertsPrevNextCreating and managing custom alertsSee also Managing Smart Alerts and Sending alerts from applications.Sending alerts from applicationsAdministrators (members of the Global.admin group) can access the Alert Manager page, where you can create alerts about various events and conditions for a given server, Forwarding Agent, application, or component. Alerts may be sent as email, to a Slack channel, to a Microsoft Teams channel, or displayed in the alerts drop-down in the top right corner of the Striim web UI. Alerts are also displayed in the Message Log at the bottom of the web UI.You must configure Striim before you can send alerts via Slack or Teams . See Configuring alerts for instructions.Available alert conditionsServer and Forwarding Agent alert conditionsCPU rateCPU rate is greater than 90%Node memory: value of Java.lang.Runtime.freeMemory()Node memory is less than 1GBApplication alert conditions (see Application states)App deployedApp haltedApp invalidApp quiescedApp stoppedApp terminatedComponent alert conditionsCachesCache sizeEvent rateEvent rate is zeroLocal hitsLocal hits rateLocal missesRemote hitsRemote missesCQsInput rateInput rate is zeroOutput rateOutput rate is zeroTotal events input to the CQTotal events output from the CQSourceInput rate: events / sec.Input rate is zeroSource input: total number of eventsStreamEvent rateEvent rate is zeroTotal number of eventsTargetEvent rate (same as Target rate)Event rate is zeroTarget acked: events acknowledged (not available for all targets)Target output: events sentTarget rate (same as Input rate)Target rate is zeroWActionStoreEvent rateEvent rate is zeroInput rateTotal number of WActionsWActions created rateWindowInput rateInput rate is zeroRange tailTotal number of eventsWindow sizeWindow size is zeroCreate a new alertClick Add New Alert Subscription.From the Create Alert On drop-down list, select the server, Forwarding Agent, application, or component on which you want to create the alert. Enter the beginning of the object name to filter the list. To show servers, enter s; to show Forwarding Agents, enter a.In Alert Name, enter a descriptive name for the alert.Select the desired Alert Condition.If displayed, select the desired Alert Comparator. 
Select the LIKE comparator only if the alert value is a string.If displayed, specify the Alert Value.Note: The maximum CPU Rate is 100% times the number of cores. For example, a system with four cores has a maximum CPU rate of 400%. To alert on 90% of this maximum, you would specify 360.Optionally, enter a value in Snooze After Alert. Some alerts will continue sending messages until the issue is resolved. To limit the number of messages, set a higher interval. As an example, with a value of 10 minutes, Striim will send no more than six messages per hour.Select whether to receive the alert by email, in a Slack channel, a Microsoft Teams channel, or in the Striim web UI (In App).Optionally, click Enable Alert to make the alert active immediately after saving.Click Save.Modify an existing custom alertTo modify an existing custom alert, go to the Alert Manager, select the tab for the type of alert (Nodes for server or Forwarding Agent, Apps, or Components, select the alert you want to modify, make your changes, and click Save.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-18\n", "metadata": {"source": "https://www.striim.com/docs/en/creating-and-managing-custom-alerts.html", "title": "Creating and managing custom alerts", "language": "en"}} {"page_content": "\n\nConfiguring alertsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideSending alerts about servers and applicationsConfiguring alertsPrevNextConfiguring alertsStriim can send alerts via email, Slack, or Microsoft Teams. All three must be set up before they can receive alerts.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-02-10\n", "metadata": {"source": "https://www.striim.com/docs/en/configuring-alerts.html", "title": "Configuring alerts", "language": "en"}} {"page_content": "\n\nConfigure Striim to send email alertsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideSending alerts about servers and applicationsConfiguring alertsConfigure Striim to send email alertsPrevNextConfigure Striim to send email alertsGo to the Alert Manager (visible only to administrators) and click Configure Email.Enter the URL, credentials, and From: address for the SMTP server. Select whether to start TLS. Optionally, enter a valid email address to receive a test alert and click Send Test.After the connection is validated, click Save at the bottom of the Alert Manager page.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-10\n", "metadata": {"source": "https://www.striim.com/docs/en/configure-striim-to-send-email-alerts.html", "title": "Configure Striim to send email alerts", "language": "en"}} {"page_content": "\n\nConfigure Slack to receive alerts from StriimSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideSending alerts about servers and applicationsConfiguring alertsConfigure Slack to receive alerts from StriimPrevNextConfigure Slack to receive alerts from StriimBefore Slack can receive alerts from Striim, you must create and install a Slack app, as follows:Log in to api.slack.com and select Create New App > From scratch.For App Name, enter SlackAlertAdapter.From Pick a workspace to develop your app in, select a workspace, then click Create App.In the Features section on the left navigation bar, click OAuth & PermissionsIn the Scopes section, configure the scopes for the bot token and user token as follows:Bot: chat:write, chat:write.publicUser: chat:writeIn the Settings section, click Install App.Click Allow.Save the OAuth token securely for later use.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-02-10\n", "metadata": {"source": "https://www.striim.com/docs/en/configure-slack-to-receive-alerts-from-striim.html", "title": "Configure Slack to receive alerts from Striim", "language": "en"}} {"page_content": "\n\nConfigure Striim to send Slack alertsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideSending alerts about servers and applicationsConfiguring alertsConfigure Striim to send Slack alertsPrevNextConfigure Striim to send Slack alertsIn order to send the alerts defined from the Alert Manager page as Slack messages, you must configure a Slack OAuth token and specify a Slack channel for Striim to use. When the Slack OAuth token is not configured, as is the case immediately after installing Striim, a Configure Slack button is visible at the top of the Alert Manager page.To generate a Slack OAuth token, follow the instructions in Configure Slack to receive alerts from Striim. Copy the OAuth Token value.In Striim, in the Alert Manager, click Configure Slack to open the configuration dialog box. The box contains only one field, labeled OAuth Token. Paste the OAuth token into the field and click Save.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-10\n", "metadata": {"source": "https://www.striim.com/docs/en/configure-striim-to-send-slack-alerts.html", "title": "Configure Striim to send Slack alerts", "language": "en"}} {"page_content": "\n\nConfigure Teams to receive alerts from StriimSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideSending alerts about servers and applicationsConfiguring alertsConfigure Teams to receive alerts from StriimPrevNextConfigure Teams to receive alerts from StriimBefore Microsoft Teams can receive alerts from Striim, you must create and install a Teams app.Log in to the Azure Portal.From Azure Active Directory > App Registrations, select + New Registration.Type the required information to set up the new app and click Register. 
Note the Client and Tenant IDs for future use.\nFrom Certificates and Secrets, select + New Client Secret.\nType a description and duration and click Add.\nIn Delegated Permissions, add the offline_access, ChannelMessage.send, and ChannelMessage.Read.All scopes.\nOpen the Authorization URL and obtain the authorization code.\nUse the authorization code to obtain the Refresh token.\nNavigate to the channel where Striim will post alerts and select Get link to channel.\nNote the channel URL for use in Striim configuration.\nLast modified: 2023-03-28\n", "metadata": {"source": "https://www.striim.com/docs/en/configure-teams-to-receive-alerts-from-striim.html", "title": "Configure Teams to receive alerts from Striim", "language": "en"}} {"page_content": "\n\nConfigure Striim to send Teams alerts\nIn order to send the alerts defined on the Alert Manager page to Microsoft Teams, you must configure a Teams OAuth token and specify a channel URL for Striim to use. When the Teams OAuth token is not configured, as is the case immediately after installing Striim, a Configure Teams link is visible at the top of the Alert Manager page.\nTo generate a Teams OAuth token, follow the instructions in Configure Teams to receive alerts from Striim.\nIn Striim, in the Alert Manager, click Configure Teams and provide the Client ID, Client Secret, and Refresh Token from the Teams setup.\nTest the configuration by typing the channel URL in Validate Connection and clicking Send Test.\nIf the connection is valid, click Save.
Last modified: 2023-02-10\n", "metadata": {"source": "https://www.striim.com/docs/en/configure-striim-to-send-teams-alerts.html", "title": "Configure Striim to send Teams alerts", "language": "en"}} {"page_content": "\n\nUnderstanding and managing backpressure\nIn Striim applications, the output of an \"upstream\" component is connected to the input of a \"downstream\" component by a stream. The two components may process data at different speeds: for example, a Database Reader source might be able to read data from an on-premise database faster than Database Writer can write it to a target database in the cloud. The streams that connect these components can hold only a certain number of events. When a stream reaches its limit, it cannot accept further events. This condition is known as backpressure. This inability to accept events from upstream components increases the risk of the upstream sources developing backpressure in turn.\nIdentifying backpressure\nWhen a stream in an application is backpressured, Striim will send an alert (see Managing Smart Alerts). Backpressured streams are rendered in red in the Striim web UI. You can use the Latency report to identify the downstream component causing the backpressure. See Using the REPORT LATENCY command for details. In the Tungsten console, use the mon <streamname> command to check the value of the Stream Full parameter. A stream in a backpressure condition has the value of Stream Full set to true.\nReducing backpressure\nBackpressure reduction strategies depend on the particular structure of the flow in place and the nature of the sources, targets, other components, and streams connecting them. The event processing speed of the target can be increased by assigning more CPU resources. Alternately, the stream coming from the source can be divided among several identical targets working in parallel. See Creating multiple writer instances for details. For some targets, adjusting the batch, flush, or upload policies can help manage backpressure. See File Writer for details. Query code inefficiencies in a CQ can slow processing, leading to backpressure. Examine the code of backpressured CQ streams to find optimizations, such as dropping fields that are not used by downstream operations. Changing the size of a window can sometimes reduce backpressure.
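As a quick illustration of the console check described above under Identifying backpressure (the namespace and stream name here are hypothetical), you might run:\nmon mynamespace.PosDataStream;\nand look for the Stream Full entry in the output; a value of true indicates that the stream is backpressured.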
Last modified: 2023-05-22\n", "metadata": {"source": "https://www.striim.com/docs/en/understanding-and-managing-backpressure.html", "title": "Understanding and managing backpressure", "language": "en"}} {"page_content": "\n\nManaging the application lifecycleSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideManaging the application lifecyclePrevNextManaging the application lifecycleThe lifecycle of a Striim application will typically have three phases, each with its own Striim Platform cluster(s) or Striim Cloud service(s):A developer or team of developers creates an application.Quality assurance tests the application and returns ownership to the developers for debugging.When the application is deemed sufficiently robust and error free, it is transferred to the production team for deployment.When an application is passed from one phase to the next, the properties in sources, caches, and targets must be updated to reflect the new environment. You may accomplish this without modifying the applications by Using vaults.Development and QA may share a cluster or service. Production should always have its own.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-13\n", "metadata": {"source": "https://www.striim.com/docs/en/managing-the-application-lifecycle.html", "title": "Managing the application lifecycle", "language": "en"}} {"page_content": "\n\nHandling planned DDL changesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideHandling planned DDL changesPrevNextHandling planned DDL changesIf your application supports Handling schema evolution, that may be a preferable approach to handling DDL changes.Otherwise, if recovery was enabled for the application when it was started, follow these steps.Stop the source database (or use some other method to ensure that nothing is written to the tables read by Striim).QUIESCE the Striim application and wait until its status is Quiesced.Perform the DDL changes in the source database.If required by those DDL changes, ALTER and RECOMPILE the Striim application.Start the source database.Start the Striim application.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2021-09-08\n", "metadata": {"source": "https://www.striim.com/docs/en/handling-planned-ddl-changes.html", "title": "Handling planned DDL changes", "language": "en"}} {"page_content": "\n\nReplaying events using Kafka streamsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideReplaying events using Kafka streamsPrevNextReplaying events using Kafka streamsIf a Kafka-persisted source runs in a separate application from the associated logic (windows, CQs, caches, WActionStores, targets), the latter application can safely be brought down for maintenance or updates. When it is restarted, the application will automatically replay the Kafka stream from the point it left off, with zero loss of data and no duplicates.See Introducing Kafka streams for more information.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/replaying-events-using-kafka-streams.html", "title": "Replaying events using Kafka streams", "language": "en"}} {"page_content": "\n\nRecovering applicationsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Administrator's GuideRecovering applicationsPrevNextRecovering applicationsSubject to the following limitations, Striim applications can be recovered after planned downtime or most cluster failures with no loss of data:Recovery must have been enabled when the application was created. See CREATE APPLICATION ... END APPLICATION or Creating and modifying apps using the Flow Designer.CREATE APPLICATION ... END APPLICATIONNoteEnabling recovery will have a modest impact on memory and disk requirements and event processing rates, since additional information required by the recovery process is added to each event.All sources and targets to be recovered, as well as any CQs, windows, and other components connecting them, must be in the same application. 
Alternatively, they may be divided among multiple applications provided the streams connecting those applications are persisted to Kafka (see Persisting a stream to Kafka and Using the Striim Forwarding Agent).\nData from a CDC reader with a Tables property that maps a source table to multiple target tables (for example, Tables:'DB1.SOURCE1,DB2.TARGET1;DB1.SOURCE1,DB2.TARGET2') cannot be recovered.\nData from time-based windows that use system time rather than the ON <timestamp field name> option cannot be recovered.\nDatabaseReader, HTTPReader, MongoDB Reader when using transactions, MultiFileReader, TCPReader, and UDPReader are not recoverable. You may work around this limitation by putting these readers in a separate application and making their output a Kafka stream (see Introducing Kafka streams), then reading from that stream in another application.\nStandalone sources and WActionStores (see Loading standalone sources, caches, and WActionStores) are not recoverable unless persisted to Kafka (see Persisting a stream to Kafka).\nData from sources using an HP NonStop reader can be recovered provided that the AuditTrails property is set to its default value, merged.\nCaches are reloaded from their sources. If the data in the source has changed in the meantime, the application's output may be different than it would have been.\nExcept when using Parallel Threads, each Kafka topic may be written to by only one instance of Kafka Writer.\nEach Kinesis stream may be written to by only one instance of Kinesis Writer.\nRecovery will fail when a KinesisWriter target has 250 or more shards. The error will include \"Timeout while waiting for a remote call on member ...\"\nIn some situations, after recovery there may be duplicate events. Recovered flows that include WActionStores should have no duplicate events. Recovered flows that do not include WActionStores may have some duplicate events from around the time of failure (\"at least once processing,\" also called A1P), except when a target guarantees no duplicate events (\"exactly once processing,\" also called E1P). See Writers overview for details of A1P and E1P support.\nADLSWriterGen1, ADLSWriterGen2, AzureBlobWriter, FileWriter, HDFSWriter, and S3Writer restart rollover from the beginning and depending on rollover settings (see Setting output names and rollover / upload policies) may overwrite existing files. For example, if prior to planned downtime there were file00, file01, and the current file was file02, after recovery writing would restart from file00, and eventually overwrite all three existing files. Thus you may wish to back up or move the existing files before initiating recovery. After recovery, the target files may include duplicate events; the number of possible duplicates is limited to the Rollover Policy eventcount value.\nWhen KafkaWriter is in sync mode (see Setting KafkaWriter's mode property: sync versus async), if the Kafka topic's retention period is shorter than the time that has passed since the cluster failure, after recovery there may be some duplicate events, and striim.server.log will contain a warning, \"Start offset of the topic is different from local checkpoint (Possible reason - Retention period of the messages expired or Messages were deleted manually).
Updating the start offset ...\"After recovery, Cosmos DB Writer, MongoDB Cosmos DB Writer, and RedshiftWriter targets may include some duplicate events.When the input stream for a writer is the output stream from Salesforce Reader, there may be duplicate events after recovery.To enable Striim applications to recover from system failures, you must do two things:1. Enable persistence of all of the application's WActionStores.2. Specify the RECOVERY option in the CREATE APPLICATION statement. The syntax is:CREATE APPLICATION <application name> RECOVERY <##> SECOND INTERVAL;NoteWith some targets, enabling recovery for an application disables parallel threads. See Creating multiple writer instances for details.For example:CREATE APPLICATION PosApp RECOVERY 10 SECOND INTERVAL;With this setting, Striim will record a recovery checkpoint every ten seconds, provided it has completed recording the previous checkpoint. When recording a checkpoint takes more than ten seconds, Striim will start recording the next checkpoint immediately.When the PosApp application is restarted after a system failure, it will resume exactly where it left off.While recovery is in progress, the application status will be RECOVERING SOURCES. The shorter the recovery interval, the less time it will take for Striim to recover from a failure. Longer recovery intervals require fewer disk writes during normal operation.To see detailed recovery status, enter MON <namespace>.<application name> <node> in the console (see Using the MON command). If the status includes \"late\" checkpoints, we recommend you Contact Striim support, as this may indicate a bug or other problem (though it will not interfere with recovery).Using the MON commandTo see the checkpoint history, enter SHOW <namespace>.<application name> CHECKPOINT HISTORY in the console.Some checkpoint information is included in DESCRIBE <application> output.Some checkpoint information is included in the system health object (see Monitoring using the system health REST API).[{\n\t\"Application Name\": \"admin.ps1\",\n\t\"Source Name\": \"ADMIN:SOURCE:S:2\",\n\t\"Source Restart Position\": {\n\t\t\"Seek Position\": \"0\",\n\t\t\"Creation Time\": \"2019\\/05\\/14-18:03:04\",\n\t\t\"Offset Begin\": \"456,099,392\",\n\t\t\"Offset End\": \"456,099,707\",\n\t\t\"Record Length\": \"0\",\n\t\t\"Source Name\": \"lg000000003.gz\",\n\t\t\"Actual name\": \"\"\n\t},\n\t\"Source Current Position\": {\n\t\t\"Seek Position\": \"0\",\n\t\t\"Creation Time\": \"2019\\/05\\/14-18:03:04\",\n\t\t\"Offset Begin\": \"456,099,392\",\n\t\t\"Offset End\": \"456,099,707\",\n\t\t\"Record Length\": \"0\",\n\t\t\"Source Name\": \"lg000000003.gz\",\n\t\t\"Actual name\": \"\"\n\t}\n}, {\n\t\"Application Name\": \"admin.ps1\",\n\t\"Target Name\": \"ADMIN:TARGET:T1:1\",\n\t\"Target Current Position\": {\n\t\t\"Seek Position\": \"0\",\n\t\t\"Creation Time\": \"2019\\/05\\/14-18:03:04\",\n\t\t\"Offset Begin\": \"456,099,392\",\n\t\t\"Offset End\": \"456,099,707\",\n\t\t\"Record Length\": \"0\",\n\t\t\"Source Name\": \"lg000000003.gz\",\n\t\t\"Actual name\": \"\"\n\t}\n}]In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-01-25\n", "metadata": {"source": "https://www.striim.com/docs/en/recovering-applications.html", "title": "Recovering applications", "language": "en"}} {"page_content": "\n\nAutomatically restarting an application\nIf known transient conditions such as network outages cause an application to terminate, you may configure it to restart automatically after a set period of time. The syntax is:\nCREATE APPLICATION <name> AUTORESUME [ MAXRETRIES <integer> ] [ RETRYINTERVAL <interval in seconds> ];\nThe default values are two retries with a 60-second interval before each. Thus CREATE APPLICATION MyApp AUTORESUME; means that if MyApp terminates, Striim will wait one minute and restart it. If MyApp terminates a second time, Striim will again wait one minute and restart it. If MyApp terminates a third time, Striim will leave it in the TERMINATED state.\nCaution: Be sure to set the RETRYINTERVAL high enough that the transient condition should have resolved itself and, if recovery is enabled, to also allow Striim enough time to recover the application (see Recovering applications).\nTo disable auto-resume, stop (or quiesce) and undeploy the application, then enter ALTER APPLICATION <name> DISABLE AUTORESUME;. To enable auto-resume again, undeploy the app and enter ALTER APPLICATION <name> ENABLE AUTORESUME, optionally including MAXRETRIES or RETRYINTERVAL. You may also configure auto-resume settings in Flow Designer's App Settings.\nTip: If you enable auto-resume for an application, consider configuring a terminate alert for it as well (see Sending alerts about servers and applications).
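For example, a sketch using the syntax above (the application name OrdersCDC is purely illustrative):\nCREATE APPLICATION OrdersCDC AUTORESUME MAXRETRIES 5 RETRYINTERVAL 300;\nWith these settings, Striim would wait 300 seconds (five minutes) and restart the application after each of its first five terminations; if it terminated a sixth time it would remain in the TERMINATED state. Assuming the optional keywords can also be supplied on the ALTER command as described above, something like ALTER APPLICATION OrdersCDC ENABLE AUTORESUME RETRYINTERVAL 600; entered after stopping and undeploying the app would lengthen the retry interval.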
Last modified: 2022-12-14\n", "metadata": {"source": "https://www.striim.com/docs/en/automatically-restarting-an-application.html", "title": "Automatically restarting an application", "language": "en"}} {"page_content": "\n\nMonitoring GuideSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Monitoring GuidePrevNextMonitoring GuideYou may monitor the Striim cluster, its applications, and their components using the Monitoring page in the web ui, the console,\u00a0or the system health REST API (see Monitoring using the system health REST API).In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-24\n", "metadata": {"source": "https://www.striim.com/docs/en/monitoring-guide.html", "title": "Monitoring Guide", "language": "en"}} {"page_content": "\n\nMonitoring using the web UISkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Monitoring GuideMonitoring using the web UIPrevNextMonitoring using the web UIThe Monitor page in the web UI displays summary information for the cluster and each of its applications, servers, and agents.Above, you can see that two applications are running.The display is showing the App Rate (events per second). Click Events Processed, App CPU%, Server Memory, or Server CPU% to graph those statistics instead.By default, the Monitor page displays the most recent data. To look at older data, click Time Range > Specific Time and select a date and start time. The S in the name in the Node Overview list indicates that the node is a regular server (S) rather than a Forwarding Agent (A).To monitor an individual app, click its name in the Apps Overview list or select\u00a0Monitor App from the app's \u22ee menu on the Apps page.Interval Report provides access to the\u00a0REPORT START and\u00a0REPORT STOP commands through the UI (see\u00a0Using the REPORT START / STOP command).\u00a0For details about\u00a0Latency Report, see\u00a0\u00a0Using the REPORT LATENCY command.Click one of the buttons at left to see statistics for components of a particular type:When you see a More Details button, you can click it for more detailed information. For example, for KafkaWriter, you will see something like this:Click\u00a0Event Log to see application-related errors and status changes for all nodes in the cluster. The following status changes reflect PosApp being loaded, deployed, started, stopped, and undeployed. 
No status change is logged when you drop an application.By default, the event log shows status changes and errors for the past five minutes from all nodes (there may be a delay before they appear). Use the filter drop-downs to select a narrower set of data. If you choose a time range, the event log will not be updated with new events until you clear the filter.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2018-04-30\n", "metadata": {"source": "https://www.striim.com/docs/en/monitoring-using-the-web-ui.html", "title": "Monitoring using the web UI", "language": "en"}} {"page_content": "\n\nUsing monitor reportsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Monitoring GuideUsing monitor reportsPrevNextUsing monitor reportsA monitor report provides summary information about a single component over a specified time range using a specified rollup period. For example, you could get a report showing how many events a source processed every 15 minutes for the past 24 hours.To view a monitor report:Go to the Monitor page, click the application name, and from the Component drop-down select the component to report on. (You may type a few characters of the component name to filter the drop-down.)Set a start time, end time, and rollup interval for the report (for example, hourly for the past week, or every 15 minutes for the past day), then click Start.The following example shows source performance from midnight until 11:00 am. You can see that performance was steady.Hover the cursor over a point in the chart to get detailed data. Here you can see that from 8:30 to 8:45 the source processed 8544 events.Here you can see that from 6:15 to 6:30 the source processed an average of 9.5 events per second. (This example was generated with a very slow source. In real-world applications, Striim can handle millions of events per second.)This is the same report in the table view. Click Download to get this information in a comma-delimited file.This is the same report in the JSON view.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-07-13\n", "metadata": {"source": "https://www.striim.com/docs/en/using-monitor-reports.html", "title": "Using monitor reports", "language": "en"}} {"page_content": "\n\nMonitoring application progress in Flow DesignerSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Monitoring GuideMonitoring application progress in Flow DesignerPrevNextMonitoring application progress in Flow DesignerWhen viewing an application in Flow Designer, click the clock button to view its progress.Here, you can see that the output (green) was at first lagging behind the input (blue), but eventually caught up.When possible, Striim will query the total number of events to be processed and display progress bars.When all data has been read, the Total Input and Total Output will be equal.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2018-10-06\n", "metadata": {"source": "https://www.striim.com/docs/en/monitoring-application-progress-in-flow-designer.html", "title": "Monitoring application progress in Flow Designer", "language": "en"}} {"page_content": "\n\nUnderstanding reported CPU usageSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Monitoring GuideUnderstanding reported CPU usagePrevNextUnderstanding reported CPU usageThe reported CPU percentage (CPU%) for an application is not always the sum of the CPU% of its components, and the CPU% for a node is not always the sum of the CPU% of the application it is running. That is because:The CPU% for an individual component is a measure of the CPU% for the thread on which it is running. The range is 0-100% times the number of cores, so, for example, a four-core system has a theoretical upper limit of 400%, and a Striim cluster with three four-core servers has a theoretical upper limit of 1200%.The CPU% for an application is aggregated from the CPU% of its components. However, some components share a thread, in which case the same CPU% will be displayed for each, but will be aggregated only once. For example, a CQ and the window it selects from run in the same thread, so if they both display 4.2%, only 4.2% will be added to the application CPU%, not 8.4%.The CPU% for a server is not an aggregated value: it is the operating system's measure of the CPU usage of the JVM in which it is running. 
The server performs other tasks besides running component threads, so its CPU% will never equal the sum of the CPU% of the applications it is running.\nLast modified: 2018-09-04\n", "metadata": {"source": "https://www.striim.com/docs/en/understanding-reported-cpu-usage.html", "title": "Understanding reported CPU usage", "language": "en"}} {"page_content": "\n\nUnderstanding Read Lag values\nRead Lag is reported only for targets whose input streams are the output stream of a CDC reader. It is calculated by subtracting the timestamp in the CDC event from the current Striim system time when the event is written to the target.\nIf both systems use Network Time Protocol (NTP) to set their system time from internet time servers, the system times should be synchronized within a few milliseconds of each other. In that case, Read Lag should accurately indicate how long after the database generated an event it was processed by Striim. Read Lag is reported in milliseconds, so a value of 100 would indicate a tenth of a second lag. If the Read Lag is large but not increasing, that indicates latency between the two systems. If it is increasing, that indicates that the Striim server is not processing events fast enough to keep up with the database activity.\nIf the system times are not synchronized, Read Lag could be a very large positive or negative number due to the difference in system times. In this case, you cannot use Read Lag to estimate latency between the systems, but you can compare values over time to see if the lag is increasing.
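As a worked example (the timestamps are hypothetical): if a CDC event carries a database timestamp of 12:00:00.000 and Striim writes it to the target at 12:00:00.250 with NTP-synchronized clocks, the reported Read Lag is 250, meaning a quarter of a second of end-to-end delay. If instead the Striim server's clock were running 60 seconds ahead of the database's, the same event would report a Read Lag of about 60250, a number dominated by clock skew rather than processing latency.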
Last modified: 2018-02-19\n", "metadata": {"source": "https://www.striim.com/docs/en/understanding-read-lag-values.html", "title": "Understanding Read Lag values", "language": "en"}} {"page_content": "\n\nUsing the MON commandSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Monitoring GuideUsing the MON commandPrevNextUsing the MON commandThe monitor command's syntax is:mon[itor]\n [ <node>\n | <namespace>.<application name> [ <node> ] [memorysize]\n | <namespace>.<component name> [ <node> ] ] [memorysize]\n[-follow [ seconds ] ]\n[-start '<time>']\n[-end '<time>']Node names begin with S or A to indicate server or Forwarding Agent, followed by the node's IP address with hyphens instead of periods. For example, a server with the IP address 192.168.1.12 would be named S192-168-1-12. Use the command LIST SERVERS; to return a list of nodes in the current cluster.If you include the -follow option, the display will be refreshed every five seconds. You may specify a different refresh period in seconds, such as -follow 30.Use the -start\u00a0and/or\u00a0-end options when specifying a node, application, or component to return the most recent data within the specified range. Time may be specified as\u00a0HH:mm (current day),\u00a0yyyy/MM/dd-HH:mm,\u00a0yyyy/MM/dd-HH:mm:ss, or\u00a0yyyy/MM/dd-HH:mm:ss:SSS.mon;Returns a summary of the cluster's applications, nodes, and Elasticsearch usage. 
For example:\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 Striim Applications \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Name \u2502 Status \u2502 Rate \u2502 SourceRate \u2502 CPU% \u2502 Nodes \u2502 Activity \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Samples.MultiLogApp \u2502 RUNNING \u2502 333,82 \u2502 22,836 \u2502 95% \u2502 1 \u2502 2021-04-28 16:42:44 \u2502\n\u2502 \u2502 \u2502 8 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Samples.PosApp \u2502 RUNNING \u2502 90,754 \u2502 10,088 \u2502 20% \u2502 1 \u2502 2021-04-28 16:42:44 
\u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 Striim Cluster Nodes \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Name \u2502 Version \u2502 Free Mem \u2502 CPU% \u2502 Uptime \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 S192_168_7_91 \u2502 (982647b992) \u2502 392.85Mb \u2502 318% \u2502 06H:04M:51S \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 
ElasticSearch \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Elasticsearch Receive Throughput \u2502 0 \u2502\n\u2502 Elasticsearch Transmit Throughput \u2502 0 \u2502\n\u2502 Elasticsearch Cluster Storage Free \u2502 488,682,848,256 \u2502\n\u2502 Elasticsearch Cluster Storage Total \u2502 499,963,174,912 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nmon <node>;Returns details for every component in every namespace in the specified node and a summary of its Elasticsearch usage and threads. 
For example:\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 NODE S192_168_7_91 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Name \u2502 Version \u2502 Free Mem \u2502 CPU% \u2502 Uptime \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 S192_168_7_91 \u2502 (982647b992) \u2502 2.62Gb \u2502 23% \u2502 06H:15M:51S \u2502\n\u2502 Global.MonitoringProcessApp \u2502 \u2502 \u2502 0.2% \u2502 \u2502\n\u2502 Global.MonitoringStream1 \u2502 \u2502 \u2502 0.2% \u2502 \u2502\n\u2502 Global.MonitoringCQ \u2502 \u2502 \u2502 0.2% \u2502 \u2502\n\u2502 
Samples.PosApp \u2502 \u2502 \u2502 0.024% \u2502 \u2502\n\u2502 Samples.CsvToPosData \u2502 \u2502 \u2502 8% \u2502 \u2502\n\u2502 Samples.GenerateMerchantTxRateOnly \u2502 \u2502 \u2502 0% \u2502 \u2502\n\u2502 Samples.GenerateMerchantTxRateWithStatus \u2502 \u2502 \u2502 0.275% \u2502 \u2502\n\u2502 Samples.GenerateWactionContext \u2502 \u2502 \u2502 0% \u2502 \u2502\n...\n\u2502 Samples.MultiLogApp \u2502 \u2502 \u2502 0.306% \u2502 \u2502\n\u2502 Samples.MonitorLogs \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Samples.ParseAccessLog \u2502 \u2502 \u2502 27% \u2502 \u2502\n\u2502 Samples.ParseLog4J \u2502 \u2502 \u2502 4% \u2502 \u2502\n\u2502 Samples.AccessLogEntry \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Samples.Log4JEntry \u2502 \u2502 \u2502 \u2502 \u2502\n...\n\u2502 Samples.ErrorsAndWarnings \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Samples.GetLog4JErrorWarning \u2502 \u2502 \u2502 2% \u2502 \u2502\n\u2502 Samples.Log4JErrorWarningActivity \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Samples.Log4ErrorWarningStream \u2502 \u2502 \u2502 0% \u2502 \u2502\n\u2502 Samples.HackerCheck \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Samples.FindHackers \u2502 \u2502 \u2502 0% \u2502 \u2502\n\u2502 Samples.GenerateHackerContext \u2502 \u2502 \u2502 0.048% \u2502 \u2502\n\u2502 Samples.SendHackingAlerts \u2502 \u2502 \u2502 0.029% \u2502 \u2502\n...\n\u2502 Global.MonitoringSourceApp \u2502 \u2502 \u2502 0.104% \u2502 \u2502\n\u2502 Global.MonitoringSourceStream \u2502 \u2502 \u2502 0.104% \u2502 \u2502\n\u2502 Global.MonitoringSourceFlow \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Global.MonitoringSource1 \u2502 \u2502 \u2502 0.095% \u2502 \u2502\n\u2502 Global.MonitoringSourceFlowAgent \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Global.MonitoringSourceAgent \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 System$Alerts.AlertingApp \u2502 \u2502 \u2502 16% \u2502 \u2502\n\u2502 System$Alerts.validateAlertStream \u2502 \u2502 \u2502 14% \u2502 \u2502\n\u2502 System$Alerts.EmailOutputStream \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 System$Alerts.AlertJoinedMointorStream \u2502 \u2502 \u2502 2% \u2502 \u2502\n...\n\u2502 System$Alerts.WebAlertFilter \u2502 \u2502 \u2502 0.05% \u2502 \u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 ElasticSearch 
\u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Elasticsearch Receive Throughput \u2502 0 \u2502\n\u2502 Elasticsearch Transmit Throughput \u2502 0 \u2502\n\u2502 Elasticsearch Cluster Storage Free \u2502 488,682,848,256 \u2502\n\u2502 Elasticsearch Cluster Storage Total \u2502 499,963,174,912 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Hot Threads: S192_168_7_91: \u2502\n\u2502 Hot threads at 2021-04-28T23:53:39.690Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true: \u2502\n\u2502 \u2502\n\u2502 0.1% (535micros out of 500ms) cpu usage by thread |\n| 'Global:showStream:03624c7d-2292-4535-bbe5-e376c5f5bc42:01eba848-de40-9c21-8a82-8cae4cf129d6:Async-Sender' \u2502\n\u2502 10/10 snapshots sharing following 5 elements \u2502\n\u2502 sun.misc.Unsafe.park(Native Method) \u2502\n\u2502 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) \u2502\n\u2502 org.jctools.queues.MpscCompoundQueue.awaitNotEmpty(MpscCompoundQueue.java:308) \u2502\n\u2502 org.jctools.queues.MpscCompoundQueue.poll(MpscCompoundQueue.java:286) \u2502\n\u2502 com.webaction.jmqmessaging.AsyncSender$AsyncSenderThread.run(AsyncSender.java:119) \u2502\n\u2502 \u2502\n\u2502 0.1% (445micros out of 500ms) cpu usage by thread 'hz._hzInstance_1_robertmac.cached.thread-6' \u2502\n\u2502 5/10 snapshots sharing following 4 elements \u2502\n\u2502 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) \u2502\n\u2502 java.lang.Thread.run(Thread.java:748) \u2502\n\u2502 
com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64) \u2502\n\u2502 com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80) \u2502\n...\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nmon <namespace>.<application name>;Returns a summary view of every component in the specified application. For example:\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 Application Samples.MultiLogApp \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Name \u2502 Status \u2502 Rate \u2502 SourceRate \u2502 CPU% \u2502 Nodes \u2502 Activity 
\u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Samples.MultiLogApp \u2502 RUNNING \u2502 571,01 \u2502 40,958 \u2502 66% \u2502 1 \u2502 2021-04-28 17:02:14 \u2502\n\u2502 \u2502 \u2502 0 \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 Samples.ApiFlow \u2502 \u2502 0 \u2502 0 \u2502 \u2502 1 \u2502 \u2502\n\u2502 Samples.GetApiSummaryUsage \u2502 \u2502 176 \u2502 0 \u2502 0% \u2502 1 \u2502 2021-04-28 17:02:13 \u2502\n\u2502 Samples.GetApiUsage \u2502 \u2502 26,142 \u2502 0 \u2502 0% \u2502 1 \u2502 2021-04-28 17:02:13 \u2502\n\u2502 Samples.ApiSummaryWindow \u2502 \u2502 0 \u2502 0 \u2502 \u2502 1 \u2502 2021-04-28 17:02:13 \u2502\n...\n\u2502 Samples.ZeroContentCheck \u2502 \u2502 0 \u2502 0 \u2502 \u2502 1 \u2502 \u2502\n\u2502 Samples.FindZeroContent \u2502 \u2502 19,550 \u2502 0 \u2502 7% \u2502 1 \u2502 2021-04-28 17:02:14 \u2502\n\u2502 Samples.GenerateZeroContentContext \u2502 \u2502 0 \u2502 0 \u2502 0.02% \u2502 1 \u2502 2021-04-28 17:02:03 \u2502\n...\n\u2502 Samples.ZeroContentEventList \u2502 \u2502 0 \u2502 0 \u2502 0% \u2502 1 \u2502 2021-04-28 17:02:03 \u2502\n\u2502 Samples.ZeroContentAlertSub \u2502 \u2502 0 \u2502 0 \u2502 0% \u2502 1 \u2502 2021-04-28 17:02:03 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518mon <namespace>.<application name> <node>;Returns details for the specified application on the specified node. This is useful to drill down on an application deployed on multiple nodes. 
For example:\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 APPLICATION Samples.MultiLogApp \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Property \u2502 Value \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Nodes \u2502 1 \u2502\n\u2502 Status Change \u2502 Object UUID : 01eba87b-324d-1b91-8a82-8cae4cf129d6 \u2502\n\u2502 \u2502 Name : Samples.MultiLogApp \u2502\n\u2502 \u2502 Type : APPLICATION \u2502\n\u2502 \u2502 Previous Status: STOPPED \u2502\n\u2502 \u2502 Current Status : RUNNING \u2502\n\u2502 \u2502 Timestamp : 2021/04/28-17:06:31 \u2502\n\u2502 Timestamp \u2502 2021-04-28 17:06:31 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518mon <namespace>.<component name> [ <node> ]Returns details for only the specified component. 
For example:\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 STREAM Samples.AlertStream \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Property \u2502 Value \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 CPU \u2502 0.00 \u2502\n\u2502 CPU Rate Per Node \u2502 0% \u2502\n\u2502 CPU Rate \u2502 0% \u2502\n\u2502 Number of events seen per \u2502 0 \u2502\n\u2502 monitor snapshot interval \u2502 \u2502\n\u2502 Input \u2502 3,825 \u2502\n\u2502 Input Rate \u2502 0 \u2502\n\u2502 Latest Activity \u2502 2021-04-28 17:12:01 \u2502\n\u2502 Nodes \u2502 1 \u2502\n\u2502 Rate \u2502 0 \u2502\n\u2502 Stream Full \u2502 False \u2502\n\u2502 Timestamp \u2502 2021-04-28 17:12:01 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518For a DatabaseWriter source with an\u00a0input stream that is the output stream of a DatabaseReader or CDC reader source, output will include Commit Lag, which is the number of milliseconds between the timestamp of the last known operation of the database source and the current Striim system time. 
Note that if the database and Striim systems are in different time zones this number will be quite large.For an OracleReader source, output will include the following:\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 SOURCE ns1.ora2striim_OracleSource \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Property \u2502 Value \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Catalog evolution duration \u2502 Not Ready \u2502\n\u2502 Schema Evolution Status \u2502 NotApplicable \u2502\n\u2502 CDC Operation \u2502 { \u2502\n\u2502 \u2502 \"No of Deletes\" : 0, \u2502\n\u2502 \u2502 \"No of DDLs\" : 0, \u2502\n\u2502 \u2502 \"No of PKUpdates\" : 0, \u2502\n\u2502 \u2502 \"No of Updates\" : 0, \u2502\n\u2502 \u2502 \"No of Inserts\" : 10 \u2502\n\u2502 \u2502 } \u2502\n\u2502 CPU \u2502 0.00029 \u2502\n\u2502 CPU Rate Per Node \u2502 0.007% \u2502\n\u2502 CPU Rate \u2502 0.029% \u2502\n\u2502 Number of events seen per monitor \u2502 0 \u2502\n\u2502 snapshot interval \u2502 \u2502\n\u2502 Input \u2502 10 \u2502\n\u2502 Input Rate \u2502 0 \u2502\n\u2502 Largest transaction details \u2502 TxnID : 8.25.946 Operation count : 12 \u2502\n\u2502 Last Event Position \u2502 
01eba881-2a1b-93b1-8a82-8cae4cf129d6/*@[{OpenSCN[2466241]-CommitSCN[2467119]-SeqNum[10]} | %] \u2502\n\u2502 Latest Activity \u2502 2021-04-29 12:06:09 \u2502\n\u2502 Logminer Start Duration \u2502 4ms \u2502\n\u2502 Oldest Open Transactions \u2502 [{\"3.15.924\":{\"# of Ops\":11,\"CommitSCN\":\"null\",\"Sequence #\":\"1\",\"StartSCN\":\"2467200\",\"Rba \u2502\n\u2502 \u2502 block #\":\"11\",\"Thread #\":\"1\",\"TimeStamp\":\"2021-04-29T19:06:16.000-07:00\"}}] \u2502\n\u2502 Longest transaction details \u2502 TxnID : 8.25.946 Open Duration : 114 seconds \u2502\n\u2502 Open transactions in cache \u2502 1 \u2502\n\u2502 Uninterested transactions in cache \u2502 0 \u2502\n\u2502 Nodes \u2502 1 \u2502\n\u2502 Oracle Reader Current SCN \u2502 2467327 \u2502\n\u2502 Current SCN Range \u2502 2467254-2467327 \u2502\n\u2502 Oracle Reader Last SCN \u2502 2467254 \u2502\n\u2502 Oracle Reader Last Timestamp \u2502 2021-04-29 12:06:35 \u2502\n\u2502 Total Logminer Records read \u2502 1209 \u2502\n\u2502 Redo Switch Count \u2502 0 \u2502\n\u2502 Rate \u2502 0 \u2502\n\u2502 Read Lag \u2502 -25,083,374 \u2502\n\u2502 Source Input \u2502 10 \u2502\n\u2502 Source Rate \u2502 0 \u2502\n\u2502 StartSCN \u2502 2466068 \u2502\n\u2502 Table Information \u2502 { \u2502\n\u2502 \u2502 \"HR.JOB_HISTORY\" : { \u2502\n\u2502 \u2502 \"No of Deletes\" : 0, \u2502\n\u2502 \u2502 \"No of DDLs\" : 0, \u2502\n\u2502 \u2502 \"No of PKUpdates\" : 0, \u2502\n\u2502 \u2502 \"No of Updates\" : 0, \u2502\n\u2502 \u2502 \"No of Inserts\" : 10 \u2502\n\u2502 \u2502 } \u2502\n\u2502 \u2502 } \u2502\n\u2502 Timestamp \u2502 2021-04-29 12:07:14 \u2502\n\u2502 Top Open Transactions (# of Ops) \u2502 [{\"3.15.924\":{\"# of Ops\":11,\"CommitSCN\":\"null\",\"Sequence #\":\"1\",\"StartSCN\":\"2467200\",\"Rba \u2502\n\u2502 \u2502 block #\":\"11\",\"Thread #\":\"1\",\"TimeStamp\":\"2021-04-29T19:06:16.000-07:00\"}}] \u2502\n\u2502 Operations in the Cache \u2502 11 \u2502\n\u2502 Total number of Reconnects \u2502 0 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nThe snapshot interval is five seconds. \"Number of events seen per monitor snapshot interval\" is the count for the most recent snapshot.Known issue\u00a0DEV-12638: The \"Oracle Reader Last Timestamp\" always uses the time zone of the Striim server, even when the Oracle server is in a different time zone.mon <namespace>.<application or component name> [ <node> ] -format \"summaryreport\"Returns\u00a0a summary for the specified application or component. 
For example:\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 Samples.MultiLogApp Summary Report \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Cluster Name \u2502 hz.client_0 \u2502\n\u2502 Striim Version \u2502 Version 4.0.2 (982647b992) \u2502\n\u2502 Metadata Repository Version \u2502 1.0.0-SNAPSHOT \u2502\n\u2502 Nodes \u2502 1 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Source Input Diff \u2502 0 \u2502\n\u2502 Output Diff \u2502 0 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nmon <namespace>.<application or component name> memorysize;Returns the total memory used by the specified application or component.mon <namespace>.<component name> [-start \"HH:MM\"] [-end \"HH:MM\"] -format \"timeseriesreport\";Returns a comma-delimited, highly verbose report for the specified component, which you may parse using your own tools. The first line is a header row.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
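The timeseriesreport output described above is comma-delimited with a header row, so a saved capture can be loaded with ordinary CSV tooling. A minimal sketch (not Striim code; the file name is hypothetical, and the column names depend on the component being monitored):

import csv

# Load a saved capture of: mon <namespace>.<component name> -format "timeseriesreport"
# The first line of the output is the header row.
with open("timeseries_report.csv", newline="") as f:    # hypothetical file name
    rows = list(csv.DictReader(f))

print(f"{len(rows)} samples")
if rows:
    print("columns:", ", ".join(rows[0].keys()))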
", "metadata": {"source": "https://www.striim.com/docs/en/using-the-mon-command.html", "title": "Using the MON command", "language": "en"}} {"page_content": "\n\nMonitoring end-to-end lag (LEE)End-to-end lag (\"LEE\") is the time it takes from the origin of an event in an external source to its final delivery by Striim to an external target. This data is sampled every five seconds and retained for 24 hours. If within a five-second sample period no event is delivered to the target, no LEE is recorded for that period.A low and stable LEE is desirable. A high but stable LEE indicates either that Striim is reading stale data from the source or that a window or some other component in the Striim application is holding data for a significant time. A continuously increasing LEE indicates that events are being received from the source faster than the target can handle them, which might indicate that you should consider Creating multiple writer instances or reducing network bottlenecks between Striim and the external target.Note: For accurate LEE calculation with SQL Server sources, the Fetch Transaction Metadata property must be set to True (see MS SQL Reader properties).Viewing LEEIn the Web UI: Go to the Flow Designer page for the application and click View End to End Lag (at the top right corner) to display a line chart of LEE in milliseconds over time.
If the target has multiple sources, you will see a chart for each.In the console: REPORT LEE+; will return the LEE for each source-target combination in the cluster, the time each was measured, the start time type (see discussion of \"Start time types\" below), and the statistics for the most recent sample (see discussion below).\u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n\u2502 Lag End-to-End (LEE) Report \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Source \u2502 Target \u2502 Latest LEE (seconds) \u2502 Measured At \u2502 Source Time \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 SamplesDB.ReadPostgres \u2502 SamplesDB.WriteToPostg \u2502 0.067 \u2502 2021-07-29 15:03:52.14 \u2502 Idle \u2502\n\u2502 TablesDB (DatabaseRead \u2502 resTable (DatabaseWrit \u2502 \u2502 3 PDT \u2502 \u2502\n\u2502 er) \u2502 er) \u2502 \u2502 \u2502 
\u2502\n\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n\u2502 Lag End-to-End (LEE) Recent Statistics \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 Source \u2502 Target \u2502 Minimum LEE (s \u2502 Maximum LEE (s \u2502 Average LEE (s \u2502 Sample Size \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 SamplesDB.ReadPostgre \u2502 SamplesDB.WriteToPost \u2502 0.067 \u2502 0.067 \u2502 0.067 \u2502 1 \u2502\n\u2502 sTablesDB (DatabaseRe \u2502 gresTable (DatabaseWr \u2502 \u2502 \u2502 \u2502 \u2502\n\u2502 ader) \u2502 iter) \u2502 \u2502 \u2502 \u2502 \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518The following commands return various subsets 
of that information (you may omit the namespace if the specified objects are in the current namespace). Use LEE+ to include statistics for the most recent five-second sample.REPORT LEE[+] <namespace.source name> <namespace.target name>; - LEE for the specified source-target combinationREPORT LEE STATS <namespace.source name> <namespace.target name> [-START '<start time>'] [-END '<end time>'] [-ROLLUPINTERVAL '<interval>']; - summary of LEE for the specified source and target over the specified time periodSpecify the ROLLUPINTERVAL as minutes or hours (for example, 15m or 1h). To specify a start or end time from the previous day, specify the time in the format yyyy/MM/dd-HH:mm or yyyy/MM/dd-HH:mm:ss.For example, REPORT LEE STATS OracleSource KafkaTarget -START '10:00' -END '12:00' -ROLLUPINTERVAL '1h'; would return statistics for two one-hour intervals, 10-11 am and 11 am-noon, for the current day.REPORT LEE[+] <namespace.source name> *; - LEEs for all targets of the specified sourceREPORT LEE[+] * <namespace.target name>; - LEEs for all sources of the specified targetREPORT LEE[+] APPLICATION <namespace.application name>; - LEEs for all targets in the specified application (sources may be in other applications)Using the system health REST API (Monitoring using the system health REST API): Each target element of the health map includes the latest LEE for each source in the following format:\"lagEnd2End\": {\n \"data\": [\n {\n \"source\": \"ns1.source1\",\n \"target\": \"ns1.target1\",\n \"lee\": 0.014,\n \"at\": 1589415119444,\n \"type\": \"Observed\"\n }\n ]\n}How LEE is calculatedStriim calculates end-to-end lag by subtracting an event's start time from its end time. This figure includes: the time it takes Striim to acquire the event data from the external source; within Striim, any time the event spends in buffers and queues, being enriched, joined, aggregated, filtered, or otherwise processed, and any time required for communication between nodes in a cluster; and the time it takes Striim to deliver the event to the target.LEE is calculated separately for each source-target combination.If the clocks of the source server, Striim server, and target server are not synchronized, the lag will not be calculated correctly.When there are multiple paths between a source and a target (that is, when events between the source and target are distributed among multiple Striim servers in a multi-node cluster), the most recent LEE will be displayed. In this situation, the LEE graph in the web UI may be jagged or noisy.Start time types: Striim uses one of the following as the start time (these types appear in the MON and system health object reports):Attribute: a timestamp from an attribute of the event (for instance, a JSON timestamp field)Commit: a database commit timestampIdle: the time Striim inserted a mock event into the pipeline. Striim uses these to track LEE when no events are being received from the external source because it is inactive. Idle events are used only to measure LEE and are discarded as they arrive at the writer.Ingestion: a Kafka broker ingestion timestampObserved: the time the event was received by the reader (used when no timestamp is available from the external source)Operation: the timestamp of an individual operation in a database transactionEnd time: Striim uses the time it receives an acknowledgement from the external target as the end time.Configuring LEETwo properties may be configured in startUp.properties / agent.conf:LeeRate / striim.node.LeeRate: By default, LEE is computed for every source event. Set this to integer n to compute LEE every n events. For example, LeeRate=10 will calculate LEE every ten source events. Set to 0 to disable LEE.LeeTimeout / striim.node.LeeTimeout: The length of time an external source can be inactive before Striim inserts Idle events (see discussion of \"Start time types\" above). The default is 10 seconds. Set to 0 to disable insertion of idle events.These properties are set independently for each server and agent.
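The lagEnd2End entries shown above (from the system health REST API) can be consumed with ordinary JSON tooling. The following is a minimal sketch, not part of Striim: the payload simply reuses the example values from the format above, and treating "at" as an epoch timestamp in milliseconds is an assumption inferred from its magnitude.

import json
from datetime import datetime, timezone

# Example payload in the lagEnd2End format shown above.
payload = '{"lagEnd2End": {"data": [{"source": "ns1.source1", "target": "ns1.target1", "lee": 0.014, "at": 1589415119444, "type": "Observed"}]}}'

for entry in json.loads(payload)["lagEnd2End"]["data"]:
    measured_at = datetime.fromtimestamp(entry["at"] / 1000, tz=timezone.utc)   # assumed: "at" is epoch milliseconds
    print(f'{entry["source"]} -> {entry["target"]}: LEE {entry["lee"]:.3f} s ({entry["type"]}), measured at {measured_at:%Y-%m-%d %H:%M:%S} UTC')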
", "metadata": {"source": "https://www.striim.com/docs/en/monitoring-end-to-end-lag--lee-.html", "title": "Monitoring end-to-end lag (LEE)", "language": "en"}} {"page_content": "\n\nUsing the REPORT LATENCY commandUse this command to identify bottlenecks in applications. This can be helpful in determining which components need to be scaled up or otherwise optimized for better throughput. REPORT LATENCY [<namespace>.]<application>;Returns the total latency for the application. For example:W (Samples) > report latency MultiLogApp;\nProcessing - report latency MultiLogApp\n\nAverage Latency Rate is: 400 milli seconds\nREPORT LATENCY [<namespace>.]<application> all;Returns latency for each source-destination path.
For example:report latency samples.posapp all;\nProcessing - report latency samples.posapp all\n\nAverage Latency Rate is: 166 milli seconds\n\nAverage Latency Rate is: 166\nDetailed latency for the application Samples.PosApp: \n\nLatency calculation started at component : Samples.CsvDataSource of type : SOURCE at time : 2017-12-19 01:46:37\n\n+--------------------------------------------+------------------+----------------+------------------+\n| Component Name | Component Type | Latency (ms) | Server Name |\n+--------------------------------------------+------------------+----------------+------------------+\n| Samples.CsvStream | STREAM | 0 | S192_168_7_36 |\n| Samples.CsvToPosData | CQ | 0 | S192_168_7_36 |\n| Samples.PosDataStream | STREAM | 0 | S192_168_7_36 |\n| Samples.PosData5Minutes | WINDOW | 2 | S192_168_7_36 |\n| Samples.GenerateMerchantTxRateOnly | CQ | 141 | S192_168_7_36 |\n| Samples.MerchantTxRateOnlyStream | STREAM | 0 | S192_168_7_36 |\n| Samples.GenerateMerchantTxRateWithStatus | CQ | 0 | S192_168_7_36 |\n| Samples.MerchantTxRateWithStatusStream | STREAM | 0 | S192_168_7_36 |\n| Samples.GenerateWactionContext | CQ | 0 | S192_168_7_36 |\n| Samples.MerchantActivity | WACTIONSTORE | 0 | S192_168_7_36 |\n+--------------------------------------------+------------------+----------------+------------------+\n\nTotal lag = 143 milliseconds\n\nLatency calculation started at component : Samples.CsvDataSource of type : SOURCE at time : 2017-12-19 01:46:37\n\n+--------------------------------------------+------------------+----------------+------------------+\n| Component Name | Component Type | Latency (ms) | Server Name |\n+--------------------------------------------+------------------+----------------+------------------+\n| Samples.CsvStream | STREAM | 0 | S192_168_7_36 |\n| Samples.CsvToPosData | CQ | 0 | S192_168_7_36 |\n| Samples.PosDataStream | STREAM | 0 | S192_168_7_36 |\n| Samples.PosData5Minutes | WINDOW | 2 | S192_168_7_36 |\n| Samples.GenerateMerchantTxRateOnly | CQ | 141 | S192_168_7_36 |\n| Samples.MerchantTxRateOnlyStream | STREAM | 0 | S192_168_7_36 |\n| Samples.GenerateMerchantTxRateWithStatus | CQ | 0 | S192_168_7_36 |\n| Samples.MerchantTxRateWithStatusStream | STREAM | 0 | S192_168_7_36 |\n| Samples.GenerateAlerts | CQ | 0 | S192_168_7_36 |\n| Samples.AlertStream | STREAM | 0 | S192_168_7_36 |\n| Samples.AlertSub | TARGET | 0 | S192_168_7_36 |\n+--------------------------------------------+------------------+----------------+------------------+\n\nTotal lag = 143 milliseconds\nLIMITATIONSLatency is measured by markers that are inserted into source output once every ten events, so if a target gets only a small percentage of the events, it may not appear in\u00a0REPORT LATENCY ... ALL output. For example, in MultiLogApp, the ZeroContentEventList WActionStore gets only 30 out of over 10,000 events, so it typically does not appear.\u00a0Also, when multiple sources are joined by a CQ, the latency markers from some of the sources may not end up in the target, so there will be no data set for that source-destination pair.In order to get a report including all source-destination pairs, you may temporarily modify\u00a0Striim/bin/startUp.properties to change the\u00a0LagReportRate setting from 10 to 1. Restart Striim and run REPORT LATENCY again. We recommend changing the setting back to the default after you are through.In this section: Search resultsNo results foundWould you like to provide feedback? 
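In the output above, the Total lag line appears to be the sum of the per-component latencies along that source-destination path (2 + 141 = 143 milliseconds). The following back-of-the-envelope sketch just restates that arithmetic and is not Striim code:

# Per-component latencies (ms) reported for the first path in the example above;
# every other component in that path reported 0 ms.
component_latency_ms = {
    "Samples.PosData5Minutes": 2,
    "Samples.GenerateMerchantTxRateOnly": 141,
}
print(sum(component_latency_ms.values()))   # 143 -> matches "Total lag = 143 milliseconds"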
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2019-09-13\n", "metadata": {"source": "https://www.striim.com/docs/en/using-the-report-latency-command.html", "title": "Using the REPORT LATENCY command", "language": "en"}} {"page_content": "\n\nUsing the REPORT START / STOP commandSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Monitoring GuideUsing the REPORT START / STOP commandPrevNextUsing the REPORT START / STOP commandThe report command's synatax is:report { start | stop } <namespace>.<application>;Use this command to get precise details about what an application has been doing. The namespace may be omitted if the application is in the current namespace.The application must be deployed or running when you issue the\u00a0report start. Here is an example using\u00a0MultiLogApp, which processed all of its data before the\u00a0report stop.W (Samples) > report start multilogapp;\nProcessing - report start multilogapp\n-> SUCCESS \nElapsed time: 11 ms\nW (Samples) > start application multilogapp;\nProcessing - start application multilogapp\n-> SUCCESS \nElapsed time: 4043 ms\nW (Samples) > report stop multilogapp;\nProcessing - report stop multilogapp\nTotal Events - 2898802\nFirst Event Time - 14:46:30\n\nFirst Event - {\"timeStamp\":\"2017-02-23T14:46:30.973-08:00\",\"data\":[\"216.103.201.86\",\"-\",\n\"EHernandez\",\"10/Feb/2014:12:13:51.037 -0800\",\"GET http://cloud.saas.me/login&\njsessionId=01e3928f-e059-6361-bdc5-14109fcf2383 HTTP/1.1\",\"200\",\"21560\",\"-\",\n\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)\",\"1606\"],\n\"metadata\":{\"RecordOffset\":0,\"RecordStatus\":\"VALID_RECORD\",\"FileName\":\"access_log\",\n\"RecordEnd\":237,\"FileOffset\":0},\"before\":null,\"dataPresenceBitMap\":\"AAA=\",\n\"beforePresenceBitMap\":\"AAA=\",\"typeUUID\":null,\"idstring\":\"01e6fa19-eb45-ded1-a5e4-685b3587069e\"}\nLast Event Time - 14:47:27\nLast Event - {\"timeStamp\":\"2017-02-23T14:47:27.038-08:00\",\"data\":[\"206.93.30.161\",\"-\",\n\"EMorris\",\"11/Feb/2014:10:20:52.043 -0800\",\"GET http://cloud.saas.me/getUpdated?\ntype=ChatterMessage&id=01e39290-79a3-2dd2-bdc5-14109fcf2383&jsessionId=\n01e39290-799f-3637-bdc5-14109fcf2383 HTTP/1.1\",\"200\",\"35712\",\"-\",\n\"Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:24.0) Gecko/20100101 Firefox/24.0\",\"972\"],\n\"metadata\":{\"RecordOffset\":479236678,\"RecordStatus\":\"VALID_RECORD\",\"FileName\":\"access_log\",\n\"RecordEnd\":479236986,\"FileOffset\":0},\"before\":null,\"dataPresenceBitMap\":\"AAA=\",\n\"beforePresenceBitMap\":\"AAA=\",\"typeUUID\":null,\"idstring\":\"01e6fa1a-0cb0-b625-a5e4-685b3587069e\"}\n\nAverage Throughput for this interval - 51000\nTotal Wactions created - 5850\nLast Waction created at - 14:47:27\n\nLast Waction - 
{\"internalWactionStoreName\":\"Samples_UnusualActivity\",\"wactionTs\":\n\"2017-02-23T14:47:27.408-08:00\",\"key\":\"75.201.196.173\",\"mapKey\":\n{\"id\":\"01e6fa1a-0ce9-2b03-a5e4-685b3587069e\",\"key\":\"75.201.196.173\"},\"wactionStatus\":0,\n\"position\":null,\"typeOfActivity\":\"ZeroContent\",\"accessTime\":1392142812204,\n\"accessSessionId\":\"01e39290-7985-4599-bdc5-14109fcf2383\",\"srcIp\":\"75.201.196.173\",\n\"userId\":\"AWhite\",\"country\":\"United States\",\"city\":null,\"lat\":38.0,\"lon\":-97.0,\n\"accessTimeAsLong\":1392142812204,\"events\":[{\"_id\":null,\"timeStamp\":1487890047407,\n\"originTimeStamp\":0,\"key\":null,\"accessTime\":1392142812204,\"accessSessionId\":\n\"01e39290-7985-4599-bdc5-14109fcf2383\",\"srcIp\":\"75.201.196.173\",\"userId\":\"AWhite\",\n\"request\":\"GET http://cloud.saas.me/query?type=ChatterActivity&\nid=01e39290-798d-0dc9-bdc5-14109fcf2383&jsessionId=01e39290-7985-4599-bdc5-14109fcf2383 \nHTTP/1.1\",\"code\":200,\"size\":0,\"referrer\":\"-\",\"userAgent\":\"Mozilla/5.0 (Windows NT 6.0) \nAppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.69 Safari/537.36\",\"responseTime\":883,\n\"logTime\":1392142812204,\"logSessionId\":\"01e39290-7985-4599-bdc5-14109fcf2383\",\n\"level\":\"WARN\",\"message\":\"Issue in API call [api=query] [session=01e39290-7985-4599-bdc5-\n14109fcf2383] [user=AWhite] [sobject=ChatterActivity]\",\"api\":\"query\",\n\"sobject\":\"ChatterActivity\",\"xception\":\"com.me.saas.SaasMultiApplication$SaasException: \nIssue in API call [api=query] [session=01e39290-7985-4599-bdc5-14109fcf2383] [user=AWhite] \n[sobject=ChatterActivity]\\n\\tat com.me.saas.SaasMultiApplication.query\n(SaasMultiApplication.java:1069)\\n\\tat sun.reflect.GeneratedMethodAccessor2.invoke\n(Unknown Source)\\n\\tat sun.reflect.DelegatingMethodAccessorImpl.invoke\n(DelegatingMethodAccessorImpl.java:43)\\n\\tat java.lang.reflect.Method.invoke(Method.java:606)\n\\n\\tat com.me.saas.SaasMultiApplication$ObjectApiCall.invoke(SaasMultiApplication.java:216)\n\\n\\tat com.me.saas.SaasMultiApplication$Session.invokeAPI(SaasMultiApplication.java:1500)\n\\n\\tat com.me.saas.SaasMultiApplication.main(SaasMultiApplication.java:1626)\",\n\"className\":\"com.me.saas.SaasMultiApplication\",\"method\":\"query\",\n\"fileName\":\"SaasMultiApplication.java\",\"lineNum\":\"1072\",\n\"idstring\":\"01e6fa1a-0ce9-0413-a5e4-685b3587069e\"}]}In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2017-11-10\n", "metadata": {"source": "https://www.striim.com/docs/en/using-the-report-start---stop-command.html", "title": "Using the REPORT START / STOP command", "language": "en"}} {"page_content": "\n\nAPI Guide
Striim's programming interfaces include:
REST API endpoints to retrieve WActionStore definitions (GET /wactions/def), query WActionStores (GET /wactions/search), and retrieve system health statistics (GET /health)
the Striim Application Management REST API, which allows you to manage the lifecycle (deploy, start, stop, undeploy, drop, etc.) of existing Striim applications, create new applications from templates, and retrieve file lineage data
the Java event publishing API, which allows Java applications to write directly to Striim streams
Using the Striim Application Management REST API
This API allows you to create and manage (deploy, start, stop, undeploy, drop, etc.) Striim applications, execute TQL commands, retrieve monitoring and file lineage data, and more. See the interactive documentation at https://striim.stoplight.io/docs/striim-application-management.
Getting a REST API authentication token
An authentication token must be included in all REST API calls using the token parameter. You can get a token using any REST client. 
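Once you have a token, pass it in the token query parameter of every call. Here is a minimal sketch using curl against the system health endpoint described later in this guide (localhost, the default port 9080, and the token value are placeholders; substitute your own host and token):
curl -X GET "http://localhost:9080/health?token=<your token>"
The same pattern applies to the /wactions/def and /wactions/search endpoints shown below.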
For sample code to get a token using Java, .NET, Python, Ruby, and other languages, see github.com/striim/rest-api-samples/tree/master/v2.To get a token using the Striim Cloud Console, go to the Services page, select More > API and under API token click Copy..To reset the token, in the Striim Cloud Console go to the Users page, select \u22ef > View details > Login & Provisioning, and under API Token click Reset.Retrieving a WActionStore definition using the REST APITo retrieve the data definition for a WActionStore (for example, so that the custom application can use it to populate a field selector), the URI syntax is:http://<IPAddress>:<port>/wactions/def?name=<namespace>.<WActionStore name>&token=<token>See\u00a0Getting a REST API authentication token.For example, using curl (which requires the & between the WActionStore name and token to be escaped), if PosApp is running on localhost with the default port, and the token is\u00a001e86e5f-ef7f-aab1-919f-000ec6fd8764, curl -X GET http://localhost:9080/wactions/def?\"name=Samples.MerchantActivity&token=01e86e5f-ef7f-aab1-919f-000ec6fd8764\"; will return:CONTEXT DEFINITION :\u00a0\nTYPE Samples.MerchantActivityContext CREATED 2018-06-12 09:46:21\nATTRIBUTES (\n\u00a0 MerchantId java.lang.String KEY\u00a0\n\u00a0 StartTime org.joda.time.DateTime\n\u00a0 CompanyName java.lang.String\n\u00a0 Category java.lang.String\n\u00a0 Status java.lang.String\n\u00a0 Count java.lang.Integer\n\u00a0 HourlyAve java.lang.Integer\n\u00a0 UpperLimit java.lang.Double\n\u00a0 LowerLimit java.lang.Double\n\u00a0 Zip java.lang.String\n\u00a0 City java.lang.String\n\u00a0 State java.lang.String\n\u00a0 LatVal java.lang.Double\n\u00a0 LongVal java.lang.Double\n)\n\nEVENT LIST DEFINITION :\u00a0\nTYPE Samples.MerchantTxRate CREATED 2018-06-12 09:46:21\nATTRIBUTES (\n\u00a0 merchantId java.lang.String KEY\u00a0\n\u00a0 zip java.lang.String\n\u00a0 startTime org.joda.time.DateTime\n\u00a0 count java.lang.Integer\n\u00a0 totalAmount java.lang.Double\n\u00a0 hourlyAve java.lang.Integer\n\u00a0 upperLimit java.lang.Double\n\u00a0 lowerLimit java.lang.Double\n\u00a0 category java.lang.String\n\u00a0 status java.lang.String\n)\nQuerying a WActionStore using the REST APIYou may query a WActionStore using either a SELECT statement or keys from the REST API. The URI syntax for SELECT statements is:http://<IPAddress>:<port>/wactions/search?name=<namespace>.<WActionStore name>&query=<select statement>&token=<token>See\u00a0Getting a REST API authentication token.The SELECT statement uses the same syntax as a CQ or a dashboard query. End the select statement with a semicolon.The syntax for REST API keys is:http://<IPAddress>:<port>/wactions/search?name=<namespace>.<WActionStore name>\n[&fields=<field name> ,...]\n[&filter=\n [startTime:<query start time>,]\n [endTime:<query end time>,]\n [key:<key field value >,]\n [sortBy:<field name>,]\n [sortDir:asc,]\n [limit:<maximum number of results>,]\n [singleWactions:True]\n]\n&token=<token>NoteKey:value pairs in the URI are case-sensitive.If &fields and &filter are omitted, the query will return the WAction key and context fields from all WActions in random order.Include &fields to specify the names of the fields to return, separated by commas. 
Alternatively, use &fields=default-allEvents to return the WAction key, context fields, and all events, or &fields=eventList to return only the events.
Include &filter to filter and/or sort the results:
startTime: if specified (Unix time in milliseconds), only WActions with a timestamp of this value or later will be returned
endTime: if specified, only WActions with a timestamp of this value or earlier will be returned
key: if specified, only WActions with the specified key field value will be returned
<field name>:$IN$ <value>~...: if specified, only WActions with the specified value(s) will be returned, for example, State:$IN$ California~Missouri~Nevada
singleWactions:True: if specified, all WActions will be returned; omit to return only the most recent WAction for each key field value
sortBy: if specified, results will be sorted by this field
sortDir:asc: use with sortBy to sort results in ascending order; omit to sort results in descending order
limit:<n>: if specified, only the first n results will be returned
The following operators may be used in URI values:
| operator | URI string | example |
| > | $GT$ | HourlyAve$GT$1000 returns WActions where the hourly average is greater than 1000 |
| < | $LT$ | HourlyAve$LT$1000 returns WActions where the hourly average is less than 1000 |
| >= | $GTE$ | HourlyAve$GTE$1000 returns WActions where the hourly average is greater than or equal to 1000 |
| <= | $LTE$ | HourlyAve$LTE$1000 returns WActions where the hourly average is less than or equal to 1000 |
| != | $NE$ | HourlyAve$NE$0 returns WActions where the hourly average is not zero |
WActions are returned in JSON format. For example, here is one WAction from PosApp: {\n \"{\\\"id\\\":\\\"01e3baa6-06ea-6f0b-be8a-28cfe91e2b2b\\\",\\\"key\\\":\\\"JGudv50ThZhzaAz1s2EhbtIg8qHLXlnHfIg\\\"}\": {\n \"context-Category\": \"WARM\", \n \"context-City\": \"Garfield\", \n \"context-CompanyName\": \"RueLaLa.com\", \n \"context-Count\": 938, \n \"context-HourlyAve\": 948, \n \"context-LatVal\": 46.9946, \n \"context-LongVal\": -117.1523, \n \"context-LowerLimit\": 790.625, \n \"context-MerchantId\": \"JGudv50ThZhzaAz1s2EhbtIg8qHLXlnHfIg\", \n \"context-StartTime\": 1363146829000, \n \"context-State\": \"Washington\", \n \"context-Status\": \"OK\", \n \"context-UpperLimit\": 1137.6, \n \"context-Zip\": \"99130\", \n \"key\": \"JGudv50ThZhzaAz1s2EhbtIg8qHLXlnHfIg\", \n \"timestamp\": 1396470806180, \n \"totalEvents\": 41\n }\n}
The id portion of the second line is the universally unique identifier (UUID) for this WAction. 
The key portion is the value of the WActionStore's key field, in this case MerchantID.Lines 3-16 are the WAction's context field names and values.key is another copy of the value of the WActionStore's key field.timestamp is the time when the WAction was created (not the timestamp for the event, which is the context-StartTime value).totalEvents is the number of events associated with this WAction.Some examples using curl (which requires the spaces and & in the select statement to be escaped):Return all WActions in Samples.MerchantActivity:curl -X GET http://localhost:9080/wactions/search?\\\n\"name=Samples.MerchantActivity&\\\nquery=select%20*%20from%20Samples.MerchantActivity&\\\ntoken=01e930a6-53c8-2201-8729-8cae4cf129d6\";Return WActions from Samples.MerchantActivity where the State field value is California, Missouri, or Nevada:curl -X GET http://localhost:9080/wactions/search?\\\n\"name=Samples.MerchantActivity&\\\nState:$IN$%20California~Missouri~Nevada&\\\ntoken=01e930a1-cc1e-cfb1-8729-8cae4cf129d6\";Return only the State field value from all WActions in Samples.MerchantActivity:curl -X GET http://localhost:9080/wactions/search?\\\n\"name=Samples.MerchantActivity&fields=State&\\\ntoken=01e930a1-cc1e-cfb1-8729-8cae4cf129d6\";These examples will all work if you run Samples.PosApp and replace the token value with a current one.Monitoring using the system health REST APIThis REST API endpoint allows you to retrieve various statistics about a Striim cluster. The basic URI syntax is: http://<IP address>:<port>/health?token=<token>See\u00a0Getting a REST API authentication token.For example: http://localhost:9080/health?token=01e56161-9e42-3811-8157-685b3587069eIf you pretty-print the return, it will look something like this:{\n \"healthRecords\": [\n {\n \"kafkaHealthMap\": {},\n \"waStoreHealthMap\": {\n \"Samples.UnusualActivity\": {\n \"fqWAStoreName\": \"Samples.UnusualActivity\",\n \"writeRate\": 0,\n \"lastWriteTime\": 1508429845353\n } ...\n },\n \"cacheHealthMap\": {\n \"Samples.MLogZipLookup\": {\n \"size\": 87130,\n \"lastRefresh\": 1508429842638,\n \"fqCacheName\": \"Samples.MLogZipLookup\"\n } ...\n },\n \"clusterSize\": 1,\n \"appHealthMap\": {\n \"ns3.ProxyCheck\": {\n \"lastModifiedTime\": 1508371187853,\n \"fqAppName\": \"ns3.ProxyCheck\",\n \"status\": \"CREATED\"\n } ...\n },\n \"serverHealthMap\": {\n \"Global.S192_168_1_14\": {\n \"memory\": 3693349800,\n \"cpu\": \"10.6%\",\n \"elasticsearchFree\": \"56GB\",\n \"fqServerName\": \"Global.S192_168_1_14\",\n \"diskFree\": \"/: 56GB\"\n }\n },\n \"sourceHealthMap\": {\n \"Samples.AccessLogSource\": {\n \"eventRate\": 0,\n \"lastEventTime\": 1508429845353,\n \"fqSourceName\": \"Samples.AccessLogSource\"\n } ...\n },\n \"elasticSearch\": true,\n \"targetHealthMap\": {\n \"Samples.CompanyAlertSub\": {\n \"eventRate\": 0,\n \"fqTargetName\": \"Samples.CompanyAlertSub\",\n \"lastWriteTime\": 1508429850358\n } ...\n },\n \"stateChangeList\": [\n {\n \"currentStatus\": \"CREATED\",\n \"type\": \"APPLICATION\",\n \"fqName\": \"Samples.MultiLogApp\",\n \"previousStatus\": \"UNKNOWN\",\n \"timestamp\": 1508429825651\n } ...\n ],\n \"issuesList\": [],\n \"startTime\": 1508429825345,\n \"id\": \"01e7b4e8-f298-76e2-ade3-685b3587069e\",\n \"endTime\": 1508429855366,\n \"derbyAlive\": true,\n \"agentCount\": 0\n }\n ],\n \"next\": \"/healthRecords?size=1&from=1\",\n \"prev\": \"/healthRecords?size=1&from=0\"\n}\nTimes are in milliseconds.In cacheHealthMap,\u00a0size is the amount of memory used, in bytes.In serverHealthMap,\u00a0cpu is 
the percentage used by the Java virtual machine at the time the server health was recorded, and\u00a0memory is the amount of free memory usable by the server, in bytes.issuesList will contain any log entries of level ERROR.You can use\u00a0start and\u00a0end switches to return records from a specific time range, for example,\u00a0http://<IP address>:<port>/health/healthRecords?start=<start time in milliseconds>&end=<end time in millisections>&token=<token>.You can use the id value from the summary to return a subset of the data using the following syntax:http://<IP address>:<port>/health/<id>/{agents|apps|caches|clustersize|derby|es|issues|servers|sources|statechanges|targets|wastores}?token=<token>For example, curl -X GET http://<IP address>:<port>/health/<id>/apps?token=<token> will return only the appHealthMap portion of the data.Using the Java event publishing APIYou may use the Java event publishing API to develop Java applications that write directly to Striim streams.Contact Striim support to download the SDK. Then extract the .zip file and open EventPublishAPI/docs/index.html for more information.When an application using the stream is created with the WITH ENCRYPTION option (see CREATE APPLICATION ... END APPLICATION), the API will automatically encrypt the stream.In this section: API GuideUsing the Striim Application Management REST APIGetting a REST API authentication tokenRetrieving a WActionStore definition using the REST APIQuerying a WActionStore using the REST APIMonitoring using the system health REST APIUsing the Java event publishing APISearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/api-guide.html", "title": "API Guide", "language": "en"}} {"page_content": "\n\nConsole commandsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Console commandsPrevNextConsole commandsUsing the console in the web UIAt the bottom of the Console page, enter a command, then click Execute.The command's output will appear above.If a command requires a file as an argument, upload it as described in Manage Striim - Files. 
Specify the path as UploadedFiles/<user name>/<file name>, for example, UploadedFiles/MyUserName/MyApp.tql.In this release, SELECT commands (ad-hoc queries) are not supported in the web console.Using the console in a terminal or command promptIf Striim is installed in /opt, the command to run the console is:/opt/Striim/bin/console.sh -c <cluster name>In Windows, if Striim is installed in c:\\striim, the command to run the console is:\\striim\\bin\\console -c <cluster name>NoteFor the Windows command prompt, set the font to Consola, Lucida Console, or another monospace font that has box-drawing characters.The following switches may be used:-c <cluster name>: the name of the cluster to connect to (if omitted, will default to the current user name)-f <path><file name>.tql: a TQL file containing commands to run when console starts (if not specified from root, path\u00a0is relative to the Striim program directory)-H false: if HTTPS has been disabled, use this to connect to Striim using HTTP-i <IP address>: if the system has multiple IP addresses, specify the one for the console to use-p <password>: the password to use to log in (if omitted, you will be prompted after the console connects)-S <IP address>: the IP address of the Striim server (the value of ServerNodeAddress in startUp.properties; required if HTTP has been disabled)-t <port>: specify the HTTP port if not 9080-T <port>: specify the HTTPS port if not 9081-u <user name>: the user to log in as (if omitted, you will be prompted after the console connects)The following commands are intended primarily for use at the command line rather than in .tql application files. DDL and component reference and ad-hoc queries may also be entered at the command line.@@<path>/<file name>.tql [passphrase=<passphrase>];Run the commands in the specified TQL file (typically all the commands required to create an application). If not specified from root, path is relative to the Striim program directory. If the TQL was exported with a passphrase (see Exporting applications and dashboards), specify it with the passphrase option. Note that if the file contains DDL defining an application, the name will be defined by the\u00a0CREATE APPLICATION statement, not the TQL file name.Exporting applications and dashboardsDEPLOYDeploys an application in the Created state. See Managing deployment groups.DESCRIBEDESCRIBE <namespace>.<object name>;Returns all properties of the specified component, application, or flow.\u00a0DESCRIBE CLUSTER; returns the cluster name, information about the metadata repository, and license details.\u00a0EXPORTEXPORT APPLICATION { ALL | <namespace>.<application name>,... } [TO \"<path>\" [passphrase=\"<passphrase>\"]];Exports applications as TQL files. Specify either ALL to export all applications or a list of applications separated by commas. See Encrypted passwords for discussion of passphrases. If you do not specify a path, files will be saved to the striim directory.Except for applications in the current namespace when the command is run, component names in the exported TQL will include their namespaces. 
Remove those before importing the TQL into a different namespace.EXPORT <stream name>;See\u00a0Reading a Kafka stream with an external Kafka consumer.HISTORYHISTORY;Lists all commands previously entered in the current console session.LISTLIST { <component type> };Returns a list of all objects of the specified type.LIST LIBRARIES;Lists currently loaded open processors (see Loading and unloading open processors).LOAD / UNLOADSee Loading standalone sources, caches, and WActionStores, or Creating an open processor component.METERUsage-based metering continuously tracks resource consumption by the apps in a Striim cluster. You can use this information to track your usage breakdowns at an app and adapter level. With Striim Cloud, you can also view usage rate information that can help you track how you will be billed for usage charges.Show billing and metering cycle informationMETER CYCLE -billing;Lists the billing cycles.METER CYCLE -metering <billingId>;Lists the metering cycles for the billing cycle billingId.Show usage informationMETER USAGE -summary;Lists the usage summaries for all billing cycles.METER USAGE -byapp <billingId>;Lists the usage and consumption for all applications in the billing cycle billingId.METER USAGE -bycycle -app <appId | appName>;Lists the usage and consumption for the application appId/appName in all billing cyclesMETER USAGE -current;Lists the usage and consumption for all adapters in the current billing cycle.METER USAGE -current -app <appId | appName>;List the usage and consumptions for all adapters in the application appId/appName and the current billing cycle.METER USAGE -itemized <billingId>;List the usage and consumptions for all adapters in the billing cycle billingId.METER USAGE -itemized <billingId> -app <appId | appName>;List the usage and consumptions for all adapters in the application appId/appName and the billing cycle billingIdbillingId.Show consumption informationMETER CONSUMPTION -aggregated <billingId> <itemId>;List the consumptions for all components in the billing cycle billingId and the adapter itemId.METER CONSUMPTION -aggregated <billingId> <itemId> -app <appId | appName>;List the consumptions for all components in the billing cycle billingId, the adapter itemId and the application appId/appName.METER CONSUMPTION -drilled <billingId> -component <componentId | componentName>;List the consumptions for the component componentId/componentName in the billing cycle billingId drilled down to the metering cycles.Show usage rate informationThe METER RATE commands apply to Striim Cloud with its usage-based billing. Rates are not applicable to Striim Platform. Usage has two parts:Consumption usage:\u00a0Consumptions\u00a0from the flow components are converted to usage based on a predetermined formula for their consumption types. 
\u00a0Feature usage:\u00a0Usage for each subscribed add-on feature is calculated by applying an acceleration factor on top of the total consumption-based usage.METER RATE -adapter;List the usage rates and acceleration factors for different adapter tiers.METER RATE -storage;List the usage rates for storage consumptions.METER RATE -feature;List the usage accelerator factors for the features.METER RATE -item <itemId>;List the metering rate and unit for the usage item itemId.Show applications, adapters and componentsMETER LIST -app;Lists the metered applications.METER LIST -adapter;Lists all adapters.METER LIST -adapter -used;List the used and metered adapters.METER LIST -component;List the metered components, active and dropped.METER LIST -component -active;List the metered components, active only.Show configurationMETER CONFIG -kafka;Describes the configuration of the internal Kafka cluster used by persistent streams.MONITORSee Using the MON command.PREVIEWPREVIEW <namespace>.<CQ name> { INPUT | OUTPUT | INPUTOUTPUT } [LIMIT <maximum number of events>]Returns current input and/or output events for the specified CQ. If you do not specify value for LIMIT, the command will return a maximum of 1000 input events and/or 1000 output events.NoteThe CQ must be running before you execute the PREVIEW command.Each event in the command's output includes:Source: name of the component that emitted the event (either one of the components in the FROM clause of the CQ or the CQ itself)IO: I for input or O for outputAction: added or removedServer: name of the server executing the specified CQData: the event payloadFor example, the command PREVIEW Samples.PosData5Minutes inputoutput limit 1; will return something similar to:Processing - preview samples.CsvToPosData inputoutput limit 1\nSource:\"Samples.CsvStream\",IO:\"I\",Action:\"added\",Server:\"S192_168_7_91\",\nData:\"[[COMPANY 366761, 9XGDirhiN2UPnJ9w5GqmISM2QXe1Coav3Fq, 0558659360821268472, 6, \n20130312174714, 0615, USD, 8.42, 8641415475152637, 61064, Polo], \n{FileOffset=0, RecordEnd=49798966, RecordOffset=49798836, FileName=posdata.csv, \nRecordStatus=VALID_RECORD}]\"\nSource:\"Samples.CsvToPosData\",IO:\"O\",Action:\"added\",Server:\"S192_168_7_91\",\nData:\"[9XGDirhiN2UPnJ9w5GqmISM2QXe1Coav3Fq, 2013-03-12T17:47:14.000-07:00, 17, 8.42, 61064];\"\nQUIESCEQUIESCE <namespace>.<application name> [CASCADE];Pauses all sources.Flushes out all data in process. This can result in partial batches of events, such as a 100-event window emitting a batch of only 20 events, or a five-minute window emitting a one-minute batch, which may result in functions such as COUNT and SUM returning anomalous results outside of the normal range. Pattern matching CQs may also return anomalous results.After all data is flushed, records all information required for recovery, if it is enabled (see Recovering applications).Stops the application. Its status will be QUIESCED.If you specify the CASCADE option, any downstream applications that consume events from the specified application via persisted streams will also be quiesced.NoteIf when you start an application for the first time it reads from a persisted stream that was previously quiesced, it will start reading the stream after the point at which it was most recently quiesced. 
Similarly, if an application was offline when an upstream application was quiesced, when it gets to the quiesce command in the persisted stream it will quiesce.The primary uses for QUIESCE are to flush out remaining data at the end of a data set and to create a recovery checkpoint with no data in process prior to using\u00a0ALTER\u00a0on an application with recovery enabled (see ALTER and RECOMPILE).Due to long-running open transactions, OracleReader may be unable to pause within 30 seconds, in which case the application and all its sources will resume as if the QUIESCE command had not been issued.To support QUIESCE with OracleReader, see\u00a0Creating the QUIESCEMARKER table for Oracle Reader.REPORTSeee\u00a0Using the REPORT START / STOP command.RESUMEWhen an application is in the TERMINATED state and the condition that caused it to terminate has been corrected,\u00a0RESUME <application name>; will attempt to resume operation from the point where the application terminated. Data may be lost if recovery was not enabled (see Recovering applications).SELECTQueries a WActionStore. See\u00a0Browsing data with ad-hoc queries.SHOWShow current stream outputSHOW <namespace>.<stream name>;Returns the output of a stream. Press Ctrl-D to end.Show open transactions in Oracle ReaderWith OracleReader, SHOW can also be used for Viewing open transactions.Show checkpoint historySee Recovering applications.Recovering applicationsShow OJet status and memory usageSee Runtime considerations when using OJet.STARTSTART <namespace>.<application name>;Starts the specified application.STATUSSTATUS <server name>;Lists all applications deployed or running on the specified server.STATUS <namespace>.<application name>;Returns the status of the specified application. See Application states.STOPSTOP <namespace>.<application name>;Stops a running application and, if recovery is enabled, writes recovery checkpoints. If recovery is enabled and\u00a0 the application was not dropped after it was stopped, recoverable sources will restart from the point immediately after the last written events (see\u00a0Recovering applications). If recovery is not enabled, any data currently being processed will be lost.The Stop command is also available when an application is in the Starting state. This can be useful when an application is stuck in the Starting state and you want to stop and debug it rather than waiting for it to time out and revert to the Deployed state.UNDEPLOYUndeploys a deployed application. See\u00a0Managing deployment groups.The Undeploy command is also available when an application is in the Deploying state. This can be useful when an application is stuck in the Deploying state and you want to stop and debug it rather than waiting for it to time out and revert to the Created state.UNLOADSee Loading standalone sources, caches, and WActionStores, or Creating an open processor component.USAGEUSAGE [<namespace>.<application name>];Lists sources and how much total data each has acquired. 
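For example, to list the sources for a single application (the sample application name is illustrative):
USAGE Samples.PosApp;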
If you do not specify an application name, USAGE; lists all sources for all applications.In this section: Console commandsUsing the console in the web UIUsing the console in a terminal or command prompt@DEPLOYDESCRIBEEXPORTHISTORYLISTLOAD / UNLOADMETERShow billing and metering cycle informationShow usage informationShow consumption informationShow usage rate informationShow applications, adapters and componentsShow configurationMONITORPREVIEWQUIESCEREPORTRESUMESELECTSHOWSTARTSTATUSSTOPUNDEPLOYUNLOADUSAGESearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-31\n", "metadata": {"source": "https://www.striim.com/docs/en/console-commands.html", "title": "Console commands", "language": "en"}} {"page_content": "\n\nWeb UI GuideSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Web UI GuidePrevNextWeb UI GuideWeb UI OverviewThe left portion of the main menu at the upper left corner of the Striim web UI allows you to navigate to the UI's various pages. An administrator will see the seven choices shown above. Users with limited permissions may see fewer choices.The right portion of the main menu lets you view the documentation in HTML or PDF format (see www.striim.com/docs for the latest update), view alerts, see information about Striim or your user account, contact support, or log out.When you have alerts, the Alert (bell) icon will display a number. Click the icon to see a list of the alert. The alerts above are from the PosApp sample application.At the bottom of the web UI is the Message Log.Home pageStriim's home page appears when you log in. This page gives you a summary of system status, alerts for any potential problems, and quick access to some of the resources you most recently accessed. What you see on this page will depend on your roles; click Customize Homepage to change it.Alert Manager pageSee Sending alerts about servers and applications.App Wizard pageSee\u00a0Creating an application using a template.Apps pageAll currently created, deployed, and running applications for which you have read permission appear on the Apps page. The search box at the top allows you to find apps by name, status, and/or namespace.Click Create New to create an app from scratch or by importing TQL. For more information on creating apps, see\u00a0Fundamentals of TQL programming, Creating apps using templates,\u00a0Creating and modifying apps using the Flow Designer, and Creating or modifying apps using Source Preview.NoteWhen importing a TQL file, the web browser process must have all necessary privileges on the file. 
For example, in a Linux environment, if the files are owned by root, and you are logged in as another user, you may be able to select the files for import but they will not actually be created.The entry for each application shows its current state (see Application states), some performance data, and icons for its sources and targets. States are color-coded: gray for Created, blue for Deployed, Stopped, or Quiesced, green for Running, red for Terminated or Halt, and so on. The namespace is displayed below the name of the application.Application statesUse the\u00a0... menu to deploy, start, monitor, stop, undeploy, or drop an application, export it to a TQL file, move it to a group, or view it in the Flow Designer (see Modifying an application using the Flow Designer). The choices on the menu vary depending on the application's status: for example, Start appears only when the status is Stopped, and Drop appears only when the status is Created. Depending on your permissions, some menu choices may not appear.If an application halts or terminates due to an error you can fix without undeploying it, such as by restarting an external source that has gone down, you may use the Resume command to continue after resolving the problem.Optionally, personalize your Apps Page by grouping applications. To create a group, select Move to a group from the \u00a0... menu, enter a name for the group, and click Save. Alternatively, select multiple apps to act on at once using their checkboxes. Once you have created a group, use the same command to add additional applications. Click the up and down arrows to reorder groups, the pencil to rename or add a description, \u2228 to collapse a group, > to expand, and \u24e7 to delete a group (the apps will not be deleted).To export an application to a TQL file, select ... > Export. Alternatively, select multiple apps using their checkboxes, then click Export (on the right, just below the search box.Console pageSee Using the console in the web UI.Create App pageSee Creating an app using the Flow Designer, Creating apps using templates, or Creating apps by importing TQL.Creating apps using templatesDashboards pageAll currently loaded dashboards appear on the Dashboards page. Click a dashboard tile to view it. (If the associated application is not running, you will see errors.)To import or create a new dashboard, click Add Dashboard. Use the\u00a0... menu to rename, edit, or delete a dashboard (depending on your permissions, some menu choices may not appear). The Export command appears at the top right when you edit the dashboard.For more information, see Viewing dashboards and Dashboard Guide.Flow Designer pageSee\u00a0Creating and modifying apps using the Flow Designer.Manage Striim - FilesThis page lets you upload and download files to and from Striim.To upload a fileSelect Manage Striim > Files > Upload FIle.Select the directory to which you want to upload the file.Drag and drop the file onto the dialog (or click Browse, navigate to the file, and double-click it), then click Done.To download a previously uploaded fileSelect Manage Striim > Files.Fine the file and click its Download File link.Manage Striim - Metadata ManagerThe Metadata Manager lets you browse all existing components you have permission to view. You may filter the list by namespace, application, and type. 
You may also view the Metadata Manager while in the Flow Designer, which can be useful when you want to copy settings from a component in another application or flow.Manage Striim - Property SetsThis page lets you create the property sets required for Sending alerts from applications using the web UI.Manage Striim - Property VariablesThis page lets you create property variables using the web UI. See CREATE PROPERTYVARIABLE for more information.Manage Striim - VaultsThis page lets you manage vaults using the web UI. See Using vaults for more information.Using vaultsMessage LogThe Message Log appears at the bottom of all web UI pages. Any errors in your applications or dashboards will appear here, along with notifications of various common events, including, among others:systemall system apps are running (after server start)adaptersevents:first event received (after application start)millionth event received10-millionth event receivedconnections:initial connection establishedretrying connectionreconnection successfulconnection failed, application haltedDatabase Reader: initial load completedtargets when schema evolution is enabled (see Handling schema evolution)DDL operation propagatedDDL operation ignoredunsupported DDL operation, application quiesced or haltedCDC readersall metadata has been fetchedwhen schema evolution is enabled (see Handling schema evolution)DDL operation capturedDDL operation ignoredunsupported DDL operation, application quiesced or haltedOracle Readerout-of-order SCN encounteredtotal open transactions has exceeded 10,000transaction buffer has started spillover to diskMetadata Manager pageSee Manage Striim - Metadata Manager.Monitor pageSee Monitoring using the web UI.My FilesSee Manage Striim - Files.Source Preview pageThe Source Preview page allows you to create sources and caches by browsing available data files of certain types. See Creating sources and caches using Source Preview.Users pageAdministrators (members of the Global.admin group) can access the Users page to perform the tasks described in Managing users, permissions, and roles.In this section: Web UI GuideWeb UI OverviewHome pageAlert Manager pageApp Wizard pageApps pageConsole pageCreate App pageDashboards pageFlow Designer pageManage Striim - FilesManage Striim - Metadata ManagerManage Striim - Property SetsManage Striim - Property VariablesManage Striim - VaultsMessage LogMetadata Manager pageMonitor pageMy FilesSource Preview pageUsers pageSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/en/web-ui-guide.html", "title": "Web UI Guide", "language": "en"}} {"page_content": "\n\nDashboard GuideSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Dashboard GuidePrevNextDashboard GuideThe web UI's Dashboards can visualize application data in various ways. 
Run the PosApp and MultiLogApp sample applications to see some examples and follow the instructions in the Hands-on quick tour to create one.
Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/dashboard-guide.html", "title": "Dashboard Guide", "language": "en"}} {"page_content": "\n\nDashboard rules and best practices
You must follow these rules when creating or editing a dashboard:
Before creating a dashboard, deploy the application. Otherwise the field-selection menus in the dashboard property dialogs will be empty.
Create the dashboard in the same namespace as the application.
Include the namespace in query names (for example, MyWorkspace.q1).
End SELECT statements with a semicolon.
We also recommend these best practices:
Before creating a visualization, create any drill-down pages it will link to, so you will be able to select the drill-down page(s) in the visualization properties.
If the dashboard UI gets too much data it may become unresponsive, so write your SELECT statements to minimize the amount of data sent to the visualization. For example, for a bar chart showing totals by state, select sum(Count) as Count, State from Samples.MerchantActivity group by State; will send the visualization only one pair of Count and State values for each of the 50 states.
If there are multiple applications in the same namespace, start the query name with the application name (for example, Samples.MyAppMyQuery), unless the query is shared by multiple apps. 
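Taken together, a query for that totals-by-state bar chart might be saved along these lines (the query name here is illustrative, chosen to show the namespace-plus-application-name convention; the SELECT statement is the one suggested above):
Query name: Samples.PosAppTotalsByState
SELECT statement: select sum(Count) as Count, State from Samples.MerchantActivity group by State;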
Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/dashboard-rules-and-best-practices.html", "title": "Dashboard rules and best practices", "language": "en"}} {"page_content": "\n\nVisualization types and propertiesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Dashboard GuideVisualization types and propertiesPrevNextVisualization types and propertiesThis section describes the various types of visualizations that may be used in dashboards and properties you use to configure each type.Properties common to multiple visualization typesNote\u00a0If the visualization configuration\u00a0 dialog's properties display no fields,\u00a0 that means something is wrong with its query, or the application has not been deployed.Data Retention Type: Select All to include all data from the query, or Current to include only the most recent. \"All\" is not supported for pie / donut charts as they would quickly become unreadable.Group By: If specified, each value for this field will be visualized as a single point, bar, or slice. This setting interacts with Data Retention Type as follows:Data Retention TypeGroup BybehaviorAllunspecifiedone new chart element for each value (for example, ever more bars are added to a chart)Allspecifiedone chart element per Group By categoryeach new value for a category is added to the element (for example, a vertical bar gets ever higher)\ufeff Currentunspecifiedonly one chart element, which is updated for each new value (this setting would make sense for a gauge, which displays only a single value)\ufeff\ufeffCurrentspecifiedone chart element per Group By categoryeach new value for a category updates the element (for example, a vertical bar goes up and down)\ufeff The following shows the effect of the various combinations of Data Retention Type and Group By on a bar chart as each of four values (represented by red, yellow, green, and blue) is added.Color By: If specified, values from this field will populate the legend and (unless overridden by conditional colors) the color of chart elements.Data Series + / Delete series : For scatter plots and bar, line, pie and donut charts, adds or removes a data series.Series Alias: When you have defined multiple series, use this to define custom colors. These strings are also used for the legend labels, if any. See the line chart on the PosApp \"Company details\" page for an example.NoteIf you define multiple series, leave Group By unspecified.Type (for X and Y axes): select Linear or Logarithmic for numerical values, DateTime for time values, or Category for text values (for example, CompanyName)Filter settings - Visible: If enabled, when the visualization's data is being filtered, a pop-up will appear showing the filter criteria. See Creating links between pages.Time Field: A field of type DateTime to use when filtering by time (see\u00a0Viewing dashboards). 
If not specified, filters will use WAction start time.Polling: See\u00a0Defining dashboard queries.Drill Down and Page Filters: See Creating links between pages and Making visualizations interactive.Tooltip Configuration: Use this to defines which values will appear when you hover the mouse pointer over a map or chart point, slice, or bar. Check Show All to display all values selected by the query or click Add Another as many times as necessary to select individual fields.Add a conditional color: Allows you to manually select colors based on JavaScript expressions, such as field_name==\"value\". For example:The seriesAlias variable may be used to manually assign colors to series based on their Series Alias settings. For example, from the line chart on the Company details page in the PosApp sample application's dashboard:The above settings produce this result:Show Legend: Check to display a legend of the Color By values in the visualization.Maximum number of values to show / Maximum number of series to show: A series is an element in a visualization that may include one or more values. For example, in the line chart above, there are four series (the four lines) and 40 values (the number of points per line). If you specify a Group By field, the number of distinct values in that field is the number of series.The maximum number of values is per series, so, for example, if for the chart above it was set to 20, each of the four lines would have 20 points. The oldest value and the series that was updated least recently will be removed from the visualization as necessary to keep within the maximums.Be sure you set these maximums high enough to display all the desired values. For example, in the map on the main page of\u00a0PosAppDash, Maximum number of series to show needs to be at least 423 (the number of merchants), but Maximum number of values to show can be 1, since only one point is shown for each merchant.Regardless of this setting, each visualization can display a maximum of about 2000 values.Bar chartSet the Y (vertical) axis to the field containing the values that will control the height or length of the bars.Set the X (horizontal) axis to the field containing the labels for the bars and set the axis type to Category.For example, here are the settings for the horizontal bar chart on PosApp's main dashboard page:Here are the settings for the same chart but with vertical bars:See Properties common to multiple visualization types\u00a0for information on the other settings and an example of how to create a stacked bar chart using Group By.Choropleth mapThis is a beta preview feature. To run a sample application, copy the Choropleth directory from .../Striim/docs to .../Striim/samples, run Samples/ChoroplethDemo.tql, and view the ChoroplethDemoDash dashboard. See www.highcharts.com/docs/maps/getting-started for more information.Column range chartThis is a variation on a Bar chart. The Value Type, Low Value Field, and High Value Field properties control the start and end points for the bars and the labels for axis on which the values are plotted. The low and high value fields must match the Value Type setting.GaugeA gauge is similar to a bar chart with a single bar. It visualizes a single numeric value on a 180-degree gauge. The query for a gauge will typically use an aggregate function such as MIN, MAX, or LAST to select a single value from the input field. See the See the PosApp \"Company details\" page for an example.Units is a text label for the gauge. 
Set the Minimum to zero or whatever other number is appropriate and Maximum at least as high as the maximum expected value. These values appear as labels below the left and right ends of the gauge.By default, the color changes gradually from green near the minimum to yellow in the middle and red at the maximum. You may set colors manually by defining Thresholds, but they will still change gradually. For example, the gauge above has a threshold of 60 for green and 100 for red, so with a value of 71 the gauge is yellowish-brown.Minimum, Threshold, and Maximum values may be defined using field names, optionally including formulas such as HistoricalAverage * 0.25\u00a0or HistoricalAverage * 2.Heat mapIn a heat map, the X and Y axis fields define a grid, with the squares of the grid colored based on the values of the Z axis. If the Label Cells option is checked, the Z values will appear in the cells.See Properties common to multiple visualization types for information on the other settings.IconAn icon can display various icons based on conditions (see the discussion of Add a conditional color in Properties common to multiple visualization types). The above properties, from the PosApp company details page, display a plus sign, minus sign, or check mark corresponding to the merchant's current status.For a visual guide to the hundreds of available icons, see: https://fontawesome.com/v4/cheatsheet/LabelA label is a simple text block. Use labels to add headings or explanations to your dashboards. A label does not have a query, so if you wish to include data from the application in the text, use a\u00a0Value visualization instead.Label: the text to be displayedText: select desired text colorBackground: select desired background color (red slash on black is transparent)Heading Type: select one of the predefined text label types shown below:Leaflet mapThis is similar to the Vector map, but when you zoom in the map includes place names, streets, and street names.Longitude and Latitude: the fields containing the values to use to plot the map pointsValue: the field containing the values that will control the map point colors and sizesMin Bubble Size and Max Bubble Size: the range of sizes for the map pointsView Zoom: set to 1 to show the whole map, or higher to zoom inView Center Longitude and View Center Latitude: When View Zoom is specified, these settings control where the map is centered.Tiles URL: The web server from which the map gets its detail data. This should typically be left at its default, http://{s}.tile.osm.org/{z}/{x}/{y}.png.See Properties common to multiple visualization types for information on the other settings.Line, scatter, bubble, and area chartsSet the Y (vertical) axis to the values that will control the vertical position of the dots and the X (horizontal) axis to the field that will control the horizontal position.In line charts, Upper Bound and Lower Bound let you specify the range the chart. 
Set Lower Bound to 0 for a zero baseline.A bubble chart is identical to a scatter chart except there is a third axis, Z, that controls the size of the bubbles, so the chart can visualize data from three fields.See Properties common to multiple visualization types for information on the other settings.By default, area charts are like line charts, but the area below the line is filled in.If you change the Stacking option to Stacked, the areas are stacked like this:If you change the Stacking option to Percent Stacked, the areas are shown as portions of 100%:Pie and donut chartsSet the X (horizontal) axis to the field containing the labels for the pie slices and set the axis type to Category.Set the Y (vertical) axis to the field containing the values that will control the size of the pie sizes. The type may be Linear or Logarithmic.See Properties common to multiple visualization types for information on the other settings.TableA table displays values in a grid. Optionally, it may contain a search box.Category: If specified, the table will contain only one row for each value of the specified field. If you want to aggregate values, you must do that in the query.Show Headers: The first row will contain the column's Label string.Show Lines: Shows or hides the table grid.Rows per page: the number of rows to display at a time (note that if you add too many rows to fit in the query frame, the bottom rows will be cut off and not display)Column configuration: Defines the columns in the table. Click Add another to add sets of properties for each column in the table.Label: String to display in the header, if it is enabled.Sort Order: By default, columns appear in the order specified. You may use this property to rearrange them. Columns will appear from left to right in their sort order from low to high.Source Field: Field to provide the values.Icons configuration: Adds icons to rows based on the specified expressions. You could use this, for example, to show a thumbs-up icon for expected values and a thumbs-down icon for out-of-bounds values. The expression syntax is the same as for conditional colors.See Properties common to multiple visualization types for information on the other settings.ValueA value is a relatively open-ended visualization type that can be used to add almost any valid HTML to a dashboard.A value visualization does not require a query. Define a query only if you want to use data from the application in conditional expressions or include data in the value (as in the examples below).When you add a new value visualization, click Add a conditional template.Expression: If you have only one conditional template, set to true (this setting is case-sensitive, True will not work).If you have multiple conditional templates, use this field to determine which condition is used based on field values. For example:Text: select desired text colorBackground: select desired background color (red slash on black is transparent)Heading Type: select one of the predefined text label types shown below, or leave blank to format using CSS in the Template field:Template: Enter an HTML string to define what the value will display. To include a value from a query alias in the string, put its alias in double brackets. 
For example, from MultiLogApp, the template for that label is:
<div style="padding: 5px; font-weight: 100">Unusual Activity&nbsp;&nbsp;&nbsp; {{ cnt }} WActions</div>
The query is SELECT COUNT(*) AS cnt FROM UnusualActivity; which returns a single value, with the alias cnt, representing the total number of WActions in the UnusualActivity WActionStore.
Vector map
A vector map plots points on a simple world map using the latitude and longitude values in the data. See PosApp for an example of using Zip codes to look up latitude and longitude in a cache.
Longitude and Latitude: the fields containing the values to use to plot the map points
Value: the field containing the value that will control the map point colors
View Zoom: leave blank or set to 1 to show the whole map, or enter a decimal fraction (such as .5) to zoom in
X Offset and Y Offset: When View Zoom is less than 1, these settings control where the map is centered. The range for X Offset is 0 (maximum west) to 10000 (maximum east). The range for Y Offset is -10000 (maximum north) to 10000 (maximum south). With X Offset set to 1400 and Y Offset set to -7700, the map will be centered on the United States. With X Offset 5300 and Y Offset -7750, it will be centered on Turkey.
See Properties common to multiple visualization types for information on the other settings.
Word cloud
This visualization displays text strings in various sizes proportional to a specified field value. For example, in the data for the above visualization of the top ten all-time best selling singles, the strings are the names of songs, and the values are the number of copies sold.
Word Text: the field containing the strings
Word Size: the field containing the values that will control the word sizes
Maximum number of values to show: Set this to a value larger than the total number of occurrences of all words to be displayed or the size of the words will be based on an unpredictable subset of the events.
See Properties common to multiple visualization types for information on the other settings.
Last modified: 2023-03-01
", "metadata": {"source": "https://www.striim.com/docs/en/visualization-types-and-properties.html", "title": "Visualization types and properties", "language": "en"}} {"page_content": "
Working with dashboards
Adding visualizations to a dashboard
The following instructions will create bar and donut charts.
Before following these instructions, make sure PosApp is loaded in the Samples workspace, deployed, and running. (See the Hands-on quick tour for an introduction to creating dashboards.)
From the top menu, select Dashboards > View All Dashboards, then click PosAppDash. The dashboard's main page appears.
At the top right, click Edit this page (the pencil icon).
At the left of the page, select the Pages tab, then click + to add a new page.
Select the visualizations tab, then drag the Bar visualization icon into the workspace and drop it. This creates a bar chart.
At the top of the chart, click Edit Query (the < > icon). The query editor opens.
In the Name field, enter Samples.MyQuery. In the SELECT statement field, edit the query to read select count(*) as Count, Category from MerchantActivity group by Category; (you can copy and paste from this document), being sure to end with a semicolon. This will select only the data needed for the chart: the count of WActions for each category and the category names. Click Save Query.
To open the visualization editor, click Configure (the pencil icon) or double-click anywhere in the chart.
Set Group By and Color By to Category, the vertical axis to Count, the horizontal axis to Category, and the horizontal axis Type to Category, then click Save visualization. (The Category in the Type menu is a chart property, not the name of a field.) The chart appears in the workspace.
Drag the Donut visualization icon into the workspace and drop it to create a donut chart.
This chart will use the same query as the bar chart: click the Edit Query button, click in the Name field, select Samples.MyQuery, and click Save Query.
Click the Configure button, set the options as you did for the bar chart, and click Save visualization.
Defining dashboard queries
Once you have added a new visualization to a dashboard, the next step is defining its query. Until you do this, the drop-down field menus in the visualization editor will be empty.
For example, the query for the PosApp donut chart above is:
select count(*) as Count, Status from MerchantActivity group by Status;
This results in four Count values representing the number of merchants with each status. The Count values control the size of the slices and the Status values provide the labels.
Note: When you save a query, the previous version is overwritten, and there is no record of it within Striim. Thus, if you are making substantial changes, it is a good idea to give the query a new name so as to preserve the original, or to copy the original and paste it in a text editor.
Note: The dashboard page- and visualization-level filters (see Dashboards page) are available only when querying a WActionStore that is persisted to Elasticsearch (see CREATE WACTIONSTORE).
By default, dashboard queries are executed only once, when the visualization is loaded. To update the data, refresh the browser to reload the visualization.
To have a query update continuously, enable polling in the visualization properties. (This property is hidden if the query contains AND PUSH.) The query will then be re-run every five seconds, or, if the query takes longer, as soon as it completes.
When querying a WActionStore, the following additional syntax is available:
SELECT ...
FROM <WActionStore>
[
{ <integer> { SECOND | MINUTE | HOUR } |
  JUMPING <integer> { SECOND | MINUTE | HOUR } |
  <integer> { SECOND | MINUTE | HOUR } AND PUSH }
] ...
[<time period>] without JUMPING or AND PUSH returns data for the specified period and does not update until you reload the page.
[JUMPING <time period>] immediately returns data for the specified time period and runs the query again at the same interval. For example, SELECT * FROM MerchantActivity [JUMPING 1 MINUTE] returns data for the past minute and then runs the query again every minute, so you always have one minute's worth of data that is never more than one minute old.
[<time period> AND PUSH] returns data for the specified time period and adds more data every time an event is added to the WActionStore. For example, SELECT * FROM MerchantActivity [15 MINUTE AND PUSH] returns data for the past 15 minutes and adds new events indefinitely. Old data is not removed until you reload the page.
Do not specify both JUMPING and AND PUSH.
It is essential, particularly when using AND PUSH, to include LIMIT and ORDER BY clauses in the query to avoid overloading the dashboard with more data than it can display.
Any WHERE, GROUP BY, HAVING, ORDER BY, or LIMIT clause must follow the ]. When PUSH is specified, polling is disabled.
The SAMPLE BY clause (see CREATE CQ (query)) may be useful when a visualization contains too many data points. For example, consider this chart using PosApp data with the query SELECT * FROM MerchantActivity ORDER BY time: The dashboard shows as much of the data as it can hold, which is only the last half hour. If you change the query to SELECT * FROM MerchantActivity ORDER BY time SAMPLE BY count, the dashboard can show 3-1/2 hours of data:
Creating links between pages
To create a drill-down link similar to the one that leads from the Main page to the "Company details" page in PosAppDash:
If it does not exist already, create the target page (the page to be linked to).
Switch to the source page (the page with the link), select the visualization that will contain the drill-down link, and click Configure (the cog icon).
Under Drill Down Configuration, check Enabled.
From the Page drop-down, select the target page.
Under Page Filters, click Add Another.
In Source Field, select the field you want to use to select the data to be shown in the drill-down. For example, in PosApp, if you wanted to drill down from a map point for more information about a merchant, you would select MerchantId.
In Id, enter an alias to refer to the source field in the drill-down visualization's query. For example, for MerchantId, you might enter mid.
If you need more than one source field for the drill-down visualization's query, add additional page filters.
Save the visualization.
Return to the target page, create a query if it does not exist already, or click the existing query's cog icon.
Enter a SELECT statement using the Id(s) you defined in the source visualization and save the query. For example, to create a line chart of Count by StartTime and a table containing merchant details for the PosApp map drill-down described above, you might use this:
SELECT CompanyName, Count, StartTime, City, State, Zip FROM
Samples.MerchantActivity WHERE MerchantID=:mid ORDER BY StartTime;
If one does not already exist, create a visualization in the query, then click Done.
Return to the source page and click whatever is associated with the drill-down. For example, for a line chart or map, click a point. The drill-down visualization should appear, populated with the data defined by the Source Field value for the point you clicked.
Making visualizations interactive
Drill-downs can be used within a single page to filter data interactively. For example, the PosApp visualization below can be filtered by clicking on the pie-chart labels: If you click the WARM label of the right-hand pie chart, the map and heat map display only data for merchants currently in that category.
This interactivity is defined by setting the drill-down configuration in the pie charts, then using their Id values in the queries for the two other visualizations. The drill-down configurations are:
Status (left) pie chart: Page: Interactive HeatMap; Id: status; Source field: Status
Category (right) pie chart: Page: Interactive HeatMap; Id: category; Source field: Category
Since these visualizations are on the Interactive HeatMap page, the drill-down filters the data without switching to another page.
The US map's query is:
select * from Samples.MerchantActivity [15 minute and push] where (:status
IS NULL or (Status = :status)) and (:category IS NULL or (Category = :category))
group by merchantId;
When the page is first loaded, the :status and :category values are null, so the map displays all data. When you click the WARM label, the :category value is set to WARM, and the map updates accordingly.
To clear the filter and return to viewing all data, click Clear All (next to the funnel icon at left).
To explore this more, run PosApp and go to the Interactive HeatMap page.
Embedding a dashboard page in a web page
Use the Embed command to generate HTML code for an iframe you can use to display a Striim dashboard page in a web page. Embedded dashboard pages include both page-level and visualization-level search and filter controls.
The embedded dashboard will not require a Striim login. Instead, it uses a special user account with READ permission on the dashboard's namespace and SELECT permissions on the dashboard, its queries, and the related CQs, streams, and WActionStores.
Log in as admin or another user with the Global.admin role.
Go to the dashboard page you want to embed and click Embed.
The exact permissions that will be granted to the special user account are displayed. Click I Agree to create the user account.
Enter the desired width, height, and border width for the iframe, then click Copy.
Paste the iframe in the appropriate location in the web page's HTML.
Preview the web page (the Striim application must be running) and adjust the iframe's properties as necessary.
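As a recap of the dashboard query syntax described earlier on this page, here is a minimal sketch of a query that combines a pushed time window with the WHERE, ORDER BY, and LIMIT clauses placed after the closing bracket. It is not one of the bundled PosApp queries; it simply reuses the Samples.MerchantActivity WActionStore, its fields, and the :status drill-down parameter from the examples above:
select MerchantId, Status, Count, StartTime
from Samples.MerchantActivity [15 MINUTE AND PUSH]
where (:status IS NULL or (Status = :status))
order by StartTime desc
limit 100;
Because the query uses AND PUSH, polling is disabled, and the ORDER BY and LIMIT clauses keep the visualization from accumulating more events than it can display.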
Last modified: 2023-03-02
", "metadata": {"source": "https://www.striim.com/docs/en/working-with-dashboards.html", "title": "Working with dashboards", "language": "en"}} {"page_content": "
Installation and upgrades
To install or upgrade Striim Platform, see Installation and configuration.
To deploy Striim Cloud, see Deploying and managing Striim Cloud.
To upgrade Striim Cloud, see Upgrading Striim Cloud.
Last modified: 2023-06-09
", "metadata": {"source": "https://www.striim.com/docs/en/installation-and-upgrades.html", "title": "Installation and upgrades", "language": "en"}} {"page_content": "
Configuring remote hosts
This section of the documentation describes various ways to configure remote hosts to send data to Striim.
Last modified: 2022-05-12
", "metadata": {"source": "https://www.striim.com/docs/en/configuring-remote-hosts.html", "title": "Configuring remote hosts", "language": "en"}} {"page_content": "
Striim Forwarding Agent installation and configuration
The Striim Forwarding Agent is a stripped-down version of the Striim server that can be used to run sources locally on a remote host.
For more information, see Using the Forwarding Agent.
To install the Forwarding Agent when you deploy Striim from the AWS Marketplace, contact Striim support.
Striim Forwarding Agent system requirements
memory: 256MB to 1GB depending on the adapters used
free disk space: 500MB (free disk space must never drop below 10%)
supported operating systems: any 64-bit version of Linux, Mac OS X, or Microsoft Windows
supported Java environments: recommended: 64-bit Oracle Java SE 8 JRE (JDK required to use HTTPReader or SNMPParser); also supported: 64-bit OpenJDK 8 JRE
firewall: the following ports must be open:
port 5701 inbound for TCP (Hazelcast): if 5701 is in use, Hazelcast will use 5702; if both 5701 and 5702 are busy, it will use 5703; and so on
port 9081 outbound for TCP (HTTPS) for authentication with the Striim cluster; alternatively, to use port 9080, set striim.cluster.https.enabled=false in agent/conf/agent.conf
ports 49152-65535 outbound for TCP
Any driver required by a source that will run on the Agent must be installed in Agent/lib. See Installing third-party drivers in the Forwarding Agent.
The Forwarding Agent gets its license from the Striim cluster, so a license does not need to be specified on the remote host where the agent is installed.
Running the Forwarding Agent as a process
Using a supported web browser, log into Striim at <DNS name>:9080 with username admin and the admin password you specified when creating the Striim server.
Select Help > Download Agent and save Striim_Agent_4.2.0.zip to an appropriate location. If you cannot download it directly to the host, download it to your local system and use scp to copy it to the host.
Unzip the file. If you used scp to copy the file to the host, use ssh to log into the host and unzip it.
Follow the instructions in Configuring the Forwarding Agent.
To start the agent, make sure the Striim cluster the Agent will connect to is running, open a shell terminal, command prompt, or ssh session, change to the Agent directory, and enter bin/agent.sh or, in Windows, bin\agent.bat.
Once you have successfully started the agent and connected to Striim, you may delete the .zip file.
To stop the agent, switch to the command prompt in which it is running and press Ctrl-C.
Running the Forwarding Agent as a service in CentOS
Log in to Linux.
Download striim-agent-4.2.0-Linux.rpm.
Install the package: sudo rpm -ivh striim-agent-4.2.0-Linux.rpm. By default, the agent is installed in /opt/striim/agent/.
Follow the instructions in Configuring the Forwarding Agent.
Make sure the Striim server is running, then:
For CentOS 6, enter sudo start striim-agent.
For CentOS 7, enter:
sudo systemctl enable striim-agent
sudo systemctl start striim-agent
To verify that the agent connected to the server, look on the Monitoring page (see Monitoring using the web UI).
Running the Forwarding Agent as a service in Ubuntu
Log in to Linux and run sudo su to switch to root.
Download striim-agent-4.2.0-Linux.deb.
Install the package: dpkg -i striim-agent-4.2.0-Linux.deb.
The agent is installed at /opt/striim/agent.Follow the instructions in\u00a0Configuring the Forwarding Agent.Make sure the Striim server is running, then:for Ubuntu 14.04, enter\u00a0start striim-agentfor Ubuntu 16.04 or later, enter\u00a0systemctl enable striim-agent &&\u00a0systemctl start striim-agentTo verify that the agent connected to the server, look on the Monitoring page (see\u00a0Monitoring using the web UI).Running the Forwarding Agent as a service in WindowsThis requires Windows PowerShell 5.0 or later.Follow the instructions in\u00a0Running the Forwarding Agent as a process, but do not start the agent.Start Windows PowerShell as administrator and run the script Agent\\conf\\windowsAgent\\setupWindowsAgent.ps1. Note that if you are in the\u00a0windowsAgent directory, to run the script you must include the path:\u00a0.\\setupWindowsAgent.ps1.The message\u00a0Error in OpenSCManager is of no concern if the script completes successfully.Start the Striim Agent service manually, or reboot to verify that it starts automatically.To uninstall the service, stop it, then run this batch file:Agent/conf/windowsAgent/yajsw_agent/bat/uninstallService.batConfiguring the Forwarding AgentBefore starting the agent, you must:If running Striim Cloud, Contact Striim support to get the sys password, striim.cluster.clusterName, and striim.node.servernode.address.Run agent/bin/aksConfig.sh or aksConfig.bat. When prompted, enter a password for the agent's local keystore and the sys password (used to authenticate the agent when it connects to the Striim cluster).Install any third-party drivers required by the readers that will run in the Forwarding agent (see Installing third-party drivers in the Forwarding Agent). No drivers are bundled with the Forwarding Agent.Edit agent/conf/agent.conf to specify the agent's settings, as follows:Always required: for the\u00a0striim.cluster.clusterName property value, specify the cluster to connect to. Cluster names are case-sensitive.Always required: for the striim.node.servernode.address property value, specify the server's IP address or fully-qualified domain name. If the agent may connect to multiple servers, specify them all, separated by commas.If the system on which the agent is running has more than one IP address, specify the one you want the agent to use as the value for the striim.node.interface property.By default, the agent will join the Agent deployment group. If you wish to change that, specify another deployment group as the value for the\u00a0striim.cluster.deploymentGroups property.\u00a0If the specified deployment group does not exist, it will be created automatically.\u00a0When multiple agents on different remote hosts will be used by the same source, they must belong to the same deployment group.When using Striim Cloud:Connect the system running the Forwarding Agent with Striim Cloud using a VPN tunnel.In agent.conf, add striim.node.isSaaS=trueIn Striim Cloud, open the Console and enter set SYSTEMPROP='striim.node.serverDomainName,<IP address>', replacing <IP address> with the Striim Cloud-side IP address for the VPN tunnel.Installing third-party drivers in the Forwarding AgentThis section describes installation of the third-party drivers required to use some adapters.Install the HP NonStop JDBC driver in a Forwarding AgentThis driver must be installed in every Forwarding Agent that will read from HP NonStop SQL/MX.CautionDo not install drivers for multiple versions of SQL/MX on the same Striim server. 
If you need to write to multiple versions of SQL/MX, install their drivers on different Striim servers and run each version's applications on the appropriate Striim server(s).Follow the instructions in the \"Installing and Verifying the Type 4 Driver\" section of\u00a0HPE NonStop JDBC Type 4 Driver Programmer's Reference for SQL/MX for the SQL/MX version you are running to copy the driver .tar file from the HP NonStop system with the tables that will be read\u00a0to a client workstation and untar it. Do not install the driver.Copy the t4sqlmx.jar file from the untarred directory to Agent/lib.Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).Install the MariaDB JDBC driver in a Forwarding AgentThis driver must be installed in every Forwarding Agent that will read from MariaDB.Download mariadb-java-client-2.4.3.jar from http://downloads.mariadb.com/Connectors/java/connector-java-2.4.3.Copy that file\u00a0to Agent/lib.Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).Install the MemSQL JDBC driver in a Forwarding AgentMemSQL uses MySQL's JDBC driver. See\u00a0Install the MySQL JDBC driver in a Forwarding Agent.Install the Microsoft JDBC Driver in a Forwarding AgentThis driver must be installed in every Forwarding Agent that will read from Microsoft SQL Server, Azure SQL Database, or Azure Synapse.That driver is not bundled with the Forwarding Agent, so to read from those sources, you must install one of the drivers.CautionDo not install both versions of the driver in the same Striim server or Forwarding Agent.For SQL Server 2008:Download the Microsoft JDBC Driver 6.0 for SQL Server\u00a0.gz package from https://www.microsoft.com/en-us/download/details.aspx?id=11774 and extract it.Copy enu/jre8/sqljdbc42.jar to agent/lib.Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).For all other versions:Download the Microsoft JDBC Driver 7.2 for SQL Server\u00a0.gz package from https://www.microsoft.com/en-us/download/details.aspx?id=57782 and extract it.Copy enu/mssql-jdbc-7.2.2.jre8.jar to agent/lib.Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).Install the MySQL JDBC driver in a Forwarding AgentThis driver must be installed in every Forwarding Agent that will read from MySQL.Download the Connector/J 8.0.27 package from\u00a0https://downloads.mysql.com/archives/c-j/\u00a0and extract it.Copy mysql-connector-java-8.0.27.jar\u00a0to Agent/lib.Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).Install the Oracle Instant Client in a Forwarding AgentThe Oracle Instant Client version 21c must be installed and configured in the Linux host environment of every Forwarding Agent that will run OJet.Download the client from https://download.oracle.com/otn_software/linux/instantclient/211000/instantclient-basic-linux.x64-21.1.0.0.0.zip and follow the installation procedure provided by Oracle for the operating system of the Forwarding Agent host.Edit Agent/conf/agent.conf and add the NATIVE_LIBS property to specify the Instant Client path, for example, NATIVE_LIBS=/usr/local/instantclient_21.1.Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).Install the Oracle JDBC driver in a Forwarding AgentThis driver must be installed on each Forwarding Agent that will read from Oracle.Download ojdbc8.jar from oracle.com.Save that file in\u00a0agent/lib.Stop and restart the Forwarding Agent (Starting and 
stopping the Forwarding Agent).
Install the PostgreSQL JDBC driver
The PostgreSQL JDBC driver is bundled with Striim Platform and Striim Cloud, but must be installed on each Forwarding Agent that will read from PostgreSQL.
Download the PostgreSQL JDBC 4.2 driver from jdbc.postgresql.org/download/.
Copy it to agent/lib.
Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).
Install the Snowflake JDBC driver
This driver must be installed in every Forwarding Agent that will write to Snowflake.
Download the Snowflake JDBC driver version 3.13.15 (snowflake-jdbc-3.13.15.jar) as described in Downloading / Integrating the JDBC Driver.
Copy the downloaded .jar file to Agent/lib.
Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).
Install the Teradata JDBC driver in a Forwarding Agent
This driver must be installed in every Forwarding Agent that will read from Teradata.
Download the Teradata JDBC .tgz or .zip package from http://downloads.teradata.com/download/connectivity/jdbc-driver and extract it.
Copy tdgssconfig.jar and terajdbc4.jar to agent/lib.
Stop and restart the Forwarding Agent (see Starting and stopping the Forwarding Agent).
Testing the Forwarding Agent
You may use this application (the sample data is installed with the agent) to verify that the agent is working:
CREATE APPLICATION agentTest;

CREATE FLOW AgentFlow;
CREATE SOURCE CsvDataSource USING FileReader (
  directory:'SampleData',
  wildcard:'PosDataAgentSample.csv',
  positionByEOF:false)
PARSE USING DSVParser (
  header:Yes,
  trimquote:false)
OUTPUT TO CsvStream;
END FLOW AgentFlow;

CREATE FLOW ServerFlow;
CREATE TARGET t USING FileWriter( filename:'AgentOut')
FORMAT USING DSVFormatter ()
INPUT FROM CsvStream;
END FLOW ServerFlow;

END APPLICATION agentTest;
Deploy AgentFlow in the Agent group and ServerFlow and agentTest in the Default group.
Then run the application and verify that AgentOut.00 has been written to the Striim program directory.
Starting and stopping the Forwarding Agent
To start a Forwarding Agent
If the agent was installed as a service:
In CentOS 6 or Ubuntu 14.04: sudo start striim-agent
In CentOS 7 or Ubuntu 16.04 or later: sudo systemctl start striim-agent
To start the agent as a process, run Agent/bin/agent.sh in OS X or Linux, or Agent\bin\agent.bat in Windows (see Running the Forwarding Agent as a process).
To stop a Forwarding Agent
If the agent is running as a service:
In CentOS 6 or Ubuntu 14.04: sudo stop striim-agent
In CentOS 7 or Ubuntu 16.04 or later: sudo systemctl stop striim-agent
In Windows (using PowerShell as an administrator): Stop-Service "Striim Agent"
If the agent is running as a process, press Ctrl-C in the terminal running agent.bat or agent.sh.
Upgrading Forwarding Agents
Stop the agent (see Starting and stopping the Forwarding Agent).
Uninstall the agent:
If the agent is running as a process, stop the process, copy agent.conf and any drivers you added to agent/lib to another location, and delete the agent directory.
If the agent is running as a service in CentOS, open a terminal and run rpm -e striim-agent.
If the agent is running as a service in Ubuntu, open a terminal and run dpkg --remove striim-agent.
If the agent is running as a service in Windows, stop the Striim Agent process, copy agent.conf and any drivers you added to agent/lib to another location, then run Agent/conf/windowsAgent/yajsw_agent/bat/uninstallService.bat.
Install the new version of the agent as described in the "Running the Forwarding Agent" topic appropriate for your environment. If running the agent as a process or as a service in Windows, use the backed-up copy of agent.conf, and before starting the agent copy any backed-up drivers to agent/lib. (If using rpm or dpkg, these files are preserved.)
Updating the sys user password on the Forwarding Agent
If the sys user's password changes, use the following command to update the password (required to connect to a Striim cluster) on the agent:
agent/bin/aksConfig.sh -p <new password>
Enabling agent failover
When a flow in an application with recovery enabled is deployed ON ONE (see DEPLOY APPLICATION) in a deployment group with more than one agent, and that agent goes down, Striim will automatically deploy the flow on another agent in the deployment group.
The flow will continue running on the other agent even after the original agent comes back up.In this section: Striim Forwarding Agent installation and configurationStriim Forwarding Agent system requirementsRunning the Forwarding Agent as a processRunning the Forwarding Agent as a service in CentOSRunning the Forwarding Agent as a service in UbuntuRunning the Forwarding Agent as a service in WindowsConfiguring the Forwarding AgentInstalling third-party drivers in the Forwarding AgentInstall the HP NonStop JDBC driver in a Forwarding AgentInstall the MariaDB JDBC driver in a Forwarding AgentInstall the MemSQL JDBC driver in a Forwarding AgentInstall the Microsoft JDBC Driver in a Forwarding AgentInstall the MySQL JDBC driver in a Forwarding AgentInstall the Oracle Instant Client in a Forwarding AgentInstall the Oracle JDBC driver in a Forwarding AgentInstall the PostgreSQL JDBC driverInstall the Snowflake JDBC driverInstall the Teradata JDBC driver in a Forwarding AgentTesting the Forwarding AgentStarting and stopping the Forwarding AgentUpgrading Forwarding AgentsUpdating the sys user password on the Forwarding AgentEnabling agent failoverSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-02\n", "metadata": {"source": "https://www.striim.com/docs/en/striim-forwarding-agent-installation-and-configuration.html", "title": "Striim Forwarding Agent installation and configuration", "language": "en"}} {"page_content": "\n\nApache Flume integrationSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Configuring remote hostsApache Flume integrationPrevNextApache Flume integrationOn the system running Flume:1. Extract striim_FlumeWebActionSink_....tgz on the system running Flume.2. Save the following as flume/conf/flume-env.sh (replace ... with the correct path):JAVA_OPTS=\"-Xms512m -Xmx1024m\"\nFLUME_CLASSPATH=\"/.../WebActionSink/lib/*\"If you are setting up the sample application, return to Using Apache Flume and continue with step 2.3. Create a configuration such as flume/conf/waflume.conf, as described in Using Apache Flume. Alternatively, add the properties to an existing .conf file.4. Start Flume, specifying the configuration file:bin/flume-ng agent --conf conf --conf-file conf/waflume.conf --name agent -Dflume.root.logger=INFO,consoleIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/apache-flume-integration.html", "title": "Apache Flume integration", "language": "en"}} {"page_content": "\n\ncollectd configurationSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Configuring remote hostscollectd configurationPrevNextcollectd configurationBefore an application can use the CollectdParser, the following properties must be set in collectd.conf:LoadPlugin syslog\nLoadPlugin aggregation\nLoadPlugin cpu\nLoadPlugin df\nLoadPlugin disk\nLoadPlugin interface\nLoadPlugin load\nLoadPlugin memory\nLoadPlugin network\nLoadPlugin rrdtool\nLoadPlugin swap\nLoadPlugin uptime\n<Plugin \"aggregation\">\n <Aggregation>\n Plugin \"cpu\"\n Type \"cpu\"\n GroupBy \"Host\"\n GroupBy \"TypeInstance\"\n CalculateNum false\n CalculateSum false\n CalculateAverage true\n CalculateMinimum false\n CalculateMaximum false\n CalculateStddev false\n </Aggregation>\n <Aggregation>\n Plugin \"memory\"\n Type \"memory\"\n GroupBy \"Host\"\n GroupBy \"TypeInstance\"\n CalculateNum false\n CalculateSum false\n CalculateAverage true\n CalculateMinimum false\n CalculateMaximum false\n CalculateStddev false\n </Aggregation>\n <Aggregation>\n Plugin \"swap\"\n Type \"swap\"\n GroupBy \"Host\"\n GroupBy \"TypeInstance\"\n CalculateNum false\n CalculateSum false\n CalculateAverage true\n CalculateMinimum false\n CalculateMaximum false\n CalculateStddev false\n </Aggregation>\n</Plugin>\n<Plugin df>\n\tMountPoint \"/\"\n\tIgnoreSelected false\n\tReportReserved false\n\tReportInodes false\n</Plugin>\n<Plugin disk>\n\tDisk \"/^[hs]d[a-f][0-9]?$/\"\n\tIgnoreSelected false\n</Plugin>\n<Plugin network>\n\t<Server \"127.0.0.1\">\n\t</Server>\n</Plugin>\n<Plugin \"swap\">\n\tReportByDevice false\n\tReportBytes true\n</Plugin>Replace 127.0.0.1 with the UDPReader's IP address. If the UDPReader's port setting is not 25826, specify it as well:<Plugin network>\n Server \"192.168.0.42\" \"25827\"\n</Plugin>See collectd.org for more information.Multiple remote hosts may send data to the same UDPReader source.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
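As a companion to the collectd.conf settings above, the Striim side of this setup is a source that receives the collectd packets with UDP Reader and parses them with CollectdParser. The following is only a sketch: the component and stream names are made up, and the reader property names shown (IPAddress, portno) are assumptions based on the pattern used by other Striim readers, so check the UDP Reader and Collectd Parser property references before using it. The IP address and port must match the Server entry in the collectd network plugin (25826 is the default port):
CREATE SOURCE CollectdSource USING UDPReader (
  IPAddress:'192.168.0.42',
  portno:'25826'
)
PARSE USING CollectdParser ()
OUTPUT TO CollectdStream;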
Last modified: 2017-01-10\n", "metadata": {"source": "https://www.striim.com/docs/en/collectd-configuration.html", "title": "collectd configuration", "language": "en"}} {"page_content": "\n\nSNMP configurationSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Configuring remote hostsSNMP configurationPrevNextSNMP configurationThis sample snmpd.conf will cause SNMP to collect system information and send it to the receiver of the sample application discussed in SNMPParser.rocommunity public\n# Default SNMPV3 user name\nagentSecName disman\n# Creating user, this user will be used by 'monitor' process to fetch system metrics.\ncreateUser disman MD5 sercrt@1\n# Setting user to have read only access to system metrics.\nrouser disman auth\n# SNMPV1 trap destination.\ntrapsink 10.1.10.114:15021 public\n# Instruct agent to monitor disk space\ndisk / 50% \n# Instruct agent to monitor CPU load.\nload 1 1 1\n# Setting up monitoring job to send alert/trap message.\nmonitor -S -D -r 5 -i sysName.0 -i hrSystemDate.0 -i sysUpTime.0 -o ifIndex \n -o ifSpeed -o ifHighSpeed -o ifPhysAddress -o ifInOctets -o ifInUcastPkts -o ifInDiscards \n -o ifInErrors -o ifOutOctets -o ifOutUcastPkts -o ifOutDiscards -o ifOutErrors \n -o ifOperStatus \"Interface Details\" ifOutOctets 0 0Multiple remote hosts may be configured to send to the same trap destination.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2018-01-26\n", "metadata": {"source": "https://www.striim.com/docs/en/snmp-configuration.html", "title": "SNMP configuration", "language": "en"}} {"page_content": "\n\nGlossarySkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 GlossaryPrevNextGlossarytermdefinition / notesadaptersee Source and Targetcachesee CacheCDCsee\u00a0Change Data Capture (CDC)clustersee Concepts Guidecomponenta Cache, Continuous query (CQ), Source, Stream, Subscription, Target, WActionStore, or Windowcontext streamsee WAction and WActionStorecontext typesee WAction and WActionStoreCQsee Continuous query (CQ)data typesee Typeeventsee Eventflowsee Flowindexsee WAction and WActionStorejumpingsee WindowKafka streamsee\u00a0Kafka streamsnamespacesee TQL programming rules and best practices, Using namespaces, and Managing users, permissions, and rolesnodein the context of monitoring or deployment groups, a Striim server or Forwarding Agent that is part of a Striim cluster (see\u00a0Monitoring Guide and\u00a0Managing deployment groups)objectin the context of console commands or permissions, an application, flow, or componentquerysee Continuous query (CQ)rolesee Managing users, permissions, and rolessizesee Windowslidingsee Windowsourcesee Sourcestreamsee Streamsubscriptionsee Subscriptiontargetsee TargetTungsten Query Language (TQL)see Fundamentals of TQL programmingWActionStoresee WAction and WActionStorewindowsee WindowIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-13\n", "metadata": {"source": "https://www.striim.com/docs/en/glossary.html", "title": "Glossary", "language": "en"}} {"page_content": "\n\nTypographical and syntax conventionsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Typographical and syntax conventionsPrevNextTypographical and syntax conventionsIn TQL and console-command syntax descriptions, the following conventions are used:Monospace indicates text to be typed into the console or a web UI dialog.Bold indicates a menu item, button name, or other web UI element that you select or click.Braces indicate you must choose one of two or more elements separated by vertical bars. For example, LIST { ROLES | USERS } means the available commands are LIST ROLES and LIST USERS.Angle brackets indicate elements you must replace. 
For example, you would replace <application name> with the name of a specific application.
Square brackets indicate optional elements in a command. For example, MON [<application name>] means you may use the command MON by itself or followed by the name of an application.
An ellipsis indicates that you may optionally specify multiple items. For example, fields=<field name>,... means you may specify multiple field names separated by commas, such as fields=city,state,zip. Similarly, <field name>:$IN$ <value>~... indicates that you may specify multiple field values separated by tildes, such as State:$IN$ California~Missouri~Nevada.
Last modified: 2017-01-10
", "metadata": {"source": "https://www.striim.com/docs/en/typographical-and-syntax-conventions.html", "title": "Typographical and syntax conventions", "language": "en"}} {"page_content": "
Release notes
The following are the release notes for Striim Cloud 4.2.0.
Changes that may require modification of your TQL code, workflow, or environment
Starting with release 4.2.0, TRUNCATE commands are supported by schema evolution (see Handling schema evolution). If you do not want to delete events in the target (for example, because you are writing to a data warehouse in Append Only mode), precede the writer with a CQ with the select statement SELECT * FROM <input stream name> WHERE META(x, OperationName) != 'Truncate'; (replacing <input stream name> with the name of the writer's input stream). Note that there will be no record in the target that the affected events were deleted. A sketch of such a CQ appears at the end of these release notes.
If you are using MSJet, contact Striim support for additional files required to resolve known issue DEV-37025.
MongoDB Reader no longer supports MongoDB versions prior to 3.6.
MongoDB Reader reads from MongoDB change streams rather than the oplog. Applications created in earlier releases will continue to read from the oplog after upgrading to 4.2.
To switch to change streams:Export the application to a TQL file.Drop the application.Revise the TQL as necessary to support new features, for example, changing the Connection URL to read from multiple shards.Import the TQL to recreate the application.Databricks Writer's Upload Policy's default eventcount value has been increased from 10000 to 100000 (see Databricks Writer properties).Customer-reported issues fixed in release 4.2.0DEV-11526: Oracle Reader hangs after network outageDEV-19035: unable to drop the JMSReader applicationDEV-22094: PostgreSQL Reader > DAtabase Writer with Oracle fails on timestamp with time zoneDEV-25489: Oracle Reader not checking supplemental loggingDEV-26693: Incremental Batch Reader with PostgreSQL error with TIMESTAMPTZDEV-27618: Mongo CosmosDB Writer is slow for initial loadDEV-28054: LEE does not show correct values when the app contains an open processorDEV-28500: GGTrail Reader not capturing ROWID in before imagesDEV-28534: alert manager SMTP reset issueDEV-28785: can't view Apps pageDEV-29264: REPORT LEE fails with com.webaction.wactionstore.Utility.reportExecutionTime errorDEV-29970: web UI message log missing messagesDEV-30476: high memory usage when GG Trail Reader processes large LOB dataDEV-31352: after making some changes in Flow Designer and dropping the app, the web UI hangsDEV-31423: error in UpgradeMetadataReposOracleTo4101.sqlDEV-31579: notification issueDEV-31993: MySQL Reader fails on a DDL change to a table not specified in the Tables propertyDEV-32268: PostgreSQL Reader issue with non-lowercase schemasDEV-32275: MongoDB Reader timed out after 10000 ms while waiting to connectDEV-32632: MS SQL Reader > Database Writer with SQL Server missing eventsDEV-33086: MongoDB Reader can't read from MongoDB version 5.0.13DEV-33166: Databricks Writer \"This request is not authorized to perform this operation using this permission\"DEV-33426: Databricks Writer \"table not found\" error when table existsDEV-33510: MongoDB CDC sending Insert operation as Update operationDEV-33543: exported TQL has stream and router DDL out of orderDEV-34346: MongoDB Reader with SSL > S3Writer crashDEV-34365: MariaDBReader does not halt when when binlog file is not presentDEV-34399: Role tab of User page in web UI is blankDEV-34551: GG Trail Reader uses old an old TDR record to create type and app crashes with ColumnType mismatchDEV-34623: Azure Synapse Writer \"Incorrect syntax near 'PERCENT'\"DEV-34725: PostgreSQL Reader > Database Writer with PostgreSQL JSON operator errorDEV-34768: unable to run setupOjet due to missing ojdbc-21.1.jarDEV-34874: cannot enable CDDL Capture with Start Time/ Start PositionDEV-34926: UI slowDEV-34966: Database Reader converting NULLs to 0 when selecting from int (unsigned) columnsDEV-35054: Oracle Reader SQLIntegrityConstraintViolationException: Column 'FILENAME' cannot accept a NULL value.DEV-35096: Alert Manager: email address with special characters not acceptedDEV-35164: Apps page keeps reloading after dropping appDEV-35195: ADLS Gen2 Writer error \"Component Type: TARGET. 
Cause: null\"DEV-35196: Issues when receiving alert mail during app crashDEV-35349: BigQueryWriter in streaming mode \"Could not parse '2023-03-01 12:41:16.216614+00' as a timestamp\"DEV-35379: Database Reader with SQL Server \"Error occured while creating type for table, Problem creating type\"DEV-35405: OJet firstSCN issueDEV-35428: MySQL Reader ignores DDL if there is a space before the schema name in the Tables stringDEV-35429: can't deploy app with router component after upgradeDEV-35481: MS SQL Reader > Database Writer with Oracle NO_OP_UPDATE exceptionDEV-35548: HTTP Reader is binding to non-SSL portDEV-35581: exceptions not showing up in exception storeDEV-35654: property variable created in the web UI doesn't work in appDEV-35790: BEFORE() function issueDEV-35881: Snowflake Writer \"Timestamp '2023-01-01 \ufffd\ufffd:\ufffd\ufffd:\ufffd\ufffd.000000000 ' is not recognized\"DEV-35974: Alert Manager page is blankDEV-35994: All vaults lost after restarting Striim Platform or Striim CloudDEV-36135: app goes into quiesce state every time it is restartedDEV-36157: Database Reader with MySql or MariaDB \"Error occured while creating type for table {xxx.xxx}\"DEV-36158: OJet issue when table has both primary and unique keysDEV-36166: MariaDB \"Out of range value for column 'asn' : value 4220006002 is not in class java.lang.Integer range\"DEV-36307: \"invalid bytecode org.objectweb.asm.tree.analysis.AnalyzerException\" when starting StriimDEV-36308: upgrade fails with \"metadataDB field is not set to one of the options derby, oracle, or postgres\"DEV-36352: MariaDB Reader > DatabaseWriter with MySQL \"java.lang.Integer cannot be cast to java.lang.Short\"Resolved issuesThe following previously reported known issues were fixed in this release:DEV-29579: Databricks Writer cannot be used in WindowsDEV-31993: control characters in DDL statements cause application to haltKnown issues from past releasesDEV-5701: Dashboard queries not dropped with the dashboard or overwritten on importWhen you drop a dashboard, its queries are not dropped. If you drop and re-import a dashboard, the queries in the JSON file do not overwrite those already in Striim.Workaround: drop the namespace or LIST NAMEDQUERIES, then manually drop each one.DEV-8142: SORTER objects do not appear in the UIDEV-8933: DatabaseWriter shows no error in UI when MySQL credentials are incorrectIf your DatabaseWriter Username or Password values are correct, you will see no error in the UI but no data will be written to MySQL. You will see errors in webaction.server.log regarding DatabaseWriter containing \"Failure in Processing query\" and \"command denied to user.\"DEV-11305: DatabaseWriter needs separate checkpoint table for each node when deployed on multiple nodesDEV-17653: Import of custom Java function failsIMPORT STATIC may fail. 
Workaround: use lowercase import static.DEV-19903: When DatabaseReader Tables property uses wildcard, views are also readWorkaround: use Excluded Tables to exclude the views.Third-party APIs, clients, and drivers used by readers and writersAzure Event Hub Writer uses the azure-eventhubs API version 3.0.2.Azure Synapse Writer uses the bundled SQL Server JDBC driver.BigQuery Writer uses google-cloud-bigquery version 2.3.3.Cassandra Cosmos DB Writer uses cassandra-jdbc-wrapper version 3.1.0Cassandra Writer uses cassandra-java-driver version 3.6.0.Cloudera Hive Writer uses hive-jdbc version 3.1.3.CosmosDB Reader uses Microsoft Azure Cosmos SDK for Azure Cosmos DB SQL API 4.29.0.CosmosDB Writer uses documentdb-bulkexecutor version 2.3.0.Databricks Writer in AWS uses aws-java-sdk-sts version 1.11.320, aws-java-sdk-s3 version 1.11.320 , and aws-java-sdk-kinesis version1.11.240.Derby: the internal Derby instance is version 10.9.1.0.Elasticsearch: the internal Elasticsearch cluster is\u00a0version 5.6.4.GCS Writer uses the google-cloud-storage client API version 1.106.0.Google PubSub Writer uses the google-cloud-pubsub client API version 1.110.0.HBase Writer uses HBase-client version 2.4.13.Hive Writer and Hortonworks Hive Writer use hive-jdbc version 3.1.3.The HP NonStop readers use OpenSSL 1.0.2n.JMS Reader and JMS Writer use the JMS API 1.1.Kafka: the internal Kafka cluster is\u00a0version 0.11.0.1.Kudu: the bundled Kudu Java client is version 1.13.0.Kinesis Writer uses aws-java-sdk-kinesis version 1.11.240.MapR DB Writer uses hbase-client version 2.4.10.MapR FS Reader and MapR FS Writer use Hadoop-client version 3.3.4.MariaDB Reader uses maria-binlog-connector-java-0.2.3-WA1.MariaDB Xpand Reader uses mysql-binlog-connector-java version 0.21.0 and mysql-connector-java version 8.0.27.Mongo Cosmos DB Reader, MongoDB Reader, and MongoDB Writer use mongodb-driver-sync version 4.8.2.MySQL Reader uses mysql-binlog-connector-java version 0.21.0 and mysql-connector-java version 8.0.27.Oracle: the bundled Oracle JDBC driver is ojdbc-21.1.jar.PostgreSQL: the bundled PostgreSQL JDBC 4.2 driver is version 42.4.0Redshift Writer uses aws-java-sdk-s3 1.11.320.S3 Reader and S3 Writer use aws-java-sdk-s3 1.11.320.Salesforce Reader uses the Force.com REST API version 53.1.0.Salesforce Writer: when Use Bulk Mode is True, uses Bulk API 2.0 Ingest; when Use Bulk Mode is False, uses the Force.com REST API version 53.1.0.Snowflake Writer: when Streaming Upload is False, uses snowflake-jdbc version.3.13.15; when Streaming Upload is True, uses Snowflake Ingest SDK 1.0.2-beta.5.Spanner Writer uses the google-cloud-spanner client API version 1.28.0 and the bundled JDBC driver is google-cloud-spanner-jdbc version 1.1.0.SQL Server: the bundled Microsoft SQL Server JDBC driver is version 7.2.2.In this section: Release notesChanges that may require modification of your TQL code, workflow, or environmentCustomer-reported issues fixed in release 4.2.0Resolved issuesKnown issues from past releasesThird-party APIs, clients, and drivers used by readers and writersSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
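Returning to the TRUNCATE-filtering CQ described under "Changes that may require modification of your TQL code, workflow, or environment," here is a minimal sketch. The component and stream names are hypothetical, and the stream type assumes a CDC source whose output is of type WAEvent; adapt the names to your own application:
CREATE STREAM FilteredOutputStream OF Global.WAEvent;

CREATE CQ FilterTruncateCQ
INSERT INTO FilteredOutputStream
SELECT * FROM SourceOutputStream s
WHERE META(s, OperationName) != 'Truncate';
The writer's INPUT FROM clause should then reference FilteredOutputStream instead of the source's original output stream, so Truncate operations never reach the target.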
Last modified: 2023-06-16\n", "metadata": {"source": "https://www.striim.com/docs/en/release-notes.html", "title": "Release notes", "language": "en"}} {"page_content": "\n\nStriim Platform features not currently available in Striim CloudSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Striim Platform features not currently available in Striim CloudPrevNextStriim Platform features not currently available in Striim Cloudmulti-server clusters and failoverthe schema conversion utility (initial load wizards with Auto Schema Creation are supported in both Striim Platform and Striim Cloud)using environment variables in adapter propertiesreading from SQL Server 2008 directly, without using a Forwarding Agentwriting to SQL Server 2008using Active Directory authentication with Azure SQL Database, Azure Synapse, or SQL Serverthe SysOut adapterusing LDAP authenticationreading log filesfile lineagemonitoring using JMXcreating custom Java functionsIn this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-09-08\n", "metadata": {"source": "https://www.striim.com/docs/en/striim-platform-features-not-currently-available-in-striim-cloud.html", "title": "Striim Platform features not currently available in Striim Cloud", "language": "en"}} {"page_content": "\n\nContact Striim supportSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Cloud 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Cloud 4.2.0 Contact Striim supportPrevContact Striim supportSelect Create ticket from the menu, fill out the Contact Support form, and click Submit.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-02\n", "metadata": {"source": "https://www.striim.com/docs/en/contact-striim-support.html", "title": "Contact Striim support", "language": "en"}} {"page_content": " Documentation | Striim Products Striim Cloud Striim Platform Striim for BigQuery Striim For Databricks Striim for Snowflake Striim CloudA fully managed SaaS solution that enables infinitely scalable unified data integration and streaming. Striim PlatformOn-premise or in a self-managed cloud to ingest, process, and deliver real-time data. 
Welcome to Striim documentation. Find the links below to your product and access all relevant documentation:
Striim Cloud: all your documentation needs for our fully-managed Cloud SaaS solution
Striim Platform: all your documentation needs for our self-hosted solution
Striim for BigQuery: all your documentation needs for Striim for BigQuery
Striim for Databricks: all your documentation needs for Striim for Databricks
Striim for Snowflake: all your documentation needs for Striim for Snowflake
Striim for StreamShift: all your documentation needs for StreamShift
What's new in Striim Platform 4.2.0
Looking for the Striim Cloud documentation? Click here.
The following features are new in Striim Platform 4.2.0.
Web UI
You can set the online help to get the latest documentation from the web rather than the possibly outdated version bundled with the server (see Switching online help links to open the latest docs on the web).
Resource usage policies can help prevent web UI slowdown (see Resource usage policies).
The Apps page has improvements in filtering, sorting, monitoring, and organizing apps.
ROUTER components may be created in the Flow Designer (see CREATE ROUTER).
Application development
Wizards support initial schema creation for Salesforce, Salesforce Pardot, and ServiceNow sources.
Schema evolution supports TRUNCATE TABLE for additional sources and targets (see Handling schema evolution).
Schema evolution supports ALTER TABLE ... ADD PRIMARY KEY and ALTER TABLE ... ADD UNIQUE for MariaDB and MySQL (see Handling schema evolution); an illustrative DDL example appears after this list.
Sources and targets
GCS Reader reads from Google Cloud Storage.
Salesforce Pardot Reader reads Salesforce Pardot sObjects.
Salesforce Reader supports JWT Bearer Flow authentication.
BigQuery Writer supports parallel requests when using the Storage Write API and allows specifying HttpTransportOptions timeouts in TQL (see BigQuery Writer properties).
MongoDB Writer:
Supports exactly-once processing (see the notes for the Checkpoint Collection property).
Shard key updates have been added to the available Ignorable Exception Code property values.
ServiceNow Writer writes to tables in ServiceNow.
Change data capture
MongoDB Reader:
Supports MongoDB versions up to 6.3.x.
In Incremental mode, a single MongoDB Reader can read from an entire cluster using a +srv connection URL.
With MongoDB 4.2 and later, reads from change streams (see MongoDB Manual > Change Streams) instead of the oplog.
When reading from change streams, supports transactions and unset operations, and provides additional metadata.
Can select documents based on queries (see Selecting documents using MongoDB Config).
MSJet supports compressed tables and indexes (see Learn / SQL / SQL Server / Enable Compression on a Table or Index).
Oracle Reader supports Oracle Database 21c.
Administration, monitoring, and alerts
Resource usage policies can help prevent issues such as applications halting because the server has run out of memory or disk space (see Resource usage policies).
Cluster-level Smart Alerts can be modified in the web UI (see Managing Smart Alerts).
You can set a timeout for web UI and console sessions (see Setting a web UI and console timeout).
Vaults support Google Secrets Manager (see Using vaults).
Installation and configuration
You can set the online help to get the latest documentation from the web rather than the possibly outdated version bundled with the server (see Switching online help links to open the latest docs on the web).
You can configure Striim to automatically deactivate a user after a set number of failed login attempts (see Locking out users after failed logins). Deactivated users can be activated again on the Users page of the web UI.
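To make the schema evolution items above concrete, these are the kinds of source DDL statements (run here against hypothetical MariaDB/MySQL tables) that schema evolution can now propagate; the table and column names are illustrative only.

-- illustrative MariaDB/MySQL DDL now handled by schema evolution
ALTER TABLE orders ADD PRIMARY KEY (order_id);
ALTER TABLE customers ADD UNIQUE (email);
-- TRUNCATE TABLE is likewise supported for additional sources and targets
TRUNCATE TABLE staging_orders;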
Differences between Striim Platform and Striim Cloud
Striim is available as two different products: Striim Platform and Striim Cloud.
Striim Platform is installed on your hardware or deployed on a virtual machine in AWS, Azure, or Google Cloud Platform.
You must configure and manage it yourself.
Striim Cloud is a fully managed SaaS platform available on Azure or Google Cloud Platform (not yet available in the GCP marketplace; contact Striim support for assistance).
In this documentation, when we say "Striim" without specifying Platform or Cloud, it applies to both.
The following features are currently available only in Striim Platform, not in Striim Cloud:
multi-server clusters and failover
the schema conversion utility (initial load wizards with Auto Schema Creation are supported in both Striim Platform and Striim Cloud)
using environment variables in adapter properties
reading from SQL Server 2008 directly, without using a Forwarding Agent
writing to SQL Server 2008
using Active Directory authentication with Azure SQL Database, Azure Synapse, or SQL Server
the SysOut adapter
using LDAP authentication
reading log files
file lineage
monitoring using JMX
creating custom Java functions
See also the known issues listed in Release notes.
Getting Started
This section of the documentation provides an introduction to the platform for new and prospective users.
First, follow the instructions in Install Striim Platform for evaluation purposes (or see Deploying and managing Striim Cloud).
Once you do that, you may take the Hands-on quick tour, explore Striim on your own, or run the following demo and sample applications:
The CDC (change data capture) demo apps highlight Striim's initial load and CDC replication capabilities. A Docker container with a PostgreSQL database is provided to try fast, high-volume data loading to another database, Kafka, or file storage. See Running the CDC demo apps.
The PosApp sample application demonstrates how a credit card payment processor might use Striim to generate reports on current transaction activity by merchant and send alerts when transaction counts for a merchant are higher or lower than average for the time of day. This application is explained in great detail, so it is useful for developers who want to learn how to write Striim applications.
The MultiLogApp sample application demonstrates how Striim could be used to monitor and correlate web server and application server logs from the same web application. For developers who want to learn how to write Striim applications, this builds on the concepts covered in PosApp.
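If you prefer the console to the web UI steps described in the hands-on tour, a loose sketch of loading and running PosApp might look like the following. The @-file loading and DEPLOY/START statements shown here are an assumption based on typical Striim console usage, and the TQL path depends on where Striim is installed.

-- sketch only: load the sample app's TQL, then deploy and start it
CREATE NAMESPACE Samples;
USE Samples;
@Samples/PosApp/PosApp.tql;
DEPLOY APPLICATION Samples.PosApp;
START APPLICATION Samples.PosApp;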
Common Striim use cases
Striim is a distributed data integration and intelligence platform that can be used to design, deploy, and run data movement and data streaming pipelines. The following are common business applications for the Striim platform. (Note that these examples include just a small fraction of the thousands of source-target combinations Striim supports.)
Cloud adoption, including database migration, database replication, and data distribution. Popular data pipelines for this scenario include:
RDBMS to RDBMS, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to homogeneous or heterogeneous databases running on AWS, Google Cloud Platform, Microsoft Azure, or Oracle Cloud.
RDBMS to data warehouse, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to Amazon Redshift, Azure Synapse, Databricks, Google BigQuery, or Snowflake.
Hybrid cloud data integration, including on-premise to cloud, on-premise to on-premise, cloud to cloud, and cloud to on-premise topologies. Popular data pipelines for this scenario include:
RDBMS to RDBMS, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to homogeneous or heterogeneous databases running on AWS, Google Cloud Platform, Microsoft Azure, or Oracle Cloud.
RDBMS to queuing systems, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to Kafka or cloud-based messaging systems such as Amazon Kinesis, Azure Event Hub, or Google PubSub.
Queuing systems to RDBMS, including from Kafka to MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server.
RDBMS to cloud-based storage systems, including from MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server to Amazon S3, Azure Data Lake Storage, or Google Cloud Storage.
Cloud-based storage systems to RDBMS, including from Amazon S3 to MariaDB, MySQL, HP NonStop, Oracle Database, PostgreSQL, or SQL Server.
Digital transformation, including real-time data distribution, real-time reporting, real-time analytics, stream processing, operational monitoring, and machine learning.
Popular use cases for this scenario include:
Real-time alerting and notification for CDC workloads (see the discussion of alerts in Running the CDC demo apps).
Streaming analytics using data windows (see Sample applications for programmers).
Running SQL-based continuous queries on moving data pipelines.
Creating real-time dashboards on CDC or Kafka workloads.
Install Striim Platform for evaluation purposes
The instructions in this section are intended for evaluation and development purposes only. If you are installing for production, see Installation and configuration.
To evaluate Striim Platform: if your system requirements match those in Configuring your system to evaluate Striim, you may request an evaluation download from www.striim.com/download-striim, then install it following the instructions in Evaluating on Mac OS X, Linux, or Windows or Alternative installation method.
Alternatively, deploy a Striim Platform solution from the Amazon AWS Marketplace, Azure Marketplace, or Google Cloud Platform Marketplace (see Running Striim in Amazon EC2, Running Striim in Azure, or Running Striim in the Google Cloud Platform). Striim will be free for an initial trial period, during which you may be charged by Amazon, Microsoft, or Google for virtual machine usage. To minimize those charges, stop the virtual machine when you are not using it.
Unlike Striim Cloud, which is managed by Striim (software as a service), these solutions are simply Striim Platform running in an AWS, Azure, or Google virtual machine (platform as a service).
Configuring your system to evaluate Striim
Warning: Do not install Striim on a computer with 4 GB or less RAM.
The following are the minimum requirements for evaluating Striim:
memory: 4 GB available for use by Striim (so the system should have at least 5 GB, preferably 8 GB or more); running the CDC demo apps will require a system with a minimum of 8 GB, preferably 12 GB or more; using Kafka streams may require a system with 12 GB or more
free disk space: minimum 10 GB, 20 GB recommended
supported operating systems: Microsoft Windows 8.1 or 10; Windows Server 2012; Mac OS X 10; 64-bit CentOS 6.7 through 7.6; 64-bit Ubuntu 14.04, 16.04, or 18.04
supported Java environments: recommended: 64-bit Oracle SE Development Kit 8 (required to use HTTPReader or SNMPParser); also supported: 64-bit OpenJDK 8
To install Oracle Java SE Development Kit 8: on Windows, download the x64 installer from https://www.oracle.com/java/technologies/downloads/#java8-windows, run the installer, and follow the instructions; on macOS, download the installer from https://www.oracle.com/java/technologies/downloads/#java8-mac, double-click the update package, and follow the instructions.
firewall settings: to allow other computers to connect to Striim's web UI, open port 9080 for TCP inbound; to allow Forwarding Agents to connect to Striim, or to create a multi-server cluster (not supported by a free evaluation license), configure the firewall as described in System requirements.
web browser: the web client has been tested on Chrome. Other web browsers may work, but if you encounter bugs, try Chrome.
Java: recommended: 64-bit Oracle SE Development Kit 8 (required to use HTTPReader, SNMPParser, or Kerberos authentication for Oracle or PostgreSQL); for a license, contact Oracle License Management Service; also supported: 64-bit OpenJDK 8.
To verify that Java installed correctly, enter the following at the command line in a command prompt or terminal window:
java -version
The output should be:
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
Continue with the instructions in Installing Striim for evaluation purposes.
Evaluating on Mac OS X, Linux, or Windows
Before following these instructions, see Configuring your system to evaluate Striim and Running the CDC demo apps. If you run into any problems with the installer or web configuration tool, you may use the Alternative installation method.
Note: The Striim_420.jar installer requires Python 2 and is incompatible with Python 3. If Python 2 is not available, use the Alternative installation method. On a Mac with an M1 CPU, use the Alternative installation method.
To install the CDC demo apps, see the instructions in Installing the CDC demo apps.
Download Striim_420.jar.
Double-click the downloaded file to launch the installer. If nothing happens, open a shell window, switch to the directory where you downloaded the file, and enter the following command: java -jar Striim_<version>.jar
When the installer appears, click Next.
Optionally, change the installation path. In Windows, if the specified directory does not exist, create it.
Then click Next > OK > Next.
Caution: Known issue (DEV-22317): do not put the striim directory under a directory with a space in its name, such as C:\Program Files.
The web configuration tool will open in your default browser. Return to the installer and click Next > Done.
Return to the web client, read the license agreement, then click Accept and Continue.
If anything goes wrong in the following steps, Striim/logs/WebConfig.log may indicate the problem. If you contact Striim support, give them a copy of this file.
If you see "Congratulations," click Continue. If you see messages that your computer cannot run Striim, resolve the problems indicated, for example by installing the required version of Java or switching to Chrome. To restart the web configuration tool, run .../Striim/bin/WebConfig.sh.
Enter the following in the appropriate fields: your company name; the name for the Striim cluster (this value defaults to the current user name, but you may change it); the password for the admin user; the password for the sys user; the Striim keystore password. If the system has more than one network interface and the installer has chosen the wrong one, choose the correct one. Click Save & Continue.
If you have a license key, enter it. If not, leave the field blank to get a trial license. Click Continue.
When you see the Launch button, click it. A video will play while Striim is launching, which may take a few minutes.
When you see the Log In prompt, enter admin in the Username field, enter the password you set for the admin user in the Password field, and click Log In.
Continue with Viewing dashboards.
Note: To stop, restart, or reconfigure Striim, run .../Striim/bin/WebConfig.sh.
Alternative installation method
Before following these instructions, see Configuring your system to evaluate Striim and Running the CDC demo apps.
If you are evaluating Striim on Windows 8.1 (which does not support Docker Desktop, required to run the CDC demo apps), or have trouble with the Striim_4.2.0.jar installer, you may install as follows.
Download Striim_4.2.0.zip.
Extract the .zip file to a location of your choice. Adjust all the paths in the following instructions accordingly.
Caution: Known issue (DEV-22317): do not put the striim directory under a directory with a space in its name, such as C:\Program Files.
Open a terminal or command prompt and change to the striim directory.
In CentOS or Ubuntu, enter sudo su - striim bin/sksConfig.sh; in OS X, enter bin/sksConfig.sh; in Windows, enter bin\sksConfig.
When prompted, enter passwords for the Striim keystore and the admin and sys users. Choose Derby as the MDR (metadata repository).
Open Striim/conf/startUp.properties in a text editor, edit the following properties (removing any # characters and spaces from the beginning of the lines), and save the file:
WAClusterName: a name for the Striim cluster (note that if an existing Striim cluster on the network has this name, Striim will try to join it)
CompanyName: if you specify keys, this must exactly match the associated company name. If you are using a trial license, any name will work.
ProductKey and LicenseKey: if you have keys, specify them; otherwise leave blank to run Striim on a trial license.
Note that you cannot create a multi-server cluster using a trial license.
Interfaces: if the system has more than one IP address, specify the one you want Striim to use; otherwise leave blank and Striim will set this automatically.
Open a command prompt and run striim/bin/server.sh or striim\bin\server.bat.
Wait for output similar to the following before going on to the next step:
Please go to http://192.168.7.91:9080 or https://192.168.7.91:9081 to administer, or use console
If your operating system supports Docker Desktop, you may follow the instructions in "Installing the CDC demo apps in an existing Striim server" in Running the CDC demo apps.
Open a web browser, go to http://localhost:9080, and log in with username admin and the password you provided in step 3.
Continue with Viewing dashboards.
Note: To stop Striim, press Ctrl-C in the command prompt. To restart, repeat step 5.
Evaluating Striim for Snowflake
A free trial of Striim for Snowflake is available through Snowflake Partner Connect.
Getting your free trial of Striim for Snowflake
Prerequisites for the free trial: a Snowflake account with access to Partner Connect, and a Snowflake login with the ACCOUNTADMIN role. For more information, see Snowflake Partner Connect and System-Defined Roles.
To start your free trial:
Log in to Snowflake.
On the right side of the top menu, click Partner Connect.
From the drop-down menu under your user name, select Switch Role > ACCOUNTADMIN.
Click Striim > Connect > Activate.
Enter your company name, a domain name (this will be the first part of the URL to the Striim application), and a password, check the checkbox, and click Complete Sign Up.
On the next page, click the Visit link.
Enter the email address associated with your Snowflake account and the password you provided in the "Create an account" dialog.
Enter a name for your Striim service and click Create.
If your database is behind a firewall, click the shield icon to get the IP address to add to the firewall's allow list. When the status changes from CREATING to RUNNING, click Open Application.
Continue with the instructions in Creating an application in Striim for Snowflake.
Creating an application in Striim for Snowflake
Before creating an application, you must create a service as described in Getting your free trial of Striim for Snowflake. After completing those steps, you should be looking at the App Wizard page in Striim.
Click the appropriate Lift and Shift wizard for your source database. This will create an application to copy all data from selected tables in your source database to Snowflake.
Enter a name for your application and click Next.
Enter the requested configuration details and credentials for the source database, then click Next. If the connection check is successful, click Next. Otherwise, click Back and fix the errors in the configuration.
Select the schemas you want to move, then click Next. When validation is complete, click Next again.
Select the tables to include, or click Select All. If any tables contain data types incompatible with the target database, they will not be selectable. When done selecting tables, click Next.
The target properties are set automatically. Do not change them. Click Next.
When a check appears next to "Starting your data movement," click Next.
The Application Progress dialog tracks the progress of your lift and shift operation and shows the status of the source.
Select the target in the left column to see the status in Snowflake.
The imported data is now available in the PC_STRIIM_DB database.
To return to Striim for Snowflake, use the login link in the email you received from Striim.
For more information or to request a full version of Striim, click the Drift icon in the lower left corner of the Striim for Snowflake window. If the chatbot cannot provide you with the necessary resources, it will connect you with a Striim representative. Alternatively, Contact Striim support.
Running the CDC demo apps
The CDC demo applications demonstrate Striim's data migration capabilities using a PostgreSQL instance in a Docker container.
If you installed Striim Platform as discussed in Evaluating on Mac OS X, Linux, or Windows with Docker Desktop running, the CDC demo apps are ready to use; no installation is necessary. Otherwise, see Installing the CDC demo apps.
About the CDC demo apps
There are three groups of applications:
SamplesDB demonstrates a SQL CDC source to database target pipeline, replicating data from one set of PostgreSQL tables to another set. The two applications that are similar to their real-world equivalents are:
PostgresToPostgresInitialLoad150KRows uses Database Reader and Database Writer to replicate 150,000 existing records from the customer, nation, and region tables to the customertarget, nationtarget, and regiontarget tables. In a real-world application, the source and target would typically be different databases. For example, the source might be Oracle and the target might be Amazon Redshift; Azure SQL Data Warehouse, PostgreSQL, or SQL DB; Google BigQuery, Cloud SQL, or Spanner; or Snowflake.
PostgresToPostgresCDC uses PostgreSQLReader (see PostgreSQL) and Database Writer to continuously update the target tables with changes to the source.
SamplesDB2Kafka demonstrates a typical SQL CDC source to Kafka target pipeline, replicating data from a set of PostgreSQL tables to a Kafka topic.
The two applications that are similar to their real-world equivalents are:
PostgresToKafkaInitialLoad150KRows uses Database Reader and Kafka Writer to replicate 150,000 existing records from the PostgreSQL customer, nation, and region tables to messages in a Kafka topic called kafkaPostgresTopic. In a real-world application, the target would be an external Kafka instance, either on-premise or in the cloud.
PostgresToKafkaCDC uses PostgreSQLReader (see PostgreSQL) and Kafka Writer to continuously update the Kafka topic with changes to the PostgreSQL source tables. Note that updates and deletes in PostgreSQL create new messages in Kafka rather than updating or deleting previous messages relating to those rows.
SamplesDB2File demonstrates a typical SQL CDC source to file target pipeline, replicating data from a set of PostgreSQL tables to files. The two applications that are similar to their real-world equivalents are:
PostgresToFileInitialLoad150KRows uses Database Reader and File Writer to replicate 150,000 existing records from the PostgreSQL customer, nation, and region tables to files in striim/SampleOutput. In a real-world application, the target directory would typically be on another host, perhaps in AWS S3, Azure Blob Storage or HD Insight Hadoop, or Google Cloud Storage.
PostgresToFileCDC uses PostgreSQLReader (see PostgreSQL) and File Writer to continuously update the files with changes to the PostgreSQL source tables. Note that updates and deletes in PostgreSQL add new entries to the target files rather than updating or deleting previous entries relating to those rows.
Striim provides wizards to help you create similar applications for many source-target combinations (see Creating apps using templates).
The other applications use open processors (see Creating an open processor component) and other custom components to manage the PostgreSQL instance and generate inserts, updates, and deletes. In a real-world application, the source database would be updated by users and other applications.
ValidatePostgres, ValidateKafka, and ValidateFile verify that the sources and targets used by the other apps are available.
Execute250Inserts adds 250 rows to the source tables and stops automatically.
Execute250Updates changes 250 rows in the source tables and stops automatically.
Execute250Deletes removes 250 rows from the source tables and stops automatically.
ResetPostgresSample, ResetKafkaSample, and ResetFileSample clear all the data created by the other apps, leaving the apps, PostgreSQL tables, Kafka, and SampleOutput directory in their original states.
Running the applications
When Striim, the PostgreSQL instance in Docker, and Kafka are running, you can use the PostgreSQL demo applications. The process is the same for all three sets of applications.
Deploy and start the ValidatePostgres, ValidateKafka, and ValidateFile applications and leave them running.
In the SamplesDB group, deploy and start the SamplesDB.PostgresToPostgresInitialLoad150KRows application.
When you see the alert, initial load has completed. Stop and undeploy the InitialLoad application.
Deploy and start the SamplesDB.PostgresToPostgresCDC application.
Once the CDC application is running, deploy and start the SamplesDB.Execute250Inserts application. It will add 250 rows to the customer table, give you an alert, and stop automatically. The CDC app will replicate the rows to the target.
Deploy and start SamplesDB.Execute250Updates.
It will update a random range of 250 rows in the customer table, give you an alert, and stop automatically. The PostgreSQL CDC app will replicate the changes to the corresponding rows in the customertarget table. PostgresToKafkaCDC will add messages describing the updates to the target topic. PostgresToFileCDC will add entries describing the updates to the files in SampleOutput.
Deploy and start SamplesDB.Execute250Deletes. It will delete the first 250 rows in the customer table, give you an alert, and stop automatically. The PostgreSQL CDC app will delete the corresponding rows in the customertarget table. PostgresToKafkaCDC will add messages describing the deletes to the target topic. PostgresToFileCDC will add entries describing the deletes to the files in SampleOutput.
Verifying PostgreSQL to PostgreSQL replication
To view the results of the load, insert, update, and delete commands in the PostgreSQL target, use any PostgreSQL client to log in to localhost:5432 with username striim and password striim.
Alternatively, you can access the container's command line and run psql. In a Docker Quickstart, OS X, or Linux terminal, enter:
docker exec -it striimpostgres /bin/bash
When you see the bash prompt, enter:
psql -U striim -d webaction
Before running PostgresToPostgresInitialLoad150KRows, the customer table has 150,000 rows and customertarget has none:
webaction=# select count(*) from customer;
 count
--------
 150000
(1 row)

webaction=# select count(*) from customertarget;
 count
-------
     0
(1 row)
After running PostgresToPostgresInitialLoad150KRows, customertarget has 150,000 rows:
webaction=# select count(*) from customertarget;
 count
--------
 150000
(1 row)
After stopping PostgresToPostgresInitialLoad150KRows, starting PostgresToPostgresCDC, and running Execute250Inserts:
webaction=# select count(*) from customertarget;
 count
--------
 150250
(1 row)
After running Execute250Updates:
webaction=# select * from customer where c_custkey=113981;
 c_custkey |       c_name       |             c_address              | c_nationkey ...
-----------+--------------------+------------------------------------+------------ ...
    113981 | Customer#000113981 | kpxLWwaZh3DpOr Qudn1OKolRYyIlFshOG |           4 ...
(1 row)

webaction=# select * from customertarget where c_custkey=113981;
 c_custkey |       c_name       |             c_address              | c_nationkey ...
-----------+--------------------+------------------------------------+------------ ...
    113981 | Customer#000113981 | kpxLWwaZh3DpOr Qudn1OKolRYyIlFshOG |           4 ...
(1 row)
After running Execute250Deletes:
webaction=# select * from customer where c_custkey=1;
 c_custkey | c_name | c_address | c_nationkey | c_phone | c_acctbal | c_mktsegment | c_comment
-----------+--------+-----------+-------------+---------+-----------+--------------+-----------
(0 rows)

webaction=# select * from customertarget where c_custkey=1;
 c_custkey | c_name | c_address | c_nationkey | c_phone | c_acctbal | c_mktsegment | c_comment
-----------+--------+-----------+-------------+---------+-----------+--------------+-----------
(0 rows)
Viewing Kafka target data
To see the output of PostgresToKafkaCDC, use Kafka Tool or a similar viewer. The Kafka cluster name is the same as your Striim cluster name. The Kafka version is 0.11.
Viewing file target data
The output of PostgresToFileCDC is in striim/SampleOutput.
Running the applications again at a later time
In a terminal or command prompt, enter:
docker start striimpostgres
If Striim's internal Kafka instance is not running, start it (see Configuring Kafka for persisted streams).
Warning: On Windows, Zookeeper and Kafka do not shut down cleanly. (This is a well-known problem.) Before you restart Kafka, you must delete the files they leave in c:\tmp.
Deploy and start the ValidatePostgres, ValidateKafka, and ValidateFile applications and leave them running.
Deploy and start the ResetPostgresSample, ResetKafkaSample, and ResetFileSample apps, then, when they have completed, undeploy them.
Installing the CDC demo apps
If you are using Striim Cloud, or Striim Platform installed as discussed in Evaluating on Mac OS X, Linux, or Windows with Docker Desktop running, the CDC demo apps are ready to use; no installation is necessary.
Installing Striim with the CDC demo apps
Note: To run these demo applications, we recommend using a computer with 12 GB or more memory. If you run them on a computer with 8 GB, watch memory usage closely and close other applications as necessary to avoid running out of memory. Do not attempt to run them on a computer with less than 8 GB.
To install the CDC demo apps in a DEB, RPM, TGZ, or ZIP Striim installation, see Installing the CDC demo apps in an existing Striim server.
If you have already installed Docker Desktop or Docker CE, skip this step. In Windows 10 or OS X, install Docker Desktop; in Linux, install Docker CE.
If you are on OS X or Linux, skip this step, as it will be done for you by the Striim installer. In Windows 10, open a command prompt and enter:
docker run --name striimpostgres -d -p 5432:5432 striim/striim-postgres-cdc:latest
This will download and start a Docker container with a PostgreSQL instance configured to serve as a source and target for the demo applications.
With Docker Desktop running, install Striim as described in Alternative installation method or Evaluating on Mac OS X, Linux, or Windows.
On the Select Installation Packages page (step 3), leave the PostgreSQL pack selected.
On the Kafka configuration page, choose "start Kafka locally."
When Striim has launched, log in and go to the Apps page. You should see the three groups of seven apps each. The Postgres and Kafka icons should be green, indicating that the apps are ready to run.
If instead they are red, see Troubleshooting the CDC demo apps.
Installing the CDC demo apps in an existing Striim server
Use the following method to install the CDC demo apps when the Striim server was installed from a DEB, RPM, TGZ, or ZIP package. This may also be used to reinitialize the PostgreSQL instance, Kafka topic, and SampleOutput directory if the demo apps become unusable.
If you have already installed Docker Desktop, skip this step. In Windows 10 or OS X, install Docker Desktop; in Linux, install Docker CE.
On Windows only, if Striim's internal Kafka instance is not running, start it (see Configuring Kafka for persisted streams).
Open a terminal or command prompt, change to the striim/Samples/Scripts directory, and enter ./InitializePostgresSamples.sh <Striim cluster name> <Striim admin password> <Striim server IP address> (or on Windows InitializePostgresSamples.bat ...). Ignore any "topic does not exist" errors: after encountering them, the script will continue after a short time.
Troubleshooting the CDC demo apps
If the apps don't work, or stop working, try running the initialization script or batch file as described in Installing the CDC demo apps in an existing Striim server. If that does not resolve the problem, Contact Striim support.
Hands-on quick tour
This tour will give you a quick hands-on look at Striim's dashboards, Source Preview, Flow Designer, and more. If you already have Striim Platform installed and running, continue with Viewing dashboards. Otherwise, see Install Striim Platform for evaluation purposes.
Viewing dashboards
Select Apps > View All Apps.
If you don't see PosApp, select Create App > Import TQL file, navigate to Striim/Samples/PosApp, double-click PosApp.tql, enter Samples as the namespace, and click Import.
At the bottom right corner of the PosApp tile, select ... > Deploy > Deploy.
When deployment completes, select ... > Start. The counter on the bell (alert) icon at the top right should start counting up, indicating that alerts are being generated.
From the top left menu, select Dashboards > View All Dashboards > PosAppDash.
It may take a minute for enough data to load before your display looks like the following.
The PosApp sample application shows credit card transaction data for several hundred merchants (for more information, see PosApp).Hover the mouse over a map or scatter plot point, bar, or heat-map segment to display a pop-up showing more details.Click a map or plot point to drill down for details on a particular merchant:To return to the main page, click Samples.PosAppDash in the breadcrumbs:You can filter the data displayed in the dashboard using page-level or visualization-level text search or time-range filters.With the above text search, the dashboard displays data only for Recreational Equipment Inc.Click the x in the search box to clear the filter.To try the time-range filter, click the filter icon at the top right of the scatter chart, select StartTime, and set the dialog as shown below:Filter: select is betweenValue: Enter 2013-03-12 and 8:45pm as the from start date and time, and 2013-03-12 and 9:00pm as the to date and time.Click Apply.Click Clear to clear the filter.When you are through exploring the dashboard, continue with Creating sources and caches using Source Preview.Creating sources and caches using Source PreviewSource Preview is a graphical alternative to defining sources and caches using TQL. With it, you:browse regular or HDFS volumes accessible by the Striim serverselect the file you wantselect the appropriate parser (Apache, structured text, unstructured text, or XML)choose settings for the selected parser, previewing the effects on how the data is parsedgenerate a new application containing the source or cache, or add it to an existing applicationFor sources, Source Preview will also create:a CQ to filter the raw data and convert the fields to Striim data typesa stream of type WAEvent linking the source and CQan output stream of a new type based on the parser settings you chose in Source PreviewCreate a sourceThe following steps create a source from the sample data used by PosApp:Select Apps > Create New > Source Preview > Samples > PosDataPreview.csv > Preview.Check Use first line for column names and set columndelimiter to , (comma).PosApp uses only the MERCHANTID, DATETIME, AUTHAMOUNT, and ZIP columns, so uncheck the others.Set the data types for DATETIME to DateTime (check Unix Timestamp) and for AUTHAMOUNT to Double. Leave MERCHANTID and ZIP set to String.The data is now parsed correctly, the columns have been selected, and their names and data types have been set, so click Save.For Name enter PosSourceApp.If you are logged in as admin, for Namespace enter PosSourceNS. Otherwise, select your personal namespace. 
Then click Next.For Name enter PosSource, then click Save.The new PosSourceApp application appears in the flow editor.At this point you could add additional components such as a window, CQ, and target to refine the application, or export it to TQL for use in manually coded applications.Add a cacheThe following steps will add a cache to the PosSourceApp application:Download USAddressesPreview.zip from github.com/striim/doc-downloads and unzip it.Select > My Files, click Select File next to No File Selected (not the one next to Cancel), navigate to and double-click USAddressesPreview.txt, and click Upload and SelectSelect Apps > Create New > Source Preview > Browse, select USAddressesPreview.txt, and click Select File.Check Use first line for column names, set columndelimiter to \\t (tab), set the data type for latVal and longVal to Double, and click Save.Select Use Existing, select PosSourceApp, and click Next.Select Create cache, for Name enter ZipCache, for Cache Key select Zip, leave Cache Refresh blank, and click Save.WarningIf you save as a cache and deploy the application, the entire file will be loaded into memory.Continue with Modifying an application using the Flow Designer.Modifying an application using Flow DesignerThe instructions in this topic assume you have completed the steps in Creating sources and caches using Source Preview and are looking at PosSourceApp in Flow Designer:We will enhance this application with a query to join the source and cache and populate a target and WActionStore.Collapse Sources and expand Base Components.Click WActionStore, drag it into the workspace, and drop.Set the name to PosSourceData.Click in the Type field and enter PosSourceContext as a new type.Click Add Field four times.Set the fields and data types as shown below. Click the key icon next to MerchantId to set it as the key for PosSourceContext.Add four more fields as shown below.Click the Save just below the types (not the one at the bottom of the property editor).Set Event Types to PosSourceContext, set Key Field to Merchant ID, and click Save (the one at the bottom of the property editor).Drag a continuous query (CQ) into the workspace.Set the name to GenerateWactionContext.Enter or paste the following in the Query field:SELECT p.MERCHANTID,\n p.DATETIME,\n p.AUTHAMOUNT,\n z.Zip,\n z.City, \n z.State,\n z.LatVal,\n z.LongVal\nFROM PosSource_TransformedStream p, ZipCache z\nWHERE p.ZIP = z.ZipSet Output to Existing Output and PosSourceData. The configuration dialog should look like this:Click Save. The application should look like this:The status should now show Created. Select Deploy App > Deploy.When the status changes to Deployed, select the stream icon below\u00a0GenerateWactionContext, then click the eye icon or Preview On Run. The data preview pane will appear at the bottom of the window.Click\u00a0Deployed and select Start App. Counts will appear above each of the application's components indicating how many events it is processing per second. (Since this application has a small amount of data, these counts may\u00a0return to zero before they are refreshed. Run MultiLogApp for a larger data set where the counts will be visible for longer.)The first 100 events from the GenerateWactionContext output stream\u00a0will be displayed in the preview pane.At this point, the WActionStore contains data, so we can query or visualize it. 
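At this point you can also see how the UI work maps to TQL. The following is only a rough sketch for orientation, using the names from this walkthrough (PosSourceContext, PosSourceData, GenerateWactionContext, PosSource_TransformedStream, ZipCache); the field types are assumed from the Source Preview settings and the query output shown in the next topic, and the exact TQL that Striim generates when you export the application may differ in its details:
CREATE TYPE PosSourceContext (
    MerchantId String KEY,
    DateTime DateTime,
    Amount Double,
    Zip String,
    City String,
    State String,
    LatVal Double,
    LongVal Double
);
CREATE WACTIONSTORE PosSourceData
CONTEXT OF PosSourceContext
EVENT TYPES (PosSourceContext KEY(MerchantId));
CREATE CQ GenerateWactionContext
INSERT INTO PosSourceData
SELECT p.MERCHANTID, p.DATETIME, p.AUTHAMOUNT,
    z.Zip, z.City, z.State, z.LatVal, z.LongVal
FROM PosSource_TransformedStream p, ZipCache z
WHERE p.ZIP = z.Zip;
Comparing an exported copy of PosSourceApp with what you configured in the Flow Designer is a quick way to start reading the examples in the Programmer's Guide.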
Continue with Browsing data with ad-hoc queries.Browsing data with ad-hoc queriesAd-hoc queries let you do free-form queries on WActionStores, caches, or streams in real time by entering select statements in the Tungsten console. The syntax is the same as for queries in TQL applications (see CREATE CQ (query)) .The following example assumes you performed the steps in Modifying an application using the Flow Designer, including deploying and starting the application.Open a terminal window and start the Tungsten console. If Striim is installed in /opt, the command is: /opt/Striim/bin/console.shLog in with username admin and the password you provided when you installed Striim.At the W (admin) > prompt, enter the following: select * from PosSourceNS.PosSourceData; You should see something like the following:[\n MerchantId = Mpc6ZXJBAqw7fOMSSj8Fnlyexx6wsDY7A4E\n DateTime = 2607-11-27T09:22:53.210-08:00\n Amount = 23.33\n Zip = 12228\n City = Albany\n State = NY\n LatVal = 42.6149\n LongVal = -73.9708\n]\n[\n MerchantId = Mpc6ZXJBAqw7fOMSSj8Fnlyexx6wsDY7A4E\n DateTime = 2607-11-27T09:22:53.210-08:00\n Amount = 34.26\n Zip = 23405\n City = Machipongo\n State = VA\n LatVal = 37.4014\n LongVal = -75.9082\n]Press Enter to exit the query.If you prefer, you can see the data in a tabular format. To try that, enter: set printformat=row_format;Press cursor up twice to recall the query, then press Enter to run it again. You should see the following (if necessary, widen the terminal window to format the table correctly):To switch back to the default format:set printformat=json;Continue with Creating a dashboard.Creating a dashboardIn Viewing dashboards you saw the dashboard of the PosApp sample application. Now you will create one from scratch.The following instructions assume you completed the steps in Modifying an application using the Flow Designer and Browsing data with ad-hoc queries and that the application is still running.From the main menu, select Dashboards > View All Dashboards.Click Add Dashboard, for Dashboard Name enter PosSourceDash, for Namespace select PosSourceNS as the namespace, and click Create Dashboard. A blank dashboard will appear.To add a visualization to the dashboard, drag a Vector Map from the visualization palette and drop it on the grid.The first step in configuring a dashboard is to specify its query: click Edit Query.In the Query Name field, enter PosSourceNS.PosSourceDataSelectAll, edit the query to read select * from PosSourceData; and click Save Query.Click Configure (the pencil icon).Set the map properties as shown above, then click Save Visualization.Since the data is all in the continental United States, you might want to edit the settings to center it there. You could also change the Bubble Size settings so that the dots on the map vary depending on the amount.Click Configure again, change the settings as shown above, click Save Visualization, then refresh your browser to apply the new zoom settings.Experiment with the settings or try more visualizations if you like. For more information on this subject, see Dashboard Guide.Continue with Exporting applications and dashboardsExporting applications and dashboardsTo save the work you have done so far, you can export the application and dashboard to files.From the upper-left menu, select Apps.From PosSourceApp's ... 
menu, select Export.Click Export (since the app contains no Encrypted passwords, do not specify a passphrase).Optionally, change the file name or directory, then click Save.From the top menu, select Dashboards > View All Dashbaords.Click PosSourceDash.Select Export. Optionally, change the file name or directory, then click Save.You may import the exported application TQL file and dashboard JSON file to any namespace. Note that for the dashboard to work you must import it to the same namespace as the application.You may edit the exported TQL file as discussed in Programmer's Guide.What next?See\u00a0Web UI Overview for a look at additional Striim features.Run the CDC demo applications to explore Striim's data migration capabilities (see Running the CDC demo apps).If you do not plan to write Striim applications but would like to create or modify dashboards, continue with the Dashboard Guide\u00a0 and\u00a0PosAppDash in the Programmer's Guide.NoteThe Striim platform's TQL programming language is in many ways similar to SQL, particularly as regards SELECT statements. The Programmer's Guide assumes basic knowledge of SQL.To learn to write Striim applications, continue with Programmer's Guide.In this section: Hands-on quick tourViewing dashboardsCreating sources and caches using Source PreviewModifying an application using Flow DesignerBrowsing data with ad-hoc queriesCreating a dashboardExporting applications and dashboardsWhat next?Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-28\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/hands-on-quick-tour.html", "title": "Hands-on quick tour", "language": "en"}} {"page_content": "\n\nResource usage policiesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Resource usage policiesPrevNextResource usage policiesThe core Striim application, your applications, and your users logged in and working on Striim \u2014 together, these consume the same set of CPU, memory and storage resources. How much Striim can do\u2014how many applications it can run at a time, how many users can log in at once, and so on\u2014is limited by the CPU cores, memory, and disk space available to it. If you try to do more than these available resources can handle effectively, it can lead to issues such as excessive CPU usage and out-of-memory errors, and these can ultimately degrade the performance of your Striim applications and environment. 
We have recommended the policy defaults discussed below to avoid accidentally running into such problems.When a resource policy limit is reached, a ResourceLimitException is displayed in the web UI and logged in striim.server.log, and the action that exceeded the limit (such as deploying an application or logging in) fails.NoteIn this release, resource usage policies can only be viewed and managed using the console.The default resource usage limits are in effect for all new Striim clusters. The resource usage limits are disabled for Striim clusters that you upgrade from previous versions. You can use the default usage policy limits, or adjust, and even disable, the resource usage limits as needed. You must log in with Striim admin credentials to modify the resource usage policy limits.Resource usage policy limits
active_users_limit (number of active non-system users)
Default: 30. Limit checked: when you create a new user. Scope: cluster.
A greater number of active users can mean that there are more applications, and may lead to overloading the system.
api_call_rate_limit (rate limit for REST API calls)
Default: 500 per second. Limit checked: when you make a REST API call. Scope: server.
Serving REST API calls consumes resources on the server backend. Bursty REST API calls can also indicate an uncontrolled client application or, at worst, a DDoS-type pattern. After altering or disabling this limit, you must restart the service.
apps_per_cpu_limit (number of running applications based on CPU cores)
Default: 4 applications per available core on the server. Limit checked: during deployment. Scope: server.
Applications are the primary consumers of resources. To maintain a certain throughput level, the number of running applications is limited as a function of the available vCPUs.
apps_per_gb_limit (number of running applications based on memory)
Default: 2 applications per 1 GB of memory available to the Java virtual machine running Striim. Limit checked: during deployment. Scope: server.
The number of running applications is a combination of both the CPU core and memory limits. See Application resource usage policies.
cluster_size_limit (number of servers in the cluster)
Default: 7. Limit checked: when you add a new server to the cluster. Scope: cluster.
The probability of a server failure increases as the number of servers grows.
num_queries_limit (number of ad-hoc and named (dashboard) queries)
Default: 50. Limit checked: when you run an ad-hoc query or a dashboard runs a named query. Scope: server.
Limits the number of unmanaged tasks and queries that could otherwise consume system resources and destabilize running applications.
ui_sessions_limit (number of concurrent active web UI sessions)
Default: 10. Limit checked: when a user logs in through the UI. Scope: server.
The Striim web UI is an active page that, when loaded and open, continuously receives various types of data. Having too many of these pages open may lead to memory pressure.
Application resource usage policiesApplications are the primary consumers of resources. Your Striim environment may be constrained by CPU resources or memory resources depending on the configuration of your underlying infrastructure.
Thus, there are two separate policy limits that apply to the maximum number of applications that can concurrently run in your environment:Number of running applications based on CPU coresNumber of running applications based on memory available to the Java virtual machine running StriimThe maximum number of concurrently running applications is determined by the combination of these two resource policy limits. That is, the limit will be a minimum of the number of applications that can run based on CPU and memory resources. For example, a server with 8 CPU cores and 16 GB memory can be considered to be constrained by memory resources. If you configure the application resource policies to allow a maximum of 1 application per GB of memory and 4 applications per CPU core, then Striim will allow a maximum of 16 applications to run at anytime because it is the lower of the 2 limits - 16 applications on the basis of memory and 32 applications on the basis of CPU cores.Viewing resource policiesThe following command shows the current value for a given resource limit policy:describe resource_limit_policy <resourceLimitname>;The following command lists all the names of the resource limit policies:list resource_limit_policies;Enabling or disabling resource policies as a groupYou can enable or disable resource limits as a group using the alter cluster command:alter cluster { enable | disable } resource_limit_policy;After enabling or disabling resource limits, you must restart Striim before the change will take effect (see Starting and stopping Striim Platform).Disabling the resource_limit_policy turns off all resource limit checks.Enabling the resource_limit_policy turns resource limit checks with the values of the limits reverting to those set by the user before disabling, or to the default values if no changes to the defaults have been made.Modifying individual resource usage policiesYou can enable or disable resource limits individually or change their values using the alter resource_limit_policy command. Policies apply to all servers in the cluster; you cannot set different policies for each server.If you make a change to api_call_rate_limit, you must restart Striim before the change will take effect (see Starting and stopping Striim Platform).To disable an individual resource usage policy or several enumerated policies:alter resource_limit_policy\u00a0unset\u00a0<\"resourcelimitname\">, ...;For example, alter RESOURCE_LIMIT_POLICY unset \"CLUSTER_SIZE_LIMIT\", \"APPS_PER_CPU_LIMIT\";To enable a resource usage policy or set a new value for the default limit:alter resource_limit_policy set\u00a0<\"resourcelimitname\">, ...;For example, alter RESOURCE_LIMIT_POLICY set (CLUSTER_SIZE_LIMIT : 3, APPS_PER_CPU_LIMIT : 14);The value must be a positive integer.In this section: Resource usage policiesResource usage policy limitsApplication resource usage policiesViewing resource policiesEnabling or disabling resource policies as a groupModifying individual resource usage policiesSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-06-22\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/resource-usage-policies.html", "title": "Resource usage policies", "language": "en"}} {"page_content": "\n\nPipelinesSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 PipelinesPrevNextPipelinesAs discussed in Common Striim use cases, Striim applications can do many different things. When the primary purpose of an application is to move or copy data from a source to a target, we call that a \"pipeline\" application. For an introduction to the subject, see What is a Data Pipeline.Common source-target combinationsThe following examples are just the most popular among Striim's customers. There are many other possibilities.Database to database, for example, from MySQL, Oracle, or SQL Server to MariaDb, PostgreSQL, or Spanner in the cloud. See the Change Data Capture (CDC) for a full list of supported sources and Targets for a full list of supported targets.The most common use for this kind of pipeline is to allow a gradual migration from on-premise to cloud. Applications built on top of the on-premise database can be gradually replaced with new applications built on the cloud database. Once all the legacy applications are replaced, the pipeline can be shut down and the on-premise database can be retired.In this model, updates, and delete operations on the the source tables are replicated to the target with no duplicates or missing data (that is, \"exactly once processing or E1P\"). This consistency is ensured even after events such as a server crash require restarting the application (see Recovering applications).Recovering applicationsDatabase to data warehouse, for example, from Oracle, PostgreSQL, or SQL Server (on premise or in the cloud) to Google BigQuery, Amazon Redshift, Azure Synapse, or Snowflake. See Sources for a full list of supported sources and Targets for a full list of supported targets.The primary use for this kind of pipeline is to update data warehouses with new data in near real time rather than in periodic batches.Typically data warehouses retain all data so that business intelligence reports can be generated from historical data. Consequently, when rows are updated or deleted in the source tables, instead of overwriting the old data in the target Striim appends a record of the update or delete operation. 
Striim ensures that all data is replicated to the target, though after events such as a server crash require restarting the application there may be duplicates in the target (that is, \"at least once processing\" or A1P).Supported sources and targets for pipeline appsThe following sources (all SQL databases) and targets may be directly connected by a WAEvent stream.Supported WAEvent sourcesSupported targetsCosmos DB ReaderGCS ReaderHP NonStop SQL/MX using Database Reader or Incremental Batch ReaderHP NonStop Enscribe, SQL/MP, and SQL/MX readers (CDC)MariaDB Reader (CDC)MariaDB using Database Reader or Incremental Batch ReaderMongo Cosmos DB ReaderMySQL Reader (CDC)MySQL using Database Reader or Incremental Batch ReaderOracle Reader (CDC)OJetOracle Database using Database Reader or Incremental Batch ReaderPostgreSQL Reader (CDC)PostgreSQL using Database Reader or Incremental Batch ReaderSalesforce Pardot ReaderServiceNow Reader (in this release, supports insert and update operations only, not deletes)SQL Server using MSJet (CDC)SQL Server CDC using MS SQL Reader (CDC)SQL Server using Database Reader or Incremental Batch ReaderSybase using Database Reader or Incremental Batch ReaderTeradata using Database Reader or Incremental Batch ReaderAzure Synapse using Azure SQL DWH WriterBigQuery WriterCassandra Cosmos DB WriterCassandra WriterCloudera Hive WriterCosmos DB WriterDatabricks WriterHazelcast WriterHBase WriterHP NonStop SQL/MX using Database WriterHortonworks Hive WriterKafka WriterKudu WriterMariaDB using Database WriterMongo Cosmos DB WriterMongoDB WriterMySQL using Database WriterOracle Database using Database WriterPostgreSQL using Database WriterRedshift WriterSalesforce Writer (in MERGE mode)SAP HANA using Database WriterServiceNow WriterSinglestore (MemSQL) using Database WriterSnowflake WriterSpanner WriterSQL Server using Database WriterThe following sources and targets may be directly connected by a JSONNodeEvent stream.Supported JSONNodeEvent sourcesSupported targetsCosmos DB ReaderJMX ReaderMongoDB ReaderMongo Cosmos DB ReaderADLS Writer (Gen1 and Gen2)Azure Blob WriterAzure Event Hub WriterCosmos DB WriterFile WriterGCS WriterGoogle PubSub WriterHDFS WriterJMS WriterKafka WriterKinesis WriterMapR FS WriterMapR Stream WriterMongoDB Cosmos DB WriterMongoDB WriterS3 WriterSchema migrationSome of Striim's writers require you to create tables corresponding to the source tables in the target. Some initial load templates will automate this task. See Creating apps using templates for details.Striim Platform also provides a script that can automate some of that work. See Using the schema conversion utility for details.Using the schema conversion utilityMapping and filteringThe simplest pipeline applications simply replicate the data from the source tables to target tables with the same names, column names, and data types. If your requirements are more complex, see the following:Using database event transformersMasking functionsMasking functionsModifying and masking values in the WAEvent data array using MODIFYModifying the WAEvent data array using replace functionsMapping columnsModifying output using ColumnMapValidating table mappingSchema evolutionFor some CDC sources, Striim can capture DDL changes. 
Depending on the target, it can replicate those changes to the target tables, or take other actions, such as quiescing or halting the application. For more information, see Handling schema evolution.Initial load versus continuous replicationTypically, setting up a data pipeline occurs in two phases.The first step is the initial load, copying all existing data from the source to the target. You may write a Striim application or use a third-party tool for this step. If the source and target are homogenous (for example, MySQL to MariaDB, Oracle to Oracle Exadata, or SQL Server to Azure SQL Server managed instance), it is usually fastest and easiest to use the native copy or backup-restore tools.Depending on the amount and complexity of data in the source tables, this may take minutes, hours, days, or weeks. You may monitor progress by Creating a data validation dashboard.Once the initial load is complete, you will start the Striim pipeline application to pick up where the initial load left off. See Switching from initial load to continuous replication for technical details.Monitoring your pipelineYou may monitor your pipeline by Creating a data validation dashboard.You should also set up alerts to let you know if anything goes wrong. See Sending alerts about servers and applications.Setting up alerts for your pipelineSystem alerts for potential problems are automatically enabled. You may also create custom alerts. For more information, see Sending alerts about servers and applications.Scaling up for better performanceWhen a single reader cannot keep up with the data being added to your source, create multiple readers. Use the Tables property to distribute tables among the readers:Assign each table to only one reader.When tables are related (by primary or foreign key) or to ensure transaction integrity among a set of tables, assign them all to the same reader.When dividing tables among readers, distribute them according to how busy they are rather than simply by the number of tables. For example, if one table generates 50% of the entries in the CDC log, you might assign it and any related tables to one reader and all the other tables to another.The following is a simple example of how you could use two Oracle Readers, with one reading a very busy table and the other reading the rest of the tables in the same schema:CREATE SOURCE OracleSource1 USING OracleReader ( \n FetchSize: 1,\n Compression: false,\n Username: 'myname',\n Password: '7ip2lhUSP0o=',\n ConnectionURL: '198.51.100.15:1521:orcl',\n ReaderType: 'LogMiner',\n Tables: 'MYSCHEMA.VERYBUSYTABLE'\n) \nOUTPUT TO OracleSource_ChangeDataStream;\n\nCREATE SOURCE OracleSource2 USING OracleReader ( \n FetchSize: 1,\n CommittedTransactions: true,\n Compression: false,\n Username: 'myname',\n Password: '7ip2lhUSP0o=',\n ConnectionURL: '198.51.100.15:1521:orcl',\n ReaderType: 'LogMiner',\n Tables: 'MYSCHEMA.%',\n ExcludedTables: 'MYSCHEMA.VERYBUSYTABLE'\n) \nOUTPUT TO OracleSource_ChangeDataStream;When a single writer cannot keep up with the data it is receiving from the source (that is, when it is backpressured), create multiple writers. For many writers, you can simply use the Parallel Threads property to create additional instances and Striim will automatically distribute data among them (see Creating multiple writer instances).
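To make the writer side concrete, here is a minimal sketch of a target that consumes the combined stream produced by the two readers above and writes with several parallel instances. It assumes a hypothetical Oracle target reached through Database Writer and assumes that the writer you use supports the Parallel Threads property (see Creating multiple writer instances for which writers do); the connection URL, credentials, and table mapping are placeholders:
CREATE TARGET OracleTargetParallel USING DatabaseWriter (
    ConnectionURL: 'jdbc:oracle:thin:@198.51.100.20:1521:orcl',
    Username: 'myname',
    Password: '******',
    Tables: 'MYSCHEMA.%,TARGETSCHEMA.%',
    ParallelThreads: 4
)
INPUT FROM OracleSource_ChangeDataStream;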
For other writers, use the same approach as for sources, described above.In this section: PipelinesCommon source-target combinationsSupported sources and targets for pipeline appsSchema migrationMapping and filteringSchema evolutionInitial load versus continuous replicationMonitoring your pipelineSetting up alerts for your pipelineScaling up for better performanceSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/pipelines.html", "title": "Pipelines", "language": "en"}} {"page_content": "\n\nInstallation and configurationSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationPrevNextInstallation and configurationThis section of the documentation describes how to create and configure Striim clusters.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-16\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/installation-and-configuration.html", "title": "Installation and configuration", "language": "en"}} {"page_content": "\n\nSystem requirementsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationSystem requirementsPrevNextSystem requirementsThe following are the minimum requirements for a Striim server. 
Additional memory and disk space will be required depending on the size and number of events stored in memory and persisted to disk by your applications.CPU cores: at least 8, or more depending on application requirementsmemory: at least 32 GB, or more depending on application requirements (see Changing the amount of memory available to a Striim server)free disk space required:for evaluation and development: minimum 10 GB, 20 GB recommendedfor production: 100 GB or more depending on application requirementsfree disk space must never drop below 10% on any serverThe following will increase disk space requirements:BigQuery Writer when Streaming Upload is Falseevent tables persisted to Elasticsearch (see CREATE EVENTTABLE)exception stores persisted to Elasticsearch (see CREATE EXCEPTIONSTORE)Kafka streams (see Persisting a stream to Kafka)Persisting a stream to Kafkathe metadata repository if hosted on the internal Derby instance (see Configuring Striim's metadata repository)MSSQL Reader when Transaction Buffer Type is Disk (see MS SQL Reader properties)Oracle Reader when Transaction Buffer Type is Disk (see Oracle Reader properties)server log files (see Changing log file retention settings)WActionStores persisted to Elasticsearch (see CREATE WACTIONSTORE)See also Configuring low disk space monitoring.certified operating systems (if you need to run Striim on another operating system or a different version, please Contact Striim support)64-bit CentOS 7.964-bit Red Hat Enterprise Linux (RHEL) 7.6, 7.9, and 8.764-bit Ubuntu 18.04 LTS and 20.04 LTS64-bit Windows Server 2019 and 2022Mac OS X 13.1 (for evaluation and development purposes only)64-bit Windows 11 (for evaluation and development purposes only)supported Java environments (all servers and Forwarding Agents in a Striim cluster must run the same version)recommended: 64-bit Oracle SE Development Kit 8 (required to use HTTPReader or SNMPParser or Kerberos authentication for Oracle or PostgreSQL); for a license, contact Oracle License Management Servicealso supported: 64-bit OpenJDK 8firewall: the following ports must be open inbound for communication among servers and Forwarding Agents in the cluster (see also Striim Forwarding Agent system requirements)on servers where you want remote access, port 22 for SSH and/or SCPon the server running the Derby metadata repository, port 1527 for TCP*on servers running the web UI, port 9080 (http) and/or 9081 (https) for TCP (or see Changing the web UI ports)on all servers*port 5701 for TCP for Hazelcast (if you wish to use Hazelcast Enterprise, Contact Striim support) plus one additional port per server and Forwarding Agent in the 5702-5799 range. For example, with three servers and two Forwarding Agents, 5701-5705.port 9300 for TCP (Elasticsearch)ports 49152-65535 inbound for ZeroMQ (see Narrowing the ZeroMQ port range)port 54327 for multicast UDP on an IP address in the 239 range chosen based on the cluster name (to ensure that each cluster uses a different address). To find that address, install Striim on the first server and look in striim-node.log (see Reading log files) for a message such as \"Using Multicast to discover the cluster members on group 239.189.210.200 port 54327.\" (If you do not wish to use multicast UDP, see Using TCP/IP instead of multicast UDP.)*Not required if you have only one server and no Forwarding Agents.The web client has been tested on Chrome. Other web browsers may work, but if you encounter bugs, try Chrome. 
Some ad-blocking plugins may prevent the UI from loading.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/system-requirements.html", "title": "System requirements", "language": "en"}} {"page_content": "\n\nInstalling StriimSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationInstalling StriimPrevNextInstalling StriimThis section describes how to install Striim in various environments for various purposes.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2021-08-05\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/installing-striim.html", "title": "Installing Striim", "language": "en"}} {"page_content": "\n\nConfiguring Striim's metadata repositorySkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationInstalling StriimConfiguring Striim's metadata repositoryPrevNextConfiguring Striim's metadata repositoryYou may host Striim's metadata repository on Derby, Oracle, or PostgreSQL. By default, Striim uses the integrated, preconfigured Derby instance. To use Oracle or PostgreSQL instead, follow the instructions in this section.CautionWhen using the integrated Derby instance in a production environment, we strongly recommend Changing the Derby password.See also Changing metadata repository connection retry settings.CautionWhen Striim is processing very large amounts of data at high velocity, Derby may fail to reclaim the unused space in the metadata repository, resulting in it eventually filling all available disk space and crashing. In this situation, Striim will crash and restart will fail. 
To work around this issue, we recommend hosting the metadata repository on Oracle or PostgreSQL instead.As a short-term workaround for Derby, use the following command to compress the metadata repository tables:striim/bin/derbyTools.sh -A compressDbThis uses a lot of Derby resources, so it should be performed when Striim is not busy.Configuring Oracle to host the metadata repositoryCopy the SQL scripts /opt/striim/conf/DefineMetadataReposOracle.sql and DefineMeteringReposOracle.sql to the Oracle host.Using sqlplus, log in to Oracle as an administrator and create the user Striim will use to create, write to, and read from the repository tables (replace ****** with a strong password):create user striimrepo identified by ******;\ngrant connect, resource to striimrepo;Log out of sqlplus, log in again as the user you just created, and run the DefineMetadataReposOracle.sql and DefineMeteringReposOracle.sql scripts.Configuring PostgreSQL to host the metadata repositoryCopy the SQL scripts /opt/striim/conf/DefineMetadataReposPostgres.sql and DefineMeteringReposPostgres.sql to the PostgreSQL host.Using psql, connect to PostgreSQL as an administrator and create the user Striim will use to create, write to, and read from the repository tables (replace ****** with a strong password):sudo -u postgres psql\ncreate user striim with password '******';\ncreate database striimrepo;\ngrant all on database striimrepo to striim;\n\\q\nConnect again as the user you just created, create a schema, set the search path, and run the DefineMetadataReposPostgres.sql and DefineMeteringReposPostgres.sql scripts:psql -U striim -d striimrepo;\ncreate schema striim;\nalter role striim set search_path to striim;\n\\q\npsql -U striim -d striimrepo -f DefineMetadataReposPostgres.sql\npsql -U striim -d striimrepo -f DefineMeteringReposPostgres.sql\nSetting startUp.properties for the metadata repositoryGenerally you should follow the instructions in this section only when they are referred to from instructions for creating or adding a server to a cluster.... when hosted on DerbyTypically, when using the internal Derby instance, the necessary properties will be set automatically.If Derby is not running on port 1527, set the following properties:MetaDataRepositoryLocation=<IP address>:<port>\nDERBY_PORT=<port>... when hosted on OracleSet the following properties:MetadataDb=oracle\nMetaDataRepositoryLocation=<connection URL>\nMetaDataRepositoryDBname=striimrepo\nMetaDataRepositoryUname=striimrepoIf you use an SID, the connection URL has the format jdbc:oracle:thin:@<IP address>:<port>:<SID>, for example, jdbc:oracle:thin:@192.0.2.0:1521:orcl. If you use a service name, it has the format jdbc:oracle:thin:@<IP address>:<port>/<service name>, for example, jdbc:oracle:thin:@192.0.2.0:1521/orcl. In a high availability active-standby or RAC environment, specify all servers, for example, MetaDataRepositoryLocation=jdbc:oracle:thin:@(DESCRIPTION_LIST=(LOAD_BALANCE=off)(FAILOVER=on)(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.100)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=racdb.localdomain)))(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.101)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=racdb.localdomain)))) (see Features Specific to JDBC Thin for more information).When the connection uses SSL, the connection URL has the format:MetaDataRepositoryLocation=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)\n(HOST=<IP address>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<service name>)))For SSL with a PKCS12 keystore, copy the ewallet.p12 file to the Striim server's local environment and set the following properties:OracleMDRTrustStoreType=PKCS12\nOracleMDRTrustStore=<path>/ewallet.p12For SSL with an SSO keystore, copy the cwallet.sso file to the Striim server's local environment and set the following properties:OracleMDRTrustStoreType=SSO\nOracleMDRTrustStore=<path>/cwallet.sso... when hosted on PostgreSQLSet the following properties:MetadataDb=postgres\nMetaDataRepositoryLocation=<connection URL>\nMetaDataRepositoryDBname=striimrepo\nMetaDataRepositoryUname=striimThe PostgreSQL connection URL has the format <IP address>:<port>/striimrepo, for example 192.0.2.100:5432/striimrepo.In a high availability environment, specify the IP addresses of both the primary and standby servers, separated by a comma, for example, 192.0.2.100,192.0.2.101:5432/striimrepo.When the connection uses SSL, copy the postgresql.crt file to the Striim server's local environment and set the following property:PostgresMDRCertPath=<path>/postgresql.crtIf using Azure Database for PostgreSQL, see Hosting Striim's metadata repository on Azure Database for PostgreSQL.If using Google Cloud SQL for PostgreSQL, see How To Configure SSL Connection to Google Cloud SQL Postgres as Striim MDR? in the Striim Support knowledge base.Moving the metadata repository to Oracle or PostgreSQLTo move the metadata repository from the Striim internal Derby instance to Oracle or PostgreSQL, do the following.
This will require bringing down the Striim cluster, so you should schedule it for a maintenance window.Follow the instructions in Configuring Oracle to host the metadata repository or Configuring PostgreSQL to host the metadata repository.Back up Derby as described in Backing up the metadata repository host.Stop the Derby instance (striim-dbms) and all servers in the Striim cluster (see Starting and stopping Striim Platform).On the server running Derby, export the metadata:cd /opt/striim\nsudo bin/tools.sh -A export -F export.json\nMake a backup copy of startUp.properties:cd /opt/striim/conf\ncp startUp.properties\u00a0*.bakOn each server in the cluster, edit startUp.properties and change the value of MetaDataRepositoryLocation to reflect the new repository host (see the Oracle or PostgreSQL section of Setting startUp.properties for the metadata repository).On each server in the cluster, update the metadata repository user's password in the Striim keystore:cd /opt/striim\nsudo su - striim bin/sksConfig.sh -pOn the server where you exported the metadata, import it.For Oracle:cd /opt/striim\nsudo bin/tools.sh -A import -F export.json -f 4.2.0 -r oracle\nFor PostgreSQL:cd /opt/striim\nsudo bin/tools.sh -A import -F export.json -f 4.2.0 -r postgres\nStop Derby from starting automatically.sudo systemctl disable striim-dbmsIf you are Running Striim as a process, set NO_DERBY=true as an environment variable before running server.sh.Restart the Striim cluster (see Starting and stopping Striim Platform).In this section: Configuring Striim's metadata repositoryConfiguring Oracle to host the metadata repositoryConfiguring PostgreSQL to host the metadata repositorySetting startUp.properties for the metadata repository... when hosted on Derby... when hosted on Oracle... when hosted on PostgreSQLMoving the metadata repository to Oracle or PostgreSQLSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/configuring-striim-s-metadata-repository.html", "title": "Configuring Striim's metadata repository", "language": "en"}} {"page_content": "\n\nRunning Striim in Amazon EC2Skip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationInstalling StriimRunning Striim in Amazon EC2PrevNextRunning Striim in Amazon EC2The fastest and easiest way to run Striim in Amazon EC2 is to get it from the AWS Marketplace (see Deploying Striim from the AWS Marketplace). 
If you prefer to run Striim in your own EC2 VM, contact Striim support for assistance.Striim currently offers the following AWS EC2 solutions:Striim for Amazon Web Services - BYOL Platform: the full Striim platform using a free trial license or a license purchased from Striim (BYOL = \"bring your own license\")Striim for Amazon Redshift: licensed for MySQL, Oracle, SQL Server sources and RedshiftWriter and FileWriter targets. (FileWriter can be useful for debugging and testing.)With the BYOL solution, you will be billed monthly by Amazon for virtual machine usage and purchase your license directly from Striim. After the trial period, you must Contact Striim support to purchase a license.With the other solutions, you will be billed by Amazon monthly according to usage. See the individual solutions for pricing details.Deploying Striim from the AWS MarketplaceTo deploy a Striim AWS Marketplace solution:Go to the appropriate page in the AWS Marketplace.Click Continue to Subscribe > Continue to Configuration.Select the desired region, then click Continue to Launch.Optionally, change the EC2 Instance Type.Under Security Group Settings, click Create New Based on Seller Settings, enter a name and description, and click Save.Under Key Pair Settings, select the key pair to use with Striim.If you do not have a key pair in the selected region, click Create a key pair in EC2, change the region to the one you selected previously, click Create Key Pair, enter a name, click Create, save the .pem file, and return to the Launch page, and refresh the key pair list. If the new key pair is not selected automatically, select it.Click Launch.Click EC2 Console.Make note of the Instance ID, then in the left-side menu click Elastic IPs.Click Allocate new address. If you see a choice of options, select VPC. Then click Allocate > Close. Make note of the IP address as you will use it to access Striim.Select Actions > Associate address, select the EC2 instance ID, select its private IP, and click Associate > Close.In the left-side menu, click Instances.If you have more than one instance, select the one you just created. Copy its Public DNS (IPv4) string (at the top of the right column on the Description tab at the bottom of the page).Paste the public DNS string in your browser's address bar, add :9070 (for example, ec2-3-91-177-185.compute-1.amazonaws.com:9070), and press Enter. You should see this message:Click OK to allow cookies, then click Accept Striim EULA and Continue to accept the license agreement.For Striim BYOL only, enter your name, email, company name, and email address. If you already have a license, be sure that the company name exactly matches the name associated with your license.Enter a name for the Striim cluster and sys, admin, and keystore passwords. Be sure to remember the cluster name and passwords, since you will need them to log in to Striim and run the Striim console or Forwarding Agent.Click Save and Continue.For the BYOL Platform only, provide a license key if you have one. Otherwise, leave the license key field blank to proceed with a trial license, then click Continue.Click Launch.When launch has completed, click Log In. (You can return to the login page using the\u00a0View Instances > Access Software link on the Your Software page.)Log in as admin with the admin password you provided above.If you are new to Striim, see\u00a0Getting Started.Installing Striim from an Amazon EC2 AMITo create a multi-server Striim cluster in EC2, you must install it from an AMI. 
Contact\u00a0Striim support for assistance.Using ssh to run the Striim console in EC2To access the Striim instance's command line, use ssh. On Windows, you can get ssh by installing the Windows Subsystem for Linux or, if your Windows version does not support that, you can install Cygwin or the third-party utilities WinSCP and Putty, or use the AWS Command Line Interface (AWS CLI).The syntax is:ssh -i <private key file name> centos@<Public DNS string>The private key file is the .pem file specified in the \"Select an existing key pair or create a key pair\" step above. Before using ssh, you must run this command on the private key file once:\u00a0chmod 400 <private key file name>centos is the Linux user name used by the Striim server.You can find the Public DNS string for the Striim instance on the AWS Instances page. The format of this string is ec2-<Public IP>.<EC2 region>.compute.amazon.com.Putting all these together, the command would look something like this (assuming you ran it from the same directory where the .pem file is stored):ssh -i MyStriimUser.pem centos@ec2-203.0.113.23.us-west-1.compute.amazonaws.comWhen ssh connects to the Striim instance, you will see something like this:TIBCO Silver - CentOS 6.5 x86_64 AMI\nBootstrap complete. Visit http://silver.tibco.com for more information.\nus-west-1b-i-07e1133b674e72aed\n[centos:~]$To run the console, enter:sudo /opt/Striim/bin/console.sh -c StriimClusterIn this section: Running Striim in Amazon EC2Deploying Striim from the AWS MarketplaceInstalling Striim from an Amazon EC2 AMIUsing ssh to run the Striim console in EC2Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-in-amazon-ec2.html", "title": "Running Striim in Amazon EC2", "language": "en"}} {"page_content": "\n\nRunning Striim in AzureSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationInstalling StriimRunning Striim in AzurePrevNextRunning Striim in AzureStriim currently offers the following Azure Marketplace solutions:StreamShift by Striim: see the StreamShift for Azure documentationStriim: the full Striim platform on a pay-as-you-go model with an initial free trialStriim BYOL: the full Striim platform, requires license from Striim, you will be billed by Azure for virtual machine usageStriim Cloud Enterprise: see the Striim Cloud documentationStriim for Real-Time Integration to Azure Storage: licensed for all sources and Azure Blob Storage targetsStriim for Real-Time Data Integration to Cosmos DB: licensed for all sources and CosmosDBWriter targetsStriim for Real-Time Integration to PostgreSQL: licensed for all sources and PostgreSQL targetsStriim for Data Integration to SQL Data Warehouse: licensed for all sources and Azure SQL Data Warehouse targetsStriim for Real-Time Integration to SQL Database: licensed for all sources and Azure SQL Database targetsStriim ROAR Real-Time Offloading Analytics & Reporting for Azure PostgreSQL: licensed for all sources and PostgreSQL targetsStriim VM Subscription: the full Striim platform with an annual license and Gold supportYou will be billed by Microsoft monthly according to usage. See the individual solutions for pricing details.Deploying Striim Using an Azure Marketplace SolutionFirst, create, purchase, and deploy your solution:Go to the Azure Marketplace and search for Striim.Click the solution you want.Click Get It Now > Continue > Create.Enter the resource group name, VM user name, and VM user password (make note of the user name and password as you will need them to access the Striim server via ssh). 
If you select one of your existing resource groups, you will need to open the ports required by Striim (see System requirements).Optionally, change the location, then click Next: Striim Cluster Settings.If you want to create multiple Striim servers, change Standalone to Cluster, set the number of servers, and optionally choose a larger VM.Enter the Striim cluster name (required to connect the Forwarding Agent) and password, then click Next: Striim Access Settings.Enter the first part of the domain name for your Striim instance and the admin password (make note of the password as you will need it to log into Striim).Optionally, select your own VNET, then click Next: Review + create.Click Create.\u00a0Deployment may take several minutes.When deployment is complete, click Go to resource group.Click <cluster name>-masternode (the VM hosting the master Striim server) and make note of the the DNS name (<domain name>:<Azure region>.cloudapp.azure.com) as you will need it to perform the remaining steps.Install JDBC driversWhen deployment is complete, install any JDBC drivers that will be required by your sources and, if you deployed the Azure SQL Database and SQL Data Warehouse solutions, the Microsoft JDBC Driver 4.0 for SQL Server. This requires the Linux utilities scp and ssh. On Windows, you can get scp and ssh by installing the Windows Subsystem for Linux or, if your Windows version does not support that, you can install Cygwin or the third-party utilities WinSCP and Putty, or use Azure Cloud Shell.The following instructions are for the SQL Server driver but the procedure is the same for the other drivers discussed in Installing third-party drivers in Striim Platform.Download the Microsoft JDBC Driver 7.2 for SQL Server\u00a0.gz package from https://www.microsoft.com/en-us/download/details.aspx?id=57782 and extract it.Open a terminal, switch to the directory that was extracted, and enter the following command to copy the driver to Striim:scp enu/mssql-jdbc-7.2.2.jre8 <VM user name>@<DNS name>:/home/<VM user name>When prompted, enter the VM password.Enter the following command to log into the Striim VM:ssh <VM user name>@<DNS name>\nWhen prompted, enter the VM password.Enter the following to install the driver in Striim and restart the Striim server:sudo su\n<VM user password>\ncp mssql-jdbc-7.2.2.jre8 /opt/striim/lib\nsystemctl stop striim-node\nsystemctl start striim-node\nexit\nexitIf you created multiple Striim servers, repeat the above steps on each one. The VM user names and DNS names for the other servers are the same as the master's but with digits, starting with 0, appended to the server name. For example, if the master's name was mycluster, the first additional server's name would be mycluster0 and its DNS name would be mycluster0.westus.cloudapp.azure.com.Log in to the Striim web UIUsing a compatible Web browser (we recommend Chrome), go to http://<DNS name>:9080, enter admin as the user name, enter the Striim admin password, and click Log In. At this point, if you are new to Striim, see\u00a0Getting Started. See also\u00a0\u00a0Creating apps using templates.Creating apps using templatesAccessing Striim in Azure via sshEnter the following command to log into the Striim VM:ssh <VM user name>@<DNS name>\nHosting Striim's metadata repository on Azure Database for PostgreSQLWhen running Striim in Azure, you may host the metadata repository on Azure Database for PostgreSQL. 
Hosting Striim's metadata repository on Azure Database for PostgreSQL\nWhen running Striim in Azure, you may host the metadata repository on Azure Database for PostgreSQL. Follow the instructions for PostgreSQL in Moving the metadata repository to Oracle or PostgreSQL, except in step 6 specify the values in startUp.properties as follows:\nMetadataDb=azurepostgres\nMetaDataRepositoryLocation=<server name>:5432/striimrepo\nMetaDataRepositoryDBname=striimrepo\nMetaDataRepositoryUname=striim@<hostname>\nYou can find the Server name on the database's Overview page. The hostname is the first element of the server name. So, for example:\nMetadataDb=azurepostgres\nMetaDataRepositoryLocation=striimtest1.postgres.database.azure.com:5432/striimrepo\nMetaDataRepositoryDBname=striimrepo\nMetaDataRepositoryUname=striim@striimtest1\nLast modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-in-azure.html", "title": "Running Striim in Azure", "language": "en"}} {"page_content": "\n\nRunning Striim in CentOS\n\nTo create a Striim cluster in CentOS, install the first server as described in Creating a cluster in CentOS. To add additional servers to the cluster, follow the instructions in Adding a server to a cluster in CentOS.\nLast modified: 2018-06-08\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-in-centos.html", "title": "Running Striim in CentOS", "language": "en"}} {"page_content": "\n\nCreating a cluster in CentOS\n\nImportant: Before following the instructions below, read the Release notes.\nFollow these instructions to set up the first server in a Striim cluster.
Installation will create the system account striim, and all files installed will be owned by that account.\nVerify that the system meets the System requirements.\nIf you will host the metadata repository in Oracle or PostgreSQL, follow the instructions in Configuring Striim's metadata repository.\nDownload striim-node-4.2.0-Linux.rpm.\nIf you plan to host the metadata repository on the internal Derby instance, download striim-dbms-4.2.0-Linux.rpm.\nOptionally, download the sample applications, striim-samples-4.2.0-Linux.rpm.\nInstall the node package:\nsudo rpm -ivh striim-node-4.2.0-Linux.rpm\nIf using Derby to host the metadata repository, install its package:\nsudo rpm -ivh striim-dbms-4.2.0-Linux.rpm\nOptionally, install the sample application package:\nsudo rpm -ivh striim-samples-4.2.0-Linux.rpm\nRun sudo su - striim /opt/striim/bin/sksConfig.sh and enter passwords for the Striim keystore and the admin and sys users. If hosting the metadata repository on Oracle or PostgreSQL, enter that password as well (see Configuring Striim's metadata repository). If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped: , . _ + : @ % / -\nIf hosting the metadata repository on Derby, change its password as described in Changing the Derby password.\nEdit /opt/striim/conf/startUp.properties, set the following property values (removing any # characters and spaces from the beginning of the lines), and save the file:\nWAClusterName: a name for the Striim cluster (note that if an existing Striim cluster on the network has this name, Striim will try to join it)\nCompanyName: If you specify keys, this must exactly match the associated company name. If you are using a trial license, any name will work.\nProductKey and LicenseKey: If you have keys, specify them; otherwise leave them blank to run Striim on a trial license. Note that you cannot create a multi-server cluster using a trial license.\nInterfaces: If the system has more than one IP address, specify the one you want Striim to use; otherwise leave it blank and Striim will set this automatically.\nIf not hosting the metadata repository on the internal Derby instance with its default settings, see Setting startUp.properties for the metadata repository.\nOptionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use from the default of 4GB (see Changing the amount of memory available to a Striim server).\nFor CentOS 6, enter sudo start striim-dbms, wait ten seconds, then enter sudo start striim-node.\nFor CentOS 7, enter:\nsudo systemctl enable striim-dbms\nsudo systemctl start striim-dbms\nWait ten seconds, then enter:\nsudo systemctl enable striim-node\nsudo systemctl start striim-node\nThen sudo tail -F /opt/striim/logs/striim-node.log and wait for the message Please go to ... to administer, or use console.\nTo uninstall:\nsudo rpm -e striim-node\nsudo rpm -e striim-dbms\nsudo rpm -e striim-samples\nsudo rm -rf /etc/systemd/system/multi-user.target.wants/striim-node.service\nNote: If you have installed multiple versions of Striim on the same system, use rpm -qa | grep striim to identify the package(s) for 4.2.0 to be uninstalled.\nAfter uninstalling, you may remove /opt/striim, /var/striim, and /var/log/striim.
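A condensed sketch of the CentOS 7 sequence above for a first server that hosts the metadata repository on the bundled Derby instance; it assumes the packages have already been downloaded to the current directory and is a summary, not a replacement for the full steps.

sudo rpm -ivh striim-node-4.2.0-Linux.rpm
sudo rpm -ivh striim-dbms-4.2.0-Linux.rpm       # only when using the internal Derby instance
sudo su - striim /opt/striim/bin/sksConfig.sh   # set the keystore, admin, and sys passwords
sudo vi /opt/striim/conf/startUp.properties     # set WAClusterName, CompanyName, ProductKey/LicenseKey, Interfaces
sudo systemctl enable striim-dbms && sudo systemctl start striim-dbms
sleep 10
sudo systemctl enable striim-node && sudo systemctl start striim-node
sudo tail -F /opt/striim/logs/striim-node.log   # wait for "Please go to ... to administer, or use console"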
Last modified: 2023-06-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/creating-a-cluster-in-centos.html", "title": "Creating a cluster in CentOS", "language": "en"}} {"page_content": "\n\nAdding a server to a cluster in CentOS\n\nNote: All servers in a cluster must run the same version of Striim. You cannot create a multi-server cluster using a trial license.\nThe following steps require that you have created the first server in the cluster as described above and that it is running.\nVerify that the system meets the System requirements.\nLog in to Linux and download striim-node-4.2.0-Linux.rpm.\nInstall that package:\nsudo rpm -ivh striim-node-4.2.0-Linux.rpm\nCopy sks.jks, sksKey.pwd, and startUp.properties from /opt/striim/conf/ on the first server to /opt/striim/conf/ on the new server.\nAssign ownership of the keystore files to Striim:\nsudo chown striim sks.jks\nsudo chown striim sksKey.pwd\nSet the MetadataRepository settings to match those of the other servers in the cluster. For more information, see Setting startUp.properties for the metadata repository.\nIf Interfaces is specified in startUp.properties, change its value to an IP address of the current system. If IsTcpIpCluster=true, add that IP address to the ServerNodeAddress list on each server in the cluster.\nIf using TCP/IP instead of multicast UDP, see Using TCP/IP instead of multicast UDP.\nOptionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use from the default of 4GB (see Changing the amount of memory available to a Striim server).\nReboot the system and verify that Striim has restarted automatically. Alternatively:\nFor CentOS 6, enter sudo start striim-node.\nFor CentOS 7, enter:\nsudo systemctl enable striim-node\nsudo systemctl start striim-node\nThen sudo tail -F /opt/striim/logs/striim-node.log and wait for the message Please go to ... to administer, or use console.\nLog in as admin and check the Monitor page to verify that the server has been added to the cluster (see Monitoring using the web UI).\nTo uninstall:\nsudo rpm -e striim-node\nNote: If you have installed multiple versions of Striim on the same system, use rpm -qa | grep striim to identify the package(s) for 4.2.0 to be uninstalled.\nAfter uninstalling, you may remove /opt/striim, /var/striim, and /var/log/striim.
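The copy-and-chown steps above can also be scripted from the new server with scp; a minimal sketch, assuming the first server is reachable over ssh as striim-server1 (a placeholder host name) with a user that can read /opt/striim/conf.

# run on the new server after installing striim-node
scp <user>@striim-server1:/opt/striim/conf/{sks.jks,sksKey.pwd,startUp.properties} /tmp/
sudo cp /tmp/sks.jks /tmp/sksKey.pwd /tmp/startUp.properties /opt/striim/conf/
cd /opt/striim/conf && sudo chown striim sks.jks sksKey.pwd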
Last modified: 2023-06-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/adding-a-server-to-a-cluster-in-centos.html", "title": "Adding a server to a cluster in CentOS", "language": "en"}} {"page_content": "\n\nRunning Striim in the Google Cloud Platform\n\nStriim currently offers the following Google Cloud Platform solutions:\nStriim Cloud Enterprise: see the Striim Cloud documentation\nStriim for BigQuery: see the Striim for BigQuery documentation\nStriim Subscription: the full Striim platform and gold support, paid annually\nStriim BYOL: the full Striim platform using a license purchased from Striim\nSTRIIM (metered): the full Striim platform, billed based on usage (free trial available)\nWith the full Striim platform solution, you will be billed monthly by Google for virtual machine usage and purchase your license directly from Striim. After the trial period, you must contact Striim support to purchase a license.\nWith the other solutions, you will be billed by Google monthly according to usage. See the individual solution listings in the Google Cloud Marketplace for pricing details.\n\nDeploying Striim in the Google Cloud Platform\nIf you have not done so already, sign up for the Google Cloud Platform and set up billing.\nIf you do not already have a Google Cloud Platform project suitable for this deployment, create one (see Creating and Managing Projects).\nGo to the Google Marketplace, search for Striim, click the desired solution, and click Launch.\nSelect the project to deploy in.\nOptionally, change the deployment name, region, and virtual machine settings. (This may affect your cost.)\nClick Deploy. Deployment may take a few minutes.\nClick Visit the site.\nIf you deployed any solution other than the non-metered STRIIM full platform, log in using the username admin and the password shown on the deployment preview page. If you do not get a login prompt, wait a few more minutes for Striim to complete startup and click Visit the site again. For discussion of the App Wizard page, which is the first thing you will see when you log in, see Creating apps using templates.\nIf you deployed the non-metered Striim full platform solution, continue with the following steps.\nClick Visit the site.\nYou should see \"Congratulations! You have successfully installed Striim.\" Click Accept Striim EULA and Continue.\nEnter your name, email address, company name (which must exactly match the company name associated with your license and product keys), a name for the Striim cluster, and the sys, admin, and keystore passwords. Make note of the cluster name and these passwords, as they are necessary for various tasks you may need to perform in the future. Click Save and Continue.\nEnter the license and product keys you received from Striim or leave the fields blank to use a trial license.
Click Save and Continue.\nClick Launch.\nClick Log In, enter admin and the admin password you specified above, and click Log In.\nIf you are new to Striim, click Next to start the tutorial.\n\nAdditional steps required after deployment is complete\nIf the Google Deployment Manager preview page says you need to open any firewall ports, follow the instructions provided.\nInstall any JDBC drivers required by your sources (see Installing third-party drivers in Striim Platform) using one of the methods suitable for Linux discussed in Transferring Files to Instances.\n\nRecommended next steps\nChange the VM's public IP address from Ephemeral to Static, or it will change every time you restart the VM. See Promoting an ephemeral external IP address.\nFor an introduction to Striim, see Getting Started.\nGoogle will suggest changing the admin password. The randomly generated \"temporary\" password should be quite secure, but if you wish to change it, see Running the console in the Google Cloud Platform, and use the command ALTER USER admin SET ( password:\"<new password>\" );.\n\nRunning the console in the Google Cloud Platform\nGo to https://console.cloud.google.com/dm/deployments, select the project you deployed to, and click the name of the deployment.\nClick SSH.\nEnter the following:\nsudo su\n/opt/striim/bin/console.sh -c <cluster name>\nIf you are using the full Striim platform (Bring Your Own License) solution, you specified the cluster name when configuring Striim. If you are using another solution, the cluster name is Striim. Cluster names are case-sensitive.\nLast modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-in-the-google-cloud-platform.html", "title": "Running Striim in the Google Cloud Platform", "language": "en"}} {"page_content": "\n\nRunning Striim in Microsoft Windows\n\nSee System requirements.\nTo install Striim for evaluation purposes, see Evaluating on Mac OS X, Linux, or Windows.\nTo run Striim as a process, see Running Striim as a process.\nTo run Striim as a service, install the first server as described in Creating a cluster in Microsoft Windows.
To\u00a0add additional servers to the cluster, follow the instructions in\u00a0Adding a server to a cluster in Microsoft Windows.Creating a cluster in Microsoft WindowsImportantBefore following the instructions below, read the Release notes.Release notesVerify that the system meets the System requirements.System requirementsDownload Striim_4.2.0.zip, extract it, and move the extracted striim directory to an appropriate location. Keep the path including the drive letter and striim under 30 characters total to avoid encountering errors due to Windows' maximum path length limitation.CautionKnown issue (DEV-22317): do not put the striim directory under a directory with a space in its name, such as C:\\Program Files.Start Windows PowerShell as administrator (right-click the Windows Powershell icon and select Run as administrator), change to the striim/conf/windowsService directory, and enter .\\setupWindowsService.ps1.Run striim\\bin\\sksConfig.bat and enter passwords for the Striim keystore and the admin and sys users. If hosting the metadata repository on Oracle or PostgreSQL, enter that password as well (see Configuring Striim's metadata repository). If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped:\u00a0, . _ + : @ % / -Change the ownership of striim/conf/sks.jks and striim/conf/sksKey.pwd to match the other files in that directory.If hosting the metadata repository on Derby, change its password as described in Changing the Derby password.Edit striim\\conf\\startUp.properties,\u00a0edit the following property values (removing any # characters and spaces from the beginning of the lines), and save the file:WAClusterName: a name for the Striim cluster (note that if an existing Striim cluster on the network has this name, Striim will try to join it)CompanyName: If you specify keys, this must exactly match the associated company name. If you are using a trial license, any name will work.ProductKey and LIcenseKey: If you have keys, specify them, otherwise leave blank to run Striim on a trial license. 
Note that you cannot create a multi-server cluster using a trial license.\nInterfaces: If the system has more than one IP address, specify the one you want Striim to use; otherwise leave it blank and Striim will set this automatically.\nIf not hosting the metadata repository on the internal Derby instance with its default settings, see Setting startUp.properties for the metadata repository.\nOptionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use from the default of 4GB (see Changing the amount of memory available to a Striim server).\nStart the Derby and Striim services manually, or reboot to verify that they start automatically.\nThe stdout logs for these services are located in striim\\conf\\windowsService\\yajsw_server\\log\\wrapper.log and striim\\conf\\windowsService\\yajsw_derby\\log\\wrapper.log.\nTo tail the log files in PowerShell, enter Get-Content .\\striim.server.log -Tail 10 -Wait\n\nUninstall Striim services from Microsoft Windows\nTo uninstall the services, stop them, change to the striim\\conf\\windowsService\\yajsw\\bat\\ directory, and enter the following commands:\n.\\uninstallService\nsc delete Derby\nsc delete com.webaction.runtime.Server\nNote: the sc command does not work in PowerShell versions 5 and earlier, but it does work at a regular Windows command prompt.\n\nAdding a server to a cluster in Microsoft Windows\nVerify that the system meets the System requirements.\nDownload Striim_4.2.0.zip and extract it to the desired location.\nStart Windows PowerShell as administrator (right-click the Windows PowerShell icon and select Run as administrator), change to the striim/conf/windowsService directory, and enter:\nsetupWindowsService.ps1 -noderby\nCopy sks.jks, sksKey.pwd, and startUp.properties from striim/conf/ on the first server to striim/conf/ on the new server.\nSet the MetadataRepository settings to match those of the other servers in the cluster. For more information, see Setting startUp.properties for the metadata repository.\nIf Interfaces is specified in startUp.properties, change its value to an IP address of the current system. If IsTcpIpCluster=true, add that IP address to the ServerNodeAddress list on each server in the cluster.\nIf using TCP/IP instead of multicast UDP, see Using TCP/IP instead of multicast UDP.\nOptionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use from the default of 4GB (see Changing the amount of memory available to a Striim server).\nStart the Striim service manually, or reboot to verify that it starts automatically.
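Whichever platform you install on, a quick way to confirm that a newly started Striim server is reachable is to probe the default web UI port with curl from any machine that can reach it. This is just a convenience check, not part of the documented procedure; <DNS name or IP> is a placeholder for your server's address (use the HTTPS port 9081 instead if you have disabled HTTP).

# expect an HTTP response header once the server has finished starting
curl -I http://<DNS name or IP>:9080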
Last modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-in-microsoft-windows.html", "title": "Running Striim in Microsoft Windows", "language": "en"}} {"page_content": "\n\nRunning Striim in Snowflake\n\nSee Getting your free trial of Striim for Snowflake.\nLast modified: 2020-06-24\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-in-snowflake.html", "title": "Running Striim in Snowflake", "language": "en"}} {"page_content": "\n\nRunning Striim in Ubuntu\n\nTo create a Striim cluster in Ubuntu, install the first server as described in Creating a cluster in Ubuntu. To add additional servers to the cluster, follow the instructions in Adding a server to a cluster in Ubuntu.\n\nCreating a cluster in Ubuntu\nImportant: Before following the instructions below, read the Release notes.\nFollow these instructions to set up the first server in a Striim cluster. Installation will create the system account striim, and all files installed will be owned by that account.\nVerify that the system meets the System requirements.\nIf you will host the metadata repository in Oracle or PostgreSQL, follow the instructions in Configuring Striim's metadata repository.\nDownload striim-node-4.2.0-Linux.deb.\nIf you plan to host the metadata repository on the internal Derby instance, download striim-dbms-4.2.0-Linux.deb.\nOptionally, download the sample applications, striim-samples-4.2.0-Linux.deb.\nInstall the node package:\nsudo dpkg -i striim-node-4.2.0-Linux.deb\nIf using Derby to host the metadata repository, install its package:\nsudo dpkg -i striim-dbms-4.2.0-Linux.deb\nOptionally, install the sample application package:\nsudo dpkg -i striim-samples-4.2.0-Linux.deb\nRun sudo su - striim /opt/striim/bin/sksConfig.sh and enter passwords for the Striim keystore and the admin and sys users. If hosting the metadata repository on Oracle or PostgreSQL, enter that password as well (see Configuring Striim's metadata repository).
If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped: , . _ + : @ % / -\nIf hosting the metadata repository on Derby, change its password as described in Changing the Derby password.\nEdit /opt/striim/conf/startUp.properties, set the following property values (removing any # characters and spaces from the beginning of the lines), and save the file:\nWAClusterName: a name for the Striim cluster (note that if an existing Striim cluster on the network has this name, Striim will try to join it)\nCompanyName: If you specify keys, this must exactly match the associated company name. If you are using a trial license, any name will work.\nProductKey and LicenseKey: If you have keys, specify them; otherwise leave them blank to run Striim on a trial license. Note that you cannot create a multi-server cluster using a trial license.\nInterfaces: If the system has more than one IP address, specify the one you want Striim to use; otherwise leave it blank and Striim will set this automatically.\nIf not hosting the metadata repository on the internal Derby instance with its default settings, see Setting startUp.properties for the metadata repository.\nOptionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use from the default of 4GB (see Changing the amount of memory available to a Striim server).\nReboot the system and verify that Striim has restarted automatically. Alternatively:\nFor Ubuntu 14.04, enter sudo start striim-dbms, wait ten seconds, then enter sudo start striim-node.\nFor Ubuntu 16.04 or later, enter:\nsudo systemctl enable striim-dbms\nsudo systemctl start striim-dbms\nWait ten seconds, then enter:\nsudo systemctl enable striim-node\nsudo systemctl start striim-node\nThen sudo tail -F /opt/striim/logs/striim-node.log and wait for the message Please go to ... to administer, or use console.\nTo uninstall:\nsudo dpkg -r striim-node\nsudo dpkg -r striim-dbms\nsudo dpkg -r striim-samples\nNote: If you have installed multiple versions of Striim on the same system, use dpkg -l | grep striim to identify the package(s) for 4.2.0 to be uninstalled.\nAfter uninstalling, you may remove /opt/striim, /var/striim, and /var/log/striim.\n\nAdding a server to a cluster in Ubuntu\nNote: All servers in a cluster must run the same version of Striim. You cannot create a multi-server cluster using a trial license.\nThe following steps require that you have created the first server in the cluster as described above and that it is running.\nVerify that the system meets the System requirements.\nLog in to Linux and download striim-node-4.2.0-Linux.deb.\nInstall that package:\nsudo dpkg -i striim-node-4.2.0-Linux.deb\nCopy sks.jks, sksKey.pwd, and startUp.properties from /opt/striim/conf/ on the first server to /opt/striim/conf/ on the new server.\nAssign ownership of the keystore files to Striim:\nsudo chown striim sks.jks\nsudo chown striim sksKey.pwd\nSet the MetadataRepository settings to match those of the other servers in the cluster. For more information, see Setting startUp.properties for the metadata repository.\nIf Interfaces is specified in startUp.properties, change its value to an IP address of the current system.
If IsTcpIpCluster=true, add that IP address to the ServerNodeAddress list on each server in the cluster.\nIf using TCP/IP instead of multicast UDP, see Using TCP/IP instead of multicast UDP.\nOptionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use from the default of 4GB (see Changing the amount of memory available to a Striim server).\nReboot the system and verify that Striim has restarted automatically. Alternatively:\nFor Ubuntu 14.04, enter sudo start striim-node.\nFor Ubuntu 16.04 or later, enter:\nsudo systemctl enable striim-node\nsudo systemctl start striim-node\nThen sudo tail -F /opt/striim/logs/striim-node.log and wait for the message Please go to ... to administer, or use console.\nLog in as admin and check the Monitor page to verify that the server has been added to the cluster (see Monitoring using the web UI).\nTo uninstall:\nsudo dpkg -r striim-node\nNote: If you have installed multiple versions of Striim on the same system, use dpkg -l | grep striim to identify the package(s) for 4.2.0 to be uninstalled.\nAfter uninstalling, you may remove /opt/striim, /var/striim, and /var/log/striim.\nLast modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-in-ubuntu.html", "title": "Running Striim in Ubuntu", "language": "en"}} {"page_content": "\n\nRunning Striim as a process\n\nNote: Before installing Striim, verify that the system meets the System requirements.\nFor development and testing, it can be useful to run Striim as a process, so you can easily restart with different options or switch between various releases. We do not recommend running Striim as a process in a production environment. If you do not need to run Striim as a service, you can install simply by extracting a Striim-<version>.tgz or .zip archive. For example, to extract to the /opt directory (the typical location, though you may install wherever you like):\ntar zxvf Striim-<version>.tgz -C /opt\nCaution: Known issue (DEV-22317): do not put the striim directory under a directory with a space in its name, such as C:\\Program Files.\nOnce you have extracted the package, set the server properties:\nOpen a terminal or command prompt and change to the striim directory.\nIn CentOS or Ubuntu, enter sudo su - striim bin/sksConfig.sh\nIn OS X, enter bin/sksConfig.sh\nIn Windows, enter bin\\sksConfig\nWhen prompted, enter passwords for the Striim keystore and the admin and sys users.
If hosting the metadata repository on Oracle or PostgreSQL, enter that password as well (see Configuring Striim's metadata repository). If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped: , . _ + : @ % / -\nEdit /opt/striim/conf/startUp.properties, set the following property values (removing any # characters and spaces from the beginning of the lines), and save the file:\nWAClusterName: a name for the Striim cluster (note that if an existing Striim cluster on the network has this name, Striim will try to join it)\nCompanyName: If you specify keys, this must exactly match the associated company name. If you are using a trial license, any name will work.\nProductKey and LicenseKey: If you have keys, specify them; otherwise leave them blank to run Striim on a trial license. Note that you cannot create a multi-server cluster using a trial license.\nInterfaces: If the system has more than one IP address, specify the one you want Striim to use; otherwise leave it blank and Striim will set this automatically.\nIf not hosting the metadata repository on the internal Derby instance with its default settings, see Setting startUp.properties for the metadata repository.\nIf hosting the metadata repository on Derby in a production environment (not recommended), change its password (see Changing the Derby password).\nOptionally, perform additional tasks described in Configuring Striim, such as increasing the maximum amount of memory the server can use from the default of 4GB (see Changing the amount of memory available to a Striim server).\nSave the file.\nOnce you have specified the necessary server properties, start the server with striim\\bin\\server.bat (for Windows) or striim/bin/server.sh (for OS X or Linux).\nWhen the server has started, you will see a message indicating the URL where you can access its web UI.\nWhen running as a process, SysOut output will be written to the terminal running the server process rather than to striim-node.log.\nTo start the console, enter striim\\bin\\console.bat -c <cluster name> (for Windows) or striim/bin/console.sh -c <cluster name> (for OS X or Linux).\nTo stop the server, press Ctrl-C in the terminal window.\nTo restart, run server.bat or server.sh. Optionally, you may specify a startup properties file as an argument, which can save time if you need to run Striim with various settings for testing or development purposes.\nNote: If installing a new release of Striim, delete the striim directory and empty the trash before extracting the new .tgz or .zip archive, and clear your browser cache before logging in to the web UI.
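Pulling the process-mode steps above together for Linux, the following is a condensed sketch; the alternate properties file name conf/test.properties is only an illustrative placeholder for the optional startup-properties argument mentioned above.

tar zxvf Striim-<version>.tgz -C /opt
cd /opt/striim
sudo su - striim bin/sksConfig.sh   # set the keystore, admin, and sys passwords
vi conf/startUp.properties          # set WAClusterName, CompanyName, ProductKey/LicenseKey, Interfaces
bin/server.sh                       # or: bin/server.sh conf/test.properties
# in another terminal, once the server prints its web UI URL:
bin/console.sh -c <cluster name>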
Last modified: 2023-06-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/running-striim-as-a-process.html", "title": "Running Striim as a process", "language": "en"}} {"page_content": "\n\nConfiguring Striim Platform\n\nThis section describes how to configure a Striim cluster.\nLast modified: 2023-06-09\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/configuring-striim-platform.html", "title": "Configuring Striim Platform", "language": "en"}} {"page_content": "\n\nInstalling third-party drivers in Striim Platform\n\nThis section describes installation of the third-party drivers required to use some adapters and other Striim Platform features.\n\nInstall the HP NonStop JDBC driver for SQL/MX in a Striim server\nThis driver must be installed in every Striim server that will read from or write to HP NonStop SQL/MX tables using Database Reader or Database Writer.\nCaution: Do not install drivers for multiple versions of SQL/MX on the same Striim server. If you need to write to multiple versions of SQL/MX, install their drivers on different Striim servers and run each version's applications on the appropriate Striim server(s).\nFollow the instructions in the \"Installing and Verifying the Type 4 Driver\" section of the HPE NonStop JDBC Type 4 Driver Programmer's Reference for SQL/MX for the SQL/MX version you are running to copy the driver .tar file from the HP NonStop system with the tables that will be read to a client workstation and untar it.
Do not install the driver.\nCopy the t4sqlmx.jar file from the untarred directory to striim/lib.\nStop and restart Striim (see Starting and stopping Striim Platform).\n\nInstall the MariaDB JDBC driver in a Striim server\nThis driver must be installed in every Striim server that will read from or write to MariaDB.\nDownload mariadb-java-client-2.4.3.jar from http://downloads.mariadb.com/Connectors/java/connector-java-2.4.3.\nCopy that file to striim/lib.\nStop and restart Striim (see Starting and stopping Striim Platform).\n\nInstall the MemSQL JDBC driver in a Striim server\nMemSQL uses MySQL's JDBC driver. See Install the MySQL JDBC driver in a Striim server.\n\nInstall the Microsoft JDBC Driver for SQL Server 2008 in a Striim server\nThe JDBC driver for Microsoft SQL Server 2012 and later, Azure SQL Database, and Azure Synapse is bundled with the Striim server. To read from or write to SQL Server 2008, you must install an older driver.\nCaution: Do not install both versions of the driver in the same Striim server.\nOn Striim servers, delete striim/lib/mssql-jdbc-7.2.2.jre8.jar.\nDownload the Microsoft JDBC Driver 6.0 for SQL Server .gz package from https://www.microsoft.com/en-us/download/details.aspx?id=11774 and extract it.\nCopy enu/jre8/sqljdbc42.jar to striim/lib.\nStop and restart Striim (see Starting and stopping Striim Platform).\n\nInstall the MySQL JDBC driver in a Striim server\nThis driver must be installed in every Striim server that will read from or write to MySQL.\nDownload the Connector/J 8.0.27 package from https://downloads.mysql.com/archives/c-j/ and extract it.\nCopy mysql-connector-java-8.0.27.jar to striim/lib.\nStop and restart Striim (see Starting and stopping Striim Platform).\n\nInstall the Oracle Instant Client in a Striim server\nThe Oracle Instant Client version 21c must be installed and configured in the Linux host environment of every Striim server that will run OJet.\nDownload the client from https://download.oracle.com/otn_software/linux/instantclient/211000/instantclient-basic-linux.x64-21.1.0.0.0.zip and follow the installation procedure provided by Oracle for the operating system of the host.\nEdit Striim/conf/startUp.properties and add the NATIVE_LIBS property to specify the Instant Client path.
For example, if the Instant Client is installed in /usr/local/instantclient_21.1:\nNATIVE_LIBS=/usr/local/instantclient_21.1\nStop and restart Striim (see Starting and stopping Striim Platform).\n\nInstall the Redshift JDBC driver\nThis driver must be installed in every Striim server that will read from or write to Redshift.\nDownload the JDBC 4.1 driver from http://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html#download-jdbc-driver.\nCopy the .jar file to striim/lib.\nStop and restart Striim (see Starting and stopping Striim Platform).\n\nInstall the SAP HANA JDBC driver\nThis driver must be installed in every Striim server that will write to SAP HANA.\nDownload the JDBC 2.4.62 package from store.sap.com and extract it.\nCopy the ngdbc-2.4.62.jar file to striim/lib.\nStop and restart Striim (see Starting and stopping Striim Platform).\n\nInstall the Teradata JDBC driver in a Striim server\nThis driver must be installed in every Striim server that will read from Teradata.\nDownload the Teradata JDBC .tgz or .zip package from http://downloads.teradata.com/download/connectivity/jdbc-driver and extract it.\nCopy tdgssconfig.jar and terajdbc4.jar to striim/lib.\nStop and restart Striim (see Starting and stopping Striim Platform).\nLast modified: 2023-03-01\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/installing-third-party-drivers-in-striim-platform.html", "title": "Installing third-party drivers in Striim Platform", "language": "en"}} {"page_content": "\n\nChanging the application start timeout\n\nBy default, when you start an application, Striim will wait five minutes for all components and flows to start. If after that time not all components and flows have started, the start will fail with an error such as \"failed to verify flow start.\"\nTo change this timeout, edit startUp.properties (and, if the application has a source running in a Forwarding Agent, agent.properties) and set the value of striim.cluster.maxWaitTimeForAppVerifyInDeploying to the number of seconds before timeout.
Then restart Striim and any relevant Forwarding Agent.\nLast modified: 2020-04-30\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-the-application-start-timeout.html", "title": "Changing the application start timeout", "language": "en"}} {"page_content": "\n\nChanging the Derby password\n\nIn a production environment, we strongly recommend changing the Derby password from its default. The JAVA_HOME environment variable must be set for the ./changeDerbyPassword.sh script to work.\nStop Striim and Derby (see Starting and stopping Striim Platform).\nOpen a terminal and change to the striim directory.\nChange the password in Derby:\nIn CentOS or Ubuntu, enter sudo su - striim bin/changeDerbyPassword.sh\nIn OS X, enter bin/changeDerbyPassword.sh\nIn Windows, enter bin\\changeDerbyPassword\nWhen prompted, enter the username waction, the current password (in a new installation, the password is w@ct10n), and the new password.\nChange the password in Striim's keystore:\nIn CentOS or Ubuntu, enter sudo su - striim bin/sksConfig.sh -p\nIn OS X, enter bin/sksConfig.sh -p\nIn Windows, enter bin\\sksConfig -p\nRestart Derby and Striim (see Starting and stopping Striim Platform).
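For a Linux installation managed with systemd, the Derby password change above condenses to the sketch below; the service names match those used in the installation pages earlier in this document, and JAVA_HOME must already be set for the striim user.

sudo systemctl stop striim-node && sudo systemctl stop striim-dbms
cd /opt/striim
sudo su - striim bin/changeDerbyPassword.sh   # prompts for user waction, the current password, and the new one
sudo su - striim bin/sksConfig.sh -p          # store the new Derby password in Striim's keystore
sudo systemctl start striim-dbms && sleep 10 && sudo systemctl start striim-node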
Last modified: 2022-07-15\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-the-derby-password.html", "title": "Changing the Derby password", "language": "en"}} {"page_content": "\n\nConfiguring low disk space monitoring\n\nStriim monitors the available disk space on the following:\nthe root directory /\nthe directories specified in the UsedDirs property in startUp.properties (by default, /opt/striim, /var/striim, and /var/log/striim)\nall subdirectories of those directories\nIf some of your data is stored elsewhere, edit startUp.properties and change the value of the UsedDirs property. For example, to also monitor a drive mapped to /data:\nUsedDirs=/opt/striim/,/var/log/striim/,/var/striim/,/data\nTo disable low disk space monitoring (not recommended), leave the value of UsedDirs blank.\nThen restart the server as described in Starting and stopping Striim Platform.
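To sanity-check a UsedDirs change, you can list the current setting and the free space on the monitored locations; a small sketch, assuming the /data example above and the default installation path.

grep '^UsedDirs' /opt/striim/conf/startUp.properties    # show the current setting
df -h / /opt/striim /var/striim /var/log/striim /data   # free space on the monitored locations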
Last modified: 2022-07-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/configuring-low-disk-space-monitoring.html", "title": "Configuring low disk space monitoring", "language": "en"}} {"page_content": "\n\nEnabling file lineage\n\nBy default, file lineage is disabled.\nTo enable it for adapters other than OracleReader (see File lineage in readers and writers):\nOn the server running the source, edit striim/conf/startUp.properties, set TrackFLM=true, and restart the server as described in Starting and stopping Striim Platform.\nOn the Forwarding Agent running the source, edit agent/conf/agent.conf, set striim.cluster.trackFileLineageMetadata=true, and restart the agent as described in Starting and stopping Striim Platform.\nTo enable it for OracleReader (see File lineage in Oracle):\nOn the server running the source, edit striim/conf/startUp.properties, set TrackOLM=true, and restart the server as described in Starting and stopping Striim Platform.\nOn the Forwarding Agent running the source, edit agent/conf/agent.conf, set striim.cluster.trackOracleLineageMetadata=true, and restart the agent as described in Starting and stopping Striim Platform.\nLast modified: 2022-07-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/enabling-file-lineage.html", "title": "Enabling file lineage", "language": "en"}} {"page_content": "\n\nChanging the Hazelcast ports\n\nBy default, the Hazelcast ports start at 5701.\nTo change that, edit startUp.properties, add striim.node.hazelcast.port=<port> (replacing <port> with the port number where you want Hazelcast ports to start) on a line by itself, and save the file.\nThen restart the server, as described in Starting and stopping Striim Platform.
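A minimal sketch of the Hazelcast port change above, assuming the default installation path and using 6701 purely as an example starting port; the final command just confirms that something is listening there after the restart.

echo 'striim.node.hazelcast.port=6701' | sudo tee -a /opt/striim/conf/startUp.properties
sudo systemctl restart striim-node   # restart the server as described above
ss -ltn | grep 6701                  # verify the new Hazelcast listener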
Last modified: 2023-06-12\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-the-hazelcast-ports.html", "title": "Changing the Hazelcast ports", "language": "en"}} {"page_content": "\n\nConfiguring HTTP and HTTPS\n\nWith its default settings, Striim uses HTTP and HTTPS as follows:\nThe console discovers the server using HTTP over port 9080, then connects using HTTPS over port 9081.\nThe Forwarding Agent authenticates using HTTPS over port 9081. After that, it joins the cluster using Hazelcast.\nThe web UI connects to the server using HTTP over port 9080. Alternatively, you may configure it to use HTTPS (see Enabling HTTPS).\n\nChanging the HTTP port\nTo change the HTTP port, on every server in the cluster, edit striim\\conf\\startUp.properties, set the HttpPort property value to the new port number, save the file, and restart Striim.\nWhen the HTTP port is not 9080:\nWhen starting the console, include the -t <port> switch to specify the HTTP port.\nTell web UI users to use the new port.\n\nDisabling HTTP\nTo disable HTTP, on every server in the cluster, edit ./striim/conf/startUp.properties, set HttpEnabled=false, and restart Striim.\nWhen HTTP is disabled:\nWhen starting the console from the command line, include the -S <server IP address> switch (see Using the console in a terminal or command prompt).\nTo use the web UI, see Enabling HTTPS.\n\nChanging the HTTPS port\nTo change the HTTPS port, on every server in the cluster, edit striim\\conf\\startUp.properties, set the HttpsPort property value to the new port number, save the file, and restart Striim.\nWhen the HTTPS port is not 9081:\nWhen starting the console, include the -T <port> switch to specify the HTTPS port.\nIn the Forwarding Agent's agent.conf configuration file, set the striim.node.httpsPort property value to the new port number and restart the agent.\nIf the web UI is using HTTPS, tell web UI users to use the new port.\n\nDisabling HTTPS\nTo disable HTTPS (not recommended), on every server in the cluster, edit ./striim/conf/startUp.properties, set HttpsEnabled=false, and restart Striim.\nWhen HTTPS is disabled:\nWhen starting the console, include the -H false switch.\nIn the Forwarding Agent's agent.conf configuration file, set HttpsEnabled=False, set striim.node.servernode.address to the IP address of the server, and restart the agent.
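As an illustration of the HTTPS port change described above, the lines below switch a Linux server to port 9444 (an arbitrary example) and then connect the console to it with the -T switch; the sed edit assumes an uncommented HttpsPort line already exists in startUp.properties, otherwise append the property instead.

sudo sed -i 's/^HttpsPort=.*/HttpsPort=9444/' /opt/striim/conf/startUp.properties
sudo systemctl restart striim-node
/opt/striim/bin/console.sh -c <cluster name> -T 9444   # point the console at the non-default HTTPS port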
Last modified: 2021-05-11\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/configuring-http-and-https.html", "title": "Configuring HTTP and HTTPS", "language": "en"}} {"page_content": "\n\nEnabling HTTPS\n\nTo support HTTPS for the web UI, Striim needs an SSL certificate.\nIf you already have a .pkcs12 file, skip to the next step.\nIf you have .key and .crt files, use the following command to generate a .pkcs12 file (replace myfile with your certificate name):\nopenssl pkcs12 -inkey myfile.key -in myfile.crt -export -out myfile.pkcs12\nUse the following command to generate a keystore containing the certificate from the .pkcs12 file (replace myfile with your certificate name):\nkeytool -importkeystore -srckeystore myfile.pkcs12 -srcstoretype PKCS12 -destkeystore myfile.jks\nAlternatively, see Generate Keys to create a new self-signed certificate. In some browsers, a self-signed certificate may trigger warnings about untrusted certificates.\nOnce the certificate is in Striim's environment, configure the following options in startUp.properties:\nHttpsKeystorePath: the path to the keystore created by keytool, for example, /opt/Striim/myfile.jks\nHttpsKeystorePassword: the keystore password (the storepass value) specified when running keytool\nHttpsKeystoreManagerPassword: the key password (the keypass value) specified when running keytool or, if it was not specified, the storepass value
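If you only need a self-signed certificate for testing (which, as noted above, browsers may flag as untrusted), one common way to create the keystore directly is with keytool; the alias, distinguished name, and validity period below are illustrative assumptions, not values required by Striim.

keytool -genkeypair -alias striim -keyalg RSA -keysize 2048 -validity 365 \
        -keystore myfile.jks -dname "CN=striim.example.com"
# then set HttpsKeystorePath, HttpsKeystorePassword, and HttpsKeystoreManagerPassword in startUp.properties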
Last modified: 2020-06-04\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/enabling-https.html", "title": "Enabling HTTPS", "language": "en"}} {"page_content": "\n\nEnable Kerberos authentication for Oracle and PostgreSQL
Note: In this release, Kerberos authentication is supported only for Oracle using Database Reader, Oracle Reader, and Database Writer and for PostgreSQL using Database Reader, PostgreSQL Reader, and Database Writer.
Prerequisites for using Kerberos authentication:
a working Kerberos 5 environment
the source or target database is configured to use Kerberos authentication
To enable Kerberos authentication, you must update Striim's Java environment with the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 8, as follows:
Download jce_policy-8.zip from Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files Download, extract it, and follow the instructions in the included README.txt to install it on each server in your Striim cluster that will run a source or target that uses Kerberos authentication.
Place Kerberos's krb5.conf file in a directory accessible by Striim.
If using a credential cache (also known as a ticket cache), cache the ticket for the service principal to be used by Striim on the Striim server.
For PostgreSQL, create a keytab file for the Kerberos principal (see GSSAPI Authentication) in a directory accessible by Striim.
For PostgreSQL, create a login.conf JAAS configuration file containing the following in a directory accessible by Striim. Specify the keytab file and Kerberos realm for your environment.
<application name> {
 com.sun.security.auth.module.Krb5LoginModule required
 doNotPrompt=true
 useTicketCache=true
 renewTGT=true
 useKeyTab=true
 keyTab="<fully qualified name of keytab file>"
 principal="postgres@<Kerberos realm>"
};
For example:
myJAASApp {
 com.sun.security.auth.module.Krb5LoginModule required
 doNotPrompt=true
 useTicketCache=true
 renewTGT=true
 useKeyTab=true
 keyTab="/etc/krb5.keytab"
 principal="postgres@MYDOMAIN.COM"
};
If you read from or write to multiple instances of PostgreSQL, specify one such property set for each in login.conf. Give each a different application name, which must be specified in the JAAS Configuration string in Database Reader, PostgreSQL Reader, or Database Writer. If you have only one instance of PostgreSQL, you must still provide an application name here and in the JAAS Configuration string.
Restart Striim.
Once these steps are complete, configure Kerberos authentication using the JAAS Configuration property in Database Reader, Oracle Reader, PostgreSQL Reader, or Database Writer.
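For the credential cache step, the ticket is typically obtained with the standard MIT Kerberos tools. A sketch (the keytab path and principal are examples matching the login.conf sample above, not required values):

kinit -kt /etc/krb5.keytab postgres@MYDOMAIN.COM
klist

klist should then show a valid ticket-granting ticket for the principal that Striim will use.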
Last modified: 2022-05-06\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/enable-kerberos-authentication-for-oracle-and-postgresql.html", "title": "Enable Kerberos authentication for Oracle and PostgreSQL", "language": "en"}} {"page_content": "\n\nSetting the log levels
Log levels for striim.server.log and striim.command.log are set in Striim/conf/log4j.server.properties. When installed from .tgz, the default log level is info; when installed from DEB or RPM, the default is warn. From most to least verbose, the supported levels are all, trace, debug, info, warn, error, and fatal.
Note: Debug-level UserCommandLogger messages, since they are often necessary for troubleshooting, are always written to striim.server.log, regardless of the log level setting.
Log levels for the Forwarding Agent are set in Agent/conf/log4j.agent.properties. The default log level is trace.
Last modified: 2017-12-20\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/setting-the-log-levels.html", "title": "Setting the log levels", "language": "en"}} {"page_content": "\n\nChanging log file retention settings
Default retention for striim.server.log and striim.agent.log is nine 1 GB files.
To alter these settings:
For a Striim server, edit striim/conf/log4j.server.properties. For a Forwarding Agent, edit agent/conf/log4j.agent.properties.
To change the number of files, change the value of appender.CommandFileAppender.strategy.max.
To change the size of the files, change the value of appender.CommandFileAppender.policies.size.size.
Save the file.
Restart Striim (see Starting and stopping Striim Platform) or the Forwarding Agent (see Starting and stopping the Forwarding Agent).
Last modified: 2023-03-06\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-log-file-retention-settings.html", "title": "Changing log file retention settings", "language": "en"}} {"page_content": "\n\nChanging the amount of memory available to a Striim server
By default, a Striim server uses a maximum of 4GB. To change that, edit striim/conf/startUp.properties and change the value of MEM_MAX. To increase maximum memory to, for example, 8GB, change the value to 8192m:
MEM_MAX=8192m
Then restart the server as described in Starting and stopping Striim Platform.
Last modified: 2022-07-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-the-amount-of-memory-available-to-a-striim-server.html", "title": "Changing the amount of memory available to a Striim server", "language": "en"}} {"page_content": "\n\nChanging metadata repository connection retry settings
By default, when Striim tries to write to the metadata repository and it is temporarily unavailable due to network problems or other issues, Striim will wait ten seconds and try again.
If that attempt fails, it will wait another ten seconds and try a third time. If the third attempt fails, the server will crash.Similarly, when an application for which recovery is enabled tries to persist WActionStore data to a MySQL or Oracle database that is unavailable, it will try again twice before terminating. (When an application for which recovery is not enabled tries to persist WActionStore data to an unavailable database, it will terminate immediately.)To change the number of attempts or the time between them, change the following property values in startUp.properties, then restart Striim.DBConnectionMaxRetries=3\nDBConnectionRetryInterval=10In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-12-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-metadata-repository-connection-retry-settings.html", "title": "Changing metadata repository connection retry settings", "language": "en"}} {"page_content": "\n\nChanging how long monitor report data is maintainedSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformChanging how long monitor report data is maintainedPrevNextChanging how long monitor report data is maintainedBy default, the historical data for monitor reports (see Using monitor reports) is retained for 24 hours. You may revise this by setting the MonitorPersistenceRetention property in startUp.properties to some other number of hours. For example, to retain the data for one week:MonitorPersistenceRetention=168In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2019-01-21\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-how-long-monitor-report-data-is-maintained.html", "title": "Changing how long monitor report data is maintained", "language": "en"}} {"page_content": "\n\nEnabling monitoring via JMXSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformEnabling monitoring via JMXPrevNextEnabling monitoring via JMXBy default, monitoring via JMX is disabled. 
To enable it, edit striim/conf/startUp.properties, set EnableJmx=true, and restart the server as described in Starting and stopping Striim Platform.
Last modified: 2022-07-15\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/enabling-monitoring-via-jmx.html", "title": "Enabling monitoring via JMX", "language": "en"}} {"page_content": "\n\nSwitching online help links to open the latest docs on the web
Documentation on the web is updated frequently. To have help links in Striim open the web documentation instead of the version bundled with the server (which gets increasingly out of date over time), add the following line to startUp.properties and restart the server as described in Starting and stopping Striim Platform.
DocumentationURL=https://www.striim.com/docs/archive/420/platform
Last modified: 2023-06-02\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/switching-online-help-links-to-open-the-latest-docs-on-the-web.html", "title": "Switching online help links to open the latest docs on the web", "language": "en"}} {"page_content": "\n\nEnabling SSL for LDAP
To enable LDAP over SSL, install a certificate for the LDAP server in the Java runtime environment used by Striim using a command such as:
sudo keytool -import -alias <LDAP server IP address>
 -keystore $JAVA_HOME/jre/lib/security/cacerts -file public.crt
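To confirm that the certificate was added, you can list it from the same truststore (a sketch; the alias is whatever you used in the import command, and keytool will prompt for the truststore password):

keytool -list -alias <LDAP server IP address> -keystore $JAVA_HOME/jre/lib/security/cacerts

It is safest to restart Striim afterward so the JVM picks up the updated truststore.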
Last modified: 2017-06-16\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/enabling-ssl-for-ldap.html", "title": "Enabling SSL for LDAP", "language": "en"}} {"page_content": "\n\nSupporting Active Directory authentication for Azure
When reading from or writing to Azure SQL Database or writing to Azure SQL Data Warehouse, you may use Active Directory authentication. This is not supported in sources running in a Forwarding Agent.
Caution: Updating .jar files in striim/lib may cause problems with other Striim functions that use them. If you encounter problems, delete the files added in step 3, restore the files in striim/lib.backup, and restart Striim.
Move the following files from striim/lib to striim/lib.backup:
adal4j-1.0.0.jar
gson-2.8.9.jar
oauth2-oidc-sdk-9.32.jar or oauth2-oidc-sdk-9.7.jar
Download the following from mvnrepository.com and copy them to striim/lib:
adal4j-1.6.4.jar
gson-2.8.0.jar
javax.mail-1.4.5.jar
oauth2-oidc-sdk-6.5.jar
Restart Striim (see Starting and stopping Striim Platform).
To use Active Directory authentication, in the source or target adapter properties, for Username enter the fully qualified Active Directory user name, for Password enter its password, and use the following syntax for the Connection URL:
jdbc:sqlserver://<SQL Server IP address>:<port>;authentication=ActiveDirectoryPassword;
hostNameInCertificate=*.database.windows.net;
See Connecting using Azure Active Directory authentication for more information. Striim does not support the ActiveDirectoryIntegrated or ActiveDirectoryMSI properties.
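As a filled-in illustration of that syntax (the server name, port, and user below are placeholders, not values from your environment), the adapter properties might be:

Username: ad.user@example.onmicrosoft.com
Password: <that user's Azure AD password>
Connection URL: jdbc:sqlserver://myserver.database.windows.net:1433;authentication=ActiveDirectoryPassword;hostNameInCertificate=*.database.windows.net;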
Last modified: 2023-02-23\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/supporting-active-directory-authentication-for-azure.html", "title": "Supporting Active Directory authentication for Azure", "language": "en"}} {"page_content": "\n\nSupporting Active Directory authentication for SQL ServerSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformSupporting Active Directory authentication for SQL ServerPrevNextSupporting Active Directory authentication for SQL ServerWhen Striim or a Forwarding Agent is running in Windows, you may use Active Directory authentication.Copy sqljdbc_4.2\\enu\\auth\\x64\\sqljdbc_auth.dll from the extracted JDBC driver package to the c:\\windows\\system32\\ directory.Restart Striim (see Starting and stopping Striim Platform) or the Forwarding Agent (see Starting and stopping the Forwarding Agent).In the source or target adapter properties, enter any values for Username and Password (they will be ignored), and use the following syntax for the Connection URL:jdbc:sqlserver://<SQL Server IP address>:<port>;integratedSecurity=trueStriim or the Forwarding Agent will use the Active Directory credentials for the user account of its process to authenticate with SQL Server.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-15\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/supporting-active-directory-authentication-for-sql-server.html", "title": "Supporting Active Directory authentication for SQL Server", "language": "en"}} {"page_content": "\n\nSetting the REST API token timeoutSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformSetting the REST API token timeoutPrevNextSetting the REST API token timeoutBy default, REST API tokens change only when the Striim server is restarted. To set a timeout, add the SessionTimeoutSecs property to startUp.properties. The value is in seconds. For example, to set the timeout to one day, use SessionTimeoutSecs=86400. See Getting a REST API authentication token for more information.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
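Returning to the integrated Active Directory authentication setup for SQL Server described above, a filled-in Connection URL might look like the following (the host and port are placeholders; Username and Password can be any values, since the Windows credentials of the Striim or Forwarding Agent process are used):

jdbc:sqlserver://192.0.2.10:1433;integratedSecurity=true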
Last modified: 2022-07-15\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/setting-the-rest-api-token-timeout.html", "title": "Setting the REST API token timeout", "language": "en"}} {"page_content": "\n\nUsing TCP/IP instead of multicast UDPSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformUsing TCP/IP instead of multicast UDPPrevNextUsing TCP/IP instead of multicast UDPBy default, as discussed in System requirements, a Striim cluster uses multicast UDP to discover and add servers. Alternatively, you may configure a cluster manually using TCP/IP. This will have no impact on performance.To do that, on each server in the cluster, edit striim/conf/startUp.properties and set the following options:IsTcpIpCluster=true\nServerNodeAddress=<node list>The ServerNodeAddress\u00a0value is a comma-separated list of the IP addresses of all servers in the cluster, for example,\u00a0ServerNodeAddress=192.0.0.1,192.0.0.2.WarningThe\u00a0ServerNodeAddress setting must be the same on all servers in the cluster.You must also open port 5701 for TCP on each server.Then restart the servers, as described in Starting and stopping Striim Platform.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-14\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/using-tcp-ip-instead-of-multicast-udp.html", "title": "Using TCP/IP instead of multicast UDP", "language": "en"}} {"page_content": "\n\nLocking out users after failed loginsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformLocking out users after failed loginsPrevNextLocking out users after failed loginsTo enable user lockout after a certain number of login failures:Edit\u00a0striim/conf/startUp.properties.Delete the # before striim.node.MaxLoginRetries=5.Optionally, change the 5 to the number of login failures you want to allow.Save the file.Restart the server as described in\u00a0Starting and stopping Striim Platform.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
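For the TCP/IP clustering settings described above, a two-node example (the IP addresses are placeholders) would put these identical lines in striim/conf/startUp.properties on both servers:

IsTcpIpCluster=true
ServerNodeAddress=192.0.2.11,192.0.2.12

Remember to open port 5701 for TCP on each server, then restart the servers.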
Last modified: 2023-05-30\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/locking-out-users-after-failed-logins.html", "title": "Locking out users after failed logins", "language": "en"}} {"page_content": "\n\nChanging the web UI themeSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformChanging the web UI themePrevNextChanging the web UI themeBy default, some pages in the Striim web UI have dark backgrounds and some have light backgrounds. To make all pages have consistent backgrounds, on every server that hosts the web UI, edit\u00a0striim/webui/theme.conf and change the\u00a0Theme property to\u00a0light . Then restart Striim.NoteChanging the theme may require changes to dashboards. For example, labels created with the Value visualization that have white text and a transparent background are readable against the dark gray background of the default theme, but the text color or background must be changed to make them readable against the white background of the light theme.NoteUsers may have to clear their browser caches for a change to the theme to take effect.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2018-06-08\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-the-web-ui-theme.html", "title": "Changing the web UI theme", "language": "en"}} {"page_content": "\n\nSetting a web UI and console timeoutSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformSetting a web UI and console timeoutPrevNextSetting a web UI and console timeoutBy default, web UI and console sessions last indefinitely. To set a 30-minute timeout, add the following line to striim/conf/startUp.properties and restart the Striim server as described in Starting and stopping Striim Platform.SessionTimeoutSecs=1800You may set the timeout to any number of seconds.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-06-02\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/setting-a-web-ui-and-console-timeout.html", "title": "Setting a web UI and console timeout", "language": "en"}} {"page_content": "\n\nChanging the web UI portsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformChanging the web UI portsPrevNextChanging the web UI portsBy default, the Striim web UI uses port 9080 for http and port 9081 for https. If you wish to set the ports manually, edit striim/conf/startUp.properties and specify values for the following options:HttpPort=<port number>\nHttpsPort=<port number>\nThen restart the server, as described in Starting and stopping Striim Platform.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-07-19\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/changing-the-web-ui-ports.html", "title": "Changing the web UI ports", "language": "en"}} {"page_content": "\n\nNarrowing the ZeroMQ port rangeSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Striim PlatformNarrowing the ZeroMQ port rangePrevNextNarrowing the ZeroMQ port rangeStriim uses ZeroMQ for communication among servers and Forwarding Agents.By default, ZeroMQ uses the port range 49152-65535. To narrow that range, on each server in the cluster, edit striim/conf/startUp.properties, find the line #\u00a0ZMQ_PORT_RANGE=49152-65535, remove the #, and set the ports as desired. Do not set the low value below 49152 or the high value above 65535.Firewall settings:Striim servers must allow inbound access for this rangesystems on which Forwarding Agents are running must allow outbound access for this rangeThen restart the server, as described in Starting and stopping Striim Platform.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
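For instance, after removing the # the ZeroMQ line in startUp.properties might read as follows (the narrowed range is an arbitrary example within the allowed 49152-65535 window):

ZMQ_PORT_RANGE=49152-50151

This gives the cluster 1,000 ports for ZeroMQ traffic; the same range must be open inbound on the Striim servers and outbound on any Forwarding Agent hosts.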
Last modified: 2022-07-19\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/narrowing-the-zeromq-port-range.html", "title": "Narrowing the ZeroMQ port range", "language": "en"}} {"page_content": "\n\nConfiguring Kafka for persisted streamsSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationConfiguring Kafka for persisted streamsPrevNextConfiguring Kafka for persisted streamsKafka streams may be persisted to Striim's internal Kafka server or to an external Kafka server.Using Striim's internal Kafka serverWarningOn Windows, Zookeeper and Kafka do not shut down cleanly. (This is a well-known problem.) Before you restart Kafka, you must delete the files they leave in\u00a0c:\\tmp. Alternatively, look on stackoverflow.com for instructions on running Zookeeper and Kafka as services on Windows, or run an external Kafka server in a Linux virtual machine.The default property set for the internal Kafka server that is installed with Striim at Striim/Kafka\u00a0 is\u00a0Global.DefaultKafkaProperties:jmx.broker=localhost:9998, \nbootstrap.brokers=localhost:9092, \nzk.address=localhost:2181To change properties in an existing property set, see ALTER PROPERTYSET.If you installed Striim with the JAR installer as discussed in\u00a0Install Striim Platform for evaluation purposes and enabled Kafka in the setup wizard, it will start automatically. 
If you did not enable Kafka during installation, you may do so by re-running the setup wizard in the Striim/bin directory (WebConfig.exe for Windows, WebConfig for Mac, or WebConfig.sh for Linux).
If you installed Striim from a DEB, RPM, TGZ, or ZIP package as discussed in Running Striim in Ubuntu, Running Striim in CentOS, or Running Striim as a process, start Kafka as follows:
Open a terminal. Change to Striim/Kafka, and enter bin/zookeeper-server-start.sh config/zookeeper.properties (this will start Zookeeper).
Open another terminal. Change to Striim/Kafka and enter JMX_PORT=9998 bin/kafka-server-start.sh config/server.properties (this will start Kafka).
You can then persist Kafka streams using the default property set.
Using an external Kafka server
When using an external Kafka server, to handle Striim's maximum batch size the following entries in config/server.properties must have at least these minimum values:
message.max.bytes=43264200
replica.fetch.max.bytes=43264200
socket.request.max.bytes=104857600
To support persisting streams to an external server, use the Tungsten console to create a custom Striim property set using the following syntax:
CREATE PROPERTYSET <name> (
 bootstrap.brokers:'<bootstrap IP address>:<port>',
 jmx.broker:'<jmx IP address>:<port>',
 zk.address:'<zookeeper IP address>:<port>',
 partitions:'<number of partitions to use>',
 kafkaversion:'{0.8|0.9|0.10|0.11|2.1}');
If not specified, partitions defaults to 200. To change properties in an existing property set, see ALTER PROPERTYSET.
Using Kafka SASL (Kerberos) authentication with SSL encryption
To use SASL authentication with SSL encryption, do the following:
Get the files krb5.conf, principal.keytab, server.keystore.jks, and server.truststore.jks from your Kafka administrator and copy them to the Striim server's file system outside of the Striim program directory, for example, to /etc/striim/kafkaconf.
In the same directory, create the file jaas.conf, including the following lines, adjusting the keyTab path and principal to match your environment:
KafkaClient {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 doNotPrompt=true
 client=true
 keyTab="/etc/striim/kafkaconf/principal.keytab"
 principal="principal@example.com";
};
Add the following to Striim's Java environment, pointing java.security.auth.login.config at the jaas.conf file you just created:
JAVA_SYSTEM_PROPERTIES=" \
-Djava.security.krb5.conf='/etc/striim/kafkaconf/krb5.conf' \
-Djava.security.auth.login.config='/etc/striim/kafkaconf/jaas.conf' "
Include the following properties in your Kafka stream's property set or KafkaReader or KafkaWriter KafkaConfig, adjusting the paths to match your environment and using the passwords provided by your Kafka administrator.
For KafkaConfig, replace the commas with semicolons.
security.protocol=SASL_SSL,
sasl.kerberos.service.name=kafka,
ssl.truststore.location=/etc/striim/kafkaconf/server.truststore.jks,
ssl.truststore.password=password,
ssl.keystore.location=/etc/striim/kafkaconf/server.keystore.jks,
ssl.keystore.password=password,
ssl.key.password=password
Using Kafka SASL (Kerberos) authentication without SSL encryption
To use SASL authentication without SSL encryption, do the following:
Get the files krb5.conf and principal.keytab from your Kafka administrator and copy them to the Striim server's file system outside of the Striim program directory, for example, to /etc/striim/kafkaconf.
In the same directory, create the file jaas.conf, including the following lines, adjusting the keyTab path and principal to match your environment:
KafkaClient {
 com.sun.security.auth.module.Krb5LoginModule required
 useKeyTab=true
 storeKey=true
 doNotPrompt=true
 client=true
 keyTab="/etc/striim/kafkaconf/principal.keytab"
 principal="principal@example.com";
};
Add the following to Striim's Java environment, again pointing java.security.auth.login.config at the jaas.conf file you created:
JAVA_SYSTEM_PROPERTIES=" \
-Djava.security.krb5.conf='/etc/striim/kafkaconf/krb5.conf' \
-Djava.security.auth.login.config='/etc/striim/kafkaconf/jaas.conf' "
Include the following properties in your Kafka stream's property set or KafkaReader or KafkaWriter KafkaConfig. For KafkaConfig, replace the comma with a semicolon.
security.protocol=SASL_PLAINTEXT,
sasl.kerberos.service.name=kafka
Using Kafka SSL encryption without SASL (Kerberos) authentication
To use SSL encryption without SASL authentication, do the following:
Get the files server.truststore.jks and server.keystore.jks from your Kafka administrator and copy them to the Striim server's file system outside of the Striim program directory, for example, to /etc/striim/kafkaconf.
Include the following properties in your Kafka stream property set or KafkaReader or KafkaWriter KafkaConfig, adjusting the paths to match your environment and using the passwords provided by your Kafka administrator. For KafkaConfig, replace the commas with semicolons.
security.protocol=SSL,
ssl.truststore.location=/etc/striim/kafkaconf/server.truststore.jks,
ssl.truststore.password=password,
ssl.keystore.location=/etc/striim/kafkaconf/server.keystore.jks,
ssl.keystore.password=password,
ssl.key.password=password
Using Kafka without SASL (Kerberos) authentication or SSL encryption
To use neither SASL authentication nor SSL encryption, do not specify security.protocol in the KafkaReader or KafkaWriter KafkaConfig or in your Kafka stream's property set.
Additional properties for Kafka streams
Use these properties only in Kafka stream property sets, not with KafkaReader or KafkaWriter. Use single quotes around the values.
partitions (default '200'): the maximum number of Kafka partitions to be used if the stream is partitioned; if the stream is not partitioned, only one partition is used and this value is ignored
replication.factor (default '1'): the number of replicas to keep for each event; if this is greater than the number of brokers, creation of a topic will fail
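As a concrete sketch of the external-Kafka property set syntax above (the broker, JMX, and ZooKeeper addresses are placeholders for your own environment, and the version should match your Kafka cluster):

CREATE PROPERTYSET myExternalKafkaProps (
 bootstrap.brokers:'192.0.2.20:9092',
 jmx.broker:'192.0.2.20:9998',
 zk.address:'192.0.2.20:2181',
 partitions:'200',
 kafkaversion:'2.1');

A persisted Kafka stream can then reference this property set in place of the default Global.DefaultKafkaProperties.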
Last modified: 2023-05-25\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/configuring-kafka-for-persisted-streams.html", "title": "Configuring Kafka for persisted streams", "language": "en"}} {"page_content": "\n\nUpgrading StriimSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationUpgrading StriimPrevNextUpgrading StriimThe upgrade process varies depending on whether Striim is running in CentOS or Ubuntu and whether the metadata repository is hosted on Derby, Oracle, or PostgreSQL. It is important that you follow the instructions exactly and in order.If possible, use the simpler In-place upgrade method.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2020-01-09\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/upgrading-striim.html", "title": "Upgrading Striim", "language": "en"}} {"page_content": "\n\nIn-place upgradeSkip to main contentToggle navigationSelect versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveToggle navigationStriim Platform 4.2.0 Select versionStriim Cloud 4.2.0 (current release)Striim Platform 4.2.0 (current release)Striim Cloud 4.1.2Striim Platform 4.1.2Striim Cloud 4.1.0Striim Platform 4.1.0Striim documentation archiveprintToggle navigationStriim Platform 4.2.0 Installation and configurationUpgrading StriimIn-place upgradePrevNextIn-place upgradeAn in-place upgrade installs the new version of Striim while keeping the current repository database. You may do an in-place upgrade to Striim 4.2.0 from version 3.9.7 or later. From earlier versions, use the Export-import upgrade method.ImportantBefore following the instructions below, read the Release notes so you will be aware of any changes that may be required to your environment or applications before or after the upgrade.Release notesNoteIf any applications contain a router component, use the Export-import upgrade method. Alternatively, or if you also have applications with readers that have CDDL Capture enabled (which in this release are incompatible with the export-import upgrade):Export all applications containing router components.Upgrade using the in-place method.After upgrading, before running an application containing one or more router components, alter it to restore the router components, which will have been lost during the upgrade. Get the DDL statements for the router components from the exported TQL. 
For example:ALTER APPLICATION <namespace>.<application name>;\nCREATE OR REPLACE ROUTER myRouter INPUT FROM mySourceStream AS src \nCASE\n WHEN TO_INT(src.data[1]) < 150 THEN ROUTE TO stream_one,\n WHEN TO_INT(src.data[1]) >= 150 THEN ROUTE TO stream_two,\n WHEN meta(src,\"TableName\").toString() like 'QATEST.TABLE_%' THEN ROUTE TO stream_three,\nELSE ROUTE TO stream_else;\nALTER APPLICATION <namespace>.<application name> RECOMPILE;NoteIf upgrading from Striim 4.0.5 or earlier:If any applications use Salesforce Reader, use the Export-import upgrade method. Alternatively, use the in-place upgrade method, and if prior to upgrading the Salesforce Reader SObjects (shown as Objects in the UI) field value was anything other than %, after upgrading and before starting the application modify the application to restore the correct SObjects value (which will change to % as part of the upgrade).NoteIf upgrading from Striim 3.x:If any applications use Salesforce Reader, use the Export-import upgrade method.An in-place upgrade from 3.x will delete any WActionStore, exception store, or event table data persisted to Elasticsearch. If you wish to preserve such data, use the Export-import upgrade method.Back up the cluster as described in Backing up and restoring a cluster.Quiesce and undeploy all running applications with persisted streams. If upgrading from Striim 3.x, quiesce and undeploy all running applications with Spanner Writer targets. Stop and undeploy all other running and deployed applications.Open the Striim console and enter the following commands to stop the hidden monitoring and alerting applications:stop application Global.MonitoringSourceApp;\nundeploy application Global.MonitoringSourceApp;\nstop application Global.MonitoringProcessApp;\nundeploy application Global.MonitoringProcessApp;\nstop application System$Alerts.AlertingApp;\nundeploy application System$Alerts.AlertingApp;\ndrop application System$Alerts.AlertingApp cascade;\nexit;Stop all Forwarding Agents.If upgrading from 3.10.3.4 or earlier, delete all Elasticsearch data. 
For example, in Linux:
rm -rf /opt/striim/elasticsearch/data
Upgrade all servers.
On each server in the cluster, move the Striim configuration file to /opt/striim/conf/conf-backup (or, if you prefer, some other directory) so it will not be removed when you uninstall the old version of Striim:
sudo mkdir /opt/striim/conf/conf-backup
sudo mv /opt/striim/conf/startUp.properties /opt/striim/conf/conf-backup/
For CentOS: on each server in the cluster, download striim-node-4.2.0-Linux.rpm and enter the following commands (on CentOS 6, omit systemctl):
sudo systemctl stop striim-node
sudo rpm -e striim-node
sudo rpm -ivh striim-node-4.2.0-Linux.rpm
For Ubuntu: on each server in the cluster, download striim-node-4.2.0-Linux.deb and enter the following commands (on Ubuntu 14.04, omit systemctl):
sudo systemctl stop striim-node
sudo dpkg --remove striim-node
sudo dpkg -i striim-node-4.2.0-Linux.deb
On each server in the cluster, enter the following command to restore your Striim configuration (adjust as necessary if you backed up your files to a different location):
sudo cp /opt/striim/conf/conf-backup/startUp.properties /opt/striim/conf/
Update your metadata repository.
If upgrading from 4.0.5 or later, on one server, run the following commands:
sudo chmod +x /opt/striim/bin/upgrade.sh
sudo /opt/striim/bin/upgrade.sh -a forward
If upgrading from 4.0.4 or earlier, using a client for your metadata repository host, run the appropriate scripts in the /opt/striim/conf/ directory:
For Derby: enter the following commands, replacing ****** with the Derby password you set for the waction user as described in Changing the Derby password:
cd /opt/striim/conf
../derby/bin/ij
connect 'jdbc:derby://localhost/wactionrepos;user=waction;password=******;create=true';
run 'DefineMeteringReposDerby.sql';
run 'UpgradeMetadataReposDerbyTo420.sql';
exit;
For Oracle: DefineMeteringReposOracle.sql and UpgradeMetadataReposOracleTo420.sql - run these scripts as the Oracle user you created as described in Configuring Oracle to host the metadata repository.
For PostgreSQL: DefineMeteringReposPostgres.sql and UpgradeMetadataReposPostgresTo420.sql - run these scripts as the PostgreSQL user you created as described in Configuring PostgreSQL to host the metadata repository.
If you are upgrading from 3.9.8 or later, skip this step. On one server, run sudo su - striim /opt/striim/bin/sksConfig.sh and enter passwords for the Striim keystore and the admin and sys users. If hosting the metadata repository on Oracle or PostgreSQL, enter that password as well (see Configuring Striim's metadata repository). If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped: , . _ + : @ % / -
Copy sks.jks and sksKey.pwd from /opt/striim/conf/ on that server to /opt/striim/conf/ on all other servers and on all servers assign ownership of those files to Striim:
sudo chown striim sks.jks
sudo chown striim sksKey.pwd
Start all servers.
sudo systemctl start striim-node
If the metadata repository is hosted on PostgreSQL, reload any applications and dashboards you dropped in step 3.
Reload any open processors (see Loading and unloading open processors).
Upgrade and start all Forwarding Agents (see Upgrading Forwarding Agents).
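The copy in the keystore step can be done with whatever file-transfer tool you normally use; as a sketch using scp (the host name is a placeholder for each remaining server in the cluster):

scp /opt/striim/conf/sks.jks /opt/striim/conf/sksKey.pwd striim2.example.com:/opt/striim/conf/
ssh striim2.example.com "sudo chown striim /opt/striim/conf/sks.jks /opt/striim/conf/sksKey.pwd"

Repeat for each of the other servers before starting the nodes.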
Last modified: 2023-06-09\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/in-place-upgrade.html", "title": "In-place upgrade", "language": "en"}} {"page_content": "\n\nExport-import upgrade
You may use the following process to upgrade from Striim 3.8.4 or later. (For assistance upgrading from earlier versions, contact Striim support.)
Important: Before following the instructions below, read the Release notes so you will be aware of any changes that may be required to your environment or applications before or after the upgrade.
Note: If you have applications with readers that have CDDL Capture enabled, use the In-place upgrade method.
Last modified: 2023-06-06\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/export-import-upgrade.html", "title": "Export-import upgrade", "language": "en"}} {"page_content": "\n\nPreparing to upgrade and exporting the metadata
Important: Before following the instructions below, read the Release notes so you will be aware of any changes that may be required to your environment or applications before or after the upgrade.
Back up the cluster as described in Backing up and restoring a cluster.
On each server in the cluster, copy any .jar files you have added to Striim's lib directory to another location. These should be easy to identify since the files installed by Striim all have the same date.
Note: Starting in 3.9.6, a custom Cassandra driver is bundled with Striim, so do not copy any cassandra-cassandra-jdbc-wrapper-<version>.jar you installed in an earlier release.
Applications that have recovery enabled will pick up from the recovery checkpoint when they are restarted after the upgrade. In order for this to work correctly, after the upgrade they must be deployed on the same, unchanged deployment groups with the same options.
Make notes as necessary to duplicate your deployment scenarios.
In releases prior to 3.8.6, a flow that was deployed to a group with more than one agent was always deployed to all agents in the group, even when ON ONE was specified. This bug was fixed in 3.8.6, and deployment to agents will be ON ONE or ON ALL as specified in the DEPLOY command. Consequently, after the upgrade is complete, any applications with recovery enabled that were deployed to groups with more than one agent must be deployed ON ALL in order to pick up from the recovery checkpoint correctly.
Quiesce and undeploy all running applications with persisted streams. If upgrading from Striim 3.x, quiesce and undeploy all running applications with Spanner Writer targets. Stop and undeploy all other running and deployed applications.
Open the Striim console and enter the following commands to stop the hidden monitoring applications:
stop application Global.MonitoringSourceApp;
undeploy application Global.MonitoringSourceApp;
stop application Global.MonitoringProcessApp;
undeploy application Global.MonitoringProcessApp;
exit;
On each server in the cluster, stop the striim-node process:
In CentOS 6 or Ubuntu 14.04: sudo stop striim-node
In CentOS 7 or Ubuntu 16.04 or later: sudo systemctl stop striim-node
If the metadata repository is hosted on Derby, on the server where Derby is installed, stop it:
In CentOS 6 or Ubuntu 14.04: sudo stop striim-dbms
In CentOS 7 or Ubuntu 16.04 or later: sudo systemctl stop striim-dbms
Export the metadata.
If the metadata repository is hosted on Derby, on the server that hosts Derby, enter the following commands to export the metadata:
cd /opt/striim
sudo bin/tools.sh -A export -F export.json
If the metadata repository is hosted on Oracle, enter the following commands on any server:
cd /opt/striim
sudo bin/tools.sh -A export -F export.json -r oracle
If the metadata repository is hosted on PostgreSQL, enter the following commands on any server:
cd /opt/striim
sudo bin/tools.sh -A export -F export.json -r postgres
Move the Striim configuration file to /opt/striim/conf-backup (or, if you prefer, some other directory) so it will not be removed when you uninstall the old version of Striim (sks.jks and sksKey.pwd exist only in 3.9.8 and later):
sudo mkdir conf-backup
sudo mv conf/startUp.properties conf-backup
sudo mv conf/sks.jks conf-backup
sudo mv conf/sksKey.pwd conf-backup
If the metadata repository is hosted on Derby, skip this step. Remove the old repository tables (which you backed up in step 1):
If the metadata repository is hosted on Oracle, log in to sqlplus as the user created in Configuring Striim's metadata repository and run /opt/striim/conf/DropMetadataReposOracle.sql.
If the metadata repository is hosted on PostgreSQL, log in to psql as the user created in Configuring Striim's metadata repository and run /opt/striim/conf/DropMetadataReposPostgres.sql.
Continue with Upgrading in CentOS or Upgrading in Ubuntu.
Last modified: 2023-01-16\n", "metadata": {"source": "https://www.striim.com/docs/platform/en/preparing-to-upgrade-and-exporting-the-metadata.html", "title": "Preparing to upgrade and exporting the metadata", "language": "en"}} {"page_content": "\n\nUpgrading in CentOS
Complete the steps in Preparing to upgrade and exporting the metadata.
Stop all Forwarding Agents.
On each server in the cluster, enter the following commands to uninstall Striim (skip the dbms and samples commands if those packages are not installed on the node):
sudo rpm -e striim-node
sudo rpm -e striim-dbms
sudo rpm -e striim-samples
On each server in the cluster, download striim-node-4.2.0-Linux.rpm. If the metadata repository is hosted on Derby, download striim-dbms-4.2.0-Linux.rpm to the Derby host. Optionally, download striim-samples-4.2.0-Linux.rpm.
On each server in the cluster, install the node package: sudo rpm -ivh striim-node-4.2.0-Linux.rpm
Optionally, also install the sample applications: sudo rpm -ivh striim-samples-4.2.0-Linux.rpm
If the metadata repository is hosted on Derby, install it on the appropriate server: sudo rpm -ivh striim-dbms-4.2.0-Linux.rpm
Continue with Importing the metadata and completing the upgrade.
", "metadata": {"source": "https://www.striim.com/docs/platform/en/upgrading-in-centos.html", "title": "Upgrading in CentOS", "language": "en"}} {"page_content": "

Upgrading in Ubuntu

Complete the steps in Preparing to upgrade and exporting the metadata.
Stop all Forwarding Agents.
On each server in the cluster, enter the following commands to uninstall Striim (skip the dbms and samples commands if those packages are not installed on the node):
sudo dpkg -r striim-node
sudo dpkg -r striim-dbms
sudo dpkg -r striim-samples
On each server in the cluster, download striim-node-4.2.0-Linux.deb.
If the metadata repository is hosted on Derby, download striim-dbms-4.2.0-Linux.deb to the Derby host.
Optionally, download striim-samples-4.2.0-Linux.deb.
On each server in the cluster, install the node package: sudo dpkg -i striim-node-4.2.0-Linux.deb
Optionally, also install the sample applications: sudo dpkg -i striim-samples-4.2.0-Linux.deb
If the metadata repository is hosted on Derby, install it on the appropriate server: sudo dpkg -i striim-dbms-4.2.0-Linux.deb
Continue with Importing the metadata and completing the upgrade.
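To confirm which Striim packages are present on a node before and after these steps, you can query dpkg directly. This is a sketch only; it uses standard dpkg-query options and the package names from the commands above.
dpkg-query -W -f='${Package} ${Version} ${Status}\n' 'striim-*' 2>/dev/null
# After the upgrade, striim-node should show version 4.2.0 and status "install ok installed"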
", "metadata": {"source": "https://www.striim.com/docs/platform/en/upgrading-in-ubuntu.html", "title": "Upgrading in Ubuntu", "language": "en"}} {"page_content": "

Importing the metadata and completing the upgrade

Complete the steps in Upgrading in CentOS or Upgrading in Ubuntu.
If the metadata repository is hosted on Derby, skip this step.
If the metadata repository is hosted on Oracle, log in to sqlplus as the user created in Configuring Oracle to host the metadata repository and run the /opt/striim/conf/DefineMetadataReposOracle.sql and DefineMeteringReposOracle.sql scripts to create new repository tables.
If the metadata repository is hosted on PostgreSQL, log in to psql as the user created in Configuring PostgreSQL to host the metadata repository and run the /opt/striim/conf/DefineMetadataReposPostgres.sql and DefineMeteringReposPostgres.sql scripts to create new repository tables.
On each server in the cluster, enter the following command to copy your Striim configuration (adjust as necessary if you backed up your files to a different location):
cp /opt/striim/conf-backup/startUp.properties /opt/striim/conf
If you are upgrading from 3.9.8 or later, also enter the following commands:
cp /opt/striim/conf-backup/sks.jks /opt/striim/conf
cp /opt/striim/conf-backup/sksKey.pwd /opt/striim/conf
sudo chown striim /opt/striim/conf/sks.jks
sudo chown striim /opt/striim/conf/sksKey.pwd
If you are upgrading from 3.9.8 or later, skip this step. On each server in the cluster, edit /opt/striim/conf/startUp.properties and delete the lines for WAClusterPassword and WAAdminPassword.
If you are upgrading from 3.9.8 or later, skip this step. On one server, run sudo su - striim /opt/striim/bin/sksConfig.sh and enter passwords for the Striim keystore and the admin and sys users. If hosting the metadata repository on Oracle or PostgreSQL, enter that password as well (see Configuring Striim's metadata repository). If you are using a Bash or Bourne shell, characters other than letters, numbers, and the following punctuation marks must be escaped: , . _ + : @ % / -
Copy sks.jks and sksKey.pwd from /opt/striim/conf/ on that server to /opt/striim/conf/ on all other servers, and on all servers assign ownership of those files to Striim:
sudo chown striim sks.jks
sudo chown striim sksKey.pwd
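Copying sks.jks and sksKey.pwd to the other servers can be scripted. The following is a sketch only, not part of the documented procedure; it assumes SSH and SCP access from the server where you ran sksConfig.sh, sudo rights on the other servers, and placeholder host names.
cd /opt/striim/conf
for host in striim2 striim3; do                  # placeholder host names
  scp sks.jks sksKey.pwd "$host":/tmp/
  ssh "$host" "sudo mv /tmp/sks.jks /tmp/sksKey.pwd /opt/striim/conf/ && sudo chown striim /opt/striim/conf/sks.jks /opt/striim/conf/sksKey.pwd"
done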
Import the metadata. On the server where you exported the metadata when Preparing to upgrade and exporting the metadata, enter the following commands:
If the metadata repository is hosted on Derby:
cd /opt/striim
sudo bin/tools.sh -A import -F export.json -f <old version>
For example, if upgrading from 3.8.4:
sudo bin/tools.sh -A import -F export.json -f 3.8.4
If the metadata repository is hosted on Oracle:
cd /opt/striim
sudo bin/tools.sh -A import -F export.json -f <old version> -r oracle
If the metadata repository is hosted on PostgreSQL:
cd /opt/striim
sudo bin/tools.sh -A import -F export.json -f <old version> -r postgres
When import is complete, reboot that system to verify that Striim restarts automatically. Alternatively, to start without rebooting:
For CentOS 6.x or Ubuntu 14.04, enter the following commands. If the metadata repository is hosted on Derby, on the Derby host only:
sudo start striim-dbms
Wait ten seconds, then enter:
sudo start striim-node
sudo tail -F /opt/striim/logs/striim-node.log
For CentOS 7.x or Ubuntu 16.04 or later, enter the following commands. If the metadata repository is hosted on Derby, on the Derby host only:
sudo systemctl enable striim-dbms
sudo systemctl start striim-dbms
Wait ten seconds, then enter:
sudo systemctl enable striim-node
sudo systemctl start striim-node
sudo tail -F /opt/striim/logs/striim-node.log
When you see the message "Please go to ... to administer, or use console", Striim is running. Log in to verify that Striim is running, then reboot the other servers in the cluster, or start each server manually using the commands above.
Reload any open processors (see Loading and unloading open processors).
Upgrade and start all Forwarding Agents (see Upgrading Forwarding Agents).
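Instead of watching the tail output, you can poll the log for the startup message. This is a sketch only; the log path comes from the commands above and the timeout is arbitrary.
for i in $(seq 1 60); do                          # give up after about five minutes
  grep -q "Please go to" /opt/striim/logs/striim-node.log && { echo "Striim is running"; break; }
  sleep 5
done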
", "metadata": {"source": "https://www.striim.com/docs/platform/en/importing-the-metadata-and-completing-the-upgrade.html", "title": "Importing the metadata and completing the upgrade", "language": "en"}} {"page_content": "

What is Striim for BigQuery?

Striim for BigQuery is a fully-managed software-as-a-service tool for building data pipelines (see What is a Data Pipeline) to copy data from MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server to BigQuery in real time using change data capture (CDC).
Striim first copies all existing source data to BigQuery ("initial sync"), then transitions automatically to reading and writing new and updated source data ("live sync"). You can monitor the real-time health and progress of your pipelines, as well as view performance statistics as far back as 90 days.
Optionally, with some sources, Striim can also synchronize schema evolution. That is, when you add a table or column to, or drop a table from, the source database, Striim will update BigQuery to match. Sync will continue without interruption. (However, if a column is dropped from a source table, it will not be dropped from the corresponding BigQuery target table.) For more details, see Additional Settings.
When you launch Striim for BigQuery, we guide you through the configuration of your pipeline, including connecting to your BigQuery project, configuring your source, selecting the schemas and tables you want to sync to BigQuery, and choosing which settings to use for the pipeline.

", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/what-is-striim-for-bigquery-.html", "title": "What is Striim for BigQuery?", "language": "en"}} {"page_content": "

Supported sources

Striim for BigQuery supports the following sources:
MariaDB:
on-premise: MariaDB and MariaDB Galera Cluster versions compatible with MySQL 5.5 or later
Amazon RDS for MariaDB
MySQL:
on-premise: MySQL 5.5 and later versions
Amazon Aurora for MySQL
Amazon RDS for MySQL
Azure Database for MySQL
Cloud SQL for MySQL
Oracle Database (RAC is supported in all versions except Amazon RDS for Oracle):
on-premise:
11g Release 2 version 11.2.0.4
12c Release 1 version 12.1.0.2
12c Release 2 version 12.2.0.1
18c (all versions)
19c (all versions)
Amazon RDS for Oracle
PostgreSQL:
on-premise: PostgreSQL 9.4.x and later versions
Amazon Aurora for PostgreSQL
Amazon RDS for PostgreSQL
Azure Database for PostgreSQL
Cloud SQL for PostgreSQL
SQL Server:
on-premise:
SQL Server Enterprise versions 2008, 2012, 2014, 2016, 2017, and 2019
SQL Server Standard versions 2016, 2017, and 2019
Amazon RDS for SQL Server
Azure SQL Database Managed Instance

", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/connect_source-select.html", "title": "Supported sources", "language": "en"}} {"page_content": "

Set up your MariaDB source

You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.

For all MariaDB environments

An administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:
CREATE USER 'striim' IDENTIFIED BY '******';
GRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';
GRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';
GRANT SELECT ON *.* TO 'striim'@'%';
The caching_sha2_password authentication plugin is not supported in this release. The mysql_native_password plugin is required.
The REPLICATION privileges must be granted on *.*. This is a limitation of MySQL.
You may use any other valid name in place of striim.
Note that by default MySQL does not allow remote logins by root.
Replace ****** with a secure password.
You may narrow the SELECT statement to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.

On-premise MariaDB setup

See Activating the Binary Log.

On-premise MariaDB Galera Cluster setup

The following properties must be set on each server in the cluster:
binlog_format=ROW
log_bin=ON
log_slave_updates=ON
Server_id: see server_id
wsrep_gtid_mode=ON

Amazon RDS for MariaDB setup

Create a new parameter group for the database (see Creating a DB Parameter Group).
Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).
Reboot the database instance (see Rebooting a DB Instance).
In a database client, enter the following command to set the binlog retention period to one week:
call mysql.rds_set_configuration('binlog retention hours', 168);
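To confirm that these binary log settings are in effect, you can query the server variables from any database client. This is a sketch only; the connection details are placeholders and the variable names are the ones listed above.
mysql -h mariadb-host -u striim -p -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('binlog_format','log_bin','log_slave_updates','server_id','wsrep_gtid_mode');"
# Expect binlog_format=ROW, log_bin=ON, log_slave_updates=ON, a nonzero server_id,
# and wsrep_gtid_mode=ON on Galera Cluster nodes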
", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/prerequisite-checks-mariadb.html", "title": "Set up your MariaDB source", "language": "en"}} {"page_content": "

Set up your MySQL Source

You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.

For all MySQL environments

An administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:
CREATE USER 'striim' IDENTIFIED BY '******';
GRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';
GRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';
GRANT SELECT ON *.* TO 'striim'@'%';
The caching_sha2_password authentication plugin is not supported in this release. The mysql_native_password plugin is required.
The REPLICATION privileges must be granted on *.*. This is a limitation of MySQL.
You may use any other valid name in place of striim.
Note that by default MySQL does not allow remote logins by root.
Replace ****** with a secure password.
You may narrow the SELECT statement to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.

On-premise MySQL setup

Striim reads from the MySQL binary log. If your MySQL server is using replication, the binary log is already enabled; otherwise, it may be disabled. For on-premise MySQL, the property name for enabling the binary log, whether it is on or off by default, and how and where you change that setting vary depending on the operating system and your MySQL configuration, so for instructions see the binary log documentation for the version of MySQL you are running.
If the binary log is not enabled, Striim's attempts to read it will fail with errors such as the following:
2016-04-25 19:05:40,377 @ -WARN hz._hzInstance_1_striim351_0423.cached.thread-2 
com.webaction.runtime.Server.startSources (Server.java:2477) Failure in Starting 
Sources.
java.lang.Exception: Problem with the configuration of MySQL
Row logging must be specified.
Binary logging is not enabled.
The server ID must be specified.
Add --binlog-format=ROW to the mysqld command line or add binlog-format=ROW to your 
my.cnf file
Add --bin-log to the mysqld command line or add bin-log to your my.cnf file
Add --server-id=n where n is a positive number to the mysqld command line or add 
server-id=n to your my.cnf file
 at com.webaction.proc.MySQLReader_1_0.checkMySQLConfig(MySQLReader_1_0.java:605) ...

Amazon Aurora for MySQL setup

See How do I enable binary logging for my Amazon Aurora MySQL cluster?.

Amazon RDS for MySQL setup

Create a new parameter group for the database (see Creating a DB Parameter Group).
Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).
Reboot the database instance (see Rebooting a DB Instance).
In a database client, enter the following command to set the binlog retention period to one week:
call mysql.rds_set_configuration('binlog retention hours', 168);

Azure Database for MySQL setup

You must create a read replica to enable binary logging. See Read replicas in Azure Database for MySQL.
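In any of these MySQL environments, you can confirm the striim user's privileges before running the prerequisite checks. This is a sketch only; the host name is a placeholder.
mysql -h mysql-host -u striim -p -e "SHOW GRANTS FOR CURRENT_USER();"
# The output should include REPLICATION SLAVE, REPLICATION CLIENT, and SELECT ON *.*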
", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/prerequisite-checks-mysql.html", "title": "Set up your MySQL Source", "language": "en"}} {"page_content": "

Set up your Oracle source

You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.

Basic Oracle configuration tasks

The following tasks must be performed regardless of which Oracle version or variation you are using.

Enable archivelog:
Log in to SQL*Plus as the sys user.
Enter the following command:
select log_mode from v$database;
If the command returns ARCHIVELOG, it is enabled. Skip ahead to Enabling supplemental log data.
If the command returns NOARCHIVELOG, enter: shutdown immediate
Wait for the message ORACLE instance shut down, then enter: startup mount
Wait for the message Database mounted, then enter:
alter database archivelog;
alter database open;
To verify that archivelog has been enabled, enter select log_mode from v$database; again. This time it should return ARCHIVELOG.

Enable supplemental log data for all Oracle versions except Amazon RDS for Oracle:
Enter the following command:
select supplemental_log_data_min, supplemental_log_data_pk from v$database;
If the command returns YES or IMPLICIT, supplemental log data is already enabled. For example,
SUPPLEME SUP
-------- ---
YES NO
indicates that supplemental log data is enabled, but primary key logging is not. If it returns anything else, enter:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
To enable primary key logging for all tables in the database enter:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Alternatively, to enable primary key logging only for selected tables (do not use this approach if you plan to use wildcards in the OracleReader Tables property to capture change data from new tables):
ALTER TABLE <schema name>.<table name> ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Enable supplemental logging on all columns for all tables in the source database:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Alternatively, to enable only for selected tables:
ALTER TABLE <schema>.<table name> ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
To activate your changes, enter:
alter system switch logfile;

Enable supplemental log data when using Amazon RDS for Oracle:
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');
exec rdsadmin.rdsadmin_util.switch_logfile;
select supplemental_log_data_min, supplemental_log_data_pk from v$database;
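After running the ALTER DATABASE statements, you can confirm the result with the same v$database query used above. This is a sketch only; SUPPLEMENTAL_LOG_DATA_ALL is an additional standard v$database column not shown in the steps above.
sqlplus / as sysdba <<'EOF'
select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_all
from v$database;
EOF
# Expect YES for the minimal setting and for whichever of primary key / all-column logging you enabled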
Create an Oracle user with LogMiner privileges

You may use LogMiner with any supported Oracle version. Log in as sysdba and enter the following commands to create a role with the privileges required by the Striim OracleReader adapter and create a user with that role. You may give the role and user any names you like. Replace ******** with a strong password.

If using Oracle 11g, or 12c, 18c, or 19c without CDB

Enter the following commands:
create role striim_privs;
grant create session,
 execute_catalog_role,
 select any transaction,
 select any dictionary
 to striim_privs;
grant select on SYSTEM.LOGMNR_COL$ to striim_privs;
grant select on SYSTEM.LOGMNR_OBJ$ to striim_privs;
grant select on SYSTEM.LOGMNR_USER$ to striim_privs;
grant select on SYSTEM.LOGMNR_UID$ to striim_privs;
create user striim identified by ******** default tablespace users;
grant striim_privs to striim;
alter user striim quota unlimited on users;
For Oracle 12c or later, also enter the following command:
grant LOGMINING to striim_privs;
If using Database Vault, omit execute_catalog_role, and also enter the following commands:
grant execute on SYS.DBMS_LOGMNR to striim_privs;
grant execute on SYS.DBMS_LOGMNR_D to striim_privs;
grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to striim_privs;
grant execute on SYS.DBMS_LOGMNR_SESSION to striim_privs;

If using Oracle 12c, 18c, or 19c with PDB

Enter the following commands. Replace <PDB name> with the name of your PDB.
create role c##striim_privs;
grant create session,
execute_catalog_role,
select any transaction,
select any dictionary,
logmining
to c##striim_privs;
grant select on SYSTEM.LOGMNR_COL$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_OBJ$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_USER$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_UID$ to c##striim_privs;
create user c##striim identified by ******* container=all;
grant c##striim_privs to c##striim container=all;
alter user c##striim set container_data = (cdb$root, <PDB name>) container=current;
If using Database Vault, omit execute_catalog_role, and also enter the following commands:
grant execute on SYS.DBMS_LOGMNR to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_D to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_SESSION to c##striim_privs;
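Once the user exists, a quick login test exercises the new grants (select any dictionary allows reading v$database). This is a sketch only; the connect string is a placeholder, and in a CDB/PDB configuration you would connect as c##striim instead.
sqlplus striim/********@//oracle-host:1521/orcl <<'EOF'
select name, log_mode from v$database;
EOF
# A successful login that returns the database name and ARCHIVELOG confirms the basics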
", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/prerequisite-checks-oracle.html", "title": "Set up your Oracle source", "language": "en"}} {"page_content": "

Set up your PostgreSQL source

You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.
In all environments, make note of the slot name (the examples use striim_slot but you can use any name you wish). When creating a pipeline, provide the slot name on the Additional Settings page.
In all environments, if you plan to have Striim propagate PostgreSQL schema changes to BigQuery, you must create a tracking table in the source database. To create this table, run pg_ddl_setup_410.sql, which you can download from https://github.com/striim/doc-downloads. When creating a pipeline, provide the name of this table on the Additional Settings page.
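If you do plan to propagate schema changes, the downloaded script can be run with the standard psql client. This is a sketch only; the host, user, and database names are placeholders.
psql -h postgres-host -U postgres -d mydb -f pg_ddl_setup_410.sql   # creates the DDL tracking table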
PostgreSQL setup in Linux or Windows

This will require a reboot, so it should probably be performed during a maintenance window.
Install the wal2json plugin for the operating system of your PostgreSQL host as described in https://github.com/eulerto/wal2json.
Edit postgresql.conf, set the following options, and save the file. The values for max_replication_slots and max_wal_senders may be higher, but there must be one of each available for each instance of PostgreSQL Reader. max_wal_senders cannot exceed the value of max_connections.
wal_level = logical
max_replication_slots = 1
max_wal_senders = 1
Edit pg_hba.conf and add the following records, replacing <IP address> with the Striim server's IP address. If you have a multi-node cluster, add a record for each server that will run PostgreSQLReader:
host replication striim <IP address>/0 trust
local replication striim trust
Then save the file and restart PostgreSQL.
Enter the following command to create the replication slot (the location of the command may vary, but typically it is /usr/local/bin in Linux or C:\Program Files\PostgreSQL\<version>\bin\ in Windows):
pg_recvlogical -d mydb --slot striim_slot --create-slot -P wal2json
If you plan to use multiple instances of PostgreSQL Reader, create a separate slot for each.
Create a role with the REPLICATION attribute for use by Striim and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.
CREATE ROLE striim WITH LOGIN PASSWORD '******' REPLICATION;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;

PostgreSQL setup in Amazon Aurora with PostgreSQL compatibility

You must set up replication at the cluster level. This will require a reboot, so it should probably be performed during a maintenance window.
Amazon Aurora supports logical replication for PostgreSQL compatibility options 10.6 and later. Automated backups must be enabled. To set up logical replication, your AWS user account must have the rds_superuser role.
For additional information, see Using PostgreSQL logical replication with Aurora, Replication with Amazon Aurora PostgreSQL, and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.
Go to your RDS dashboard, select Parameter groups > Create parameter group.
For the Parameter group family, select the aurora-postgresql item that matches your PostgreSQL compatibility option (for example, for PostgreSQL 11, select aurora-postgresql11).
For Type, select DB Cluster Parameter Group.
For Group Name and Description, enter aurora-logical-decoding, then click Create.
Click aurora-logical-decoding.
Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.
In the left column, click Databases, then click the name of your Aurora cluster, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB cluster parameter group to aurora-logical-decoding, then scroll down to the bottom and click Continue.
Select Apply immediately > Modify DB instance. Wait for the cluster's status to change from Modifying to Available, then stop it, wait for the status to change from Stopping to Stopped, then start it.
In PSQL, enter the following command to create the replication slot:
SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');
Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.
CREATE ROLE striim WITH LOGIN PASSWORD '******';
GRANT rds_replication TO striim;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;
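In either of the environments above, you can confirm that the replication slot exists before creating the pipeline. This is a sketch only; the connection details are placeholders and the slot name comes from the examples above.
psql -h postgres-host -U postgres -d mydb -c "SELECT slot_name, plugin, active FROM pg_replication_slots WHERE slot_name = 'striim_slot';"
# One row with plugin = wal2json means the slot is ready for PostgreSQL Reader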
PostgreSQL setup in Amazon RDS for PostgreSQL

You must set up replication in the master instance. This will require a reboot, so it should probably be performed during a maintenance window.
Amazon RDS supports logical replication only for PostgreSQL version 9.4.9, higher versions of 9.4, and versions 9.5.4 and higher. Thus PostgreSQL Reader cannot be used with PostgreSQL 9.4 - 9.4.8 or 9.5 - 9.5.3 on Amazon RDS.
For additional information, see Best practices for Amazon RDS PostgreSQL replication and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.
Go to your RDS dashboard, select Parameter groups > Create parameter group, enter postgres-logical-decoding as the Group name and Description, then click Create.
Click postgres-logical-decoding.
Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.
In the left column, click Databases, then click the name of your database, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB parameter group to postgres-logical-decoding, then scroll down to the bottom and click Continue.
Select Apply immediately > Modify DB instance. Wait for the database's status to change from Modifying to Available, then reboot it and wait for the status to change from Rebooting to Available.
In PSQL, enter the following command to create the replication slot:
SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');
Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.
CREATE ROLE striim WITH LOGIN PASSWORD '******';
GRANT rds_replication TO striim;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;

PostgreSQL setup in Azure Database for PostgreSQL

Azure Database for PostgreSQL - Hyperscale is not supported because it does not support logical replication.
Set up logical decoding using wal2json:
for Azure Database for PostgreSQL, see Logical decoding
for Azure Database for PostgreSQL Flexible Server, see Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server
Get the values for the following properties, which you will need to set in Striim:
Username: see Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal
Password: the login password for that user
Replication slot name: see Logical decoding

PostgreSQL setup in Cloud SQL for PostgreSQL

Set up logical replication as described in Setting up logical replication and decoding.
Get the values for the following properties, which you will need to set in Striim:
Username: the name of the user created in Create a replication user
Password: the login password for that user
Replication slot name: the name of the slot created in the "Create replication slot" section of Receiving decoded WAL changes for change data capture (CDC)
Set up your SQL Server source

You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.

Striim reads SQL Server change data using the native SQL Server Agent utility. For more information, see About Change Data Capture (SQL Server) on msdn.microsoft.com.

If a table uses a SQL Server feature that prevents change data capture, MS SQL Reader cannot read it. For examples, see the "SQL Server 2014 (12.x) specific limitations" section of CREATE COLUMNSTORE INDEX (Transact-SQL).

In Azure SQL Database managed instances, change data capture requires collation to be set to the default SQL_Latin1_General_CP1_CI_AS at the server, database, and table level. If you need a different collation, it must be set at the column level.

Before Striim applications can use the MS SQL Reader adapter, a SQL Server administrator with the necessary privileges must do the following:
1. If it is not running already, start SQL Server Agent (see Start, Stop, or Pause the SQL Server Agent Service; if the agent is disabled, see Agent XPs Server Configuration Option).
2. Enable change data capture on each database to be read using the following commands:
for Amazon RDS for SQL Server:
EXEC msdb.dbo.rds_cdc_enable_db '<database name>';
for all others:
USE <database name>
EXEC sys.sp_cdc_enable_db
3. Create a SQL Server user for use by Striim. This user must use the SQL Server authentication mode, which must be enabled in SQL Server. (If only Windows authentication mode is enabled, Striim will not be able to connect to SQL Server.)
4. Grant the MS SQL Reader user the db_owner role for each database to be read using the following commands:
USE <database name>
EXEC sp_addrolemember @rolename=db_owner, @membername=<user name>

For example, to enable change data capture on the database mydb, create a user striim, and give that user the db_owner role on mydb:
USE mydb
EXEC sys.sp_cdc_enable_db
CREATE LOGIN striim WITH PASSWORD = 'passwd'
CREATE USER striim FOR LOGIN striim
EXEC sp_addrolemember @rolename=db_owner, @membername=striim

To confirm that change data capture is set up correctly, run the following command and verify that all tables to read are included in the output:
EXEC sys.sp_cdc_help_change_data_capture

Striim can capture change data from a secondary database in an Always On availability group. In that case, change data capture must be enabled on the primary database.
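Note that sp_cdc_help_change_data_capture lists only tables that have been enabled for change data capture. If a table you plan to read does not appear in the output, it may need table-level CDC enabled as well. The following is a rough sketch using SQL Server's standard sys.sp_cdc_enable_table procedure; the schema and table names are placeholders, not values from this guide:

USE mydb;
-- enable CDC for one source table; repeat per table to be read
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'orders',
    @role_name     = NULL;   -- NULL: do not gate access to the change data behind a role
-- confirm the table now appears in the CDC configuration
EXEC sys.sp_cdc_help_change_data_capture;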
Getting started

To get started with Striim for BigQuery:
Configure a BigQuery service account: assign roles to a service account and download a key Striim can use to connect
Configure your source: enable change data capture and create a user account for use by Striim
Choose how Striim will connect to your database: allow Striim to connect to your source database via an SSH tunnel, a firewall rule, or port forwarding
Subscribe to Striim for BigQuery: deploy Striim for BigQuery from the Google Cloud Marketplace
Create a Striim for BigQuery service: in the Striim Cloud Console, create a Striim for BigQuery service
Create a pipeline: follow the instructions on screen to create your first pipeline

Configure a BigQuery service account

To connect to BigQuery, Striim requires a service account (see Service Accounts) associated with the project in which the target tables will be created.

The service account must have the BigQuery Data Editor, BigQuery Job User, and BigQuery Resource Admin roles for the target tables (see BigQuery predefined Cloud IAM roles). Alternatively, you may create a custom role with the following permissions for the target tables (see BigQuery custom roles):
bigquery.datasets.create
bigquery.datasets.get
bigquery.jobs.create
bigquery.jobs.get
bigquery.jobs.list
bigquery.jobs.listAll
bigquery.tables.create
bigquery.tables.delete
bigquery.tables.get
bigquery.tables.getData
bigquery.tables.list
bigquery.tables.update
bigquery.tables.updateData
bigquery.tables.updateTag

After you have created the service account, download its key file (see Authenticating with a service account key file). You will upload this file when you create your first pipeline.
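If you prefer the command line to the Cloud Console, the service account, role grants, and key can also be created with gcloud. This is a minimal sketch, assuming a project named my-project and a service account named striim-bq; the role IDs shown are the usual identifiers for the predefined roles named above, so verify them against the BigQuery IAM documentation:

# create the service account
gcloud iam service-accounts create striim-bq --project=my-project --display-name="Striim for BigQuery"

# grant the predefined roles at the project level
for role in roles/bigquery.dataEditor roles/bigquery.jobUser roles/bigquery.resourceAdmin; do
  gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:striim-bq@my-project.iam.gserviceaccount.com" \
    --role="$role"
done

# download the key file to upload when creating your first pipeline
gcloud iam service-accounts keys create striim-bq-key.json \
  --iam-account=striim-bq@my-project.iam.gserviceaccount.com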
Configure your source

You must configure your source database before you can use it in a pipeline. The configuration details are different for each database type (Oracle, SQL Server, etc.). See the specific setup instructions under Supported sources.

Choose how Striim will connect to your database

If you have an SSH tunnel server (aka jump server) for your source database, that is the most secure way for Striim to connect to it. Alternatively, you can use a firewall rule or port forwarding.

Configure Striim to use your SSH tunnel

If you plan to use an SSH tunnel for Striim to connect to your source, set it up before creating your pipeline.
1. In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.
2. Next to the service, click ... and select Security.
3. Click Create New Tunnel and enter the following:
Name: choose a descriptive name for this tunnel
Jump Host: the IP address or DNS name of the jump server
Jump Host Port: the port number for the tunnel
Jump Host Username: the jump host operating system user account that Striim Cloud will use to connect
Database Host: the IP address or DNS name of the source database
Database Port: the port for the database
4. Click Create Tunnel. Do not click Start yet.
5. Under Public Key, click Get Key > Copy Key.
6. Add the copied key to your jump server's authorized keys file, then return to the Striim Cloud Security page and click Start. The SSH tunnel will now be available in the source settings.
7. Give the user specified for Jump Host Username the necessary file system permissions to access the key.
8. Under Tunnel Address, click Copy to get the string to provide as the host name.
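Step 6 above assumes you can paste the copied public key into the jump server's authorized keys file. The following is a minimal sketch of that step on a typical Linux jump server, run as the account named in Jump Host Username; the key text is a placeholder for the key copied from the Striim Cloud Console:

# on the jump server, as the Jump Host Username user
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# append the public key copied from Striim Cloud Console > Security > Get Key > Copy Key
echo 'ssh-rsa AAAA...copied-key... striim-tunnel' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys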
Configure your firewall to allow Striim to connect to your database

In the firewall or cloud security group for your source database, create an inbound port rule for Striim's IP address and the port for your database (typically 3306 for MariaDB or MySQL, 1521 for Oracle, 5432 for PostgreSQL, or 1433 for SQL Server). To get Striim's IP address:
1. In the Striim Cloud Console, go to the Services page.
2. Next to the service, click More and select Security.
3. Click the Copy IP icon next to the IP address.

Configure port forwarding in your router to allow Striim to connect to your database

In your router configuration, create a port forwarding rule for your database's port. If supported by your router, set the source IP to your database's IP address and the target IP to Striim's IP address. To get Striim's IP address:
1. In the Striim Cloud Console, go to the Services page.
2. Next to the service, click More and select Security.
3. Click the Copy IP icon next to the IP address.
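As a concrete illustration of the inbound rule described in the firewall section above, here is a sketch for a source database running behind an AWS security group; the security group ID, Striim IP address, and PostgreSQL port are placeholders for your own values:

# allow Striim's IP address to reach a PostgreSQL source on port 5432
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5432 \
  --cidr 203.0.113.10/32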
Subscribe to Striim for BigQuery

1. In the Google Cloud Platform Marketplace, search for Striim for BigQuery and click it.
2. Scroll down to Pricing, select a plan, and click Select.
3. Scroll down to Additional terms, check to accept them all, and click Subscribe.
4. Click Register with Striim Inc., then follow the instructions to complete registration. Make a note of the domain and password you enter.
5. When you receive the Striim for BigQuery | Activate your account email, open it and click the activation link.
6. Enter your email address and password, then click Sign up.
You will receive another email with information about your subscription.

Create a Striim for BigQuery service

After you receive the email confirming your subscription and log in:
1. Select the Services tab, click Create new, and under Striim for BigQuery click Create.
2. Enter a name for your service.
3. Select the appropriate region and virtual machine size.
4. Click Create.
When the service's status changes from Creating to Running:
If you will use an SSH tunnel to connect to your source, Configure Striim to use your SSH tunnel.
Otherwise, click Launch and Create a pipeline.

Create a pipeline

If your pipeline will connect to your source using an SSH tunnel, Configure Striim to use your SSH tunnel before you create the pipeline.

Striim uses a wizard interface to walk you through the steps required to create a pipeline.
The steps are:
connect to BigQuery
select source database type (Oracle, SQL Server, etc.)
connect to source database
select schemas to sync
select tables to sync
optionally, create table groups
optionally, revise additional settings
review settings and start pipeline
At most points in the wizard, you can save your work and come back to finish it later.

Connect to BigQuery

If you have already created one or more pipelines, you can select an existing BigQuery connection to write to the same BigQuery project. When you create your first pipeline, or if you want to use a different service account or connect to a different BigQuery project, you must do the following.
1. Upload the service account key you downloaded as described in Configure a BigQuery service account: click Browse, navigate to the downloaded file, and double-click to select it. This will automatically set the project ID and use it as the BigQuery connection name.
2. Click Next. Striim will verify that all the necessary permissions are available. If it reports any privileges are missing, reconfigure your service account as necessary, then try again.

Select your source

Choose the basic type of your source database:
MariaDB
MySQL
Oracle
PostgreSQL
SQL Server
See Supported sources for details of which versions and cloud services are supported.

Connect to your source database

The connection properties vary according to the source database type.
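Relating to the Connect to BigQuery step above: before uploading the key, you can optionally confirm that the downloaded service account key works and can see the target project. A minimal sketch using the Google Cloud SDK; the file and project names are placeholders:

# authenticate as the service account using the downloaded key
gcloud auth activate-service-account --key-file=striim-bq-key.json
# list datasets in the target project to confirm BigQuery access
bq ls --project_id=my-project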
Connect to MariaDB

When prompted by the wizard, enter the appropriate connection details.
Where is the database located? If your source is an Amazon RDS for MariaDB instance, select that, otherwise leave set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your MariaDB source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.
Source connection name: Enter a descriptive name, such as MariaDBConnection1.

Use SSL with on-premise MariaDB

1. Acquire a certificate in .pem format as described in MariaDB > Enterprise Documentation > Security > Data in-transit encryption > Enabling TLS on MariaDB Server.
2. Import the certificate into a custom Java truststore file (replace <file name> with the name of the certificate file):
keytool -importcert -alias MariaCACert -file <file name>.pem \
 -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
 -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Create a Java keystore using the client-keystore.p12 file:
keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
 -srcstorepass mypassword -destkeystore keystore.jks \
 -deststoretype JKS -deststorepass mypassword
5. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you specified in step 2
Trust certificate keystore password: enter the password you specified in step 2
Client certificate keystore URL: upload the keystore.jks file created in step 4
Client certificate keystore type: enter the store type you specified in step 4
Client certificate keystore password: enter the password you specified in step 4

Use SSL with Amazon RDS for MariaDB

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MariaCACert -file <file name>.pem \
 -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
 -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you specified in step 2
Trust certificate keystore password: enter the password you specified in step 2
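To confirm that the CA certificate actually landed in the truststore before uploading it to Striim, you can list the truststore contents. A minimal sketch using the same file name, alias, and password as the keytool commands above:

# list the entries in the truststore; the MariaCACert alias should appear as trustedCertEntry
keytool -list -keystore truststore.jks -storepass mypassword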
Connect to MySQL

When prompted by the wizard, enter the appropriate connection details.
Where is the database located? If your source is an Amazon Aurora for MySQL, Amazon RDS for MySQL, Azure Database for MySQL, or Cloud SQL for MySQL instance, select that, otherwise leave set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your MySQL source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.
Source connection name: Enter a descriptive name, such as MySQLConnection1.
Use SSL with on-premise MySQL or Cloud SQL for MySQL

1. Get an SSL certificate in .pem format from your database administrator.
2. Import the certificate into a custom Java truststore file:
keytool -importcert -alias MySQLServerCACert -file server-ca.pem \
 -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
 -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Create a Java keystore using the client-keystore.p12 file:
keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
 -srcstorepass mypassword -destkeystore keystore.jks \
 -deststoretype JKS -deststorepass mypassword
5. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you specified in step 2
Trust certificate keystore password: enter the password you specified in step 2
Client certificate keystore URL: upload the keystore.jks file created in step 4
Client certificate keystore type: enter the store type you specified in step 4
Client certificate keystore password: enter the password you specified in step 4

Use SSL with Amazon Aurora or RDS for MySQL

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MySQLServerCACert -file <file name>.pem \
 -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
 -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you specified in step 2
Trust certificate keystore password: enter the password you specified in step 2

Use SSL with Azure Database for MySQL

1. Download the certificate .pem file from Learn > Azure > MySQL > Configure SSL connectivity in your application to securely connect to Azure Database for MySQL > Step 1: Obtain SSL certificate.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MySQLServerCACert -file <file name>.pem \
 -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
 -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you specified in step 2
Trust certificate keystore password: enter the password you specified in step 2
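Once the certificate files are in place, you can test the TLS connection with the mysql command-line client before configuring Striim. This is a minimal sketch assuming the .pem files used above and a reachable host; on recent clients you may prefer --ssl-mode=VERIFY_CA over the individual options:

# connect over TLS and confirm a cipher is negotiated (an empty Ssl_cipher value means no TLS)
mysql --host=mydb.example.com --user=striim --password \
  --ssl-ca=server-ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem \
  -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"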
Connect to Oracle

When prompted by the wizard, enter the appropriate connection details.
Where is the database located? If your source is an Amazon RDS for Oracle instance, select that, otherwise leave at the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
SID: Enter the Oracle system ID or service name of the Oracle instance.
Username: Enter the name of the user you created when you Set up your Oracle source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.
Use pluggable database: Select if the source database is CDB or PDB.
Pluggable database name (appears if Use pluggable database is enabled): If the source database is PDB, enter its name here. If it is CDB, leave blank.
Source connection name: Enter a descriptive name, such as OracleConnection1.
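If you are unsure whether to enable Use pluggable database, you can check whether the source is a container database and list its PDBs from a SQL session. A minimal sketch using standard Oracle 12c+ dictionary views:

-- returns YES for a container database (CDB), NO otherwise
SELECT cdb FROM v$database;
-- when connected to a CDB root, list the pluggable databases
SELECT name, open_mode FROM v$pdbs;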
Use SSL with on-premise Oracle

1. Get an SSL certificate in .pem format from your database administrator.
2. Import the certificate into a custom Java truststore file:
keytool -importcert -alias OracleCACert -file server-ca.pem \
 -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
 -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Create a Java keystore using the client-keystore.p12 file:
keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
 -srcstorepass mypassword -destkeystore keystore.jks \
 -deststoretype JKS -deststorepass mypassword
5. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you specified in step 2
Trust certificate keystore password: enter the password you specified in step 2
Client certificate keystore URL: upload the keystore.jks file created in step 4
Client certificate keystore type: enter the store type you specified in step 4
Client certificate keystore password: enter the password you specified in step 4

Use SSL with Amazon RDS for Oracle

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias OracleCACert -file <file name>.pem \
 -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
 -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you specified in step 2
Trust certificate keystore password: enter the password you specified in step 2
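Before entering the connection details in the wizard, you can confirm the host, port, and SID or service name with a quick SQL*Plus connection test. A minimal sketch using EZConnect syntax; the host and service name are placeholders for your own values:

# EZConnect: user@//host:port/service_name (you will be prompted for the password)
sqlplus striim@//mydb.example.com:1521/ORCL
# at the SQL> prompt, a simple query such as SELECT banner FROM v$version; confirms the session works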
If your source is an Amazon Aurora for PostgreSQL, Amazon RDS for PostgreSQL, Azure Database for PostgreSQL, or Cloud SQL for PostgreSQL instance, select that; otherwise leave set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your PostgreSQL source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.
Source connection name: Enter a descriptive name, such as PostgreSQLConnection1.

Use SSL with on-premise PostgreSQL

1. Get the SSL certificate and key files (server-ca.pem, client-cert.pem, and client-key.pem) from your database administrator (see Creating Certificates in the PostgreSQL documentation).
2. Convert the client key to .pk8 format (replace <file name> with the name of the client key .pem file):
openssl pkcs8 -topk8 -inform PEM -outform DER -in <file name>.pem -out client.root.pk8 \
  -nocrypt
3. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Certificate: upload the client-cert.pem file from step 1
SSL Certificate Key: upload the client.root.pk8 file created in step 2
SSL Root Certificate: upload the server-ca.pem file from step 1

Use SSL with Amazon Aurora or RDS for PostgreSQL

1. Download the root certificate rds-ca-2019-root.pem (see AWS > Documentation > Amazon Relational Database Service (RDS) > Using SSL with a PostgreSQL DB instance).
2. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Root Certificate: upload the file you downloaded in step 1

Use SSL with Azure Database for PostgreSQL

1. Download the root certificate DigiCertGlobalRootG2.crt.pem (see Learn > Azure > PostgreSQL > Configure TLS connectivity in Azure Database for PostgreSQL - Single Server).
2. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Root Certificate: upload the file you downloaded in step 1

Use SSL with Cloud SQL for PostgreSQL

1. Download server-ca.pem, client-cert.pem, and client-key.pem from Google Cloud Platform (see Cloud SQL > Documentation > PostgreSQL > Guides > Configure SSL/TLS certificates).
2. Convert client-key.pem to .pk8 format:
openssl pkcs8 -topk8 -inform PEM -outform DER -in client-key.pem \
  -out client.root.pk8 -nocrypt
3. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Certificate: upload the client-cert.pem file you downloaded in step 1
SSL Certificate Key: upload the client.root.pk8 file you created in step 2
SSL Root Certificate: upload the server-ca.pem file you downloaded in step 1
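Once these files are in place, it can help to confirm outside of Striim that the database actually accepts encrypted connections before you fill in the properties above. The query below is a minimal sketch you might run from psql or any other PostgreSQL client after connecting with the same SSL settings; it assumes PostgreSQL 9.5 or later (where the pg_stat_ssl view exists) and reports only on the current session.

-- Shows whether the current connection is encrypted, and with which TLS version and cipher.
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();

If ssl comes back f (false), revisit the SSL Mode and certificate properties before creating the pipeline.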
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-10
", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/connect_source-new-postgresql.html", "title": "Connect to PostgreSQL", "language": "en"}} {"page_content": "

Connect to SQL Server

When prompted by the wizard, enter the appropriate connection details.
Where is the database located? If your source is an Amazon RDS for SQL Server instance or Azure SQL Managed Instance, select that, otherwise leave set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your SQL Server source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL, in which case you must also set the properties described in the detailed instructions below.
Source connection name: Enter a descriptive name, such as SQLServerConnection1.

Use SSL with on-premise SQL Server

1. Get an SSL certificate in .pem format from your database administrator.
2. Create the truststore.jks file (replace <file name> with the name of your certificate file):
keytool -importcert -alias MSSQLCACert -file <file name>.pem -keystore truststore.jks \
  -storepass mypassword
3. Set these properties in Striim:
Use trust server certificate: enable
Integrated security: enable to use Windows credentials
Trust store: upload the file you created in step 2
Trust store password: the password you specified for -storepass in step 2
Certificate host name: the hostNameInCertificate property value for the connection (see Learn > SQL > Connect > JDBC > Securing Applications > Using encryption > Understanding encryption support)

Use SSL with Amazon RDS for SQL Server

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MSSQLCACert -file <file name>.pem -keystore truststore.jks \
  -storepass mypassword
3. Set these properties in Striim:
Use trust server certificate: enable
Integrated security: enable to use Windows credentials
Trust store: upload the file you created in step 2
Trust store password: the password you specified for -storepass in step 2
Certificate host name: the hostNameInCertificate property value for the connection (see Learn > SQL > Connect > JDBC > Securing Applications > Using encryption > Understanding encryption support)

Use SSL with Azure SQL Managed Instance

Microsoft has changed its certificate requirements.
Documentation update in progress.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-10\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/connect_source-new-sql-server.html", "title": "Connect to SQL Server", "language": "en"}} {"page_content": "\n\nSelect schemas and tables to syncSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationCreate a pipelineSelect schemas and tables to syncPrevNextSelect schemas and tables to syncSelect schemasSelect the source schemas containing the tables you want Striim to sync to BigQuery, then click Next.The first time you run the pipeline, Striim will create target datasets in BigQuery with the same names as the selected schemas automatically.CautionThe only special character allowed in BigQuery dataset names is underscore (_), so the names of the selected schemas must not contain any other special characters. For more information, see BigQuery > Documentation > Guides > Creating datasets > Name datasets.Select tablesSelect the source tables you want Striim to sync to BigQuery, then click Next.The first time you run the pipeline, Striim will create tables with the same names, columns, and data types in the target datasets.For information on supported datatypes and how they are mapped between your source and BigQuery, see Data type support & mapping for schema conversion & evolution.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/select-schemas-and-tables-to-sync.html", "title": "Select schemas and tables to sync", "language": "en"}} {"page_content": "\n\nMask data (optional)Skip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationCreate a pipelineSelect schemas and tables to syncMask data (optional)PrevNextMask data (optional)Optionally, you may mask data from source columns of string data types so that in the target their values are replaced by xxxxxxxxxxxxxxx. The Transform Data drop-down menu will appear for columns for which this option is available. (This option is not available for key columns.)To mask a column's values, set Transform Data to Mask.Masked data will appear as xxxxxxxxxxxxxxx in the target:In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
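After the pipeline has run, a quick way to confirm that a column you set to Mask actually arrives masked is to count values that differ from the fixed mask literal. This is an illustrative sketch only, not part of Striim's tooling; mydataset.customers and email are placeholder names for a synced table and a masked string column.

-- Any nonzero unmasked_rows suggests the column was not set to Mask (or is a key column, where masking is unavailable).
SELECT
  COUNTIF(email != 'xxxxxxxxxxxxxxx') AS unmasked_rows,
  COUNT(*) AS total_rows
FROM mydataset.customers;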
Last modified: 2023-02-07\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/mask-data--optional-.html", "title": "Mask data (optional)", "language": "en"}} {"page_content": "\n\nSelect key columns (optional)Skip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationCreate a pipelineSelect schemas and tables to syncSelect key columns (optional)PrevNextSelect key columns (optional)The following is applicable only when you select Write continuous changes directly (MERGE mode) in Additional Settings. With the default setting Write continuous changes as audit records (APPEND ONLY mode), key columns are not required or used.By default, when a source table does not have a primary key, Striim will concatenate the values of all columns to create a unique identifier key for each row to identify it for UPDATE and DELETE operations. Alternatively, you may manually specify one or more columns to be used to create this key. Be sure that the selected column(s) will serve as a unique identifier; if two rows have the same key that may produce invalid results or errors.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-10-04\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/select-key-columns--optional-.html", "title": "Select key columns (optional)", "language": "en"}} {"page_content": "\n\nWhen target tables already existSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationCreate a pipelineWhen target tables already existPrevNextWhen target tables already existSelect what you want Striim to do when some of the tables selected to be synced already exist in the target:Proceed without the existing tables: Omit both source and target tables from the pipeline. Do not write any data from the source table to the target. (If all the tables already exist in the target, this option will not appear.)Add prefix and create new tables: Do not write to the existing target table. Instead, create a target table of the same name, but with a prefix added to distinguish it from the existing table.Drop and re-create the existing tables: Drop the existing target tables and any data they contain, create new target tables, and perform initial sync with the source tables. Choose this option if you were unsatisfied with an initial sync and are starting over.Use the existing tables: Retain the target table and its data, and add additional data from the source.Review the impact of the action to be taken. To proceed enter yes and click Confirm and continue.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
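Returning to Select key columns (optional) above: before relying on specific columns as the identifier key in MERGE mode, it can be worth checking that their combination really is unique, either in the source database or in BigQuery after an initial sync. The query below is a sketch under assumed placeholder names (mydataset.orders, order_date, customer_id); substitute the columns you plan to select.

-- Any rows returned mean the candidate key is not unique and may produce invalid results or errors.
SELECT
  CONCAT(CAST(order_date AS STRING), '|', CAST(customer_id AS STRING)) AS candidate_key,
  COUNT(*) AS occurrences
FROM mydataset.orders
GROUP BY candidate_key
HAVING COUNT(*) > 1;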
Last modified: 2023-02-07\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/when-target-tables-already-exist.html", "title": "When target tables already exist", "language": "en"}} {"page_content": "\n\nAdd the tables to table groups (optional)Skip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationCreate a pipelineAdd the tables to table groups (optional)PrevNextAdd the tables to table groups (optional)During Live Sync, Striim uses table groups to parallelize writing to BigQuery to increase throughput, with each table group mapped internally to a separate BigQuery writer. The batch policy for each table group is the minimum feasible LEE (end-to-end latency) for tables in the group. We recommend the following when you create your table groups:Place your sensitive tables into individual table groups. These tables may have high input change rates or low latency expectations. You can group tables with a few other tables that exhibit similar behavior or latency expectations.Place all tables that do not have a critical dependency on latency into the Default table group. By default, Striim places all new tables in a pipeline into the Default table group.Table groups are not used during Initial Sync.Create table groupsClick\u00a0Create a new table group, enter a name for the group, optionally change the batch policy, and click\u00a0Create.Select the\u00a0Default\u00a0group (or any other group, if you have already created one or more), select the tables you want to move to the new group, select\u00a0Move to, and click the new group.Repeat the previous steps to add more groups, then click\u00a0Next.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/add-the-tables-to-table-groups--optional-.html", "title": "Add the tables to table groups (optional)", "language": "en"}} {"page_content": "\n\nAdditional SettingsSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationCreate a pipelineAdditional SettingsPrevNextAdditional SettingsNoteThe options on this page cannot be changed after you start the pipeline.If BigQuery is running in a free trial or the Google Cloud Platform free tier, uncheck Use streaming mode.How do you want to write changes to BigQuery?Write continuous changes as audit records (default; also known as APPEND ONLY mode): BigQuery retains a record of every operation in the source. For example, if you insert a row, then update it, then delete it, BigQuery will have three records, one for each operation in the source (INSERT, UPDATE, and DELETE). This is appropriate when you want to be able to see the state of the data at various points in the past, for example, to compare activity for the current month with activity for the same month last year.With this setting, Striim will add two additional columns to each table, STRIIM_OPTIME, a timestamp for the operation, and STRIIM_OPTYPE, the event type, INSERT, UPDATE, or DELETE. 
Note: on initial sync with SQL Server, all STRIIM_OPTYPE values are SELECT. (A query sketch showing one way to use these columns appears below, following Review your settings and run the pipeline.)
Write continuous changes directly (also known as MERGE mode): BigQuery tables are synchronized with the source tables. For example, if you insert a row, then update it, BigQuery will have only the updated data. If you then delete the row from the source table, BigQuery will no longer have any record of that row.

How would you like to handle schema changes?
NOTE: In this release, this feature is not supported when the source is MariaDB, Oracle CDB, Oracle PDB, Oracle 19c, or any version of SQL Server, so for pipelines with those sources the option will not appear.
Select what you want Striim to do in BigQuery when a table or column is added to or a table is dropped from the source database:
Do not propagate changes and continue (default): Striim will take no action. Any data added to new tables will not be synced to BigQuery. Any data added to a new column will not be synced to BigQuery as the column will not exist in the target. Tables dropped from the source will continue to exist in BigQuery.
Pause the pipeline: Striim will pause the pipeline. After making any necessary changes in the source or BigQuery, restart the pipeline.
Propagate changes to BigQuery: In BigQuery, Striim will create a new table, add a column, or drop a table so that the target matches the source. Sync will continue without interruption. (Note that if a column is dropped from a source table, it will not be dropped from the corresponding BigQuery target table.)

Use streaming mode to write continuous changes to BigQuery?
Use streaming mode (enabled by default): We recommend using this method when you need low latency. If your uploads are infrequent (for example, once an hour), you may wish to disable streaming mode. If BigQuery is running in a free trial or the Google Cloud Platform free tier, disable this option or upgrade to a paid account.

What is your PostgreSQL replication slot?
Appears only when your source is PostgreSQL. Enter the name of the slot you created or chose in Set up your PostgreSQL source. Note that you cannot use the same slot in two pipelines; each must have its own slot.

Schema evolution tracker table
If for "How would you like to handle schema changes" you selected Propagate changes to BigQuery, enter the name of the table you created or chose in Set up your PostgreSQL source.
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-10
", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/additional_settings-settings.html", "title": "Additional Settings", "language": "en"}} {"page_content": "

Review your settings and run the pipeline

If everything on this page looks right, click Run the pipeline. Otherwise, click Back as many times as necessary to return to any settings you want to change.
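Relating to the APPEND ONLY mode described under Additional Settings above: one common pattern is to reconstruct the current state of a table from its audit records using the STRIIM_OPTIME and STRIIM_OPTYPE columns that Striim adds in that mode. The BigQuery query below is a sketch only; mydataset.orders and order_id are placeholder names, and order_id stands in for whatever uniquely identifies a row in your source table.

-- Keep only the most recent operation per source row, then drop rows whose last operation was a DELETE.
SELECT * EXCEPT(rn)
FROM (
  SELECT
    t.*,
    ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY STRIIM_OPTIME DESC) AS rn
  FROM mydataset.orders AS t
)
WHERE rn = 1
  AND STRIIM_OPTYPE != 'DELETE';

If you need this view of the data most of the time, MERGE mode may be the simpler choice.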
Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-09-27\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/review.html", "title": "Review your settings and run the pipeline", "language": "en"}} {"page_content": "\n\nMonitor pipelinesSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationMonitor pipelinesPrevNextMonitor pipelinesThe pipeline's Monitor tab displays a performance graph for the most-recent hour, 24 hours, or 90 days. \"Read freshness\" and \"Write freshness\" report the time that has passed since the last read and write.Click View performance > View performance to see statistics about individual tables.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-10-05\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/monitor-pipelines.html", "title": "Monitor pipelines", "language": "en"}} {"page_content": "\n\nManage pipelinesSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationManage pipelinesPrevNextManage pipelinesYou can perform several management functions on existing pipelines:Remove tables from the pipeline: On the pipeline's Overview page, select Manage tables in pipeline from the menu, select the tables you want to remove from the pipeline, and click Remove. The table and existing data will remain in the BigQuery target dataset , but no additional data will be added from the source.Pause a pipeline: On Striim for BigQuery's Overview page, select Pause from the pipeline's menu. Data will stop being synced from source to target until you resume the pipeline.We recommend that you pause a pipeline before taking its source database offline. Otherwise, its connection may time out and the pipeline will require repair.Resume a pipeline: On Striim for BigQuery's Overview page, select Resume from the pipeline's menu.Delete a pipeline: On Striim for BigQuery's Overview page, select Delete Pipeline from the pipeline's menu. Sync will stop and the pipeline will be deleted, but the previously synced data will remain in BigQuery.Repair errors in a pipeline: If a pipeline encounters a potentially recoverable error, a Repair button will appear on the Overview page.Click Repair to see the error.Click Retry to attempt repair. If repair fails, Contact Striim support.If the error is on the target side, you may also have a Remove table option. Clicking that will remove the table that is causing the problem from the pipeline and restart it.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2022-11-02\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/manage-pipelines.html", "title": "Manage pipelines", "language": "en"}} {"page_content": "\n\nUsing the Striim Cloud ConsoleSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationUsing the Striim Cloud ConsolePrevNextUsing the Striim Cloud ConsoleThe Striim Cloud Console lets you perform various tasks related to your Striim for BigQuery service.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-06\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/using-the-striim-cloud-console.html", "title": "Using the Striim Cloud Console", "language": "en"}} {"page_content": "\n\nAdd usersSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationUsing the Striim Cloud ConsoleAdd usersPrevNextAdd usersIn the Striim Cloud Console, go to the Users page and click Invite User.Enter the new user's email address, select the appropriate role (see the text of the drop-down for details), and click Save.Admin: can create pipelines, perform all functions on all pipelines, add users, and change users' rolesDeveloper: can create pipelines and perform all functions on all pipelinesViewer: can view information about pipelines and monitor themThe new user will receive an email with a signup link. Once they have signed up, their status will change from Pending to Activated. Once the new user is activated, go to the Users page, click the user's name, click Add service, select the service(s) you want them to have access to, and click Add.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-12-08\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/add-users.html", "title": "Add users", "language": "en"}} {"page_content": "\n\nInternal WIP: Using Okta with Striim CloudSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationUsing the Striim Cloud ConsoleInternal WIP: Using Okta with Striim CloudPrevNextInternal WIP: Using Okta with Striim CloudTKK see https://webaction.atlassian.net/browse/DEV-32325You can configure Striim Cloud to allow users in your organization to log in using Okta single sign-on (SSO). This requires you to create a SAML application in Okta, assign that application to your users, and configure Striim Cloud to trust Okta as an identity provider (IdP). For more information, see SAML app integrations.Create a SAML application in OktaLog in to your Okta account as an Admin user. 
Okta may ask you to log in again.
Click the Admin button in the top right corner.
In the left panel, select Applications > Applications, then click Create App Integration.
Choose SAML 2.0 as the sign on method, then click Next.
Name your application and click Next.
Enter the following for Single sign on URL: <your striim account url>/auth/saml/callback
Check the box Use this for Recipient URL and Destination URL.
Enter the following for Audience URI (SP Entity ID): <your-striim-account-url>
Create the following attribute statements for first name, last name, and email, then click Next.
Name | Name format | Value
firstName | Unspecified | user.firstName
lastName | Unspecified | user.lastName
email | Unspecified | user.email
Choose I'm an Okta customer adding an internal app and click Finish.
Go to the Sign On tab of the application you just created and click View SAML Setup Instructions.
Copy the values for the Identity Provider Single Sign-On URL, Identity Provider Issuer, and X.509 Certificate into a text editor. You'll need those to enable SAML authentication in your Striim Cloud account.
Assign the Okta application to your users from the Assignments tab of your app.

Configure Striim Cloud to trust Okta as an IdP

Log in to your Striim Cloud account and click User Profile at the top right of the screen.
Go to the Login & Provisioning tab.
In the Single sign-on section, paste the values from the Okta setup instructions page (see Step 12 above) into the SSO URL, IDP Issuer, and Public Certificate fields.
Click Update configuration.
Enable the Single sign-on (SSO) toggle near the top of the page.
Test logging in to your Striim Cloud account through Okta: log out, then go to the login page and select Sign in with SAML. You will be logged in through Okta. Users can access Striim Cloud through the Striim Cloud login page, or through the Okta tile named after your app.
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-09
", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/internal-wip--using-okta-with-striim-cloud.html", "title": "Internal WIP: Using Okta with Striim Cloud", "language": "en"}} {"page_content": "

Upgrade the instance size

To upgrade to a larger instance:
In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.
Next to the service, click More and select Resize VM.
Choose the type of instance you want to upgrade to, then click Next.
Click Update.
All the instance's pipelines will be paused and will resume after the upgrade is complete. If you encounter any problems with your pipelines after upgrading, contact Striim support.
© 2023 Striim, Inc. All rights reserved.
Last modified: 2022-10-06\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/upgrade-the-instance-size.html", "title": "Upgrade the instance size", "language": "en"}} {"page_content": "\n\nMonitor the service's virtual machineSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationUsing the Striim Cloud ConsoleMonitor the service's virtual machinePrevNextMonitor the service's virtual machineStriim Cloud Console's Monitor page displays recent CPU and memory utilization of the virtual machine that hosts Striim for BigQuery.In the Striim Cloud Console, go to the Services page.Next to the service, click More and select Monitor.Select the time range to display.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/monitor-the-service-s-virtual-machine.html", "title": "Monitor the service's virtual machine", "language": "en"}} {"page_content": "\n\nUsing the Striim for BigQuery REST APISkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationUsing the Striim Cloud ConsoleUsing the Striim for BigQuery REST APIPrevNextUsing the Striim for BigQuery REST APIDocumentation for this feature is not yet available.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/using-the-striim-for-bigquery-rest-api.html", "title": "Using the Striim for BigQuery REST API", "language": "en"}} {"page_content": "\n\nStop a serviceSkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationUsing the Striim Cloud ConsoleStop a servicePrevNextStop a serviceTo stop a Striim for BigQuery service and pause all its pipelines:In the Striim Cloud Console, go to the Services page.Next to the service, click More, select Stop, and click Stop.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/stop-a-service.html", "title": "Stop a service", "language": "en"}} {"page_content": "\n\nSecuritySkip to main contentToggle navigationToggle navigationStriim for BigQuery DocumentationprintToggle navigationStriim for BigQuery DocumentationSecurityPrevNextSecurityStriim for BigQuery is deployed as a Google Kubernetes Engine (GKE) pod on Google Cloud Platform (GCP). Much of the security for Striim for BigQuery, such as data encryption at rest, comes from the security infrastructure provided by GKE and GCP. 
For more information, see Google Kubernetes Engine (GKE) > Documentation > Guides > Security overview.
User metadata is stored in the GKE pod. This metadata can be accessed only by Striim DevOps personnel, and all such access generates an audit trail. Sensitive data including BigQuery service account keys, source database passwords, and SSL keys and passwords are not accessible to DevOps personnel.

Authentication
BigQuery authorizes access to resources based on a verified client identity. Striim for BigQuery uses the Google service account associated with your BigQuery project to access its API. You will grant required roles or permissions to your service account, and upload the service account key to Striim. See Connect to BigQuery for details on BigQuery roles and permissions.
Striim for BigQuery's default password policy enforces character variety and minimum length. Each individual user can change the password for their own account. Regardless of privilege level, no user account can manage the password for another account.

Access control
What users can access and do in Striim for BigQuery is controlled by roles. For more information, see Add users.

Encryption between services
All communication between your Striim Cloud Console and your Striim for BigQuery instances is encrypted using Transport Layer Security (TLS) 1.2.

REST API
REST API keys are specific to individual users and not accessible to other users or Striim DevOps personnel. An audit trail tracks all actions taken through the API for each user.
© 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02
", "metadata": {"source": "https://www.striim.com/docs/GCP/StriimForBigQuery/en/security.html", "title": "Security", "language": "en"}} {"page_content": "

What is Striim for Databricks?

Striim for Databricks is a fully-managed software-as-a-service tool for building data pipelines (see What is a Data Pipeline) to copy data from MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server to Databricks in real time using change data capture (CDC).
Striim first copies all existing source data to Databricks ("initial sync"), then transitions automatically to reading and writing new and updated source data ("live sync"). You can monitor the real-time health and progress of your pipelines, as well as view performance statistics as far back as 90 days.
Optionally, with some sources, Striim can also synchronize schema evolution. That is, when you add a table or column to, or drop a table from, the source database, Striim will update Databricks to match. Sync will continue without interruption. (However, if a column is dropped from a source table, it will not be dropped from the corresponding Databricks target table.) If your source supports this, How would you like to handle schema changes?
will appear among the Connect to Source properties.
When you launch Striim for Databricks, we guide you through the configuration of your pipeline, including connecting to your Databricks project, configuring your source, selecting the schemas and tables you want to sync to Databricks, and choosing which settings to use for the pipeline.
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-02
", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/what-is-striim-for-databricks-.html", "title": "What is Striim for Databricks?", "language": "en"}} {"page_content": "

Supported sources

Striim for Databricks supports the following sources:
MariaDB:
- on-premise: MariaDB 5.5 to 10.4 and compatible MariaDB Galera Cluster versions
- Amazon RDS for MariaDB
MySQL:
- on-premise: MySQL 5.5 and later versions
- Amazon Aurora for MySQL
- Amazon RDS for MySQL
- Azure Database for MySQL
- Cloud SQL for MySQL
Oracle Database (RAC is supported in all versions except Amazon RDS for Oracle):
- on-premise:
  - 11g Release 2 version 11.2.0.4
  - 12c Release 1 version 12.1.0.2
  - 12c Release 2 version 12.2.0.1
  - 18c (all versions)
  - 19c (all versions)
- Amazon RDS for Oracle
PostgreSQL:
- on-premise: PostgreSQL 9.4.x and later versions
- Amazon Aurora for PostgreSQL
- Amazon RDS for PostgreSQL
- Azure Database for PostgreSQL
- Cloud SQL for PostgreSQL
SQL Server:
- on-premise:
  - SQL Server Enterprise versions 2008, 2012, 2014, 2016, 2017, and 2019
  - SQL Server Standard versions 2016, 2017, and 2019
- Amazon RDS for SQL Server
- Azure SQL Database Managed Instance
© 2023 Striim, Inc. All rights reserved. Last modified: 2023-06-22
", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/connect_source-select.html", "title": "Supported sources", "language": "en"}} {"page_content": "

Set up your MariaDB source

You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.

For all MariaDB environments
An administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:
CREATE USER 'striim' IDENTIFIED BY '******';
GRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';
GRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';
GRANT SELECT ON *.* TO 'striim'@'%';
The caching_sha2_password authentication plugin is not supported in this release.
The mysql_native_password plugin is required.The REPLICATION privileges must be granted on *.*. This is a limitation of MySQL.You may use any other valid name in place of striim. Note that by default MySQL does not allow remote logins by root.Replace ****** with a secure password.You may narrow the SELECT statement to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.On-premise MariaDB setupSee Activating the Binary Log.On-premise MariaDB Galera Cluster setupThe following properties must be set on each server in the cluster:binlog_format=ROWlog_bin=ONlog_slave_updates=ONServer_id: see server_idwsrep_gtid_mode=ONAmazon RDS for MariaDB setupCreate a new parameter group for the database (see Creating a DB Parameter Group).Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).Reboot the database instance (see Rebooting a DB Instance).In a database client, enter the following command to set the binlog retention period to one week:call mysql.rds_set_configuration('binlog retention hours', 168);In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-03\n", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/prerequisite-checks-mariadb.html", "title": "Set up your MariaDB source", "language": "en"}} {"page_content": "\n\nSet up your MySQL SourceSkip to main contentToggle navigationSelect versionStriim for Databricks 1.2 (current release)Toggle navigationStriim for Databricks DocumentationSelect versionStriim for Databricks 1.2 (current release)printToggle navigationStriim for Databricks DocumentationSupported sourcesSet up your MySQL SourcePrevNextSet up your MySQL SourceYou must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.For all MySQL environmentsAn administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:CREATE USER 'striim' IDENTIFIED BY '******';\nGRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';\nGRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';\nGRANT SELECT ON *.* TO 'striim'@'%';The caching_sha2_password authentication plugin is not supported in this release. The mysql_native_password plugin is required.The REPLICATION privileges must be granted on *.*. This is a limitation of MySQL.You may use any other valid name in place of striim. Note that by default MySQL does not allow remote logins by root.Replace ****** with a secure password.You may narrow the SELECT statement to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.On-premise MySQL setupStriim reads from the MySQL binary log. 
If your MySQL server is using replication, the binary log is enabled, otherwise it may be disabled.For on-premise MySQL, the property name for enabling the binary log, whether it is one or off by default, and how and where you change that setting vary depending on the operating system and your MySQL configuration, so for instructions see the binary log documentation for the version of MySQL you are running.If the binary log is not enabled, Striim's attempts to read it will fail with errors such as the following:2016-04-25 19:05:40,377 @ -WARN hz._hzInstance_1_striim351_0423.cached.thread-2 \ncom.webaction.runtime.Server.startSources (Server.java:2477) Failure in Starting \nSources.\njava.lang.Exception: Problem with the configuration of MySQL\nRow logging must be specified.\nBinary logging is not enabled.\nThe server ID must be specified.\nAdd --binlog-format=ROW to the mysqld command line or add binlog-format=ROW to your \nmy.cnf file\nAdd --bin-log to the mysqld command line or add bin-log to your my.cnf file\nAdd --server-id=n where n is a positive number to the mysqld command line or add \nserver-id=n to your my.cnf file\n at com.webaction.proc.MySQLReader_1_0.checkMySQLConfig(MySQLReader_1_0.java:605) ...Amazon Aurora for MySQL setupSee How do I enable binary logging for my Amazon Aurora MySQL cluster?.Amazon RDS for MySQL setupCreate a new parameter group for the database (see Creating a DB Parameter Group).Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).Reboot the database instance (see Rebooting a DB Instance).In a database client, enter the following command to set the binlog retention period to one week:call mysql.rds_set_configuration('binlog retention hours', 168);Azure Database for MySQL setupYou must create a read replica to enable binary logging. See Read replicas in Azure Database for MySQL.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-03\n", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/prerequisite-checks-mysql.html", "title": "Set up your MySQL Source", "language": "en"}} {"page_content": "\n\nSet up your Oracle sourceSkip to main contentToggle navigationSelect versionStriim for Databricks 1.2 (current release)Toggle navigationStriim for Databricks DocumentationSelect versionStriim for Databricks 1.2 (current release)printToggle navigationStriim for Databricks DocumentationSupported sourcesSet up your Oracle sourcePrevNextSet up your Oracle sourceYou must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.Basic Oracle configuration tasksThe following tasks must be performed regardless of which Oracle version or variation you are using.Enable archivelog:Log in to SQL*Plus as the sys user.Enter the following command:select log_mode from v$database;If the command returns ARCHIVELOG, it is enabled. 
Skip ahead to\u00a0Enabling supplemental log data.If the command returns NOARCHIVELOG, enter: shutdown immediateWait for the message ORACLE instance shut down, then enter: startup mountWait for the message Database mounted, then enter:alter database archivelog;\nalter database open;To verify that archivelog has been enabled, enter select log_mode from v$database; again. This time it should return ARCHIVELOG.Enable supplemental log data for all Oracle versions except Amazon RDS for Oracle:Enter the following command:select supplemental_log_data_min, supplemental_log_data_pk from v$database;If the command returns YES or IMPLICIT, supplemental log data is already enabled. For example,\u00a0SUPPLEME SUP\n-------- ---\nYES NOindicates that supplemental log data is enabled, but primary key logging is not. If it returns anything else, enter:ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;To enable primary key logging for all tables in the database enter:\u00a0ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;Alternatively, to enable primary key logging only for selected tables (do not use this approach if you plan to use wildcards in the OracleReader Tables property to capture change data from new tables):ALTER TABLE <schema name>.<table name> ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;Enable supplemental logging on all columns for all tables in the source database:ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;Alternatively, to enable only for selected tables:ALTER TABLE <schema>.<table name> ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;To activate your changes, enter:alter system switch logfile;Enable supplemental log data when using Amazon RDS for Oracle:exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');\nexec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');\nexec rdsadmin.rdsadmin_util.switch_logfile;\nselect supplemental_log_data_min, supplemental_log_data_pk from v$database;Create an Oracle user with LogMiner privilegesYou may use LogMiner with any supported Oracle version.Log in as sysdba and enter the following commands to create a role with the privileges required by the Striim OracleReader adapter and create a user with that privilege. You may give the role and user any names you like. Replace ******** with a strong password.If using Oracle 11g, or 12c, 18c, or 19c without CDBIf using Oracle 11g, or 12c, 18c, or 19c without CDBEnter the following commands:create role striim_privs;\ngrant create session,\n execute_catalog_role,\n select any transaction,\n select any dictionary\n to striim_privs;\ngrant select on SYSTEM.LOGMNR_COL$ to striim_privs;\ngrant select on SYSTEM.LOGMNR_OBJ$ to striim_privs;\ngrant select on SYSTEM.LOGMNR_USER$ to striim_privs;\ngrant select on SYSTEM.LOGMNR_UID$ to striim_privs;\ncreate user striim identified by ******** default tablespace users;\ngrant striim_privs to striim;\nalter user striim quota unlimited on users;\nFor Oracle 12c or later, also enter the following command:grant LOGMINING to striim_privs;\nIf using Database Vault, omit execute_catalog_role, and also enter the following commands:grant execute on SYS.DBMS_LOGMNR to striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_D to striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_SESSION to striim_privs;If using Oracle 12c, 18c, or 19c with PDBEnter the following commands. 
Replace\u00a0<PDB name> with the name of your PDB.create role c##striim_privs;\ngrant create session,\nexecute_catalog_role,\nselect any transaction,\nselect any dictionary,\nlogmining\nto c##striim_privs;\ngrant select on SYSTEM.LOGMNR_COL$ to c##striim_privs;\ngrant select on SYSTEM.LOGMNR_OBJ$ to c##striim_privs;\ngrant select on SYSTEM.LOGMNR_USER$ to c##striim_privs;\ngrant select on SYSTEM.LOGMNR_UID$ to c##striim_privs;\ncreate user c##striim identified by ******* container=all;\ngrant c##striim_privs to c##striim container=all;\nalter user c##striim set container_data = (cdb$root, <PDB name>) container=current;\nIf using Database Vault, omit execute_catalog_role, and also enter the following commands:grant execute on SYS.DBMS_LOGMNR to c##striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_D to c##striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to c##striim_privs;\ngrant execute on SYS.DBMS_LOGMNR_SESSION to c##striim_privs;Create the QUIESCEMARKER tableIn the database to be read, create the following table:CREATE TABLE QUIESCEMARKER (source varchar2(100), \n status varchar2(100),\n sequence NUMBER(10),\n inittime timestamp, \n updatetime timestamp default sysdate, \n approvedtime timestamp, \n reason varchar2(100), \n CONSTRAINT quiesce_marker_pk PRIMARY KEY (source, sequence));\nALTER TABLE QUIESCEMARKER ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;In this section: Set up your Oracle sourceBasic Oracle configuration tasksCreate an Oracle user with LogMiner privilegesIf using Oracle 11g, or 12c, 18c, or 19c without CDBIf using Oracle 12c, 18c, or 19c with PDBCreate the QUIESCEMARKER tableSearch resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-05-03\n", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/prerequisite-checks-oracle.html", "title": "Set up your Oracle source", "language": "en"}} {"page_content": "\n\nSet up your PostgreSQL sourceSkip to main contentToggle navigationSelect versionStriim for Databricks 1.2 (current release)Toggle navigationStriim for Databricks DocumentationSelect versionStriim for Databricks 1.2 (current release)printToggle navigationStriim for Databricks DocumentationSupported sourcesSet up your PostgreSQL sourcePrevNextSet up your PostgreSQL sourceYou must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.In all environments, make note of the slot name (the examples use striim_slot but you can use any name you wish). When creating a pipeline, provide the slot name on the Additional Settings page.In all environments, if you plan to have Striim propagate PostgreSQL schema changes to Databricks, you must create a tracking table in the source database. To create this table, run pg_ddl_setup_410.sql, which you can download from https://github.com/striim/doc-downloads. 
When creating a pipeline, provide the name of this table on the Additional Settings page.

PostgreSQL setup in Linux or Windows
This will require a reboot, so it should probably be performed during a maintenance window.
Install the wal2json plugin for the operating system of your PostgreSQL host as described in https://github.com/eulerto/wal2json.
Edit postgresql.conf, set the following options, and save the file. The values for max_replication_slots and max_wal_senders may be higher, but there must be one of each available for each instance of PostgreSQL Reader. max_wal_senders cannot exceed the value of max_connections.
wal_level = logical
max_replication_slots = 1
max_wal_senders = 1
Edit pg_hba.conf and add the following records, replacing <IP address> with the Striim server's IP address. If you have a multi-node cluster, add a record for each server that will run PostgreSQLReader. Then save the file.
host replication striim <IP address>/0 trust
local replication striim trust
Restart PostgreSQL.
Enter the following command to create the replication slot (the location of the command may vary but typically is /usr/local/bin in Linux or C:\Program Files\PostgreSQL\<version>\bin\ in Windows):
pg_recvlogical -d mydb --slot striim_slot --create-slot -P wal2json
If you plan to use multiple instances of PostgreSQL Reader, create a separate slot for each.
Create a role with the REPLICATION attribute for use by Striim and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.
CREATE ROLE striim WITH LOGIN PASSWORD '******' REPLICATION;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;

PostgreSQL setup in Amazon Aurora with PostgreSQL compatibility
You must set up replication at the cluster level. This will require a reboot, so it should probably be performed during a maintenance window.
Amazon Aurora supports logical replication for PostgreSQL compatibility options 10.6 and later. Automated backups must be enabled. To set up logical replication, your AWS user account must have the rds_superuser role.
For additional information, see Using PostgreSQL logical replication with Aurora, Replication with Amazon Aurora PostgreSQL, and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.
Go to your RDS dashboard, select Parameter groups > Create parameter group.
For the Parameter group family, select the aurora-postgresql item that matches your PostgreSQL compatibility option (for example, for PostgreSQL 11, select aurora-postgresql11).
For Type, select DB Cluster Parameter Group.
For Group Name and Description, enter aurora-logical-decoding, then click Create.
Click aurora-logical-decoding.
Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.
In the left column, click Databases, then click the name of your Aurora cluster, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB cluster parameter group to aurora-logical-decoding, then scroll down to the bottom and click Continue.
Wait for the cluster's status to change from Modifying to Available, then stop it, wait for the status to change from Stopping to Stopped, then start it.In PSQL, enter the following command to create the replication slot:SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.CREATE ROLE striim WITH LOGIN PASSWORD '******';\nGRANT rds_replication TO striim;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;\nPostgreSQL setup in Amazon RDS for PostgreSQLYou must set up replication in the master instance. This will require a reboot, so it should probably be performed during a maintenance window.Amazon RDS supports logical replication only for PostgreSQL version 9.4.9, higher versions of 9.4, and versions 9.5.4 and higher. Thus PostgreSQLReader can not be used with PostgreSQL 9.4\u00a0- 9.4.8 or 9.5\u00a0- 9.5.3 on Amazon RDS.For additional information, see Best practices for Amazon RDS PostgreSQL replication and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.Go to your RDS dashboard, select Parameter groups > Create parameter group, enter posstgres-logical-decoding as the Group name and Description, then click Create.Click postgres-logical-decoding.Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.In the left column, click Databases, then click the name of your database, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB parameter group to postgres-logical-decoding, then scroll down to the bottom and click Continue.Select Apply immediately > Modify DB instance. Wait for the database's status to change from Modifying to Available, then reboot it and wait for the status to change from Rebooting to Available.In PSQL, enter the following command to create the replication slot:SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. 
Replace ****** with a strong password and (if necessary) public with the name of your schema.
CREATE ROLE striim WITH LOGIN PASSWORD '******';
GRANT rds_replication TO striim;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;

PostgreSQL setup in Azure Database for PostgreSQL

Azure Database for PostgreSQL - Hyperscale is not supported because it does not support logical replication.

Set up logical decoding using wal2json:
for Azure Database for PostgreSQL, see Logical decoding
for Azure Database for PostgreSQL Flexible Server, see Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server

Get the values for the following properties, which you will need to set in Striim:
Username: see Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal
Password: the login password for that user
Replication slot name: see Logical decoding

PostgreSQL setup in Cloud SQL for PostgreSQL

Set up logical replication as described in Setting up logical replication and decoding.

Get the values for the following properties, which you will need to set in Striim:
Username: the name of the user created in "Create a replication user"
Password: the login password for that user
Replication slot name: the name of the slot created in the "Create replication slot" section of Receiving decoded WAL changes for change data capture (CDC)
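Whichever PostgreSQL environment you use, you can confirm that the replication slot exists and that wal2json is producing change records before creating the pipeline. The following queries are a minimal sketch, assuming the slot is named striim_slot as in the examples above; run them in psql as a user with sufficient privileges.

-- List replication slots; the plugin column should show wal2json,
-- and active stays false until Striim connects to the slot.
SELECT slot_name, plugin, slot_type, active FROM pg_replication_slots;

-- Peek at pending changes without consuming them
-- (make a small test insert first if the result is empty).
SELECT lsn, data FROM pg_logical_slot_peek_changes('striim_slot', NULL, NULL);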
Set up your SQL Server source

You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.

Striim reads SQL Server change data using the native SQL Server Agent utility. For more information, see Learn > SQL > SQL Server > About Change Data Capture (SQL Server).

If a table uses a SQL Server feature that prevents change data capture, MS SQL Reader cannot read it. For examples, see the "SQL Server 2014 (12.x) specific limitations" section of CREATE COLUMNSTORE INDEX (Transact-SQL).

In Azure SQL Database managed instances, change data capture requires collation to be set to the default SQL_Latin1_General_CP1_CI_AS at the server, database, and table level. If you need a different collation, it must be set at the column level.

Before Striim applications can use the MS SQL Reader adapter, a SQL Server administrator with the necessary privileges must do the following:

1. If it is not running already, start SQL Server Agent (see Start, Stop, or Pause the SQL Server Agent Service; if the agent is disabled, see Agent XPs Server Configuration Option).
2. Enable change data capture on each database to be read using the following commands (for more information, see Learn / SQL / SQL Server / Enable and disable change data capture).
For Amazon RDS for SQL Server:
EXEC msdb.dbo.rds_cdc_enable_db '<database name>';
For all others:
USE <database name>
EXEC sys.sp_cdc_enable_db
3. Create a SQL Server user for use by Striim. This user must use the SQL Server authentication mode, which must be enabled in SQL Server. (If only Windows authentication mode is enabled, Striim will not be able to connect to SQL Server.)
4. Grant the MS SQL Reader user the db_owner role for each database to be read using the following commands:
USE <database name>
EXEC sp_addrolemember @rolename=db_owner, @membername=<user name>

For example, to enable change data capture on the database mydb, create a user striim, and give that user the db_owner role on mydb:
USE mydb
EXEC sys.sp_cdc_enable_db
CREATE LOGIN striim WITH PASSWORD = 'passwd'
CREATE USER striim FOR LOGIN striim
EXEC sp_addrolemember @rolename=db_owner, @membername=striim

To confirm that change data capture is set up correctly, run the following command and verify that all tables to read are included in the output:
EXEC sys.sp_cdc_help_change_data_capture

Striim can capture change data from a secondary database in an Always On availability group. In that case, change data capture must be enabled on the primary database.
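As an additional sanity check before creating the pipeline, you can confirm from T-SQL that SQL Server Agent is running and that CDC is enabled on the database. This is a minimal sketch, assuming a database named mydb; the Agent check is not available on some managed platforms.

-- Check that the SQL Server Agent service is running.
SELECT servicename, status_desc
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server Agent%';

-- Confirm that change data capture is enabled on the database.
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'mydb';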
Getting started

To get started with Striim for Databricks:

1. Configure Azure Databricks: create a user, assign privileges, get a personal access token, and get the JDBC URL.
2. Configure your source: enable change data capture and create a user account for use by Striim.
3. Choose how Striim will connect to your database: allow Striim to connect to your source database via an SSH tunnel, a firewall rule, or port forwarding.
4. Subscribe to Striim for Databricks: deploy Striim for Databricks from the Azure Marketplace.
5. Create a Striim for Databricks service: in the Striim Cloud Console, create a Striim for Databricks service.
6. Create a pipeline: follow the instructions on screen to create your first pipeline.

Configure Azure Databricks

Before you can create a Striim for Databricks pipeline, you must do the following:

1. Create an Azure Databricks user that Striim for Databricks will use to connect to the target (see Learn / Manage users / Add users to your Azure Databricks account).
2. Give that user the following privileges:
CREATE SCHEMA
if you are using Unity Catalog, also USE CATALOG (not required if you are using the Hive metastore)
3. Get a personal access token for that user, which you will need to provide when creating a pipeline. The following steps are subject to change by Microsoft, but they worked in April 2023:
a. In Azure Databricks, at the top right corner, click your login name, select User Settings, and click Generate new token.
b. In the Comment field, enter a description for the token (such as "for Striim"), optionally adjust its Lifetime, and click Generate.
c. Copy the token and keep it in a safe place. If you lose it, you will need to create a new token.
4. Get the JDBC URL for your Databricks cluster, which you will need to provide when creating a pipeline. The following steps are subject to change by Microsoft, but they worked in April 2023:
a. In Azure Databricks, at the top of the left navigation panel, select Data Science & Engineering.
b. In the left navigation panel, click Compute, then click the name of your cluster.
c. At the bottom of the page, expand Advanced options, then select the JDBC/ODBC tab.
d. Copy the JDBC URL (it starts with jdbc and ends with PWD=<personal-access-token>).
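Once the Databricks user and its privileges are in place, a quick way to confirm they are sufficient is to create and drop a throwaway schema as that user in a Databricks notebook or SQL editor. This is only a hedged sketch: my_catalog is a placeholder, and the USE CATALOG line applies only if you are on Unity Catalog.

-- Run as the user Striim will connect with.
USE CATALOG my_catalog;  -- omit if you are using the Hive metastore
CREATE SCHEMA IF NOT EXISTS striim_privilege_check;
SHOW SCHEMAS LIKE 'striim_privilege_check';
DROP SCHEMA striim_privilege_check;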
Configure your source

You must configure your source database before you can use it in a pipeline. The configuration details are different for each database type (Oracle, SQL Server, etc.). See the specific setup instructions under Supported sources.

Choose how Striim will connect to your database

If you have an SSH tunnel server (also known as a jump server) for your source database, that is the most secure way for Striim to connect to it. Alternatively, you can use a firewall rule or port forwarding.

Configure Striim to use your SSH tunnel

If you plan to use an SSH tunnel for Striim to connect to your source, set it up before creating your pipeline.

1. In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.
2. Next to the service, click ... and select Security.
3. Click Create New Tunnel and enter the following:
Name: choose a descriptive name for this tunnel
Jump Host: the IP address or DNS name of the jump server
Jump Host Port: the port number for the tunnel
Jump Host Username: the jump host operating system user account that Striim Cloud will use to connect
Database Host: the IP address or DNS name of the source database
Database Port: the port for the database
4. Click Create Tunnel. Do not click Start yet.
5. Under Public Key, click Get Key > Copy Key.
6. Add the copied key to the authorized keys file of the Jump Host Username account on your jump server, and give that user the file system permissions necessary to access the key.
7. Return to the Striim Cloud Security page and click Start. The SSH tunnel will now be available in the source settings.
8. Under Tunnel Address, click Copy to get the string to provide as the host name.
Configure your firewall to allow Striim to connect to your database

In the firewall or cloud security group for your source database, create an inbound port rule for Striim's IP address and the port for your database (typically 3306 for MariaDB or MySQL, 1521 for Oracle, 5432 for PostgreSQL, or 1433 for SQL Server). To get Striim's IP address:

1. In the Striim Cloud Console, go to the Services page.
2. Next to the service, click More and select Security.
3. Click the Copy IP icon next to the IP address.

Configure port forwarding in your router to allow Striim to connect to your database

In your router configuration, create a port forwarding rule for your database's port. See the documentation for your router for instructions.

For additional security, if supported by your router, set the source IP for the port forwarding rule to your database's IP address and the target IP to Striim's IP address. To get Striim's IP address:

1. In the Striim Cloud Console, go to the Services page.
2. Next to the service, click More and select Security.
3. Click the Copy IP icon next to the IP address.
Subscribe to Striim for Databricks

1. In the Azure Marketplace, search for Striim for Databricks and click it.
2. Click Get It Now, check the box to accept Microsoft's terms, and click Continue.
3. Select a plan, then click Subscribe.
4. Select one of your existing resource groups or create a new one, enter a name for this subscription, and click Review + subscribe.
5. Click Subscribe.
6. When you receive an activation email from Microsoft AppSource, open it and click Activate now.

Create a Striim for Databricks service

After you receive the email confirming your subscription and log in:

1. Select the Services tab, click Create new, and under Striim for Databricks click Create.
2. Enter a name for your service.
3. Select the appropriate region and virtual machine size.
4. Click Create.
5. When the service's status changes from Creating to Running:
If you will use an SSH tunnel to connect to your source, Configure Striim to use your SSH tunnel.
Otherwise, click Launch and Create a pipeline.
Create a pipeline

If your pipeline will connect to your source using an SSH tunnel, Configure Striim to use your SSH tunnel before you create the pipeline.

Striim uses a wizard interface to walk you through the steps required to create a pipeline. The steps are:

1. Connect to Databricks.
2. Select the source database type (Oracle, SQL Server, etc.).
3. Connect to the source database.
4. Select schemas to sync.
5. Select tables to sync.
6. Optionally, create table groups.
7. Optionally, revise additional settings.
8. Review settings and start the pipeline.

At most points in the wizard, you can save your work and come back to finish it later.

Connect to Databricks

If you have already created one or more pipelines, you can select an existing Databricks connection to write to the same Databricks database. When you create your first pipeline, or if you want to use a different service account or connect to a different Databricks database, you must do the following.

1. In the JDBC URL field, enter the JDBC URL for your cluster. Replace <personal access token> with the token for the Azure Databricks user Striim will use to connect to the target (see Configure Azure Databricks).
2. In the Personal Access Token field, enter the token for the Azure Databricks user Striim will use to connect to the target (see Configure Azure Databricks).
3. If you are using Databricks' Unity Catalog, specify the catalog name.
4. Specify a name for this Databricks connection.
5. Select how you want to write to Databricks:
Write continuous changes as audit records (default; also known as APPEND ONLY mode): Databricks retains a record of every operation in the source. For example, if you insert a row, then update it, then delete it, Databricks will have three records, one for each operation in the source (INSERT, UPDATE, and DELETE). This is appropriate when you want to be able to see the state of the data at various points in the past, for example, to compare activity for the current month with activity for the same month last year. With this setting, Striim will add two additional columns to each table: STRIIM_OPTIME, a timestamp for the operation, and STRIIM_OPTYPE, the event type (INSERT, UPDATE, or DELETE).
Note: on initial sync with SQL Server, all STRIIM_OPTYPE values are SELECT.
Write continuous changes directly (also known as MERGE mode): Databricks tables are synchronized with the source tables. For example, if you insert a row, then update it, Databricks will have only the updated data. If you then delete the row from the source table, Databricks will no longer have any record of that row.
6. Choose where you want to stage your data:
Databricks File System (default, not recommended): Events are staged to the native Databricks File System (DBFS), which has a 2 GB cap on storage that can cause file corruption. To work around that limitation, we strongly recommend using Azure Data Lake Storage instead.
Azure Data Lake Storage: Select this to use Azure Data Lake Storage (ADLS) Gen2. Specify the following properties:
Azure Account Access Key: the account access key from Storage accounts > <account name> > Access keys
Azure Account Name: the name of the Azure storage account for the blob container
Azure Container Name (optional): the blob container name from Storage accounts > <account name> > Containers. If it does not exist, it will be created. If you leave this blank, the container name will be striim-deltalakewriter-container.
7. Click Next. Striim will verify that all the necessary permissions are available. If it reports any are missing, reconfigure the Databricks user account associated with the personal access token, then try again.
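To make the difference between the two write modes concrete, here is a hedged illustration of what you might see in Databricks after the same row is inserted, updated, and then deleted in the source. The schema and table names (mydb.orders) and the id value are placeholders, not part of the product documentation.

-- Audit (APPEND ONLY) mode: one record per source operation, tagged with the Striim metadata columns.
SELECT id, amount, STRIIM_OPTYPE, STRIIM_OPTIME
FROM mydb.orders
WHERE id = 42
ORDER BY STRIIM_OPTIME;
-- Expected: three rows, with STRIIM_OPTYPE values INSERT, UPDATE, DELETE.

-- MERGE mode: the target mirrors the source, so after the delete the row is simply gone.
SELECT * FROM mydb.orders WHERE id = 42;
-- Expected: no rows.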
Select your source

Choose the basic type of your source database:
MariaDB
MySQL
Oracle
PostgreSQL
SQL Server

See Supported sources for details of which versions and cloud services are supported.

Connect to your source database

The connection properties vary according to the source database type.

Connect to MariaDB

When prompted by the wizard, enter the appropriate connection details.

Where is the database located? If your source is an Amazon RDS for MariaDB instance, select that; otherwise, leave it set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your MariaDB source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL.
See the detailed instructions below.
Source connection name: Enter a descriptive name, such as MariaDBConnection1.

Use SSL with on-premise MariaDB

1. Acquire a certificate in .pem format as described in MariaDB > Enterprise Documentation > Security > Data in-transit encryption > Enabling TLS on MariaDB Server.
2. Import the certificate into a custom Java truststore file:
keytool -importcert -alias MariaCACert -file <file name>.pem -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Create a Java keystore from the client-keystore.p12 file:
keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 -srcstorepass mypassword -destkeystore keystore.jks -deststoretype JKS -deststorepass mypassword
5. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you used in step 2
Trust certificate keystore password: enter the password you specified in step 2
Client certificate keystore URL: upload the keystore.jks file created in step 4
Client certificate keystore type: enter the store type you specified in step 4
Client certificate keystore password: enter the password you specified in step 4

Use SSL with Amazon RDS for MariaDB

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MariaCACert -file <file name>.pem -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you used in step 2
Trust certificate keystore password: enter the password you specified in step 2
Connect to MySQL

When prompted by the wizard, enter the appropriate connection details.

Where is the database located? If your source is an Amazon Aurora for MySQL, Amazon RDS for MySQL, Azure Database for MySQL, or Cloud SQL for MySQL instance, select that; otherwise, leave it set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your MySQL source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.
Source connection name: Enter a descriptive name, such as MySQLConnection1.
How would you like to handle schema changes? Choose the option appropriate for your workflow.
Do not propagate changes and continue (default): Striim will take no action. Any data added to new tables will not be synced to Databricks. Any data added to a new column will not be synced to Databricks, as the column will not exist in the target. Tables dropped from the source will continue to exist in Databricks.
Pause the pipeline: Striim will pause the pipeline. After making any necessary changes in the source or Databricks, restart the pipeline.
Propagate changes to Databricks: In Databricks, Striim will create a new table, add a column, or drop a table so that the target matches the source. Sync will continue without interruption.
(Note that if a column is dropped from a source table, it will not be dropped from the corresponding Databricks target table.)

Use SSL with on-premise MySQL or Cloud SQL for MySQL

1. Get an SSL certificate in .pem format from your database administrator.
2. Import the certificate into a custom Java truststore file:
keytool -importcert -alias MySQLServerCACert -file server-ca.pem -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Create a Java keystore from the client-keystore.p12 file:
keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 -srcstorepass mypassword -destkeystore keystore.jks -deststoretype JKS -deststorepass mypassword
5. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you used in step 2
Trust certificate keystore password: enter the password you specified in step 2
Client certificate keystore URL: upload the keystore.jks file created in step 4
Client certificate keystore type: enter the store type you specified in step 4
Client certificate keystore password: enter the password you specified in step 4

Use SSL with Amazon Aurora or RDS for MySQL

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MySQLServerCACert -file <file name>.pem -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you used in step 2
Trust certificate keystore password: enter the password you specified in step 2

Use SSL with Azure Database for MySQL

1. Download the certificate .pem file from Learn > Azure > MySQL > Configure SSL connectivity in your application to securely connect to Azure Database for MySQL > Step 1: Obtain SSL certificate.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MySQLServerCACert -file <file name>.pem -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you used in step 2
Trust certificate keystore password: enter the password you specified in step 2
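After the source connection is configured with SSL, you can confirm from the database side that the Striim session is actually encrypted. This is a minimal sketch; the first check works for both MySQL and MariaDB, the second assumes MySQL 5.7 or later with the performance schema enabled and the user name striim.

-- Run from a session opened with the same user and SSL options Striim uses.
-- A non-empty Ssl_cipher value means the connection is encrypted.
SHOW SESSION STATUS LIKE 'Ssl_cipher';

-- MySQL 5.7+: confirm which of the striim user's connections use SSL/TLS.
SELECT processlist_user, processlist_host, connection_type
FROM performance_schema.threads
WHERE processlist_user = 'striim';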
Connect to Oracle

When prompted by the wizard, enter the appropriate connection details.

Where is the database located? If your source is an Amazon RDS for Oracle instance, select that; otherwise, leave it set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
SID: Enter the Oracle system ID or service name of the Oracle instance.
Username: Enter the name of the user you created when you Set up your Oracle source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.
Use pluggable database: Select if the source database is CDB or PDB.
Pluggable database name (appears if Use pluggable database is enabled): If the source database is PDB, enter its name here. If it is CDB, leave blank.
Source connection name: Enter a descriptive name, such as OracleConnection1.
How would you like to handle schema changes? Choose the option appropriate for your workflow.
Note: If your source is Oracle CDB, Oracle PDB, or Oracle 19c, leave this set to the default, as this feature is not currently supported with those versions.
Do not propagate changes and continue (default): Striim will take no action. Any data added to new tables will not be synced to Databricks. Any data added to a new column will not be synced to Databricks, as the column will not exist in the target. Tables dropped from the source will continue to exist in Databricks.
Pause the pipeline: Striim will pause the pipeline. After making any necessary changes in the source or Databricks, restart the pipeline.
Propagate changes to Databricks: In Databricks, Striim will create a new table, add a column, or drop a table so that the target matches the source. Sync will continue without interruption.
(Note that if a column is dropped from a source table, it will not be dropped from the corresponding Databricks target table.)
Quiesce marker table: enter QUIESCEMARKER (the name of the table you created when you Set up your Oracle source).

Use SSL with on-premise Oracle

1. Get an SSL certificate in .pem format from your database administrator.
2. Import the certificate into a custom Java truststore file:
keytool -importcert -alias OracleCACert -file server-ca.pem -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Create a Java keystore from the client-keystore.p12 file:
keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 -srcstorepass mypassword -destkeystore keystore.jks -deststoretype JKS -deststorepass mypassword
5. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you used in step 2
Trust certificate keystore password: enter the password you specified in step 2
Client certificate keystore URL: upload the keystore.jks file created in step 4
Client certificate keystore type: enter the store type you specified in step 4
Client certificate keystore password: enter the password you specified in step 4

Use SSL with Amazon RDS for Oracle

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias OracleCACert -file <file name>.pem -keystore truststore.jks -storepass mypassword
3. Convert the client key and certificate files to PKCS#12:
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
4. Set these properties in Striim:
Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.
Trust certificate keystore URL: upload the truststore.jks file created in step 2
Trust certificate keystore type: enter the store type you used in step 2
Trust certificate keystore password: enter the password you specified in step 2
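If you enabled Connect using SSL for Oracle, you can confirm from a SQL*Plus or SQL Developer session opened with the same connection settings that the network protocol is in fact TCPS rather than TCP. This is only a hedged verification sketch, not part of the Striim setup itself.

-- Returns 'tcps' for an SSL/TLS connection, 'tcp' for an unencrypted one.
SELECT SYS_CONTEXT('USERENV', 'NETWORK_PROTOCOL') AS network_protocol FROM dual;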
Connect to PostgreSQL

When prompted by the wizard, enter the appropriate connection details.

Where is the database located? If your source is an Amazon Aurora for PostgreSQL, Amazon RDS for PostgreSQL, Azure Database for PostgreSQL, or Cloud SQL for PostgreSQL instance, select that; otherwise, leave it set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your PostgreSQL source.
Password: Enter the password associated with the specified user name.
Database name: Enter the name of the source database.
Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.
Name your source connection: Enter a descriptive name, such as PostgreSQLConnection1.
How would you like to handle schema changes? Choose the option appropriate for your workflow.
Do not propagate changes and continue (default): Striim will take no action. Any data added to new tables will not be synced to Databricks. Any data added to a new column will not be synced to Databricks, as the column will not exist in the target. Tables dropped from the source will continue to exist in Databricks.
Pause the pipeline: Striim will pause the pipeline. After making any necessary changes in the source or Databricks, restart the pipeline.
Propagate changes to Databricks: In Databricks, Striim will create a new table, add a column, or drop a table so that the target matches the source. Sync will continue without interruption. (Note that if a column is dropped from a source table, it will not be dropped from the corresponding Databricks target table.)
What is your PostgreSQL replication slot? Enter the name of the slot you created or chose in Set up your PostgreSQL source.
Note that you cannot use the same slot in two pipelines; each must have its own slot.

Use SSL with on-premise PostgreSQL

1. Get the server CA certificate, client certificate, and client key in .pem format from your database administrator (see Creating Certificates in the PostgreSQL documentation).
2. Convert the client key to .pk8 format (replace <file name> with the name of the client key .pem file):
openssl pkcs8 -topk8 -inform PEM -outform DER -in <file name>.pem -out client.root.pk8 -nocrypt
3. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Certificate: upload the client certificate (client-cert.pem) from step 1
SSL Certificate Key: upload the client.root.pk8 file created in step 2
SSL Root Certificate: upload the server CA certificate (server-ca.pem) from step 1

Use SSL with Amazon Aurora or RDS for PostgreSQL

1. Download the root certificate rds-ca-2019-root.pem (see AWS > Documentation > Amazon Relational Database Service (RDS) > Using SSL with a PostgreSQL DB instance).
2. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Root Certificate: upload the file you downloaded in step 1

Use SSL with Azure Database for PostgreSQL

1. Download the root certificate DigiCertGlobalRootG2.crt.pem (see Learn > Azure > PostgreSQL > Configure TLS connectivity in Azure Database for PostgreSQL - Single Server).
2. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Root Certificate: upload the file you downloaded in step 1

Use SSL with Cloud SQL for PostgreSQL

1. Download server-ca.pem, client-cert.pem, and client-key.pem from Google Cloud Platform (see Cloud SQL > Documentation > PostgreSQL > Guides > Configure SSL/TLS certificates).
2. Convert client-key.pem to .pk8 format:
openssl pkcs8 -topk8 -inform PEM -outform DER -in client-key.pem -out client.root.pk8 -nocrypt
3. Set these properties in Striim:
SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).
SSL Certificate: upload the client-cert.pem file you downloaded in step 1
SSL Certificate Key: upload the client.root.pk8 file you created in step 2
SSL Root Certificate: upload the server-ca.pem file you downloaded in step 1
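Once the PostgreSQL connection is up, you can verify from the server side both that the Striim session is encrypted and that the replication slot is now in use. This is a hedged sketch, assuming the role and slot names used earlier (striim and striim_slot); pg_stat_ssl requires PostgreSQL 9.5 or later.

-- Check whether sessions opened by the striim role are using SSL.
SELECT a.usename, a.client_addr, s.ssl, s.version, s.cipher
FROM pg_stat_activity a
JOIN pg_stat_ssl s ON s.pid = a.pid
WHERE a.usename = 'striim';

-- Confirm the replication slot is held by exactly one pipeline (active becomes true once Striim connects).
SELECT slot_name, active, active_pid FROM pg_replication_slots WHERE slot_name = 'striim_slot';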
Connect to SQL Server

When prompted by the wizard, enter the appropriate connection details.

Where is the database located? If your source is an Amazon RDS for SQL Server instance or Azure SQL Managed Instance, select that; otherwise, leave it set to the default.
Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).
Port: Enter the port for the specified host.
Username: Enter the name of the user you created when you Set up your SQL Server source.
Password: Enter the password associated with the specified user name.
Connect using SSL: Select if connecting to the source database using SSL, in which case you must also specify additional properties. See the detailed instructions below.
Source connection name: Enter a descriptive name, such as SQLServerConnection1.

Use SSL with on-premise SQL Server

1. Get an SSL certificate in .pem format from your database administrator.
2. Create the truststore.jks file (replace <file name> with the name of your certificate file):
keytool -importcert -alias MSSQLCACert -file <file name>.pem -keystore truststore.jks -storepass mypassword
3. Set these properties in Striim:
Use trust server certificate: enable
Integrated security: enable to use Windows credentials
Trust store: upload the file you created in step 2
Trust store password: the password you specified for -storepass in step 2
Certificate host name: the hostNameInCertificate property value for the connection (see Learn > SQL > Connect > JDBC > Securing Applications > Using encryption > Understanding encryption support)

Use SSL with Amazon RDS for SQL Server

1. Download the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.
2. Create the truststore.jks file (replace <file name> with the name of the file you downloaded):
keytool -importcert -alias MSSQLCACert -file <file name>.pem -keystore truststore.jks -storepass mypassword
3. Set these properties in Striim:
Use trust server certificate: enable
Integrated security: enable to use Windows credentials
Trust store: upload the file you created in step 2
Trust store password: the password you specified for -storepass in step 2
Certificate host name: the hostNameInCertificate property value for the connection (see Learn > SQL > Connect > JDBC > Securing Applications > Using encryption > Understanding encryption support)

Use SSL with Azure SQL Managed Instance

Microsoft has changed its certificate requirements. A documentation update is in progress.
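To confirm from the SQL Server side that the Striim connection is encrypted, you can inspect the session's connection metadata. This is a hedged sketch assuming the login name striim; run it while the pipeline (or a test connection using the same settings) is connected.

-- encrypt_option is TRUE for encrypted connections.
SELECT s.login_name, c.client_net_address, c.encrypt_option, c.auth_scheme
FROM sys.dm_exec_connections c
JOIN sys.dm_exec_sessions s ON s.session_id = c.session_id
WHERE s.login_name = 'striim';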
Select schemas and tables to sync

Select schemas

Select the source schemas containing the tables you want Striim to sync to Databricks, then click Next.

The first time you run the pipeline, Striim will automatically create target databases in Databricks with the same names as the selected schemas.

Note: In Databricks, you will find the schemas created by Striim in the Data section of the Data Science & Engineering persona, not the SQL persona. See Learn / Navigate the workspace / Use the sidebar for more information. If you did not specify a catalog name in the Databricks connection properties, the schemas will be in the default hive_metastore.

Select tables

Select the source tables you want Striim to sync to Databricks, then click Next.

The first time you run the pipeline, Striim will create tables with the same names, columns, and data types in the target datasets.

For information on supported data types and how they are mapped between your source and Databricks, see Data type support & mapping for schema conversion & evolution.

Mask data (optional)

Optionally, you may mask data from source columns of string data types so that in the target their values are replaced by xxxxxxxxxxxxxxx. The Transform Data drop-down menu will appear for columns for which this option is available. (This option is not available for key columns.)

To mask a column's values, set Transform Data to Mask. Masked data will appear as xxxxxxxxxxxxxxx in the target.
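After the first run, you can check from a Databricks notebook or SQL editor that the expected schemas and tables were created, including any masked columns. A hedged sketch: hive_metastore applies only if you did not specify a catalog, and myschema/mytable are placeholders for a synced schema and table.

-- List the schemas Striim created for the selected source schemas.
SHOW SCHEMAS IN hive_metastore;

-- Spot-check one synced table, including any masked string columns.
SELECT * FROM hive_metastore.myschema.mytable LIMIT 10;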
Select key columns (optional)

This option is available only if you selected Write continuous changes directly (MERGE mode) in the source properties. With the default setting Write continuous changes as audit records (APPEND ONLY mode), key columns are not required or used.

By default, when a source table does not have a primary key, Striim will concatenate the values of all columns to create a unique identifier key for each row to identify it for UPDATE and DELETE operations. Alternatively, you may manually specify one or more columns to be used to create this key. Be sure that the selected column(s) will serve as a unique identifier; if two rows have the same key, that may produce invalid results or errors. (See the duplicate-check sketch after the next section.)

When target tables already exist

Select what you want Striim to do when some of the tables selected to be synced already exist in the target:

Proceed without the existing tables: Omit both source and target tables from the pipeline. Do not write any data from the source table to the target. (If all the tables already exist in the target, this option will not appear.)
Add prefix and create new tables: Do not write to the existing target table. Instead, create a target table of the same name, but with a prefix added to distinguish it from the existing table.
Drop and re-create the existing tables: Drop the existing target tables and any data they contain, create new target tables, and perform initial sync with the source tables. Choose this option if you were unsatisfied with an initial sync and are starting over.
Use the existing tables: Retain the target table and its data, and add additional data from the source.

Review the impact of the action to be taken. To proceed, enter yes and click Confirm and continue.
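Before relying on manually selected key columns in MERGE mode, it is worth checking in the source database that the chosen columns really are unique. A hedged sketch in generic SQL; mytable, col1, and col2 are placeholders for your table and candidate key columns.

-- Any rows returned indicate duplicate keys, which could cause invalid results or errors during MERGE.
SELECT col1, col2, COUNT(*) AS duplicates
FROM mytable
GROUP BY col1, col2
HAVING COUNT(*) > 1;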
", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/when-target-tables-already-exist.html", "title": "When target tables already exist", "language": "en"}} {"page_content": "\n\nAdd the tables to table groups (optional)
During Live Sync, Striim uses table groups to parallelize writing to Databricks to increase throughput, with each table group mapped internally to a separate Databricks writer. The batch policy for each table group is the minimum feasible LEE (end-to-end latency) for tables in the group. We recommend the following when you create your table groups:
Place your sensitive tables into individual table groups. These tables may have high input change rates or low latency expectations. You can group tables with a few other tables that exhibit similar behavior or latency expectations.
Place all tables that do not have a critical dependency on latency into the Default table group. By default, Striim places all new tables in a pipeline into the Default table group.
Table groups are not used during Initial Sync.
Create table groups
Click Create a new table group, enter a name for the group, optionally change the batch policy, and click Create.
Select the Default group (or any other group, if you have already created one or more), select the tables you want to move to the new group, select Move to, and click the new group.
Repeat the previous steps to add more groups, then click Next.", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/add-the-tables-to-table-groups--optional-.html", "title": "Add the tables to table groups (optional)", "language": "en"}} {"page_content": "\n\nReview your settings and run the pipeline
If everything on this page looks right, click Run the pipeline. Otherwise, click Back as many times as necessary to return to any settings you want to change.
Note: In Databricks, you will find the schemas created by Striim in the Data section of the Data Science & Engineering persona, not the SQL persona. See Learn / Navigate the workspace / Use the sidebar for more information. If you did not specify a catalog name in the Databricks connection properties, the schemas will be in the default hive_metastore.
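After the pipeline has run, you can confirm the schemas Striim created from a Databricks SQL editor or notebook. This is just a convenience check; if you specified a catalog in the connection properties, use its name instead of hive_metastore:
SHOW SCHEMAS IN hive_metastore;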
", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/review.html", "title": "Review your settings and run the pipeline", "language": "en"}} {"page_content": "\n\nMonitor pipelines
The pipeline's Monitor tab displays a performance graph for the most recent hour, 24 hours, or 90 days. \"Read freshness\" and \"Write freshness\" report the time that has passed since the last read and write.
Click View performance > View performance to see statistics about individual tables.", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/monitor-pipelines.html", "title": "Monitor pipelines", "language": "en"}} {"page_content": "\n\nManage pipelines
You can perform several management functions on existing pipelines:
Remove tables from the pipeline: On the pipeline's Overview page, select Manage tables in pipeline from the menu, select the tables you want to remove from the pipeline, and click Remove. The table and existing data will remain in the Databricks target database, but no additional data will be added from the source.
Pause a pipeline: On Striim for Databricks's Overview page, select Pause from the pipeline's menu. Data will stop being synced from source to target until you resume the pipeline. We recommend that you pause a pipeline before taking its source database offline. Otherwise, its connection may time out and the pipeline will require repair.
Resume a pipeline: On Striim for Databricks's Overview page, select Resume from the pipeline's menu.
Delete a pipeline: On Striim for Databricks's Overview page, select Delete Pipeline from the pipeline's menu. Sync will stop and the pipeline will be deleted, but the previously synced data will remain in Databricks.
Repair errors in a pipeline: If a pipeline encounters a potentially recoverable error, a Repair button will appear on the Overview page. Click Repair to see the error, then click Retry to attempt repair. If repair fails, contact Striim support. If the error is on the target side, you may also have a Remove table option. Clicking that will remove the table that is causing the problem from the pipeline and restart the pipeline.
", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/manage-pipelines.html", "title": "Manage pipelines", "language": "en"}} {"page_content": "\n\nUsing the Striim Cloud Console
The Striim Cloud Console lets you perform various tasks related to your Striim for Databricks service.", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/using-the-striim-cloud-console.html", "title": "Using the Striim Cloud Console", "language": "en"}} {"page_content": "\n\nAdd users
In the Striim Cloud Console, go to the Users page and click Invite User. Enter the new user's email address, select the appropriate role (see the text of the drop-down for details), and click Save.
Admin: can create pipelines, perform all functions on all pipelines, add users, and change users' roles
Developer: can create pipelines and perform all functions on all pipelines
Viewer: can view information about pipelines and monitor them
The new user will receive an email with a signup link. Once they have signed up, their status will change from Pending to Activated. Once the new user is activated, go to the Users page, click the user's name, click Add service, select the service(s) you want them to have access to, and click Add.", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/add-users.html", "title": "Add users", "language": "en"}} {"page_content": "\n\nUsing Okta with Striim Cloud
You can configure Striim Cloud to allow users in your organization to log in using Okta single sign-on (SSO). This requires you to create a SAML application in Okta, assign that application to your users, and configure Striim Cloud to trust Okta as an identity provider (IdP). For more information, see SAML app integrations.
Create a SAML application in Okta
Log in to your Okta account as an Admin user.
Okta may ask you to log in again.
Click the Admin button in the top right corner.
In the left panel, select Applications > Applications, then click Create App Integration.
Choose SAML 2.0 as the sign-on method, then click Next.
Name your application and click Next.
Enter the following for Single sign on URL: <your striim account url>/auth/saml/callback
Check the box Use this for Recipient URL and Destination URL.
Enter the following for Audience URI (SP Entity ID): <your-striim-account-url>
Create the following attribute statements for first name, last name, and email, then click Next:
Name | Name format | Value
firstName | Unspecified | user.firstName
lastName | Unspecified | user.lastName
email | Unspecified | user.email
Choose I'm an Okta customer adding an internal app and click Finish.
Go to the Sign On tab of the application you just created and click View SAML Setup Instructions.
Copy the values for the Identity Provider Single Sign-On URL, Identity Provider Issuer, and X.509 Certificate into a text editor. You'll need those to enable SAML authentication in your Striim Cloud account.
Assign the Okta application to your users from the Assignments tab of your app.
Configure Striim Cloud to trust Okta as an IdP
Log into your Striim Cloud account and click User Profile at the top right of the screen.
Go to the Login & Provisioning tab.
In the Single sign-on section, paste the values you copied from the Okta SAML setup instructions into the SSO URL, IDP Issuer, and Public Certificate fields.
Click Update configuration.
Enable the Single sign-on (SSO) toggle near the top of the page.
Test logging in to your Striim Cloud account through Okta: log out, go to the login page, and select Sign in with SAML. You will be logged in through Okta. Users can access Striim Cloud through the Striim Cloud login page.", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/using-okta-with-striim-cloud.html", "title": "Using Okta with Striim Cloud", "language": "en"}} {"page_content": "\n\nUpgrade the instance size
To upgrade to a larger instance:
In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.
Next to the service, click More and select Resize VM.
Choose the type of instance you want to upgrade to, then click Next.
Click Update.
All the instance's pipelines will be paused and will resume after the upgrade is complete. If you encounter any problems with your pipelines after upgrading, contact Striim support.
", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/upgrade-the-instance-size.html", "title": "Upgrade the instance size", "language": "en"}} {"page_content": "\n\nMonitor the service's virtual machine
Striim Cloud Console's Monitor page displays recent CPU and memory utilization of the virtual machine that hosts Striim for Databricks.
In the Striim Cloud Console, go to the Services page.
Next to the service, click More and select Monitor.
Select the time range to display.", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/monitor-the-service-s-virtual-machine.html", "title": "Monitor the service's virtual machine", "language": "en"}} {"page_content": "\n\nUsing the Striim for Databricks REST API
Documentation for this feature is not yet available.", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/using-the-striim-for-databricks-rest-api.html", "title": "Using the Striim for Databricks REST API", "language": "en"}} {"page_content": "\n\nStop a service
To stop a Striim for Databricks service and pause all its pipelines:
In the Striim Cloud Console, go to the Services page.
Next to the service, click More, select Stop, and click Stop.
", "metadata": {"source": "https://www.striim.com/docs/Azure/StriimForDatabricks/en/stop-a-service.html", "title": "Stop a service", "language": "en"}} {"page_content": "\n\nWhat is Striim for Snowflake?
Striim for Snowflake is a fully managed software-as-a-service tool for building data pipelines (see What is a Data Pipeline) to copy data from MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server to Snowflake in real time using change data capture (CDC).
Striim first copies all existing source data to Snowflake (\"initial sync\"), then transitions automatically to reading and writing new and updated source data (\"live sync\"). You can monitor the real-time health and progress of your pipelines, as well as view performance statistics as far back as 90 days.
Optionally, with some sources, Striim can also synchronize schema evolution. That is, when you add a table or column to, or drop a table from, the source database, Striim will update Snowflake to match. Sync will continue without interruption. (However, if a column is dropped from a source table, it will not be dropped from the corresponding Snowflake target table.) For more details, see Additional Settings.
When you launch Striim for Snowflake, we guide you through the configuration of your pipeline, including connecting to your Snowflake project, configuring your source, selecting the schemas and tables you want to sync to Snowflake, and choosing which settings to use for the pipeline.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/what-is-striim-for-snowflake-.html", "title": "What is Striim for Snowflake?", "language": "en"}} {"page_content": "\n\nSupported sources
Striim for Snowflake supports the following sources:
MariaDB:
on-premise: MariaDB and MariaDB Galera Cluster versions compatible with MySQL 5.5 or later
Amazon RDS for MariaDB
MySQL:
on-premise: MySQL 5.5 and later versions
Amazon Aurora for MySQL
Amazon RDS for MySQL
Azure Database for MySQL
Cloud SQL for MySQL
Oracle Database (RAC is supported in all versions except Amazon RDS for Oracle):
on-premise:
11g Release 2 version 11.2.0.4
12c Release 1 version 12.1.0.2
12c Release 2 version 12.2.0.1
18c (all versions)
19c (all versions)
Amazon RDS for Oracle
PostgreSQL:
on-premise: PostgreSQL 9.4.x and later versions
Amazon Aurora for PostgreSQL
Amazon RDS for PostgreSQL
Azure Database for PostgreSQL
Cloud SQL for PostgreSQL
SQL Server:
on-premise:
SQL Server Enterprise versions 2008, 2012, 2014, 2016, 2017, and 2019
SQL Server Standard versions 2016, 2017, and 2019
Amazon RDS for SQL Server
Azure SQL Database Managed Instance
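If you are not sure which version your source is running, a standard version query can confirm it before you create the pipeline. These are generic statements, not Striim-specific; run the one that matches your source:
SELECT VERSION();              -- MariaDB, MySQL, PostgreSQL
SELECT banner FROM v$version;  -- Oracle
SELECT @@VERSION;              -- SQL Server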
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect_source-select.html", "title": "Supported sources", "language": "en"}} {"page_content": "\n\nSet up your MariaDB source
You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.
For all MariaDB environments
An administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:
CREATE USER 'striim' IDENTIFIED BY '******';
GRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';
GRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';
GRANT SELECT ON *.* TO 'striim'@'%';
The caching_sha2_password authentication plugin is not supported in this release. The mysql_native_password plugin is required.
The REPLICATION privileges must be granted on *.*. This is a limitation of MySQL.
You may use any other valid name in place of striim. Note that by default MySQL does not allow remote logins by root.
Replace ****** with a secure password.
You may narrow the SELECT grant to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.
On-premise MariaDB setup
See Activating the Binary Log.
On-premise MariaDB Galera Cluster setup
The following properties must be set on each server in the cluster:
binlog_format=ROW
log_bin=ON
log_slave_updates=ON
Server_id: see server_id
wsrep_gtid_mode=ON
Amazon RDS for MariaDB setup
Create a new parameter group for the database (see Creating a DB Parameter Group).
Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).
Reboot the database instance (see Rebooting a DB Instance).
In a database client, enter the following command to set the binlog retention period to one week:
call mysql.rds_set_configuration('binlog retention hours', 168);", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/prerequisite-checks-mariadb.html", "title": "Set up your MariaDB source", "language": "en"}} {"page_content": "\n\nSet up your MySQL Source
You must perform all setup tasks appropriate for your source environment before you can create a pipeline.
If any of these tasks are not complete, the corresponding prerequisite checks will fail.
For all MySQL environments
An administrator with the necessary privileges must create a user for use by the adapter and assign it the necessary privileges:
CREATE USER 'striim' IDENTIFIED BY '******';
GRANT REPLICATION SLAVE ON *.* TO 'striim'@'%';
GRANT REPLICATION CLIENT ON *.* TO 'striim'@'%';
GRANT SELECT ON *.* TO 'striim'@'%';
The caching_sha2_password authentication plugin is not supported in this release. The mysql_native_password plugin is required.
The REPLICATION privileges must be granted on *.*. This is a limitation of MySQL.
You may use any other valid name in place of striim. Note that by default MySQL does not allow remote logins by root.
Replace ****** with a secure password.
You may narrow the SELECT grant to allow access only to those tables needed by your application. In that case, if other tables are specified in the source properties for the initial load application, Striim will return an error that they do not exist.
On-premise MySQL setup
Striim reads from the MySQL binary log. If your MySQL server is using replication, the binary log is enabled; otherwise it may be disabled.
For on-premise MySQL, the property name for enabling the binary log, whether it is on or off by default, and how and where you change that setting vary depending on the operating system and your MySQL configuration, so for instructions see the binary log documentation for the version of MySQL you are running.
If the binary log is not enabled, Striim's attempts to read it will fail with errors such as the following:
2016-04-25 19:05:40,377 @ -WARN hz._hzInstance_1_striim351_0423.cached.thread-2
com.webaction.runtime.Server.startSources (Server.java:2477) Failure in Starting Sources.
java.lang.Exception: Problem with the configuration of MySQL
Row logging must be specified.
Binary logging is not enabled.
The server ID must be specified.
Add --binlog-format=ROW to the mysqld command line or add binlog-format=ROW to your my.cnf file
Add --bin-log to the mysqld command line or add bin-log to your my.cnf file
Add --server-id=n where n is a positive number to the mysqld command line or add server-id=n to your my.cnf file
 at com.webaction.proc.MySQLReader_1_0.checkMySQLConfig(MySQLReader_1_0.java:605) ...
Amazon Aurora for MySQL setup
See How do I enable binary logging for my Amazon Aurora MySQL cluster?.
Amazon RDS for MySQL setup
Create a new parameter group for the database (see Creating a DB Parameter Group).
Edit the parameter group, change binlog_format to row and binlog_row_image to full, and save the parameter group (see Modifying Parameters in a DB Parameter Group).
Reboot the database instance (see Rebooting a DB Instance).
In a database client, enter the following command to set the binlog retention period to one week:
call mysql.rds_set_configuration('binlog retention hours', 168);
Azure Database for MySQL setup
You must create a read replica to enable binary logging. See Read replicas in Azure Database for MySQL.
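Once the binary log is configured, you can verify from any MySQL client that the settings the error message above refers to are in place. These are standard MySQL statements, shown here only as a convenience check:
SHOW VARIABLES LIKE 'log_bin';        -- should return ON
SHOW VARIABLES LIKE 'binlog_format';  -- should return ROW
SELECT @@server_id;                   -- should return a positive number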
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/prerequisite-checks-mysql.html", "title": "Set up your MySQL Source", "language": "en"}} {"page_content": "\n\nSet up your Oracle source
You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.
Basic Oracle configuration tasks
The following tasks must be performed regardless of which Oracle version or variation you are using.
Enable archivelog:
Log in to SQL*Plus as the sys user.
Enter the following command:
select log_mode from v$database;
If the command returns ARCHIVELOG, archivelog is enabled. Skip ahead to enabling supplemental log data.
If the command returns NOARCHIVELOG, enter: shutdown immediate
Wait for the message ORACLE instance shut down, then enter: startup mount
Wait for the message Database mounted, then enter:
alter database archivelog;
alter database open;
To verify that archivelog has been enabled, enter select log_mode from v$database; again. This time it should return ARCHIVELOG.
Enable supplemental log data for all Oracle versions except Amazon RDS for Oracle:
Enter the following command:
select supplemental_log_data_min, supplemental_log_data_pk from v$database;
If the command returns YES or IMPLICIT, supplemental log data is already enabled. For example,
SUPPLEME SUP
-------- ---
YES NO
indicates that supplemental log data is enabled, but primary key logging is not. If it returns anything else, enter:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
To enable primary key logging for all tables in the database, enter:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Alternatively, to enable primary key logging only for selected tables (do not use this approach if you plan to use wildcards in the OracleReader Tables property to capture change data from new tables):
ALTER TABLE <schema name>.<table name> ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Enable supplemental logging on all columns for all tables in the source database:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Alternatively, to enable only for selected tables:
ALTER TABLE <schema>.<table name> ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
To activate your changes, enter:
alter system switch logfile;
Enable supplemental log data when using Amazon RDS for Oracle:
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');
exec rdsadmin.rdsadmin_util.switch_logfile;
select supplemental_log_data_min, supplemental_log_data_pk from v$database;
Create an Oracle user with LogMiner privileges
You may use LogMiner with any supported Oracle version.
Log in as sysdba and enter the following commands to create a role with the privileges required by the Striim OracleReader adapter and create a user with that role. You may give the role and user any names you like.
Replace ******** with a strong password.
If using Oracle 11g, or 12c, 18c, or 19c without CDB
Enter the following commands:
create role striim_privs;
grant create session,
 execute_catalog_role,
 select any transaction,
 select any dictionary
 to striim_privs;
grant select on SYSTEM.LOGMNR_COL$ to striim_privs;
grant select on SYSTEM.LOGMNR_OBJ$ to striim_privs;
grant select on SYSTEM.LOGMNR_USER$ to striim_privs;
grant select on SYSTEM.LOGMNR_UID$ to striim_privs;
create user striim identified by ******** default tablespace users;
grant striim_privs to striim;
alter user striim quota unlimited on users;
For Oracle 12c or later, also enter the following command:
grant LOGMINING to striim_privs;
If using Database Vault, omit execute_catalog_role, and also enter the following commands:
grant execute on SYS.DBMS_LOGMNR to striim_privs;
grant execute on SYS.DBMS_LOGMNR_D to striim_privs;
grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to striim_privs;
grant execute on SYS.DBMS_LOGMNR_SESSION to striim_privs;
If using Oracle 12c, 18c, or 19c with PDB
Enter the following commands. Replace <PDB name> with the name of your PDB and ******* with a strong password.
create role c##striim_privs;
grant create session,
execute_catalog_role,
select any transaction,
select any dictionary,
logmining
to c##striim_privs;
grant select on SYSTEM.LOGMNR_COL$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_OBJ$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_USER$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_UID$ to c##striim_privs;
create user c##striim identified by ******* container=all;
grant c##striim_privs to c##striim container=all;
alter user c##striim set container_data = (cdb$root, <PDB name>) container=current;
If using Database Vault, omit execute_catalog_role, and also enter the following commands:
grant execute on SYS.DBMS_LOGMNR to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_D to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_SESSION to c##striim_privs;
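If you want to confirm that the grants took effect before running the prerequisite checks, you can query the data dictionary from a DBA session. This is only an optional sanity check and assumes you kept the role and user names shown above; adjust the grantee names if you chose different ones, and use the c## names in a CDB environment:
select privilege from dba_sys_privs where grantee = 'STRIIM_PRIVS';
select granted_role from dba_role_privs where grantee = 'STRIIM';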
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/prerequisite-checks-oracle.html", "title": "Set up your Oracle source", "language": "en"}} {"page_content": "\n\nSet up your PostgreSQL source
You must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.
In all environments, make note of the slot name (the examples use striim_slot, but you can use any name you wish). When creating a pipeline, provide the slot name on the Additional Settings page.
In all environments, if you plan to have Striim propagate PostgreSQL schema changes to Snowflake, you must create a tracking table in the source database. To create this table, run pg_ddl_setup_410.sql, which you can download from https://github.com/striim/doc-downloads. When creating a pipeline, provide the name of this table on the Additional Settings page.
PostgreSQL setup in Linux or Windows
This will require a restart of PostgreSQL, so it should probably be performed during a maintenance window.
Install the wal2json plugin for the operating system of your PostgreSQL host as described in https://github.com/eulerto/wal2json.
Edit postgresql.conf, set the following options, and save the file. The values for max_replication_slots and max_wal_senders may be higher, but there must be one of each available for each instance of PostgreSQL Reader. max_wal_senders cannot exceed the value of max_connections.
wal_level = logical
max_replication_slots = 1
max_wal_senders = 1
Edit pg_hba.conf and add the following records, replacing <IP address> with the Striim server's IP address. If you have a multi-node cluster, add a record for each server that will run PostgreSQLReader.
host replication striim <IP address>/0 trust\nlocal replication striim trustThen save the file and restart PostgreSQL.Enter the following command to create the replication slot (the location of the command may vary, but it is typically /usr/local/bin in Linux or C:\\Program Files\\PostgreSQL\\<version>\\bin\\ in Windows):pg_recvlogical -d mydb --slot striim_slot --create-slot -P wal2jsonIf you plan to use multiple instances of PostgreSQL Reader, create a separate slot for each.Create a role with the REPLICATION attribute for use by Striim and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and, if necessary, public with the name of your schema.CREATE ROLE striim WITH LOGIN PASSWORD '******' REPLICATION;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;\n
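As an optional check (not part of Striim's documented steps), you can verify from psql that logical decoding is enabled and that the slot was created; SHOW and pg_replication_slots are standard PostgreSQL:SHOW wal_level;\nSELECT slot_name, plugin, slot_type, active FROM pg_replication_slots;\nwal_level should return logical, and the slot you created (striim_slot in the examples) should be listed with the wal2json plugin.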
PostgreSQL setup in Amazon Aurora with PostgreSQL compatibilityYou must set up replication at the cluster level. This will require a reboot, so it should probably be performed during a maintenance window.Amazon Aurora supports logical replication for PostgreSQL compatibility options 10.6 and later. Automated backups must be enabled. To set up logical replication, your AWS user account must have the rds_superuser role.For additional information, see Using PostgreSQL logical replication with Aurora, Replication with Amazon Aurora PostgreSQL, and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.Go to your RDS dashboard, select Parameter groups > Create parameter group.For the Parameter group family, select the aurora-postgresql item that matches your PostgreSQL compatibility option (for example, for PostgreSQL 11, select aurora-postgresql11).For Type, select DB Cluster Parameter Group.For Group Name and Description, enter aurora-logical-decoding, then click Create.Click aurora-logical-decoding.Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.In the left column, click Databases, then click the name of your Aurora cluster, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB cluster parameter group to aurora-logical-decoding, then scroll down to the bottom and click Continue.Select Apply immediately > Modify DB instance. Wait for the cluster's status to change from Modifying to Available, then stop it, wait for the status to change from Stopping to Stopped, then start it.In PSQL, enter the following command to create the replication slot:SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.CREATE ROLE striim WITH LOGIN PASSWORD '******';\nGRANT rds_replication TO striim;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;\nPostgreSQL setup in Amazon RDS for PostgreSQLYou must set up replication in the master instance. This will require a reboot, so it should probably be performed during a maintenance window.Amazon RDS supports logical replication only for PostgreSQL version 9.4.9, higher versions of 9.4, and versions 9.5.4 and higher. Thus PostgreSQLReader cannot be used with PostgreSQL 9.4 - 9.4.8 or 9.5 - 9.5.3 on Amazon RDS.For additional information, see Best practices for Amazon RDS PostgreSQL replication and Using logical replication to replicate managed Amazon RDS for PostgreSQL and Amazon Aurora to self-managed PostgreSQL.Go to your RDS dashboard, select Parameter groups > Create parameter group, enter postgres-logical-decoding as the Group name and Description, then click Create.Click postgres-logical-decoding.Enter logical_ in the Parameters field to filter the list, click Modify, set rds.logical_replication to 1, and click Continue > Apply changes.In the left column, click Databases, then click the name of your database, click Modify, scroll down to Database options (you may have to expand the Additional configuration section), change DB parameter group to postgres-logical-decoding, then scroll down to the bottom and click Continue.Select Apply immediately > Modify DB instance. Wait for the database's status to change from Modifying to Available, then reboot it and wait for the status to change from Rebooting to Available.In PSQL, enter the following command to create the replication slot:SELECT pg_create_logical_replication_slot('striim_slot', 'wal2json');Create a role with the REPLICATION attribute for use by PostgreSQLReader and give it select permission on the schema(s) containing the tables to be read. Replace ****** with a strong password and (if necessary) public with the name of your schema.CREATE ROLE striim WITH LOGIN PASSWORD '******';\nGRANT rds_replication TO striim;\nGRANT SELECT ON ALL TABLES IN SCHEMA public TO striim;\nPostgreSQL setup in Azure Database for PostgreSQLAzure Database for PostgreSQL - Hyperscale is not supported because it does not support logical replication.Set up logical decoding using wal2json:for Azure Database for PostgreSQL, see Logical decodingfor Azure Database for PostgreSQL Flexible Server, see Logical replication and logical decoding in Azure Database for PostgreSQL - Flexible ServerGet the values for the following properties, which you will need to set in Striim:Username: see Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portalPassword: the login password for that userReplication slot name: see Logical decodingPostgreSQL setup in Cloud SQL for PostgreSQLSet up logical replication as described in Setting up logical replication and decoding.Get the values for the following properties, which you will need to set in Striim:Username: the name of the user created in Create a replication userPassword: the login password for that userReplication slot name: the name of the slot created in the \"Create replication slot\" section of Receiving decoded WAL changes for change data capture (CDC)
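For the Amazon Aurora and Amazon RDS setups above, an optional check (not part of Striim's documented steps; rds.logical_replication is the parameter name exposed by Amazon RDS) is to confirm from psql that the parameter group change took effect and the slot exists:SHOW rds.logical_replication;\nSHOW wal_level;\nSELECT slot_name, plugin, active FROM pg_replication_slots;\nrds.logical_replication should return on (or 1), wal_level should return logical, and striim_slot should be listed with the wal2json plugin.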
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/prerequisite-checks-postgresql.html", "title": "Set up your PostgreSQL source", "language": "en"}}
{"page_content": "\n\nSet up your SQL Server sourceYou must perform all setup tasks appropriate for your source environment before you can create a pipeline. If any of these tasks are not complete, the corresponding prerequisite checks will fail.Striim reads SQL Server change data using the native SQL Server Agent utility. For more information, see About Change Data Capture (SQL Server) on msdn.microsoft.com.If a table uses a SQL Server feature that prevents change data capture, MS SQL Reader cannot read it. For examples, see the \"SQL Server 2014 (12.x) specific limitations\" section of CREATE COLUMNSTORE INDEX (Transact-SQL).In Azure SQL Database managed instances, change data capture requires collation to be set to the default SQL_Latin1_General_CP1_CI_AS at the server, database, and table level. If you need a different collation, it must be set at the column level.Before Striim applications can use the MS SQL Reader adapter, a SQL Server administrator with the necessary privileges must do the following:If it is not running already, start SQL Server Agent (see Start, Stop, or Pause the SQL Server Agent Service; if the agent is disabled, see Agent XPs Server Configuration Option).Enable change data capture on each database to be read using the following commands:for Amazon RDS for SQL Server:EXEC msdb.dbo.rds_cdc_enable_db '<database name>';for all others:USE <database name>\nEXEC sys.sp_cdc_enable_dbCreate a SQL Server user for use by Striim. This user must use the SQL Server authentication mode, which must be enabled in SQL Server. (If only Windows authentication mode is enabled, Striim will not be able to connect to SQL Server.)Grant the MS SQL Reader user the db_owner role for each database to be read using the following commands:USE <database name>\nEXEC sp_addrolemember @rolename=db_owner, @membername=<user name>For example, to enable change data capture on the database mydb, create a user striim, and give that user the db_owner role on mydb:USE mydb\nEXEC sys.sp_cdc_enable_db\nCREATE LOGIN striim WITH PASSWORD = '********' \nCREATE USER striim FOR LOGIN striim\nEXEC sp_addrolemember @rolename=db_owner, @membername=striim\nTo confirm that change data capture is set up correctly, run the following command and verify that all tables to read are included in the output:EXEC sys.sp_cdc_help_change_data_captureStriim can capture change data from a secondary database in an Always On availability group. In that case, change data capture must be enabled on the primary database.
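As an optional check (not part of Striim's documented prerequisites; mydb and striim are the example names above), you can confirm from SQL Server Management Studio or sqlcmd that CDC is enabled and the user has the required role:SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'mydb';\nUSE mydb\nEXEC sp_helprolemember 'db_owner';\nis_cdc_enabled should be 1, and striim should appear among the db_owner members.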
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/prerequisite-checks-sqlserver.html", "title": "Set up your SQL Server source", "language": "en"}} {"page_content": "\n\nGetting startedTo get started with Striim for Snowflake:Configure Snowflake: create a database and warehouse and a user with the necessary role to use them.Configure your source: enable change data capture and create a user account for use by Striim.Choose how Striim will connect to your database: allow Striim to connect to your source database via an SSH tunnel, a firewall rule, or port forwarding.Subscribe to Striim for Snowflake: deploy Striim for Snowflake from the AWS Marketplace.Create a Striim for Snowflake service: in the Striim Cloud Console, create a Striim for Snowflake service.Create a pipeline: follow the instructions on screen to create your first pipeline.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/getting-started.html", "title": "Getting started", "language": "en"}} {"page_content": "\n\nConfigure SnowflakeBefore you can create a Striim for Snowflake pipeline, you must do the following in Snowflake:create a target database (see Docs \u00bb Using Snowflake \u00bb Databases, Tables & Views)create or select a warehouse for Striim to use (see Docs \u00bb Using Snowflake \u00bb Virtual Warehouses)create or select a user for Striim to use (see Docs \u00bb Managing Your Snowflake Account \u00bb User Management)assign the user a role with the permissions required to use the database and warehouse (see Docs \u00bb Managing Security in Snowflake \u00bb Administration & Authorization \u00bb Access Control in Snowflake \u00bb Overview of Access Control)to use streaming mode, create a key pair and assign the public key to the user (see Docs \u00bb Managing Security in Snowflake \u00bb Authentication \u00bb Key Pair Authentication & Key Pair Rotation)
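The following is a minimal sketch of that setup, assuming example names (striim_db, striim_wh, striim_role, striim_user) and only the basic privileges shown; adjust it to your own security policies and see the Snowflake documentation linked above for the authoritative steps:CREATE DATABASE striim_db;\nCREATE WAREHOUSE striim_wh WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;\nCREATE ROLE striim_role;\nGRANT USAGE ON WAREHOUSE striim_wh TO ROLE striim_role;\nGRANT USAGE, CREATE SCHEMA ON DATABASE striim_db TO ROLE striim_role;\nCREATE USER striim_user PASSWORD = '********' DEFAULT_ROLE = striim_role DEFAULT_WAREHOUSE = striim_wh;\nGRANT ROLE striim_role TO USER striim_user;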
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/configure-snowflake.html", "title": "Configure Snowflake", "language": "en"}} {"page_content": "\n\nConfigure your sourceYou must configure your source database before you can use it in a pipeline. The configuration details are different for each database type (Oracle, SQL Server, etc.). See the specific setup instructions under Supported sources.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/configure-your-source.html", "title": "Configure your source", "language": "en"}} {"page_content": "\n\nChoose how Striim will connect to your databaseIf you have an SSH tunnel server (also known as a jump server) for your source database, that is the most secure way for Striim to connect to it. Alternatively, you can use a firewall rule or port forwarding.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/choose-how-striim-will-connect-to-your-database.html", "title": "Choose how Striim will connect to your database", "language": "en"}} {"page_content": "\n\nConfigure Striim to use your SSH tunnelIf you plan to use an SSH tunnel for Striim to connect to your source, set it up before creating your pipeline.In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.Next to the service, click ... and select Security.Click Create New Tunnel and enter the following:Name: choose a descriptive name for this tunnel.Jump Host: the IP address or DNS name of the jump server.Jump Host Port: the port number for the tunnel.Jump Host Username: the jump host operating system user account that Striim Cloud will use to connect.Database Host: the IP address or DNS name of the source database.Database Port: the port for the database.Click Create Tunnel. Do not click Start yet.Under Public Key, click Get Key > Copy Key.Add the copied key to your jump server's authorized keys file and give the user specified for Jump Host Username the necessary file system permissions to access the key, then return to the Striim Cloud Security page and click Start. The SSH tunnel will now be available in the source settings.Under Tunnel Address, click Copy to get the string to provide as the host name.
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/create-new.html", "title": "Configure Striim to use your SSH tunnel", "language": "en"}} {"page_content": "\n\nConfigure your firewall to allow Striim to connect to your databaseIn the firewall or cloud security group for your source database, create an inbound port rule for Striim's IP address and the port for your database (typically 3306 for MariaDB or MySQL, 1521 for Oracle, 5432 for PostgreSQL, or 1433 for SQL Server). To get Striim's IP address:In the Striim Cloud Console, go to the Services page.Next to the service, click More and select Security.Click the Copy IP icon next to the IP address.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/configure-your-firewall-to-allow-striim-to-connect-to-your-database.html", "title": "Configure your firewall to allow Striim to connect to your database", "language": "en"}} {"page_content": "\n\nConfigure port forwarding in your router to allow Striim to connect to your databaseIn your router configuration, create a port forwarding rule for your database's port. If supported by your router, set the allowed source IP to Striim's IP address and the target IP to your database's IP address. To get Striim's IP address:In the Striim Cloud Console, go to the Services page.Next to the service, click More and select Security.Click the Copy IP icon next to the IP address.
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/configure-port-forwarding-in-your-router-to-allow-striim-to-connect-to-your-database.html", "title": "Configure port forwarding in your router to allow Striim to connect to your database", "language": "en"}} {"page_content": "\n\nSubscribe to Striim for SnowflakeIn the AWS Marketplace, search for Striim for Snowflake and click it.To evaluate Striim for Snowflake, select Try for free > Create contract > Set up your account. (Alternatively, to sign up for a one-year contract, select View purchase options and follow the instructions.)In the Sign up for Striim Cloud dialog, enter your name, email address, company name, your desired sub-domain (part of the URL where you will access Striim Cloud), and password, then click Sign up.When you receive the Striim Cloud | Activate your account email, open it and click the activation link.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/subscribe-to-striim-for-snowflake.html", "title": "Subscribe to Striim for Snowflake", "language": "en"}} {"page_content": "\n\nCreate a Striim for Snowflake serviceAfter you receive the email confirming your subscription and log in:Select the Services tab, click Create new, and under Striim for Snowflake click Create.Enter a name for your service.Select the appropriate region and virtual machine size.Click Create.When the service's status changes from Creating to Running:If you will use an SSH tunnel to connect to your source, Configure Striim to use your SSH tunnel.Otherwise, click Launch and Create a pipeline.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/create-a-striim-for-snowflake-service.html", "title": "Create a Striim for Snowflake service", "language": "en"}} {"page_content": "\n\nCreate a pipelineIf your pipeline will connect to your source using an SSH tunnel, Configure Striim to use your SSH tunnel before you create the pipeline.Striim uses a wizard interface to walk you through the steps required to create a pipeline.
The steps are: connect to Snowflake; select the source database type (Oracle, SQL Server, etc.); connect to the source database; select schemas to sync; select tables to sync; optionally, create table groups; optionally, revise additional settings; review settings and start the pipeline.At most points in the wizard, you can save your work and come back to finish it later.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/create-a-pipeline.html", "title": "Create a pipeline", "language": "en"}} {"page_content": "\n\nConnect to SnowflakeIf you have already created one or more pipelines, you can select an existing Snowflake connection to write to the same Snowflake database. When you create your first pipeline, or if you want to write to a different Snowflake database, you must enter the following connection details. See Configure Snowflake for details on creating the referenced objects in Snowflake.Host: your Snowflake account identifier.Username: the Snowflake user ID Striim will use to connect.Password: the password for the specified user ID.Database: the existing Snowflake database that Striim will write to.Role: a role associated with the specified user ID that has the privileges required to use the specified database and warehouse.Warehouse: an existing Snowflake warehouse (leave blank to use the default warehouse for the specified user).JDBC URL Params (optional): Specify any additional JDBC connection parameters required to connect to your Snowflake instance (see Docs \u00bb Connecting to Snowflake \u00bb Connectors & Drivers \u00bb JDBC Driver \u00bb Configuring the JDBC Driver). Separate multiple parameters with &, for example:useProxy=true&proxyHost=198.51.100.0&proxyPort=3128&proxyUser=example&proxyPassword=******", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect_target.html", "title": "Connect to Snowflake", "language": "en"}} {"page_content": "\n\nSelect your sourceChoose the basic type of your source database: MariaDB, MySQL, Oracle, PostgreSQL, or SQL Server. See Supported sources for details of which versions and cloud services are supported.
", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/select-your-source.html", "title": "Select your source", "language": "en"}} {"page_content": "\n\nConnect to your source databaseThe connection properties vary according to the source database type.", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect-to-your-source-database.html", "title": "Connect to your source database", "language": "en"}} {"page_content": "\n\nConnect to MariaDBWhen prompted by the wizard, enter the appropriate connection details.Where is the database located? If your source is an Amazon RDS for MariaDB instance, select that; otherwise, leave it set to the default.Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).Port: Enter the port for the specified host.Username: Enter the name of the user you created when you Set up your MariaDB source.Password: Enter the password associated with the specified user name.Connect using SSL: Select if connecting to the source database using SSL.
See the detailed instructions below.Source connection name: Enter a descriptive name, such as MariaDBConnection1.Use SSL with on-premise MariaDBAcquire a certificate in .pem format as described in MariaDB > Enterprise Documentation > Security > Data in-transit encryption > Enabling TLS on MariaDB Server.Import the certificate into a custom Java truststore file:keytool -importcert -alias MariaCACert -file <file name>.pem \\\n -keystore truststore.jks -storepass mypassword Convert the client key and certificate files to PKCS#12:openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \\\n -name \"mysqlclient\" -passout pass:mypassword -out client-keystore.p12Create a Java keystore using the client-keystore.p12 file:keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \\\n -srcstorepass mypassword -destkeystore keystore.jks \\\n -deststoretype JKS -deststorepass mypasswordSet these properties in Striim:Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.Trust certificate keystore URL: upload the truststore.jks file created in step 3Trust certificate keystore type: enter the store type you specified in step 3Trust certificate keystore password: enter the password you specified in step 3Client certificate keystore URL: upload the keystore.jks file created in step 4Client certificate keystore type: enter the store type you specified in step 4Client certificate keystore password: enter the password you specified in step 4Use SSL with Amazon RDS for MariaDBDownload the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.Create the truststore.jks file (replace <file name> with the name of the file you downloaded):keytool -importcert -alias MariaCACert -file <file name>.pem \\\n -keystore truststore.jks -storepass mypassword Convert the client key and certificate files to PKCS#12:openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \\\n -name \"mysqlclient\" -passout pass:mypassword -out client-keystore.p12Set these properties in Striim:Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.Trust certificate keystore URL: upload the truststore.jks file created in step 3Trust certificate keystore type: enter the store type you specified in step 3Trust certificate keystore password: enter the password you specified in step 3
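Before entering these properties, it can help to confirm that the server actually has TLS enabled; this check is not part of Striim's documented steps, and the variable names are the standard MariaDB/MySQL ones:SHOW GLOBAL VARIABLES LIKE 'have_ssl';\nSHOW GLOBAL VARIABLES LIKE 'ssl_ca';\nhave_ssl should return YES (on recent server versions you can also check the tls_version variable).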
\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect_source-new-mariadb.html", "title": "Connect to MariaDB", "language": "en"}} {"page_content": "\n\nConnect to MySQLWhen prompted by the wizard, enter the appropriate connection details.Where is the database located? If your source is an Amazon Aurora for MySQL, Amazon RDS for MySQL, Azure Database for MySQL, or Cloud SQL for MySQL instance, select that, otherwise leave set to the default.Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).Port: Enter the port for the specified host.Username: Enter the name of the user you created when you Set up your MySQL source.Password: Enter the password associated with the specified user name.Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.Source connection name: Enter a descriptive name, such as MySQLConnection1.Use SSL with on-premise MySQL or Cloud SQL for MySQLGet an SSL certificate in .pem format from your database administrator.Import the certificate into a custom Java truststore file:keytool -importcert -alias MySQLServerCACert -file server-ca.pem \\\n -keystore truststore.jks -storepass mypassword Convert client keys/certificate files to PKCS#12:openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \\\n -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12Create a Java Keystore using the client-keystore.p12 file:keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \\\n -srcstorepass mypassword -destkeystore keystore.jks \\\n -deststoretype JKS -deststorepass mypasswordSet these properties in Striim:Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.Trust certificate keystore URL: upload the truststore.jks file created in step 2Trust certificate keystore type: enter the store type you specified in step 2Trust certificate keystore password: enter the password you specified in step 2Client certificate keystore URL: upload the keystore.jks file created in step 4Client certificate keystore type: enter the store type you specified in step 4Client certificate keystore password: enter the password you specified in step 4Use SSL with Amazon Aurora or RDS for MySQLDownload the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.Create the truststore.jks file (replace <file name> with the name of the file you downloaded):keytool -importcert -alias MySQLServerCACert -file <file name>.pem \\\n -keystore truststore.jks -storepass mypassword Convert client keys/certificate files to PKCS#12:openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \\\n -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12Set these properties in Striim:Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.Trust certificate keystore URL: upload the truststore.jks file created in step 2Trust certificate keystore type: enter the store type you specified in step 2Trust certificate keystore password: enter the password you specified in step 2Use SSL with Azure Database for MySQLDownload the certificate .pem file from Learn > Azure > MySQL > Configure SSL connectivity in your application to securely connect to Azure Database for MySQL > Step 1: Obtain SSL certificate.Create the truststore.jks file
(replace <file name> with the name of the file you downloaded):keytool -importcert -alias MySQLServerCACert -file <file name>.pem \\\n -keystore truststore.jks -storepass mypassword Convert client keys/certificate files to PKCS#12:openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \\\n -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12Set these properties in Striim:Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.Trust certificate keystore URL: upload the truststore.jks file created in step 2Trust certificate keystore type: enter the store type you specified in step 2Trust certificate keystore password: enter the password you specified in step 2
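Before importing any of the downloaded or administrator-provided CA files above, you may want to confirm you have the right certificate. One way to do so, reusing the <file name> placeholder from the steps above, is:
openssl x509 -in <file name>.pem -noout -subject -issuer -dates
This prints the certificate's subject, issuer, and validity period, so you can verify it is the CA certificate for your MySQL service and that it has not expired.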
\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-10\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect_source-new-mysql.html", "title": "Connect to MySQL", "language": "en"}} {"page_content": "\n\nConnect to OracleWhen prompted by the wizard, enter the appropriate connection details.Where is the database located? If your source is an Amazon RDS for Oracle instance, select that, otherwise leave set to the default.Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).Port: Enter the port for the specified host.SID: Enter the Oracle system ID or service name of the Oracle instance.Username: Enter the name of the user you created when you Set up your Oracle source.Password: Enter the password associated with the specified user name.Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.Use pluggable database: Select if the source database is CDB or PDB.Pluggable database name (appears if Use pluggable database is enabled): If the source database is PDB, enter its name here. If it is CDB, leave blank.Source connection name: Enter a descriptive name, such as OracleConnection1.Use SSL with on-premise OracleGet an SSL certificate in .pem format from your database administrator.Import the certificate into a custom Java truststore file:keytool -importcert -alias OracleCACert -file server-ca.pem \\\n -keystore truststore.jks -storepass mypassword Convert client keys/certificate files to PKCS#12:openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \\\n -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12Create a Java Keystore using the client-keystore.p12 file:keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \\\n -srcstorepass mypassword -destkeystore keystore.jks \\\n -deststoretype JKS -deststorepass mypasswordSet these properties in Striim:Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.Trust certificate keystore URL: upload the truststore.jks file created in step 2Trust certificate keystore type: enter the store type you specified in step 2Trust certificate keystore password: enter the password you specified in step 2Client certificate keystore URL: upload the keystore.jks file created in step 4Client certificate keystore type: enter the store type you specified in step 4Client certificate keystore password: enter the password you specified in step 4Use SSL with Amazon RDS for OracleDownload the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.Create the truststore.jks file (replace <file name> with the name of the file you downloaded):keytool -importcert -alias OracleCACert -file <file name>.pem \\\n -keystore truststore.jks -storepass mypassword Convert client keys/certificate files to PKCS#12:openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \\\n -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12Set these properties in Striim:Verify server certificate: Enable if you want Striim to verify all the following certificates while establishing the connection.Trust certificate keystore URL: upload the truststore.jks file created in step 2Trust certificate keystore type: enter the store type you specified in step 2Trust certificate keystore password: enter the password you specified in step 2
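If the wizard later rejects the client keystore, one way to inspect what the PKCS#12 conversion above produced (assuming the file name and password shown in those commands) is:
keytool -list -storetype PKCS12 -keystore client-keystore.p12 -storepass mypassword
This should list a single PrivateKeyEntry under the alias given with -name; if it does not, re-run the openssl pkcs12 command and check its input file names.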
\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-08\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect_source-new-oracle.html", "title": "Connect to Oracle", "language": "en"}} {"page_content": "\n\nConnect to PostgreSQLWhen prompted by the wizard, enter the appropriate connection details.Where is the database located? If your source is an Amazon Aurora for PostgreSQL, Amazon RDS for PostgreSQL, Azure Database for PostgreSQL, or Cloud SQL for PostgreSQL instance, select that, otherwise leave set to the default.Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).Port: Enter the port for the specified host.Username: Enter the name of the user you created when you Set up your PostgreSQL source.Password: Enter the password associated with the specified user name.Connect using SSL: Select if connecting to the source database using SSL. See the detailed instructions below.Source connection name: Enter a descriptive name, such as PostgreSQLConnection1.Use SSL with on-premise PostgreSQLGet the SSL files (server-ca.pem, client-cert.pem, and client-key.pem) from your database administrator (see Creating Certificates in the PostgreSQL documentation).Convert the client key to .pk8 format:openssl pkcs8 -topk8 -inform PEM -outform DER -in client-key.pem -out client.root.pk8 \\\n -nocryptSet these properties in Striim:SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).SSL Certificate: upload the client-cert.pem file from step 1SSL Certificate Key: upload the client.root.pk8 file created in step 2SSL Root Certificate: upload the server-ca.pem file from step 1Use SSL with Amazon Aurora or RDS for PostgreSQLDownload the root certificate rds-ca-2019-root.pem (see AWS > Documentation > Amazon Relational Database Service (RDS) > Using SSL with a PostgreSQL DB instance).Set these properties in Striim:SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).SSL Root Certificate: upload the file you downloaded in step 1Use SSL with Azure Database for PostgreSQLDownload the root certificate DigiCertGlobalRootG2.crt.pem (see Learn > Azure > PostgreSQL > Configure TLS connectivity in Azure Database for PostgreSQL - Single Server).Set these properties in Striim:SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).SSL Root Certificate: upload the file you downloaded in step 1Use SSL with Cloud SQL for PostgreSQLDownload server-ca.pem, client-cert.pem & client-key.pem from Google Cloud Platform (see Cloud SQL > Documentation > PostgreSQL > Guides > Configure SSL/TLS certificates).Convert client-key.pem to .pk8 format:openssl pkcs8 -topk8 -inform PEM -outform DER -in client-key.pem \\\n -out client.root.pk8 -nocryptSet these properties in Striim:SSL Mode: enter disable, allow, prefer, require, or verify-ca to match the type of encryption and validation required for the user (verify-full is not supported).SSL Certificate: upload the client-cert.pem file you downloaded in step 1SSL Certificate Key: upload the client.root.pk8 file you created in step 2SSL Root Certificate: upload the server-ca.pem file you downloaded in step 1
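To sanity-check the converted key before uploading it (using the client.root.pk8 file name from the steps above), you can ask OpenSSL to parse it:
openssl pkey -inform DER -in client.root.pk8 -noout -text
If OpenSSL prints the key details without an error, the DER-encoded PKCS#8 file is well formed; otherwise, repeat the pkcs8 conversion against the original client-key.pem.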
\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-10\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect_source-new-postgresql.html", "title": "Connect to PostgreSQL", "language": "en"}} {"page_content": "\n\nConnect to SQL ServerWhen prompted by the wizard, enter the appropriate connection details.Where is the database located? If your source is an Amazon RDS for SQL Server instance or Azure SQL Managed Instance, select that, otherwise leave set to the default.Hostname: Enter the IP address or fully qualified network name of the instance (for example, 198.51.100.10 or mydb.123456789012.us-east-1.rds.amazonaws.com) or, if you are connecting via an SSH tunnel, paste the string copied from Striim Cloud Console > Service details > Secure connection > Tunnel Address (see Configure Striim to use your SSH tunnel).Port: Enter the port for the specified host.Username: Enter the name of the user you created when you Set up your SQL Server source.Password: Enter the password associated with the specified user name.Connect using SSL: Select if connecting to the source database using SSL, in which case you must also specify the SSL properties described in the detailed instructions below.Source connection name: Enter a descriptive name, such as SQLServerConnection1.Use SSL with on-premise SQL ServerGet an SSL certificate in .pem format from your database administrator.Create the truststore.jks file (replace <file name> with the name of your certificate file):keytool -importcert -alias MSSQLCACert -file <file name>.pem -keystore truststore.jks \\\n -storepass mypasswordSet these properties in Striim:Use trust server certificate: enableIntegrated security: enable to use Windows credentialsTrust store: upload the file you created in step 2Trust store password: the password you specified for -storepass in step 2Certificate host name: the hostNameInCertificate property value for the connection (see Learn > SQL > Connect > JDBC > Securing Applications > Using encryption > Understanding encryption support)Use SSL with Amazon RDS for SQL ServerDownload the appropriate .pem file from AWS > Documentation > Amazon Relational Database Service (RDS) > User Guide > Using SSL/TLS to encrypt a connection to a DB instance.Create the truststore.jks file (replace <file name> with the name of the file you downloaded):keytool -importcert -alias MSSQLCACert -file <file name>.pem -keystore truststore.jks \\\n -storepass mypasswordSet these properties in Striim:Use trust server certificate: enableIntegrated security: enable to use Windows credentialsTrust store: upload the file you created in step 2Trust store password: the password you specified for -storepass in step 2Certificate host name: the hostNameInCertificate property value for the connection (see Learn > SQL > Connect > JDBC > Securing Applications > Using encryption > Understanding encryption support)Use SSL with Azure SQL Managed InstanceMicrosoft has changed its certificate requirements.
Documentation update in progress.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-10\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/connect_source-new-sql-server.html", "title": "Connect to SQL Server", "language": "en"}} {"page_content": "\n\nSelect schemas and tables to syncSkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationCreate a pipelineSelect schemas and tables to syncPrevNextSelect schemas and tables to syncSelect schemasSelect the source schemas containing the tables you want Striim to sync to Snowflake, then click Next.The first time you run the pipeline, Striim will create target schemas in Snowflake with the same names as the selected schemas automatically.Select tablesSelect the source tables you want Striim to sync to Snowflake, then click Next.The first time you run the pipeline, Striim will create tables with the same names, columns, and data types in the target datasets.For information on supported datatypes and how they are mapped between your source and Snowflake, see Data type support & mapping for schema conversion & evolution.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/select-schemas-and-tables-to-sync.html", "title": "Select schemas and tables to sync", "language": "en"}} {"page_content": "\n\nMask data (optional)Skip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationCreate a pipelineSelect schemas and tables to syncMask data (optional)PrevNextMask data (optional)Optionally, you may mask data from source columns of string data types so that in the target their values are replaced by xxxxxxxxxxxxxxx. The Transform Data drop-down menu will appear for columns for which this option is available. (This option is not available for key columns.)To mask a column's values, set Transform Data to Mask.Masked data will appear as xxxxxxxxxxxxxxx in the target:In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-07\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/mask-data--optional-.html", "title": "Mask data (optional)", "language": "en"}} {"page_content": "\n\nSelect key columns (optional)Skip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationCreate a pipelineSelect schemas and tables to syncSelect key columns (optional)PrevNextSelect key columns (optional)The following is applicable only when you select Write continuous changes directly (MERGE mode) in Additional Settings. 
With the default setting Write continuous changes as audit records (APPEND ONLY mode), key columns are not required or used.By default, when a source table does not have a primary key, Striim will concatenate the values of all columns to create a unique identifier key for each row to identify it for UPDATE and DELETE operations. Alternatively, you may manually specify one or more columns to be used to create this key. Be sure that the selected column(s) will serve as a unique identifier; if two rows have the same key that may produce invalid results or errors.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-10-04\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/select-key-columns--optional-.html", "title": "Select key columns (optional)", "language": "en"}} {"page_content": "\n\nWhen target tables already existSkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationCreate a pipelineWhen target tables already existPrevNextWhen target tables already existSelect what you want Striim to do when some of the tables selected to be synced already exist in the target:Proceed without the existing tables: Omit both source and target tables from the pipeline. Do not write any data from the source table to the target. (If all the tables already exist in the target, this option will not appear.)Add prefix and create new tables: Do not write to the existing target table. Instead, create a target table of the same name, but with a prefix added to distinguish it from the existing table.Drop and re-create the existing tables: Drop the existing target tables and any data they contain, create new target tables, and perform initial sync with the source tables. Choose this option if you were unsatisfied with an initial sync and are starting over.Use the existing tables: Retain the target table and its data, and add additional data from the source.Review the impact of the action to be taken. To proceed enter yes and click Confirm and continue.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-07\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/when-target-tables-already-exist.html", "title": "When target tables already exist", "language": "en"}} {"page_content": "\n\nAdd the tables to table groups (optional)Skip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationCreate a pipelineAdd the tables to table groups (optional)PrevNextAdd the tables to table groups (optional)During Live Sync, Striim uses table groups to parallelize writing to Snowflake to increase throughput, with each table group mapped internally to a separate Snowflake writer. The batch policy for each table group is the minimum feasible LEE (end-to-end latency) for tables in the group. We recommend the following when you create your table groups:Place your sensitive tables into individual table groups. 
These tables may have high input change rates or low latency expectations. You can group tables with a few other tables that exhibit similar behavior or latency expectations.Place all tables that do not have a critical dependency on latency into the Default table group. By default, Striim places all new tables in a pipeline into the Default table group.Table groups are not used during Initial Sync.Create table groupsClick\u00a0Create a new table group, enter a name for the group, optionally change the batch policy, and click\u00a0Create.Select the\u00a0Default\u00a0group (or any other group, if you have already created one or more), select the tables you want to move to the new group, select\u00a0Move to, and click the new group.Repeat the previous steps to add more groups, then click\u00a0Next.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/add-the-tables-to-table-groups--optional-.html", "title": "Add the tables to table groups (optional)", "language": "en"}} {"page_content": "\n\nAdditional SettingsSkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationCreate a pipelineAdditional SettingsPrevNextAdditional SettingsNoteThe options on this page cannot be changed after you start the pipeline.How do you want to write changes to Snowflake?Write continuous changes as audit records (default; also known as APPEND ONLY mode): Snowflake retains a record of every operation in the source. For example, if you insert a row, then update it, then delete it, Snowflake will have three records, one for each operation in the source (INSERT, UPDATE, and DELETE). This is appropriate when you want to be able to see the state of the data at various points in the past, for example, to compare activity for the current month with activity for the same month last year.With this setting, Striim will add two additional columns to each table, STRIIM_OPTIME, a timestamp for the operation, and STRIIM_OPTYPE, the event type, INSERT, UPDATE, or DELETE. Note: on initial sync with SQL Server, all STRIIM_OPTYPE values are SELECT.Write continuous changes directly (also known as MERGE mode): Snowflake tables are synchronized with the source tables. For example, if you insert a row, then update it, Snowflake will have only the updated data. If you then delete the row from the source table, Snowflake will no longer have any record of that row.Which method would you like to use to write continuous changes to Snowflake?Streaming: Write from staging to Snowflake using Snowpipe (see Docs \u00bb Loading Data into Snowflake \u00bb Loading Continuously Using Snowpipe \u00bb Introduction to Snowpipe). If you choose this option, you must upload the public key created as described in Configure SnowflakeFile upload: Write from staging to Snowflake using bulk loading (see Docs \u00bb Loading Data into Snowflake \u00bb Bulk Loading Using COPY). With the default setting, Local, the staging area is a Snowflake internal named stage (see Docs \u00bb Loading Data into Snowflake \u00bb Bulk Loading Using COPY \u00bb Bulk Loading from a Local File System \u00bb Choosing an Internal Stage for Local Files). 
Set to Amazon S3 to use that as the staging area instead, in which case specify the following properties (all are strings except S3 Secret Access Key, which is an encrypted password; none have default values):S3 Access Key: an AWS access key ID (created on the AWS Security Credentials page) for a user with read and write permissions on the bucket (leave blank if using an IAM role)S3 Bucket Name: specify the S3 bucket to be used for staging. If it does not exist, it will be created.S3 IAM Role: an AWS IAM role with read and write permissions on the bucket (leave blank if using an access key)S3 Region: the AWS region of the bucketS3 Secret Access Key: the secret access key for the access keyWhat is your PostgreSQL replication slot?Appears only when your source is PostgreSQL.Enter the name of the slot you created or chose in Set up your PostgreSQL source. Note that you cannot use the same slot in two pipelines; each must have its own slot.\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-10\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/additional_settings-settings.html", "title": "Additional Settings", "language": "en"}} {"page_content": "\n\nReview your settings and run the pipelineIf everything on this page looks right, click Run the pipeline. Otherwise, click Back as many times as necessary to return to any settings you want to change.\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-09-27\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/review.html", "title": "Review your settings and run the pipeline", "language": "en"}} {"page_content": "\n\nMonitor pipelinesThe pipeline's Monitor tab displays a performance graph for the most-recent hour, 24 hours, or 90 days. "Read freshness" and "Write freshness" report the time that has passed since the last read and write.Click View performance > View performance to see statistics about individual tables.\u00a9 2023 Striim, Inc. All rights reserved.
Last modified: 2022-10-05\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/monitor-pipelines.html", "title": "Monitor pipelines", "language": "en"}} {"page_content": "\n\nManage pipelinesSkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationManage pipelinesPrevNextManage pipelinesYou can perform several management functions on existing pipelines:Remove tables from the pipeline: On the pipeline's Overview page, select Manage tables in pipeline from the menu, select the tables you want to remove from the pipeline, and click Remove. The table and existing data will remain in the Snowflake target schema , but no additional data will be added from the source.Pause a pipeline: On Striim for Snowflake's Overview page, select Pause from the pipeline's menu. Data will stop being synced from source to target until you resume the pipeline.We recommend that you pause a pipeline before taking its source database offline. Otherwise, its connection may time out and the pipeline will require repair.Resume a pipeline: On Striim for Snowflake's Overview page, select Resume from the pipeline's menu.Delete a pipeline: On Striim for Snowflake's Overview page, select Delete Pipeline from the pipeline's menu. Sync will stop and the pipeline will be deleted, but the previously synced data will remain in Snowflake.Repair errors in a pipeline: If a pipeline encounters a potentially recoverable error, a Repair button will appear on the Overview page.Click Repair to see the error.Click Retry to attempt repair. If repair fails, Contact Striim support.If the error is on the target side, you may also have a Remove table option. Clicking that will remove the table that is causing the problem from the pipeline and restart it.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/manage-pipelines.html", "title": "Manage pipelines", "language": "en"}} {"page_content": "\n\nUsing the Striim Cloud ConsoleSkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationUsing the Striim Cloud ConsolePrevNextUsing the Striim Cloud ConsoleThe Striim Cloud Console lets you perform various tasks related to your Striim for Snowflake service.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. 
Last modified: 2023-03-06\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/using-the-striim-cloud-console.html", "title": "Using the Striim Cloud Console", "language": "en"}} {"page_content": "\n\nAdd usersIn the Striim Cloud Console, go to the Users page and click Invite User.Enter the new user's email address, select the appropriate role (see the text of the drop-down for details), and click Save.Admin: can create pipelines, perform all functions on all pipelines, add users, and change users' rolesDeveloper: can create pipelines and perform all functions on all pipelinesViewer: can view information about pipelines and monitor themThe new user will receive an email with a signup link. Once they have signed up, their status will change from Pending to Activated. Once the new user is activated, go to the Users page, click the user's name, click Add service, select the service(s) you want them to have access to, and click Add.\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-12-08\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/add-users.html", "title": "Add users", "language": "en"}} {"page_content": "\n\nInternal WIP: Using Okta with Striim CloudYou can configure Striim Cloud to allow users in your organization to log in using Okta single sign-on (SSO). This requires you to create a SAML application in Okta, assign that application to your users, and configure Striim Cloud to trust Okta as an identity provider (IdP). For more information, see SAML app integrations.Create a SAML application in OktaLog in to your Okta account as an Admin user. Okta may ask you to log in again.Click the Admin button on the top right corner.In the left panel, select Applications > Applications, then click Create App Integration.Choose SAML 2.0 as the sign on method, then click Next.Name your application and click Next.Enter the following for Single sign on URL: <your striim account url>/auth/saml/callbackCheck the box Use this for Recipient URL and Destination URL.Enter the following for Audience URI (SP Entity ID): <your-striim-account-url>Create the following attribute statements for first name, last name and email, then click Next (Name / Name format / Value):firstName / Unspecified / user.firstNamelastName / Unspecified / user.lastNameemail / Unspecified / user.emailChoose I'm an Okta customer adding an internal app and click Finish.Go to the Sign On tab of the application you just created and click View SAML Setup Instructions.Copy the values for the Identity Provider Single Sign-On URL, Identity Provider Issuer and X.509 Certificate into a text editor. You\u2019ll need those to enable SAML authentication in your Striim Cloud account.Assign the Okta application to your users from the Assignments tab of your app.
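As an optional check before pasting the X.509 Certificate into Striim Cloud, you can save it to a file (for example okta-idp.pem, a name used here only for illustration) and confirm its issuer and expiry:
openssl x509 -in okta-idp.pem -noout -issuer -enddate
If openssl reports an error, make sure you copied the full certificate, including the BEGIN CERTIFICATE and END CERTIFICATE lines.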
Configure Striim Cloud to trust Okta as an IdPLog in to your Striim Cloud account and click User Profile at the top right of the screen.Go to the Login & Provisioning tab.In the Single sign-on section, paste the values from the Okta setup instructions page (see Step 12 above) into the SSO URL, IDP Issuer and Public Certificate fields.Click Update configuration.Enable the Single sign-on (SSO) toggle near the top of the page.Test logging in to your Striim Cloud account through Okta. Log out, then go to the login page and select Sign in with SAML. You will be logged in through Okta. Users can access Striim Cloud through the Striim Cloud login page, or through the Okta tile named after your app.\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-03-09\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/internal-wip--using-okta-with-striim-cloud.html", "title": "Internal WIP: Using Okta with Striim Cloud", "language": "en"}} {"page_content": "\n\nUpgrade the instance sizeTo upgrade to a larger instance:In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.Next to the service, click More and select Resize VM.Choose the type of instance you want to upgrade to, then click Next.Click Update.All the instance's pipelines will be paused and will resume after the upgrade is complete. If you encounter any problems with your pipelines after upgrading, contact Striim support.\u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-10-06\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/upgrade-the-instance-size.html", "title": "Upgrade the instance size", "language": "en"}} {"page_content": "\n\nMonitor the service's virtual machineStriim Cloud Console's Monitor page displays recent CPU and memory utilization of the virtual machine that hosts Striim for Snowflake.In the Striim Cloud Console, go to the Services page.Next to the service, click More and select Monitor.Select the time range to display.\u00a9 2023 Striim, Inc. All rights reserved.
Last modified: 2022-11-02\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/monitor-the-service-s-virtual-machine.html", "title": "Monitor the service's virtual machine", "language": "en"}} {"page_content": "\n\nUsing the Striim for Snowflake REST APISkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationUsing the Striim Cloud ConsoleUsing the Striim for Snowflake REST APIPrevNextUsing the Striim for Snowflake REST APIDocumentation for this feature is not yet available.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/using-the-striim-for-snowflake-rest-api.html", "title": "Using the Striim for Snowflake REST API", "language": "en"}} {"page_content": "\n\nStop a serviceSkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationUsing the Striim Cloud ConsoleStop a servicePrevNextStop a serviceTo stop a Striim for Snowflake service and pause all its pipelines:In the Striim Cloud Console, go to the Services page.Next to the service, click More, select Stop, and click Stop.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2022-11-02\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/stop-a-service.html", "title": "Stop a service", "language": "en"}} {"page_content": "\n\nSecuritySkip to main contentToggle navigationToggle navigationStriim for Snowflake DocumentationprintToggle navigationStriim for Snowflake DocumentationSecurityPrevNextSecurityStriim for Snowflake is deployed as an Elastic Kubernetes Service (EKS) pod on Amazon Web Services (AWS). Much of the security for Striim for Snowflake, such as data encryption at rest, comes from the security infrastructure provided by EKS and AWS. For more information, see the Security section of the Amazon EKS documentation.User metadata is stored in the EKS pod. This metadata can be accessed only by Striim DevOps personnel, and all such access generates an audit trail. Sensitive data including source database passwords, SSL keys, and SSL passwords are not accessible to DevOps personnel.AuthenticationSnowflake authorizes access to resources based on a verified client identity. Striim for Snowflake connects to Snowflake over JDBC.. See Connect to Snowflake for details on Snowflake roles and permissions.Striim for Snowflake's default password policy enforces character variety and minimum length. Each individual user can change the password for their own account. Regardless of privilege level, no user account can manage the password for another account.Access controlWhat users can access and do in Striim for Snowflake is controlled by roles. 
For more information, see Add users.Encryption between servicesAll communication between your Striim Cloud Console and your Striim for Snowflake instances is encrypted using Transport Layer Security (TLS) 1.2.REST APIREST API keys are specific to individual users and not accessible to other users or Striim DevOps personnel. An audit trail tracks all actions taken through the API for each user.In this section: Search resultsNo results foundWould you like to provide feedback? Just click here to suggest edits.PrevNextSee how streaming data integration can work for\n\t\t\t\t\t\tyou.Schedule a\n\t\t\t\t\t\tDemoDownload \u00a9 2023 Striim, Inc. All rights reserved. Last modified: 2023-02-10\n", "metadata": {"source": "https://striim.com/docs/AWS/StriimForSnowflake/en/security.html", "title": "Security", "language": "en"}} {"page_content": "\n\nGetting Started with StreamShiftSkip to main contentToggle navigationToggle navigation Getting Started with StreamShiftUnderstanding database migration and replicationLift and Shift versus Ongoing SynchronizationSupported sources and targetsStreamShift workflow overview Prerequisite setup for sources and targetsConnecting with sources and targets over the internetUsing an SSH tunnel to connect to a source or targetCosmos DB setupMySQL / MariaDB setupOracle setupPostgreSQL setupSQL Server setup Migrating a database with StreamShiftSubscribe to StreamShift in the AWS MarketplaceSubscribe to StreamShift in the Microsoft Azure MarketplaceSubscribe to StreamShift in the Google Cloud MarketplaceCreate a StreamShift serviceCreate a StreamShift projectChoose your migration typeSelect and connect to your source databaseConfigure SSLSelect and connect to your target databaseSelect what to migrate Understanding the assessment and compatibility reportsHow the assessment score is calculatedHow the compatibility score is calculatedCustomize the migration and migrate the schemaMigrate the dataStop migrationAdding users to a StreamShift serviceMonitoring a StreamShift serviceStreamShift 1.0.0 release notesContact StreamShift supportStreamShift DocumentationprintToggle navigationStreamShift DocumentationGetting Started with StreamShiftPrevNextGetting Started with StreamShiftIn this section: Search resultsNo results foundWas this helpful?YesNoWould you like to provide feedback? Just click here to suggest edits.PrevNext \u00a9 2021 Striim, Inc. 
Publication date: \n", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/getting-started-with-streamshift.html", "title": "Getting Started with StreamShift", "language": "en"}} {"page_content": "\n\nUnderstanding database migration and replicationSkip to main contentToggle navigationToggle navigation Getting Started with StreamShiftUnderstanding database migration and replicationLift and Shift versus Ongoing SynchronizationSupported sources and targetsStreamShift workflow overview Prerequisite setup for sources and targetsConnecting with sources and targets over the internetUsing an SSH tunnel to connect to a source or targetCosmos DB setupMySQL / MariaDB setupOracle setupPostgreSQL setupSQL Server setup Migrating a database with StreamShiftSubscribe to StreamShift in the AWS MarketplaceSubscribe to StreamShift in the Microsoft Azure MarketplaceSubscribe to StreamShift in the Google Cloud MarketplaceCreate a StreamShift serviceCreate a StreamShift projectChoose your migration typeSelect and connect to your source databaseConfigure SSLSelect and connect to your target databaseSelect what to migrate Understanding the assessment and compatibility reportsHow the assessment score is calculatedHow the compatibility score is calculatedCustomize the migration and migrate the schemaMigrate the dataStop migrationAdding users to a StreamShift serviceMonitoring a StreamShift serviceStreamShift 1.0.0 release notesContact StreamShift supportStreamShift DocumentationprintToggle navigationStreamShift DocumentationGetting Started with StreamShiftUnderstanding database migration and replicationPrevNextUnderstanding database migration and replicationAt a high level, the three approaches to moving a database to the cloud are lift-and-shift migration, online migration, and continuous replication.In a lift-and-shift migration, you typically export or back up a source database, move the exported or backup file to the cloud, and then import and restore the file to a target database instance running in the cloud. When the new target database is ready, business applications run in the cloud and access data.A downside to the lift-and-shift approach is that it requires downtime for both the business applications and database. This approach requires careful planning to migrate during low-activity periods. This approach also assumes that you can stop the business applications during the migration and restart them after the database is restored in the cloud. Any testing of the applications after the database is restored adds to the downtime. Furthermore, the lift-and-shift approach creates an intermediate copy of the database that needs to be secured, moved, stored, and eventually deleted. This aspect adds cost and management complexity. While the lift-and-shift approach might work for certain applications, the requirements of most business-critical applications do not tolerate these costs. For these reasons, an online database migration is a far better approach.The online migration (lift and shift with ongoing replication) approach aims for minimal impact on database performance and users of the business applications. 
This approach continuously replicates inserts, updates, and deletes from the source to the target for months, or even years, while you optimize and test the business applications for the new cloud-based database.In an online migration to the cloud, the source database is retired when the migration is complete because the cloud database is now the production instance.In some use cases, you might need the original database instance to continue to run while downstream processing occurs in the cloud. In this case, the source database is replicated in the cloud indefinitely. For example, you might have an application that accesses a database that depends on technology that cannot be moved to the cloud. Another example is where a database that contains personally identifiable information (PII) must reside in an on-premises environment while downstream processing that uses obfuscated PII occurs in the cloud.In these use cases, the original database must be continuously replicated to the target indefinitely. The source database is not shut down as it is with a database migration.Online database migration and replication for heterogeneous databases are typically complex, involving months or even years of hand coding and integrating various services. A more modern and efficient approach to online database migrations is the use of database migration and integration systems such as StreamShift (for lift-and-shift or online migration) or Striim (for continuous replication).Portions of this introduction are licensed from Google under the Creative Commons Attribution 4.0 License. Minor changes have been made from the original.In this section: Search resultsNo results foundWas this helpful?YesNoWould you like to provide feedback? Just click here to suggest edits.PrevNext \u00a9 2021 Striim, Inc. 
Publication date: \n", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/understanding-database-migration-and-replication.html", "title": "Understanding database migration and replication", "language": "en"}} {"page_content": "\n\nLift and Shift versus Ongoing SynchronizationStreamShift offers three types of migration:Lift and Shift with Ongoing SynchronizationThis has two phases.First, selected tables (minus foreign keys) and existing data are copied from the source database to the target database using JDBC. This is a one-time process. (In the Striim platform, this is called initial load.) Depending on the amount and complexity of data in the source tables, this may take minutes, hours, days, or weeks. StreamShift's assessment report may help estimate how long this process will take. Source data types that are incompatible with the target will be converted or omitted. After all data has been copied to the target, foreign keys may be applied to the target tables.At the same time StreamShift starts initial load, it starts capturing insert, update, and delete operations in the source database using change data capture (CDC) and stores those events in the integrated Kafka instance. When initial load is complete, StreamShift starts applying the captured change data to the target database. This ongoing synchronization picks up where initial load stopped, and there should be no missing or duplicate transactions.
Synchronization continues until you stop the migration manually.Lift and Shift onlyIf the source and target databases are of the same type (for example, on-premise Oracle to Amazon RDS for Oracle, or on-premise SQL Server to Azure SQL Managed Instance), StreamShift will use the database's native utilities to copy the entire selected database, typically including all tables and most if not all other objects such as triggers, stored procedures, and privileges.If the source and target databases are of different types (for example, on-premise Oracle to Amazon RDS for PostgreSQL, or on-premise SQL Server to Google Cloud SQL for MySQL), selected tables and their data are copied from the source to the target database as for the first phase of Lift and Shift with Ongoing Synchronization.Ongoing Synchronization onlyInsert, update, and delete operations in the source database are replicated in the target database as for the second phase of Lift and Shift with Ongoing Synchronization.\u00a9 2021 Striim, Inc. Publication date: \n", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/lift-and-shift-versus-ongoing-synchronization.html", "title": "Lift and Shift versus Ongoing Synchronization", "language": "en"}} {"page_content": "\n\nSupported sources and targetsStreamShift currently supports the following:sourcetargetnotesAzure Cosmos DBxprovisioned throughput only (see How to choose between provisioned throughput and serverless)Maria DBMariaDB on-premisexxAmazon RDS for MariaDBxxMySQLMySQL on-premisexxAmazon Aurora with MySQL compatibilityxxAzure Database for MySQLxxAmazon RDS for MySQLxxCloud SQL for MySQLxOracleOracle on-premisexxAmazon RDS for OraclexxPostgreSQLPostgreSQL on-premisexxAmazon Aurora with PostgreSQL compatibilityxxAmazon RDS for PostgreSQLxxAzure Database for PostgreSQL Flexible ServerxxAzure Database for PostgreSQL Hyperscalexdoes not support change data capture so is not supported as a sourceAzure Database for PostgreSQL Single ServerxxCloud SQL for PostgreSQLxxSQL ServerSQL
Server on-premisexxAmazon RDS for SQL ServerxxAzure SQL Databasexdoes not support change data capture so is not supported as a sourceAzure SQL Managed InstancexxIn this section: Search resultsNo results foundWas this helpful?YesNoWould you like to provide feedback? Just click here to suggest edits.PrevNext \u00a9 2021 Striim, Inc. Publication date: \n", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/supported-sources-and-targets.html", "title": "Supported sources and targets", "language": "en"}} {"page_content": "\n\nStreamShift workflow overviewSkip to main contentToggle navigationToggle navigation Getting Started with StreamShiftUnderstanding database migration and replicationLift and Shift versus Ongoing SynchronizationSupported sources and targetsStreamShift workflow overview Prerequisite setup for sources and targetsConnecting with sources and targets over the internetUsing an SSH tunnel to connect to a source or targetCosmos DB setupMySQL / MariaDB setupOracle setupPostgreSQL setupSQL Server setup Migrating a database with StreamShiftSubscribe to StreamShift in the AWS MarketplaceSubscribe to StreamShift in the Microsoft Azure MarketplaceSubscribe to StreamShift in the Google Cloud MarketplaceCreate a StreamShift serviceCreate a StreamShift projectChoose your migration typeSelect and connect to your source databaseConfigure SSLSelect and connect to your target databaseSelect what to migrate Understanding the assessment and compatibility reportsHow the assessment score is calculatedHow the compatibility score is calculatedCustomize the migration and migrate the schemaMigrate the dataStop migrationAdding users to a StreamShift serviceMonitoring a StreamShift serviceStreamShift 1.0.0 release notesContact StreamShift supportStreamShift DocumentationprintToggle navigationStreamShift DocumentationGetting Started with StreamShiftStreamShift workflow overviewPrevNextStreamShift workflow overviewA StreamShift migration has four major phases:Configure: Select the migration type and specify the source and target connection details.Assess: View StreamShift's assessment of the scope of the migration and any compatibility issues.Customize: Map the source schemas (in Oracle, PostgreSQL, or SQL Server) or databases (in MariaDB or MySQL) to the target schemas or databases and address any incompatibilities.Migrate: Migrate the schema and data.For a detailed description of this workflow, see Prerequisite setup for sources and targets and Migrating a database with StreamShift.In this section: Search resultsNo results foundWas this helpful?YesNoWould you like to provide feedback? Just click here to suggest edits.PrevNext \u00a9 2021 Striim, Inc. 
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/streamshift-workflow-overview.html", "title": "StreamShift workflow overview", "language": "en"}} {"page_content": "\n\nPrerequisite setup for sources and targets
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/prerequisite-setup-for-sources-and-targets.html", "title": "Prerequisite setup for sources and targets", "language": "en"}} {"page_content": "\n\nConnecting with sources and targets over the internet

There are several ways to connect with sources and targets over the internet.

... using cloud provider keys

Some cloud sources and targets, such as Cosmos DB, secure their connections using keys. No additional configuration is required on your part; you simply provide the appropriate key in the source or target properties.

... using an SSH tunnel

See Using an SSH tunnel to connect to a source or target.

... by adding an inbound port rule to your firewall or cloud security group

In your subscription, go to the Services page.
Next to the service, click ... and select Security.
Copy StreamShift's IP address.
In the firewall or cloud security group for your source or target, create an inbound port rule for that IP address and the port for your database (typically 3306 for MariaDB or MySQL, 1521 for Oracle, 5432 for PostgreSQL, or 1433 for SQL Server).

... using port forwarding

In your router configuration, create a port forwarding rule for your database's port. If supported by your router, allow connections only from StreamShift's IP address (which you can get as described above) and forward them to your database's IP address.
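For example, if the source sits behind an AWS security group, the inbound rule could be added with the AWS CLI roughly as follows (a minimal sketch; the security group ID, the StreamShift IP address 203.0.113.10, and the MySQL port are placeholder values):

# Allow StreamShift's IP address to reach a MySQL source on port 3306.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --cidr 203.0.113.10/32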
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/connecting-with-sources-and-targets-over-the-internet.html", "title": "Connecting with sources and targets over the internet", "language": "en"}} {"page_content": "\n\nUsing an SSH tunnel to connect to a source or target

Note: This feature is available only in Striim Cloud, not in Striim Platform.

When you need to connect to a source or target through a jump server, set up an SSH tunnel as follows.

In the Striim Cloud Console, go to the Services page. If the service is not running, start it and wait for its status to change to Running.
Next to the service, click ... and select Security.
Click Create New Tunnel and enter the following:
Name: choose a descriptive name for this tunnel
Jump Host: the IP address or DNS name of the jump server
Jump Host Port: the port number for the tunnel
Jump Host Username: the jump host operating system user account that Striim Cloud will use to connect
Database Host: the IP address or DNS name of the source or target database
Database Port: the port for the database
Click Create Tunnel. Do not click Start yet.
Under Public Key, click Get Key > Copy Key.
Add the copied key to your jump server's authorized keys file, and give the user specified for Jump Host Username the necessary file system permissions to access the key. Then return to the Striim Cloud Security page and click Start. The SSH tunnel will now be usable in the source or target settings.
Under Tunnel Address, click Copy to get the string to provide as the host name.
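For reference, the managed tunnel behaves like a standard OpenSSH local port forward. A rough equivalent from a client machine would look like this (illustrative only; the host names, user, and ports are placeholders, and Striim Cloud manages its own tunnel and key for you):

# Forward local port 15432 through the jump server to a PostgreSQL host.
ssh -N -L 15432:db.internal.example.com:5432 tunneluser@jump.example.com -p 22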
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/using-an-ssh-tunnel-to-connect-to-a-source-or-target.html", "title": "Using an SSH tunnel to connect to a source or target", "language": "en"}} {"page_content": "\n\nCosmos DB setup

Cosmos DB is supported only as a target and only using the Core (SQL) API with provisioned throughput (see How to choose between provisioned throughput and serverless).

In the Data Explorer for the target Cosmos DB account, click New Database. For simplicity, you may wish to use the source database's name as the Cosmos DB database ID.
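If you prefer the command line to the Data Explorer, the same database can be created with the Azure CLI, for example (a sketch with placeholder account, resource group, database name, and throughput values):

# Create a Core (SQL) API database with provisioned throughput in an existing Cosmos DB account.
az cosmosdb sql database create \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --name mysourcedb \
  --throughput 400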
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/cosmos-db-setup.html", "title": "Cosmos DB setup", "language": "en"}} {"page_content": "\n\nMySQL / MariaDB setup

Source setup

If the source and target are both MySQL and you are doing a "Lift and Shift only" migration, create a user for use by Striim and grant it SELECT privileges on all tables to be migrated.

In all other cases (including when the source and target are both MariaDB), set up the source as described in MySQL / MariaDB setup.

Target setup

Create the target database(s).

Create a CHKPOINT table in one of the target databases:

CREATE TABLE CHKPOINT (
  id VARCHAR(100) PRIMARY KEY,
  sourceposition BLOB,
  pendingddl BIT(1),
  ddl LONGTEXT);

Create a user for use by Striim as follows, replacing ******** with a strong password. You may use any user name you wish.

CREATE USER 'striim'@'%' IDENTIFIED BY '********';
GRANT ALTER, CREATE, DELETE, DROP, INSERT, SELECT ON *.* TO 'striim'@'%';

If the source and target are both MySQL, add these additional privileges:

GRANT ALTER ROUTINE, CREATE ROUTINE, CREATE VIEW, EVENT, LOCK TABLES, REFERENCES, RELOAD,
  REPLICATION CLIENT, TRIGGER, UPDATE ON *.* TO 'striim'@'%';
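As a quick sanity check (not required by StreamShift), you can confirm the account and its privileges from any MySQL or MariaDB client; the host and admin account below are placeholders:

# Show the privileges granted to the striim account on the target server.
mysql -h target-host.example.com -u admin -p -e "SHOW GRANTS FOR 'striim'@'%';"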
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/mysql---mariadb-setup.html", "title": "MySQL / MariaDB setup", "language": "en"}} {"page_content": "\n\nOracle setup

Source setup

If the source and target are both Oracle and you are doing a "Lift and Shift only" migration, create a user for use by Striim and grant it SELECT privileges on all tables to be migrated.

In all other cases, set up the source as described in Basic Oracle configuration tasks, Creating an Oracle user with LogMiner privileges, and Creating the quiescemarker table. If your source database is a PDB, also grant the Oracle user the ALTER SESSION privilege.

Target setup

Create the target schema(s).

Create a CHKPOINT table in one of the target schemas:

CREATE TABLE CHKPOINT (
  ID VARCHAR2(100) PRIMARY KEY,
  SOURCEPOSITION BLOB,
  PENDINGDDL NUMBER(1),
  DDL CLOB);

Create a role and user for use by Striim as follows, replacing ******** with a strong password. You may use any role or user name you wish.

CREATE ROLE STRIIM_PRIVS;
GRANT CREATE SESSION TO STRIIM_PRIVS;
GRANT CREATE USER TO STRIIM_PRIVS;
GRANT CREATE ANY TABLE TO STRIIM_PRIVS;
GRANT SELECT ANY TABLE TO STRIIM_PRIVS;
GRANT ALTER ANY TABLE TO STRIIM_PRIVS;
GRANT INSERT ANY TABLE TO STRIIM_PRIVS;
GRANT UPDATE ANY TABLE TO STRIIM_PRIVS;
GRANT DROP ANY TABLE TO STRIIM_PRIVS;
GRANT CREATE ANY INDEX TO STRIIM_PRIVS;
GRANT UNLIMITED TABLESPACE TO STRIIM_PRIVS WITH ADMIN OPTION;
GRANT SELECT_CATALOG_ROLE TO STRIIM_PRIVS;
CREATE USER STRIIM IDENTIFIED BY ******** DEFAULT TABLESPACE USERS;
GRANT STRIIM_PRIVS TO STRIIM;

If the source is also Oracle, add these additional privileges:

GRANT SELECT ANY DICTIONARY TO STRIIM_PRIVS;
GRANT DATAPUMP_IMP_FULL_DATABASE TO STRIIM_PRIVS;
GRANT EXECUTE ANY PROCEDURE TO STRIIM_PRIVS;
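To verify the grants took effect, you can query the data dictionary from SQL*Plus or any other Oracle client (a sanity check only; the admin credentials and connect string are placeholders):

# List the system privileges held by STRIIM_PRIVS and confirm the STRIIM user has the role.
sqlplus -S admin/adminpassword@targetdb <<'EOF'
SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'STRIIM_PRIVS';
SELECT GRANTED_ROLE FROM DBA_ROLE_PRIVS WHERE GRANTEE = 'STRIIM';
EXIT;
EOF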
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/oracle-setup.html", "title": "Oracle setup", "language": "en"}} {"page_content": "\n\nPostgreSQL setup

Source setup

If the source and target are both PostgreSQL and you are doing a "Lift and Shift only" migration, create a user for use by Striim and grant it SELECT privileges on all tables to be migrated.

In all other cases, set up the source as described in PostgreSQL setup.

Target setup

If you are migrating only one schema and plan to use the default public schema, skip this step. Otherwise, create the target schema(s).

Create a CHKPOINT table in one of the target schemas:

create table chkpoint (
  id character varying(100) primary key,
  sourceposition bytea,
  pendingddl numeric(1),
  ddl text);

Create a role for use by Striim as follows, replacing ******** with a strong password. You may use any role name you wish.

CREATE ROLE striim WITH LOGIN PASSWORD '********';
GRANT CONNECT ON DATABASE <database name> TO striim;
GRANT CREATE ON DATABASE <database name> TO striim;
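To confirm the role can both connect to and create objects in the target database, you can run a quick check with psql (host, admin user, and database name are placeholders):

# Verify the striim role's CONNECT and CREATE privileges on the target database.
psql -h target-host.example.com -U admin -d mydb \
  -c "SELECT has_database_privilege('striim','mydb','CONNECT') AS can_connect, has_database_privilege('striim','mydb','CREATE') AS can_create;"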
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/postgresql-setup.html", "title": "PostgreSQL setup", "language": "en"}} {"page_content": "\n\nSQL Server setup

Source setup

If the source and target are both SQL Server and you are doing a "Lift and Shift only" migration, create a user for use by Striim and grant it SELECT permission on all tables to be migrated.

In all other cases, set up the source as described in SQL Server setup or, if the source is in an Azure virtual machine, Configuring an Azure virtual machine running SQL Server.

Target setup

If you are migrating only one schema and plan to use the default dbo schema, skip this step. Otherwise, create the target schema(s).

Create a CHKPOINT table in one of the target schemas:

CREATE TABLE CHKPOINT (
  id VARCHAR(100) PRIMARY KEY,
  sourceposition VARBINARY(MAX),
  pendingddl BIT,
  ddl VARCHAR(MAX));

Create a user for use by Striim as follows, replacing ******** with a strong password. You may use any user name you wish.

USE <database name>;
CREATE LOGIN striim WITH PASSWORD = '********';
CREATE USER striim FOR LOGIN striim;
EXEC sp_addrolemember @rolename = 'db_owner', @membername = 'striim';
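To confirm the membership, you can query the target database from sqlcmd or any other client (server, database, and admin credentials are placeholders):

# Returns 1 if the striim user is a member of db_owner in the target database.
sqlcmd -S target-host.example.com -d mydb -U admin -P 'adminpassword' \
  -Q "SELECT IS_ROLEMEMBER('db_owner', 'striim') AS is_db_owner;"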
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/sql-server-setup.html", "title": "SQL Server setup", "language": "en"}} {"page_content": "\n\nMigrating a database with StreamShift
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/migrating-a-database-with-streamshift.html", "title": "Migrating a database with StreamShift", "language": "en"}} {"page_content": "\n\nSubscribe to StreamShift in the AWS Marketplace

In the AWS Marketplace, search for StreamShift by Striim and click it.
Click View purchase options.
Select how long
you want your contract to run, whether to automatically renew it, and which plan you want, then click Create contract > Pay now > Set up your account.
In the Sign up for Striim Cloud dialog, enter your name, email address, company name, your desired sub-domain (part of the URL where you will access Striim Cloud), and password, then click Sign up.
When you receive the Activate your account email, open it and click the activation link.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/subscribe-to-streamshift-in-the-aws-marketplace.html", "title": "Subscribe to StreamShift in the AWS Marketplace", "language": "en"}} {"page_content": "\n\nSubscribe to StreamShift in the Microsoft Azure Marketplace

In the Azure Marketplace, search for StreamShift and click it.
Click Get It Now, check the box to accept Microsoft's terms, and click Continue.
Select a plan, then click Subscribe.
Select one of your existing resource groups or create a new one, enter a name for this subscription, and click Review + subscribe.
Click Subscribe.
When you receive an "Activate your Striim Cloud Enterprise" email from Microsoft AppSource, open it and click Activate now.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/subscribe-to-streamshift-in-the-microsoft-azure-marketplace.html", "title": "Subscribe to StreamShift in the Microsoft Azure Marketplace", "language": "en"}} {"page_content": "\n\nSubscribe to StreamShift in the Google Cloud Marketplace

In the Google Cloud Platform Marketplace, search for StreamShift and click it.
Scroll down to Pricing, select a plan, and click Select.
Scroll down to Additional terms, check to accept them all, and click Subscribe.
Click Register with Striim Inc., then follow the instructions to complete registration. Make a note of the domain and password you enter.
When you receive the "Striim for BigQuery | Activate your account" email, open it and click the activation link.
Enter your email address and password, then click Sign up.
You will receive another email with information about your subscription.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/subscribe-to-streamshift-in-the-google-cloud-marketplace.html", "title": "Subscribe to StreamShift in the Google Cloud Marketplace", "language": "en"}} {"page_content": "\n\nCreate a StreamShift service

On the Services page, click Create new, then under StreamShift click Create.
Enter a name for your service, then click Create.
Your new service will not be usable until its status is Running.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/create-a-streamshift-service.html", "title": "Create a StreamShift service", "language": "en"}} {"page_content": "\n\nCreate a StreamShift project

On the Services page, click Launch for your service, then click Create Project.
Enter a name for your project, then click Create.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/create-a-streamshift-project.html", "title": "Create a StreamShift project", "language": "en"}} {"page_content": "\n\nChoose your migration type

See Lift and Shift versus Ongoing Synchronization. For most customers, the right choice is Lift and Shift with Ongoing Synchronization. After you have made your choice, click Next.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/choose-your-migration-type.html", "title": "Choose your migration type", "language": "en"}} {"page_content": "\n\nSelect and connect to your source database

After you create a new project, StreamShift will prompt you to select your source database type.

If your source is running on your own hardware or your own virtual machine, select MariaDB, MySQL, Oracle, PostgreSQL, or SQL Server. If your source is in Amazon AWS, Microsoft Azure, or Google Cloud, select AWS, Azure, or Google, then the database type. For AWS, note that there are both Aurora and RDS versions of MySQL and PostgreSQL.

StreamShift will then prompt you to enter the connection details, including:

the host name, IP address, or SSH tunnel name (see Connecting with sources and targets over the internet); if you are not using an SSH tunnel, StreamShift must be able to connect to the host over the public Internet, for example via port forwarding or IP allowlisting
the port for the database host (not for the SSH tunnel)
for Oracle, the SID or service name of the schema(s) to be migrated
for PostgreSQL or SQL Server, the name of the database containing the schemas to be migrated
the username and password (see Prerequisite setup for sources and targets)

If the connection will use SSL, check Use SSL. See Configure SSL for more information.

When done, click Next. If StreamShift is unable to connect to the source, correct any settings or perform any prerequisite tasks (see Prerequisite setup for sources and targets) as necessary, then click Next to try again. When you see the source profile, click Next.
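Before clicking Next, it can save a round trip to confirm from a machine outside your network that the database port is reachable and that the Striim account's credentials work. For a MySQL source, for example (host, port, and account are placeholders):

# Check that the port is reachable, then verify the striim account can log in.
nc -vz source-db.example.com 3306
mysql -h source-db.example.com -P 3306 -u striim -p -e "SELECT 1;"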
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/select-and-connect-to-your-source-database.html", "title": "Select and connect to your source database", "language": "en"}} {"page_content": "\n\nConfigure SSL

If you select Use SSL in the source or target properties, set the options as follows.

Caution: In all cases, replace mypassword with a secure password.

... for Amazon RDS for MariaDB

Using SSL certificates is optional.

1. Download the root certificate rds-ca-2019-root.pem.

2. Import that certificate into a custom Java truststore file:

keytool -importcert -alias MariaCACert -file rds-ca-2019-root.pem \
  -keystore clientkeystore.jks -storepass mypassword

3. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
Verify Server Certificate - check this box to set it to true (when true, all of the SSL certificates mentioned below are verified while establishing the connection)
Trust Certificate Key Store Url - upload the clientkeystore.jks file created in step 2
Trust Certificate Key Store Type - the store type used in step 2 (for example, JKS)
Trust Certificate Key Store Password - the value specified for -storepass in step 2
Trust Certificate - if you selected Lift and Shift only and the source and target are both MariaDB, upload the rds-ca-2019-root.pem root certificate downloaded in step 1; otherwise leave this blank
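To confirm the import worked before uploading the file, you can list the truststore's contents (an optional check, using the same placeholder password):

# List the certificate entries in the truststore created above.
keytool -list -keystore clientkeystore.jks -storepass mypassword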
... for MariaDB on premise

Using SSL certificates is optional.

1. Import the certificate (must be in .pem format) into a custom Java truststore file:

keytool -importcert -alias MariaCACert -file server-ca.pem \
  -keystore truststore.jks -storepass mypassword

2. Client certificate settings. Convert the client key and certificate files to PKCS#12 before creating a keystore:

openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
  -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12

Then create a Java keystore using the client-keystore.p12 file:

keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
  -srcstorepass mypassword -destkeystore keystore.jks -deststoretype JKS \
  -deststorepass mypassword

3. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
Verify Server Certificate - check this box to set it to true (when true, all of the SSL certificates mentioned below are verified while establishing the connection)
Trust Certificate Key Store Url - upload the truststore.jks file created in step 1
Trust Certificate Key Store Type - the store type used in step 1 (for example, JKS)
Trust Certificate Key Store Password - the value specified for -storepass in step 1
Client Certificate Key Store Url - upload the keystore.jks file created in step 2
Client Certificate Key Store Type - the store type used in step 2 (for example, JKS)
Client Certificate Key Store Password - the value specified for -deststorepass in step 2

If you selected Lift and Shift only and the source and target are both MariaDB, set these additional properties:

Trust Certificate - upload server-ca.pem
Client Certificate - upload client-cert.pem
Client Certificate Key - upload client-key.pem

... for Amazon RDS for MySQL

Using SSL certificates is optional.

1. Download the root certificate rds-ca-2019-root.pem.

2. Import the certificate into a custom Java truststore file:

keytool -importcert -alias MySQLCACert -file rds-ca-2019-root.pem \
  -keystore clientkeystore.jks -storepass mypassword

3. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
Verify Server Certificate - check this box to set it to true (when true, all of the SSL certificates mentioned below are verified while establishing the connection)
Trust Certificate Key Store Url - upload the clientkeystore.jks file created in step 2
Trust Certificate Key Store Type - the store type used in step 2 (for example, JKS)
Trust Certificate Key Store Password - the value specified for -storepass in step 2
Trust Certificate - if you selected Lift and Shift only and the source and target are both MySQL, upload the rds-ca-2019-root.pem certificate downloaded in step 1; otherwise leave this blank
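For the on-premise flows above that go through client-keystore.p12, inspecting the intermediate PKCS#12 file is a quick way to confirm the client key and certificate were exported correctly (same placeholder password):

# Summarize the contents of the PKCS#12 bundle without printing key material.
openssl pkcs12 -info -in client-keystore.p12 -passin pass:mypassword -noout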
... for Azure Database for MySQL

Using SSL certificates is optional.

1. Download the BaltimoreCyberTrustRoot.crt.pem and DigiCertGlobalRootG2.crt.pem certificates.

2. Create the truststore file:

keytool -importcert -alias MySQLServerCACert -file /path...../BaltimoreCyberTrustRoot.crt.pem \
  -keystore truststore.jks -storepass password -noprompt
keytool -importcert -alias MySQLServerCACert2 -file /path...../DigiCertGlobalRootG2.crt.pem \
  -keystore truststore.jks -storepass password -noprompt

3. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
Verify Server Certificate - check this box to set it to true (when true, all of the SSL certificates mentioned below are verified while establishing the connection)
Trust Certificate Key Store Url - upload the truststore.jks file created in step 2
Trust Certificate Key Store Type - the store type used in step 2 (for example, JKS)
Trust Certificate Key Store Password - the value specified for -storepass in step 2
Trust Certificate - if you selected Lift and Shift only and the source and target are both MySQL, upload the BaltimoreCyberTrustRoot.crt.pem certificate downloaded in step 1; otherwise leave this blank

... for Google Cloud SQL for MySQL

Using SSL certificates is optional.

1. Download server-ca.pem, client-cert.pem, and client-key.pem from GCP.

2. Import the server certificate into a custom Java truststore file:

keytool -importcert -alias MySQLCACert -file server-ca.pem -keystore truststore.jks \
  -storepass mypassword

3. Client certificate settings. Convert the client key and certificate files to PKCS#12:

openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
  -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12

Then create a Java keystore using the client-keystore.p12 file:

keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
  -srcstorepass mypassword -destkeystore keystore.jks \
  -deststoretype JKS -deststorepass mypassword

4. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
Verify Server Certificate - check this box to set it to true (when true, all of the SSL certificates mentioned below are verified while establishing the connection)
Trust Certificate Key Store Url - upload the truststore.jks file created in step 2
Trust Certificate Key Store Type - the store type used in step 2 (for example, JKS)
Trust Certificate Key Store Password - the value specified for -storepass in step 2
Client Certificate Key Store Url - upload the keystore.jks file created in step 3
Client Certificate Key Store Type - the store type used in step 3 (for example, JKS)
Client Certificate Key Store Password - the value specified for -deststorepass in step 3

If you selected Lift and Shift only and the source and target are both MySQL, set these additional properties:

Trust Certificate - upload server-ca.pem
Client Certificate - upload client-cert.pem
Client Certificate Key - upload client-key.pem
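After setting these properties, you can independently confirm that the database accepts TLS connections with the same certificate material, for example with the mysql client (the host address and account are placeholders):

# Connect over TLS using the downloaded PEM files and show the negotiated cipher.
mysql -h 10.0.0.5 -P 3306 -u striim -p \
  --ssl-ca=server-ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem \
  -e "SHOW STATUS LIKE 'Ssl_cipher';"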
... for MySQL on premise

Using SSL certificates is optional.

1. Import the certificate (must be in .pem format) into a custom Java truststore file:

keytool -importcert -alias MariaCACert -file server-ca.pem \
  -keystore truststore.jks -storepass mypassword

2. Convert the client key and certificate files to PKCS#12:

openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
  -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12

3. Create a Java keystore using the client-keystore.p12 file:

keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
  -srcstorepass mypassword -destkeystore keystore.jks \
  -deststoretype JKS -deststorepass mypassword

4. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
Verify Server Certificate - check this box to set it to true (when true, all of the SSL certificates mentioned below are verified while establishing the connection)
Trust Certificate Key Store Url - upload the truststore.jks file created in step 1
Trust Certificate Key Store Type - the store type used in step 1 (for example, JKS)
Trust Certificate Key Store Password - the value specified for -storepass in step 1
Client Certificate Key Store Url - upload the keystore.jks file created in step 3
Client Certificate Key Store Type - the store type used in step 3 (for example, JKS)
Client Certificate Key Store Password - the value specified for -deststorepass in step 3

If you selected Lift and Shift only and the source and target are both MySQL, set these additional properties:

Trust Certificate - upload server-ca.pem
Client Certificate - upload client-cert.pem
Client Certificate Key - upload client-key.pem

... for Amazon RDS for Oracle

An SSL certificate is required.

1. Download the root certificate rds-ca-2019-root.pem.

2. Import the certificate into a custom Java truststore file:

keytool -importcert -alias OracleCACert -file rds-ca-2019-root.pem \
  -keystore clientkeystore.jks -storepass mypassword

3. In the StreamShift SSL UI:

SSL - check this box to set it to true
Trust store - upload the clientkeystore.jks file created in step 2
Trust store type - JKS
Trust store password - the value specified for -storepass in step 2
Trust Certificate - if you selected Lift and Shift only and the source and target are both Oracle, upload the rds-ca-2019-root.pem root certificate downloaded in step 1; otherwise leave this blank
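Before importing rds-ca-2019-root.pem, it can be worth confirming that the downloaded file is a valid PEM certificate and checking its validity period (purely a sanity check):

# Print the subject and validity dates of the downloaded RDS root certificate.
openssl x509 -in rds-ca-2019-root.pem -noout -subject -dates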
... for Oracle on premise

An SSL certificate is required.

1. Import the certificate (in .pem format) into a custom Java truststore file:

keytool -importcert -alias OracleCACert -file server-ca.pem \
  -keystore truststore.jks -storepass mypassword

2. Convert the client key and certificate files to PKCS#12 before creating a keystore:

openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
  -name "mysqlclient" -passout pass:mypassword -out client-keystore.p12

3. Create a Java keystore using the client-keystore.p12 file:

keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
  -srcstorepass mypassword -destkeystore keystore.jks -deststoretype JKS \
  -deststorepass mypassword

4. In the StreamShift SSL UI:

SSL - check this box to set it to true
Trust store - upload the truststore.jks file created in step 1
Trust store type - the store type used in step 1 (for example, JKS)
Trust store password - the value specified for -storepass in step 1
Key Store - upload the keystore.jks file created in step 3
Key Store Type - the store type used in step 3 (for example, JKS)
Key Store Password - the value specified for -deststorepass in step 3

Additional properties for Lift and Shift only from Oracle to Oracle (homogeneous migration):

... for Amazon RDS for PostgreSQL

Using SSL certificates is optional.

1. Download the root certificate rds-ca-2019-root.pem.

2. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
SSL Mode - disable, allow, prefer, require, or verify-ca, based on the type of encryption and validation required (verify-full is not supported)
SSL Root Certificate - upload the root certificate downloaded in step 1

... for Azure Database for PostgreSQL

Using SSL certificates is optional.

1. Download the BaltimoreCyberTrustRoot.crt.pem certificate.

2. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
SSL Mode - disable, allow, prefer, require, or verify-ca, based on the type of encryption and validation required (verify-full is not supported)
SSL Root Certificate - upload the root certificate downloaded in step 1

... for Google Cloud SQL for PostgreSQL

Using SSL certificates is optional.

1. Download server-ca.pem, client-cert.pem, and client-key.pem from GCP.

2. Convert client-key.pem to .pk8 format:

openssl pkcs8 -topk8 -inform PEM -outform DER -in client-key.pem -out client.root.pk8 \
  -nocrypt

3. In the StreamShift SSL UI:

SSL - check this box to set it to true (the client must set this property in order to use encrypted connections)
SSL Mode - disable, allow, prefer, require, or verify-ca, based on the type of encryption and validation required (verify-full is not supported)
SSL Certificate - upload the client-cert.pem certificate downloaded in step 1
SSL Certificate Key (in .pk8 format) - upload the client.root.pk8 key file created in step 2
SSL Root Certificate - upload the server-ca.pem root certificate downloaded in step 1
Client Certificate Key - if you selected Lift and Shift only and the source and target are both PostgreSQL, upload the client-key.pem file downloaded in step 1; otherwise leave this blank
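After the properties are set, a direct TLS connection test against the database with the same files can help separate certificate problems from StreamShift configuration problems, for example with psql (the host and database are placeholders; psql itself uses the PEM key rather than the .pk8 file):

# Connect over TLS, verifying the server certificate against server-ca.pem.
psql "host=10.0.0.5 port=5432 dbname=mydb user=striim sslmode=verify-ca sslrootcert=server-ca.pem sslcert=client-cert.pem sslkey=client-key.pem" -c "SELECT 1;"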
... for PostgreSQL on premise
Using SSL certificates is optional.
1. Convert client-key.pem to .pk8 format:
openssl pkcs8 -topk8 -inform PEM -outform DER -in client-key.pem -out client.root.pk8 -nocrypt
2. In the StreamShift SSL UI:
SSL - check this box to set it to true (the client must set this property in order to use encrypted connections).
SSL Mode - disable, allow, prefer, require, or verify-ca, to match the type of encryption and validation required (verify-full is not supported).
SSL Certificate - upload the SSL certificate client-cert.pem that you created.
SSL Certificate Key (in .pk8 format) - upload the certificate key file client.root.pk8 created in step 1.
SSL Root Certificate - upload the root certificate server-ca.pem that you created.
Client Certificate Key - if you selected Lift and Shift only and the source and target are both PostgreSQL, upload the client-key.pem file. Otherwise leave blank.
... for Amazon RDS for SQL Server
Using SSL certificates is optional.
1. Download the root certificate rds-ca-2019-root.pem.
2. Import that certificate into a custom Java truststore file:
keytool -importcert -alias MSSQLCACert -file rds-ca-2019-root.pem -keystore clientkeystore.jks -storepass mypassword
3. In the StreamShift SSL UI:
SSL - check this box to set it to true (the client must set this property in order to use encrypted connections).
Use Trust Server Certificate - check this box to set it to true.
Integrated Security - check this box to set it to true. When set to true, SQL Server on Windows uses Windows credentials: the JDBC driver searches the local computer credential cache for credentials that were provided when a user signed in to the computer or network. When set to false, the username and password must be supplied.
Trust Store - upload the trust store file clientkeystore.jks created in step 2.
Trust Store Password - the value specified for -storepass in step 2.
Certificate Host Name - the host name of the server, used to validate the SQL Server TLS/SSL certificate (for example, *.database.windows.net).
... for Azure SQL Database or Azure SQL Managed Instance
Using SSL certificates is optional.
1. Download the BaltimoreCyberTrustRoot.crt.pem and DigiCertGlobalRootG2.crt.pem certificates.
2. Create the truststore file by importing both certificates:
keytool -importcert -alias MSSQLServerCACert -file /path...../BaltimoreCyberTrustRoot.crt.pem -keystore truststore.jks -storepass password -noprompt
keytool -importcert -alias MSSQLServerCACert2 -file /path...../DigiCertGlobalRootG2.crt.pem -keystore truststore.jks -storepass password -noprompt
3. In the StreamShift SSL UI:
SSL - check this box to set it to true (the client must set this property in order to use encrypted connections).
Use Trust Server Certificate - check this box to set it to true.
Integrated Security - check this box to set it to true. When set to true, SQL Server on Windows uses Windows credentials: the JDBC driver searches the local computer credential cache for credentials that were provided when a user signed in to the computer or network. When set to false, the username and password must be supplied.
Trust Store - upload the trust store file truststore.jks created in step 2.
Trust Store Password - the value specified for -storepass in step 2.
Certificate Host Name - the host name of the server, used to validate the SQL Server TLS/SSL certificate (for example, *.database.windows.net).
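To optionally confirm that both Azure root certificates were imported, list the truststore and check for the two aliases used above. This is an illustrative check, not a StreamShift step.
# Should print one line per imported certificate (aliases mssqlservercacert and mssqlservercacert2).
keytool -list -keystore truststore.jks -storepass password | grep -i mssqlservercacert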
... for Google Cloud SQL for SQL Server
Using SSL certificates is optional.
1. Download server-ca.pem from GCP.
2. Import the certificate into a custom Java truststore file:
keytool -importcert -alias MSSQLCACert -file server-ca.pem -keystore truststore.jks -storepass mypassword
3. In the StreamShift SSL UI:
SSL - check this box to set it to true (the client must set this property in order to use encrypted connections).
Use Trust Server Certificate - check this box to set it to true.
Integrated Security - check this box to set it to true. When set to true, SQL Server on Windows uses Windows credentials: the JDBC driver searches the local computer credential cache for credentials that were provided when a user signed in to the computer or network. When set to false, the username and password must be supplied.
Trust Store - upload the trust store file truststore.jks created in step 2.
Trust Store Password - the value specified for -storepass in step 2.
Certificate Host Name - the host name of the server, used to validate the SQL Server TLS/SSL certificate (for example, *.database.windows.net).
... for SQL Server on premise
Using SSL certificates is optional.
1. Create server-ca.pem.
2. Create the truststore file:
keytool -importcert -alias MSSQLCACert -file server-ca.pem -keystore truststore.jks -storepass mypassword
3. In the StreamShift SSL UI:
SSL - check this box to set it to true (the client must set this property in order to use encrypted connections).
Use Trust Server Certificate - check this box to set it to true.
Integrated Security - check this box to set it to true. When set to true, SQL Server on Windows uses Windows credentials: the JDBC driver searches the local computer credential cache for credentials that were provided when a user signed in to the computer or network. When set to false, the username and password must be supplied.
Trust Store - upload the trust store file truststore.jks created in step 2.
Trust Store Password - the value specified for -storepass in step 2.
Certificate Host Name - the host name of the server, used to validate the SQL Server TLS/SSL certificate (for example, *.database.windows.net).
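The on-premise steps assume you already have the server's CA certificate saved as server-ca.pem. If your export from the server's certificate store is DER-encoded (for example a server-ca.cer file, a name used here only for illustration), one way to convert and inspect it with openssl is:
# Convert a DER-encoded certificate export to the PEM file used in step 2 above.
openssl x509 -inform DER -in server-ca.cer -out server-ca.pem
# Inspect the result to confirm the expected subject and validity dates.
openssl x509 -in server-ca.pem -noout -subject -dates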
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/configure-ssl.html", "title": "Configure SSL", "language": "en"}} {"page_content": "Select and connect to your target database
Select your target database type as described in Select and connect to your source database.
Optionally, to preserve case-sensitive table names in the target, check Retain case sensitivity of object names.
Caution: If you are migrating source tables with the same names except for the case (for example, employees and Employees), you must check Retain case sensitivity of object names to avoid errors in migration.
If your target is Cosmos DB:
For the Service endpoint, enter the URI from your Azure Cosmos DB account's Keys tab.
For the Access key, enter the Primary Key from your Azure Cosmos DB account's Keys tab.
For other targets, enter your connection details as described in Select and connect to your source database.
When you have entered the connection properties, click Next.
If any of the prerequisite checks fail, correct any settings or perform any prerequisite tasks as necessary, then click Next to try again.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/select-and-connect-to-your-target-database.html", "title": "Select and connect to your target database", "language": "en"}} {"page_content": "Select what to migrate
Select the schema(s) or database(s) to migrate, then click Assess schemas.
For PostgreSQL sources, the database may have only the default schema, in which case select public.
For SQL Server sources, the database may have only the default schema, in which case select dbo.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/select-what-to-migrate.html", "title": "Select what to migrate", "language": "en"}} {"page_content": "Understanding the assessment and compatibility reports
After you select the schema(s) to migrate and click Assess, StreamShift analyzes the complexity of the source data and its compatibility with the target database.
The Assessment report may be helpful in planning multiple migrations, or a single migration so large that it may take days or weeks. This report is displayed in the right-hand panel during the Customize phase. See How the assessment score is calculated for more information.
The Compatibility report shows which source tables have data types or other attributes that cannot be read or are incompatible with the target. Click Customize to resolve these issues. See How the compatibility score is calculated for more information.
When the target is Cosmos DB, which does not have data types, there are no incompatible data types, but source tables may still be shown as incompatible because they cannot be read.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/understanding-the-assessment-and-compatibility-reports.html", "title": "Understanding the assessment and compatibility reports", "language": "en"}} {"page_content": "How the assessment score is calculated
In database migration scenarios, the assessment score is primarily relevant for the source database, since the target database is usually new and empty.
The score is calculated from multiple factors, including the type, object structure, schema diversity, cardinality, and data characteristics of the underlying database.
In general, for a given source and target combination, the higher a source database's assessment score, the larger the effort to migrate it compared with a source database that has a lower score.
For example, in an Oracle to PostgreSQL migration with an Azure PostgreSQL target, given two source Oracle databases with the following assessment scores:
Source database SALES: score 50,000
Source database FIN: score 25,000
it would require more effort to migrate database SALES than database FIN using the same migration technique.
As you plan, customize, and carry out your migration activities, you can run multiple assessments and see how this score changes. For example, excluding certain tables or schemas from the migration and rerunning the assessment should lower the assessment score, and in turn the migration effort.
Score calculation
There are various factors to consider when planning a migration.
Specifically, these factors affect the migration's duration, the migration compute resources required, and any special handling needed in the migration application.
When migrating a specific table, StreamShift has identified the following factors that affect the migration:
Table size
Total number of tables
Total number of columns
Total number of rows in a table
Presence of primary and unique keys
Whether the table contains special column types
Rather than looking at these factors individually and evaluating the total cost (in terms of duration and migration resources), StreamShift assigns a score to each factor and presents the sum of all the factors as a single score. A higher assessment score means a higher migration cost: a longer duration and more migration compute resources.
Migration complexity can also be evaluated by comparing a table's assessment score between its source and target database types. A single total score covering factors such as data types, primary keys, and number of columns makes it easier to compute that complexity than evaluating each criterion of a given table separately.
StreamShift's score assignments are documented in the Assessment Calculation Matrix.
Examples
1. Table containing simple data types
Characteristics of the table:
Three columns: ID int, Name VARCHAR2(50), AGE int
ID is the primary key column
Total number of rows: 1,000,000
Total size of the table: 100 MB
From the Assessment Calculation Matrix, the factors associated with the table are:
Factor | Value | Category | Score
Column count | 3 | Narrow table | 100
Size | 100 MB | Small table | 100
Row count | 1,000,000 | Ultra long table | 400
Primary key columns | 1 | | 10
Data types | 3 | Simple | 3
Total | | | 613
2. Table containing complex data types
Characteristics of the table:
Five columns: id int, name VARCHAR2(50), age int, resume BLOB, hiring_date DATE
ID is the primary key column
Total number of rows: 1,000,000
Total size of the table: 10 GB
From the Assessment Calculation Matrix, the factors associated with the table are:
Factor | Value | Category | Score
Column count | 5 | Narrow table | 100
Size | 10 GB | Large table | 10000
Row count | 1,000,000 | Ultra long table | 400
Primary key columns | 1 | | 10
Data types | 3 | Simple | 3
Data types | 2 | Complex (BLOB, DATE) | 13
Total | | | 10526
3. Wide table containing simple and complex data types
Characteristics of the table:
50 columns: 30 of simple data types, 10 of DATE, 10 of XML
Three primary key columns
Total number of rows: 1,000
Total size of the table: 1000 MB
From the Assessment Calculation Matrix, the factors associated with the table are:
Factor | Value | Category | Score
Column count | 50 | Wide table | 200
Size | 1 GB | Medium table | 1000
Row count | 1,000 | Long table | 200
Primary key columns | 3 | | 30
Data types | 30 | Simple | 30
Data types | 10 | Complex (DATE) | 30
Data types | 10 | Complex (XML) | 200
Total | | | 1690
Computing migration compatibility is covered in How the compatibility score is calculated.
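As a quick sanity check, each total above is just the sum of the per-factor scores taken from the Assessment Calculation Matrix. Illustrative arithmetic only:
# Example 1: narrow table, 100 MB, 1,000,000 rows, 1 primary key column, 3 simple data types
echo $(( 100 + 100 + 400 + 10 + 3 ))            # 613
# Example 2: same shape plus a 10 GB size score and two complex columns (BLOB, DATE)
echo $(( 100 + 10000 + 400 + 10 + 3 + 13 ))     # 10526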
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/how-the-assessment-score-is-calculated.html", "title": "How the assessment score is calculated", "language": "en"}} {"page_content": "How the compatibility score is calculated
Migration compatibility (expressed as a percentage) indicates how easy it is to move a given table from one source database type to a target database type.
Take the example of the wide table containing simple and complex data types, and calculate compatibility between an Oracle source and a PostgreSQL target.
Characteristics of the table:
50 columns: 30 columns of simple data types, 10 DATE columns, 10 XML columns
Three primary key columns
Total number of rows: 1,000
Total size of the table: 1000 MB
Analyzing each factor of the table, checking its compatibility with the source and target databases, and adjusting the score based on the supported factors gives the migration compatibility.
Factor | Value | Target (PostgreSQL) compatibility | Oracle score | Target score
Column count | 50 | PostgreSQL supports tables with 50 columns | 200 | 200
Size | 1 GB | PostgreSQL supports tables of 1 GB | 1000 | 1000
Row count | 1,000 | PostgreSQL supports tables with 1,000 rows | 200 | 200
Primary key columns | 3 | PostgreSQL supports tables with primary keys | 30 | 30
Data types | 30 | PostgreSQL supports all the simple data types | 30 | 30
Data types - Date | 10 | PostgreSQL supports the DATE data type | 30 | 30
Data types - XML | 10 | PostgreSQL does not support the XML data type | 200 | -
Total | | | 1690 | 1490
Total migration compatibility (computed as target score / source score) = 1490 / 1690 = approximately 88.2%.
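The percentage is simply the ratio of the two totals. Illustrative arithmetic only:
awk 'BEGIN { printf "compatibility = %.1f%%\n", 100 * 1490 / 1690 }'    # prints compatibility = 88.2%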
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/how-the-compatibility-score-is-calculated.html", "title": "How the compatibility score is calculated", "language": "en"}} {"page_content": "Customize the migration and migrate the schema
If you skipped configuring your target database connection in the Configure phase, click the Configure button to do so now.
If the target schema(s) or database(s) do not have the same names as the source schema(s) or database(s), click Edit and adjust as necessary. When the mapping is correct, click Next.
Note: For PostgreSQL targets, the database may have only the default public schema. For SQL Server targets, the database may have only the default dbo schema.
In this phase, StreamShift shows three categories of compatibility:
Compatible: can be migrated to the target. Optionally, select Edit SQL to modify the DDL for the target, for example if you disagree with the Striim Intelligence data type mapping detailed in the Compatibility report.
Incompatible, Edit SQL enabled: cannot be migrated to the target without modification. For example, StreamShift might not have been able to determine a compatible data type. For these tables, you have the following options:
Select Edit SQL to modify the DDL for the target table, for example to replace <source data type>_STRIIM_UNKNOWN with a compatible target data type.
Select ... > Exclude columns to select incompatible columns to exclude from the migration.
Select ... > Exclude table to omit the table from the migration.
Incompatible, Edit SQL disabled: cannot be migrated to the target. For example, the source table may have attributes that make it incompatible with change data capture. Select ... > Exclude table to omit the table from the migration.
Alternatively, abandon the StreamShift project, alter the source database to eliminate the incompatibility, and start over with a new project.
The process for making foreign keys compatible is similar.
KNOWN ISSUE (SMS-1373): if a table is excluded, foreign keys that require it still show as compatible.
Note: For Cosmos DB targets:
During customization, only Exclude table is available, not Exclude column or Edit SQL.
Foreign keys are not migrated, as they do not exist in Cosmos DB.
StreamShift will create a Cosmos DB database for each schema to be migrated, and a container for each source table to be migrated.
Containers default to Shared throughput. If that is not appropriate for your Cosmos DB account, change it to Autoscale or Manual.
You must define a partition key for each container (see Choosing a partition key).
Primary key updates in the source are handled as a delete followed by an insert in the Cosmos DB target.
When you have dealt with all the incompatibilities, click Migrate Schema. When the migration is complete, StreamShift displays a summary.
KNOWN ISSUE (SMS-1863): StreamShift may attempt to migrate foreign keys in the wrong order, resulting in some that were reported as compatible failing to migrate. If this happens, click Retry all failed schemas.
When schema migration is complete, click Configure CDC Capture.", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/customize-the-migration-and-migrate-the-schema.html", "title": "Customize the migration and migrate the schema", "language": "en"}} {"page_content": "Migrate the data
Typically you will not need to make changes on the Configure CDC Capture page unless instructed to by StreamShift support. Click Configure CDC Apply to continue.
On the Configure CDC Apply page, you may be able to speed up data migration by creating table groups, which allows StreamShift to migrate them in parallel.
For example, if one table contains 50% of the data, moving it to its own table group could reduce migration time by up to half.
Typically you will not need to make other changes on this page unless instructed to by StreamShift support. Click Migrate Data to continue. Since data may be written out of order, to avoid errors StreamShift will not apply the foreign keys yet.
At the same time as StreamShift starts the initial load, it starts capturing insert, update, and delete operations in the source database using change data capture (CDC) and stores those events in the integrated Kafka instance. In the event this data consumes 60% of Kafka's available disk space, you will receive an email alert, and Striim operations staff will expand the virtual disks.
When initial load is complete, StreamShift starts applying the captured change data to the target database. This ongoing synchronization picks up where initial load stopped, and there should be no missing or duplicate transactions. Synchronization continues until you stop the migration manually.
When the migration of existing source data is complete, continuous synchronization from source to target using CDC starts automatically (see Lift and Shift versus Ongoing Synchronization).
At this point, for sources other than SQL Server, you may click Apply Constraints to restore the foreign keys.
If the source is SQL Server, Apply Constraints is disabled, and constraints will be applied when migration is stopped.", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/migrate-the-data.html", "title": "Migrate the data", "language": "en"}} {"page_content": "Stop migration
When you no longer require ongoing synchronization, click Stop Migration. This may take a few minutes. If you did not previously apply constraints, they will be applied.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/stop-migration.html", "title": "Stop migration", "language": "en"}} {"page_content": "Adding users to a StreamShift service
In your Striim subscription, go to the Users page and click Invite User.
Enter the new user's email address, select the appropriate role (see the text of the drop-down for details), and click Save.
The new user will receive an email with a signup link. Once they have signed up, their status will change from Pending to Activated. Once the new user is activated, select ... > Edit, add the service(s) you want them to have access to, and click Save.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/adding-users-to-a-streamshift-service.html", "title": "Adding users to a StreamShift service", "language": "en"}} {"page_content": "Monitoring a StreamShift service
In your subscription, go to the Users page and click ... > Monitoring for the service.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/monitoring-a-streamshift-service.html", "title": "Monitoring a StreamShift service", "language": "en"}} {"page_content": "StreamShift 1.0.0 release notes
The following are limitations in this release:
Make sure there is no active change data flowing from the source before stopping synchronization. Otherwise some events might not be applied to the target.
MySQL source and target: when doing a Lift and Shift only migration between two MySQL databases, triggers are migrated automatically to the target database during schema migration. StreamShift drops the triggers before doing the initial data load and recreates them after initial load. Any triggers that refer to tables in a different database will not be recreated properly after initial load.
Oracle sources: when migrating XMLType columns from Oracle, exclude the column from the migration during the customization process to avoid errors during the data migration phase.
SQL Server source and target: when doing a Lift and Shift only migration between two SQL Server databases where the target is SQL Server 2019, StreamShift may wrongly report that the foreign key migration has failed. This can be ignored; StreamShift actually migrates the foreign keys in this setup.
SQL Server source or target: when both an SSH tunnel and SSL are specified, you may encounter the error Connection Not Made: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: Failed to validate the server name in a certificate during Secure Sockets Layer (SSL) initialization.
The following are known issues for this release:
Oracle sources: StreamShift does not flag schemas or tables with names of more than 30 characters as incompatible (SMS-2233). Names longer than 30 characters are not supported by LogMiner, the Oracle tool StreamShift uses to capture Oracle CDC.
", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/streamshift-1-0-0-release-notes.html", "title": "StreamShift 1.0.0 release notes", "language": "en"}} {"page_content": "Contact StreamShift support
To contact StreamShift support, email streamshift-support@striim.com.", "metadata": {"source": "https://www.striim.com/docs/StreamShift/en/contact-streamshift-support.html", "title": "Contact StreamShift support", "language": "en"}} {"page_content": "About Striim | A \"Best Place to Work\" in the US and in the Bay Area
Powering digital transformation
Striim was founded with the simple goal of helping companies make data useful the instant it's born.
Managing large-scale data is a challenge for every enterprise. Real-time, integrated data is a requirement to stay competitive, but modernizing data architecture can be an overwhelming task. We built Striim to handle the volume, complexity, and velocity of enterprise data by connecting legacy systems to modern cloud applications on a scalable platform. Our customers don't have to pause operations to migrate data or juggle different tools for every data source; they simply connect legacy systems to newer cloud applications and get data streaming in a few clicks. Seamless integrations. Near-perfect performance. Data up to the moment. That's what embracing complexity without sacrificing performance looks like to the enterprise with a modern data stack. Our mission: to power every decision with real-time data.
Meet the Team
Striim was launched by executive and technical members of pioneering organizations like GoldenGate Software (acquired by Oracle in 2009), Informatica, Oracle, Embarcadero Technologies, and BEA/WebLogic.
Ali Kutay, Chairman, President, and Chief Executive Officer
Ali Kutay is a successful serial entrepreneur, executive, and investor with 25 years of experience. Before founding Striim, Ali was the Chairman and CEO of GoldenGate Software (acquired by Oracle), the leading infrastructure software company for high availability. Previously, Ali was the Chairman and Chief Executive Officer of AltoWeb, Inc., a provider of e-business infrastructure software. He was an angel investor, President and Chief Executive Officer of WebLogic, Inc., the company that pioneered the Application Server technology which became one of the major building blocks of web-enabled applications. WebLogic merged with BEA Systems, Inc. and was acquired by Oracle in 2007. Earlier in his career, Ali was the President and CEO of Formtek, one of the first enterprise infrastructure software companies, founded in 1984 as a Carnegie-Mellon University spin-off, and acquired by Lockheed Corporation in 1989.
He continued to lead Formtek as its CEO under Lockheed Martin for seven years. He currently serves on the boards of public and private companies and advises startups. Ali completed his undergraduate and master's degrees at Middle East Technical University, and his PhD work at Carnegie Mellon University.
Steve Wilkes, Founder and Chief Technology Officer
Steve Wilkes is a life-long technologist, architect, and hands-on development executive. Prior to founding Striim, Steve was the senior director of the Advanced Technology Group at GoldenGate Software, where he focused on data integration. He continued this role following the acquisition by Oracle, where he also took the lead for Oracle's cloud data integration strategy. His earlier career included Senior Enterprise Architect at The Middleware Company, principal technologist at AltoWeb, and a number of product development and consulting roles including Cap Gemini's Advanced Technology Group. Steve has handled every role in the software lifecycle and most roles in a technology company at some point during his career. He still codes in multiple languages, often at the same time. Steve holds a Master of Engineering degree in microelectronics and software engineering from the University of Newcastle-upon-Tyne in the UK.
Alok Pareek, Founder and Executive Vice President, Products
Alok Pareek is a founder of Striim and head of products. Prior to Striim, Alok served as Vice President at Oracle in the Server Technologies development organization, where he had overall responsibility for product strategy, management, and vision for data integration and data replication products. Alok also led the engineering and performance teams that collaborated on architecture, solutions, and future product functionality with global strategic customers. Alok was the Vice President of Technology at GoldenGate, where he led the technology vision and strategy from 2004 through its acquisition by Oracle in 2009. He started his career as an engineer in Oracle's kernel development team, where he worked on redo generation, recovery, and high-speed data movement for over ten years. He has multiple patents, has published several papers, and has presented at numerous academic and industry conferences. Alok holds a graduate degree in Computer Science from Stanford University.
Andrew Lubesnick, Chief Financial Officer
Andy is CFO of Striim. Prior to joining Striim, Andy was CFO of CivicConnect, a cloud-based, real-time smart data management platform with solutions in the government, mining, and security industries. Prior to CivicConnect, Andy was head of FP&A and Controller at Empyr, Inc., where he was in charge of the financial, strategy, analytics, and legal functions. Andy is a CPA. At Grant Thornton, LLP, he led international audit teams working with both public and private companies, including 20+ business acquisitions. He holds an MA and a BA in accounting from the University of Illinois, Gies College of Business.
Phillip Cockrell, Senior Vice President of Business Development
As Senior Vice President of Business Development, Phillip brings 20+ years of industry experience to Striim. Prior to joining Striim, Phillip led business development at Quali, holding responsibility for strategic partnerships and corporate development initiatives.
From 2006 to 2020, Phillip was VP of Global Alliances at SUSE, the world's largest independent open source company. Phillip's group was responsible for strategic relationships with SUSE's top-tier partners such as AWS, Dell, Fujitsu, HPE, IBM, Microsoft Azure, and SAP, which drove nine consecutive years of significant expansion and material growth that resulted in a PE buyout of $2.5bn. Prior to joining SUSE in 2006, Phillip managed the infrastructure and technology group at Rackspace. He's a native Texan and lives in Salt Lake City, UT, USA.
Nadim Antar, Senior Vice President, Worldwide Revenue & GM, EMEA
Nadim has been in sales, sales leadership, and general management for close to 20 years and has helped scale up a number of organizations in the data space, including MongoDB and NuoDB. Most recently he led Dataiku across the UK and Northern Europe, where he helped accelerate the rapid scaling of the business in terms of customers, revenue, and employees during his tenure there as VP and GM of UK and Northern Europe.
Taly Avigdory, General Counsel
In her role as General Counsel, Taly is responsible for managing Striim's in-house legal department, supporting multiple business lines and advising on transactional, regulatory, and privacy matters. Prior to going in-house, Taly was an associate with the law firm of Herzog, Fox & Neeman, where she represented public and private companies in connection with large-scale international arbitration and litigation. Taly holds a Master of Laws (LL.M) degree from Stanford University Law School and is admitted to practice law in California.
Our Core Values
One Striim: We strive to support a human-first, employee-second work environment while holding high standards of customer satisfaction and data security.
Unlimited Potential: We encourage employee collaboration, access to leadership, transparency, and empathy to promote continuous growth, learning, development, and innovation.
Dignity: We hold high standards of ethics, treat our clients, partners, and employees with respect, and support our diverse workforce.
Careers at Striim
Join a team of experts in integrations, real-time streaming, data infrastructure, and more. We all share one vision for the future: a world where companies can transform every aspect of their business with data.
We're backed by the best. Striim is fortunate to be backed by some of the top investors in the world.
Latest News
Striim Announces Streaming Integration Platform for Snowflake to Enable Industry Adoption of Real-Time Data (June 27, 2023)
Striim Announces Fully Managed Real-Time Streaming and Integration Service for Analytics on the Databricks Lakehouse (June 23, 2023)
Striim Announces a Fully-Managed Real-Time Enterprise Data Integration Service for Snowflake (April 4, 2023)
", "metadata": {"source": "https://www.striim.com/company/", "title": "About Striim | A \"Best Place to Work\" in the US and in the Bay Area", "description": "Striim's mission is to companies to move fast, real time data from on-prem and cloud sources to cloud and big data targets. Read more about the Striim Team!", "language": "en-US"}} {"page_content": "Real-time data integration and streaming platform
The Real Time Revolution: Slow data and silos stifle innovation in analytics, operations, and artificial intelligence. Striim breaks barriers in real time with its unified data integration and streaming intelligence platform. Easy to start, limitless potential. Benchmarking Oracle replication to a cloud data warehouse: database change data capture rate (MB/sec), terabytes moved per day, streaming SQL queries per second, and end-to-end latency in seconds. See it in action. Connectors: Over 100 high-performance connectors and Change Data Capture. Striim makes it easy to connect to popular sources like databases in GCP, AWS, or Azure with a point-and-click wizard. Select tables, migrate schemas, and start moving data to your warehouse or message queues in seconds. Artificial Intelligence and Machine Learning: Integration for AI and ML. Seamlessly synthesize relational and unstructured data into AI-ready vectors and prompts in real time. Leverage Striim's built-in machine learning functions to generate intelligent actions for systems of engagement and build AI-powered customer chat experiences. Schema Evolution: Intelligent Schema Evolution. With Striim's schema evolution capabilities, you have full control whenever data drifts. Capture schema changes, configure how each consumer propagates the change, or simply halt and alert when a manual resolution is needed. Streaming SQL: Striim is built on a distributed, streaming SQL platform. Run transformations on streaming data such as queries and joins with historical caches, and scale up to billions of events per minute (a minimal TQL sketch follows the feature overview below). Pipeline Monitoring: Visualize your pipeline health, end-to-end data latency, and table-level metrics. Monitor from Striim's dashboard and get alerted in the channels where you pay the most attention. Plug in with Striim's REST APIs to automate alerts even further.
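To make the streaming SQL idea concrete, here is a minimal TQL sketch of a continuous query that enriches a CDC stream by joining it with a historical cache. The application, stream, table, and column names and the adapter property values are illustrative placeholders rather than anything taken from the pages above, and the exact property names for each adapter should be checked against the Striim documentation.

CREATE APPLICATION EnrichOrdersApp;

-- Capture inserts, updates, and deletes from a source table (connection values are placeholders).
CREATE SOURCE OrdersCDC USING OracleReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'dbhost:1521:ORCL',
  Tables: 'STORE.ORDERS'
)
OUTPUT TO OrdersStream;

-- A historical cache loaded from a reference table and keyed by product ID.
CREATE TYPE ProductType (
  productId String,
  category String
);
CREATE CACHE ProductLookup USING DatabaseReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'jdbc:oracle:thin:@dbhost:1521:ORCL',
  Query: 'SELECT PRODUCT_ID, CATEGORY FROM STORE.PRODUCTS'
) QUERY (keytomap: 'productId') OF ProductType;

-- The continuous query: every change event is enriched with the cached category as it arrives.
CREATE CQ EnrichOrders
INSERT INTO EnrichedOrders
SELECT o.data[0] AS orderId,
       o.data[1] AS productId,
       p.category AS category
FROM OrdersStream o, ProductLookup p
WHERE o.data[1] = p.productId;

END APPLICATION EnrichOrdersApp;

EnrichedOrders is an ordinary stream, so it can feed a target adapter, a dashboard, or further queries; the data[n] positions depend on the source table's column order.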
Powering Fortune 500s across all industries. Simple interfaces help you build smart data pipelines in minutes. Financial Services: Innovate by building modern, AI-driven banking experiences while unifying customer data and assets in real time. Retail & CPG: Deliver personalized customer experience and improve operational efficiency from a single source of truth. Healthcare & Pharma: Optimize operations and ensure better patient outcomes with unified data integration and real-time streaming. Travel, Transport & Logistics: Streamline flight operations and win customer loyalty by powering all decisions with data streaming in real time. Manufacturing & Energy: Ensure access and visibility across the global supply chain with unified data integration and real-time streaming. Telecommunications: Drive cost savings, automation, and efficiency gains, while also enhancing customer personalization by streaming the right data at the right time. Technology: Win more customers with real-time product analytics, enhanced operational efficiency through multi-database syncs, and the latest in data security and compliance. Sign up for a free trial of Striim! Join our community to learn about all the ways you could use Striim to streamline your practice. Accelerate your analytics: Striim makes it easy and quick to set up data pipelines to stream real-time data to the most popular targets for modern analytics. Google BigQuery: Striim provides a fully managed SaaS solution optimized for BigQuery, Striim for BigQuery. Azure Synapse: Accelerate time-to-insight with Azure Synapse Analytics and Power BI. Databricks: Unleash the power of Databricks AI/ML and Predictive Analytics. Snowflake: Fulfill the promise of the Snowflake Data Cloud with real-time data. Hundreds of connectors: select a source and target for more information. Sources include Microsoft SQL Server, MongoDB, MySQL, Oracle, PostgreSQL, Salesforce, and more: Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon S3, AMQP, Apache Log, AVRO, Azure Database for MySQL, Azure Database for PostgreSQL / Hyperscale, Azure SQL Database, Batch Files, Binary Files, Cisco Netflow, CollectD, Common Event Format (CEF), Delimited Files, DHCP Log, Flume, Free Text Files, Google Cloud Spanner, Google Cloud SQL for MySQL, Google Cloud SQL for PostgreSQL, Google Cloud SQL for SQL Server, HDFS, HPE NonStop Enscribe, HPE NonStop SQL/MP, HPE NonStop SQL/MX, HTTP, JMS, JMX, JSON, Kafka, Log Files, Mail Log, MapR FS, MariaDB, Microsoft SQL Server, Microsoft Teams, MongoDB, MQTT, MySQL, Name/Value, OPC UA, Oracle, Oracle Exadata, Oracle GoldenGate Trail Files, PCAP, PostgreSQL, Salesforce, ServiceNow, Slack, SNMP, Sys Log, System Files, TCP, Teradata, UDP, WCF, Windows Event Log, XML Files, Zipped Files. Targets include Azure Synapse Analytics, Databricks, Google BigQuery, Kafka, PostgreSQL, Snowflake, and more: AlloyDB, Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon Kinesis, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon Redshift, Amazon S3, AMQP, AVRO, Azure Blob Storage, Azure Cosmos DB, Azure Cosmos DB (Cassandra API), Azure Cosmos DB (MongoDB API), Azure Data Lake Storage, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Event Hubs, Azure HDInsight, Azure SQL Database, Azure Synapse Analytics, Cassandra, Cloudera, CockroachDB, Data Warehouses, Databases, Databricks, Delimited Files, Files, Google BigQuery, Google Cloud Pub/Sub, Google Cloud Spanner, Google Cloud SQL for MySQL, Google Cloud SQL for PostgreSQL, Google Cloud SQL for SQL Server, Google Cloud Storage, Hazelcast,
HBase, HDFS, Hive (Cloudera & Hortonworks), HPE NonStop SQL/MP, HPE NonStop SQL/MX, Impala, JMS, JSON, Kafka, Kudu, MapR DB, MapR FS, MapR Streams, MariaDB, Microsoft SQL Server, MongoDB, MQTT, MySQL, Oracle, Parquet, PostgreSQL, Salesforce, SAP Hana, SingleStore (MemSQL), Snowflake, Template, Teradata, XML Files, Yellowbrick. Available on your cloud. AWS: Deliver real-time data to AWS for faster analysis and processing. Google Cloud: Unify data on Google Cloud and power real-time data analytics in BigQuery. Microsoft Azure: Quickly move data to Microsoft Azure and accelerate time-to-insight with Azure Synapse Analytics and Power BI. Your choice of deployment: you can choose the best tool to deploy your projects. Striim Cloud: fully managed SaaS, available on AWS, Azure, and Google Cloud. Striim Platform: self-managed, available on-premises or in the AWS, Azure, and Google Cloud marketplaces. ", "metadata": {"source": "https://www.striim.com", "title": "Real-time data integration and streaming platform", "description": "Data integration and streaming platform for analytics and business intelligence. Build data pipelines to stream trillions of events in real-time.", "language": "en-US"}} {"page_content": " Real-Time Data Streaming And Integration As A Service
Striim Cloud: Build real-time data streaming pipelines in minutes. Power your operations and decision-making with real-time data, available as a fully managed service on AWS, Google Cloud, and Microsoft Azure. Key stats: 100s of connectors, an uptime guarantee, and billions of events per day. Accelerate Analytics: the fully managed SaaS platform makes it easy to integrate and stream data for real-time analytics and agile operations. Simple interfaces help you build smart data pipelines in minutes. Connect Data Sources and Targets: Striim makes it easy to connect to your sources with a point-and-click wizard. Select tables, migrate schemas, and start moving data in seconds. Intelligent Schema Evolution: With Striim's schema evolution capabilities, you have full control whenever data drifts. Capture schema changes, configure how each consumer propagates the change, or simply halt and alert when a manual resolution is needed (a writer-property sketch follows this overview). Streaming SQL: Striim is built on a distributed, streaming SQL platform. Run continuous queries on streaming data, join streaming data with historical caches, and scale up to billions of events per minute. Data Pipeline Monitoring: Visualize your pipeline health, end-to-end data latency, and table-level metrics from Striim's dashboard.
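As a rough illustration of how that control is expressed, the hedged TQL fragment below assumes the pattern used in recent Striim releases: the CDC reader is asked to capture DDL alongside DML, and each writer declares how it reacts through a CDDL Action setting. All names, connection values, and especially the property spellings here are assumptions to verify against the Handling schema evolution section of the documentation.

-- Source side: capture schema changes (DDL) as well as data changes (property name assumed).
CREATE SOURCE InventoryCDC USING MysqlReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'mysql://dbhost:3306',
  Tables: 'shop.%',
  CDDLCapture: true
)
OUTPUT TO InventoryStream;

-- Target side: decide per consumer how schema drift is handled.
-- 'Process' propagates supported DDL (for example ADD COLUMN); 'Halt' stops the app for manual resolution.
CREATE TARGET InventoryToWarehouse USING BigQueryWriter (
  ServiceAccountKey: '/etc/striim/bq-key.json',
  ProjectId: 'my-gcp-project',
  Tables: 'shop.%,analytics.%',
  CDDLAction: 'Process'
)
INPUT FROM InventoryStream;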
Get a guided tour. Power your analytics with real-time data. Accelerate your analytics: Striim makes it easy and quick to set up data pipelines to stream real-time data to the most popular targets for modern analytics. Google BigQuery: Striim provides a fully managed SaaS solution optimized for BigQuery, Striim for BigQuery. Snowflake: Fulfill the promise of the Snowflake Data Cloud with real-time data. Azure Synapse: Accelerate time-to-insight with Azure Synapse Analytics and Power BI. Databricks: Unleash the power of Databricks AI/ML and Predictive Analytics. Available on your cloud: AWS (deliver real-time data to AWS for faster analysis and processing), Google Cloud (unify data on Google Cloud and power real-time data analytics in BigQuery), and Microsoft Azure (quickly move data to Microsoft Azure and accelerate time-to-insight with Azure Synapse Analytics and Power BI). Connect anything to anything: choose from the sources and targets listed above. "Striim gives us a single source of truth across domains and speeds our time to market delivering a cohesive experience across different systems." Neel Chinta, IT Manager at Macy's.
Get more with Striim Cloud. 100+ enterprise sources and targets: connect any data, anywhere, with a single solution that reduces the cost of managing multiple products and tools. Data freshness: stale batch data can cost you a customer; fresh data guarantees the latest insights on operational data so you can make profitable real-time decisions. Scalability and throughput: infinitely scale as your business expands, without any additional planning or cost to execute, so you save time and money. Flexibility and freedom: easily add and remove new targets as often as you need to. With a "ReadOnceWriteMany" approach, the source is read once and delivered to many targets, so you never have to worry about impacting the production database (a minimal sketch of this pattern follows the list). Hundreds of data pipelines can stream billions of events a day: whether it's the busiest travel day of the year or Black Friday, Striim can support your busiest times of the year and stream billions of events a day.
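As a sketch of what "read once, write many" looks like in practice, the TQL below reads a source once via CDC and fans the same stream out to two independent targets, so the production database is only tapped once. The adapter names are real Striim adapters, but the property values, table mappings, and the Kafka version string are placeholders to adapt to your environment.

CREATE APPLICATION ReadOnceWriteMany;

-- One read from the production database...
CREATE SOURCE OrdersCDC USING OracleReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'dbhost:1521:ORCL',
  Tables: 'STORE.ORDERS'
)
OUTPUT TO OrdersStream;

-- ...delivered to a reporting database...
CREATE TARGET OrdersToReporting USING DatabaseWriter (
  ConnectionURL: 'jdbc:postgresql://reporting:5432/analytics',
  Username: 'striim',
  Password: '********',
  Tables: 'STORE.ORDERS,public.orders'
)
INPUT FROM OrdersStream;

-- ...and to a Kafka topic for other downstream consumers, from the same stream.
CREATE TARGET OrdersToKafka USING KafkaWriter VERSION '2.1.0' (
  brokerAddress: 'kafka:9092',
  Topic: 'orders-cdc'
)
FORMAT USING JSONFormatter ()
INPUT FROM OrdersStream;

END APPLICATION ReadOnceWriteMany;

Adding a third target later is just another CREATE TARGET ... INPUT FROM OrdersStream, with no extra load on the source.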
Usage-based pricing that scales with your business needs: scale your consumption up and down as needed. Whether you're processing billions of events per hour or on standby for new events, Striim meters exactly what you use. Explore more use cases: Data Modernization, Real-Time Operations, Data Fabric, Data Mesh, Digital Customer Experience, Real-Time Analytics. Ready to put your data to action? Striim is a unified data integration and streaming platform that enables real-time analytics across every facet of your operations. Keep data flowing from legacy solutions, proactively run your business, and reach new levels of speed and performance with Striim's change data capture (CDC) for real-time ETL. ", "metadata": {"source": "https://www.striim.com/product/striim-cloud/", "title": "Real-Time Data Streaming And Integration As A Service", "description": "Infinitely scalable unified data integration and streaming software as a service that provides access to real-time data for analytics and digital operations", "language": "en-US"}} {"page_content": " Striim Platform | Striim. Striim Platform: on-premise data integration for real-time streaming. Key stats: 100s of connectors, an uptime guarantee, and billions of events per day. Deploy on-premise or in a self-managed cloud to ingest, process, and deliver real-time data. An infinitely scalable platform: a flexible solution without limitations that moves data in milliseconds. Simple interfaces help you build smart data pipelines in minutes. Connect Data Sources and Targets: Striim makes it easy to connect to your sources with a point-and-click wizard. Select tables, migrate schemas, and start moving data in seconds. Intelligent Schema Evolution: With Striim's schema evolution capabilities, you have full control whenever data drifts. Capture schema changes, configure how each consumer propagates the change, or simply halt and alert when a manual resolution is needed. Streaming SQL: Striim is built on a distributed, streaming SQL platform. Run continuous queries on streaming data, join streaming data with historical caches, and scale up to billions of events per minute.
Data Pipeline Monitoring: Visualize your pipeline health, end-to-end data latency, and table-level metrics, and plug in with Striim's REST APIs. Popular deployments that fit every need. Accelerate your analytics: Striim makes it easy and quick to set up data pipelines to stream real-time data to the most popular targets for modern analytics. Google BigQuery: Striim provides a fully managed SaaS solution optimized for BigQuery, Striim for BigQuery. Snowflake: Fulfill the promise of the Snowflake Data Cloud with real-time data. Azure Synapse: Accelerate time-to-insight with Azure Synapse Analytics and Power BI. Databricks: Unleash the power of Databricks AI/ML and Predictive Analytics. See how to connect your sources and targets: Striim Platform supports the same sources and targets listed above. "Striim gives us a single source of truth across domains and speeds our time to market delivering a cohesive experience across different systems." Neel Chinta, IT Manager at Macy's. Get more with Striim Platform. Data freshness: stale batch data can cost you a customer; fresh data guarantees the latest insights on operational data so you can make profitable real-time decisions. Scalability and throughput: infinitely scale as your business expands, without any additional planning or cost to execute, so you save time and money.
100+ enterprise sources and targets: connect any data, anywhere, with a single solution that reduces the cost of managing multiple products and tools. Flexibility and freedom: easily add and remove new targets as often as you need to; with the "ReadOnceWriteMany" approach you never have to worry about impacting the production database. Hundreds of data pipelines can stream billions of events a day: whether it's the busiest travel day of the year or Black Friday, Striim can support your busiest times of the year and stream billions of events a day. Usage-based pricing that scales with your business needs: scale your consumption up and down as needed. Whether you're processing billions of events per hour or on standby for new events, Striim meters exactly what you use. Explore more use cases: Data Modernization, Real-Time Operations, Data Fabric, Data Mesh, Digital Customer Experience, Real-Time Analytics. Ready to put your data to action? Striim is a unified data integration and streaming platform that enables real-time analytics across every facet of your operations. Keep data flowing from legacy solutions, proactively run your business, and reach new levels of speed and performance with Striim's change data capture (CDC) for real-time ETL. ", "metadata": {"source": "https://www.striim.com/product/striim-platform/", "title": "Striim Platform | Striim", "language": "en-US"}} {"page_content": " Striim for BigQuery | Striim
Striim for BigQuery: fully managed real-time data streaming for BigQuery. Striim for BigQuery offers real-time data integration and streaming as a fully managed SaaS solution optimized for BigQuery. Key stats: the #1 fastest Oracle CDC on Earth, an uptime guarantee, and billions of events per day. Stream real-time data to BigQuery: Striim reduces your time to insights with fully automated pipeline creation and management. See it in Action: Production Database to BigQuery. Striim for BigQuery is built to power analytics by loading data to BigQuery with maximum performance, simplicity, and ease of use. Get more with Striim Cloud. Deliver all your data to BigQuery with sub-second latency: heterogeneous sources can be streamed in real time, with high-performance parallel Striim writes to BigQuery for maximum write throughput. Scalability and throughput in a single click: infinitely scale as your business expands, without any additional planning or cost to execute, so you save time and money. Get started quickly and easily: create your first data pipeline in minutes with a no-code solution optimized for BigQuery users.
Accelerate time to value with an automated solution: at setup, the automatic initial load and CDC save time, while automated error handling keeps your data pipelines running smoothly (a minimal initial-load sketch appears at the end of this page). Keep total cost of ownership low without affecting efficiency: Striim's fully managed infrastructure helps you keep TCO low with event-based billing and pay-as-you-go, consumption-based pricing. "Striim gives us a single source of truth across domains and speeds our time to market delivering a cohesive experience across different systems." Neel Chinta, IT Manager at Macy's. Resources: Real-Time Data Integration (at setup, the automatic initial load and CDC save time, while automated error handling keeps your data pipelines running smoothly). Set sail in Google Cloud with streamlined retail operations (as retailers strive to meet the growing expectations of shoppers, they are turning to Google Cloud to transform their businesses and tackle opportunities in an increasingly challenging industry). Real-Time Data Streaming from Oracle to Google BigQuery: A Performance Study (use Striim Cloud to stream data securely from a PostgreSQL database into Google BigQuery). Usage-based pricing that scales with your business needs: scale your consumption up and down as needed. Whether you're processing billions of events per hour or on standby for new events, Striim meters exactly what you use. Explore more use cases: Data Modernization, Real-Time Operations, Data Fabric, Data Mesh, Digital Customer Experience, Real-Time Analytics. Ready to put your data to action? Striim is a unified data integration and streaming platform that enables real-time analytics across every facet of your operations. Keep data flowing from legacy solutions, proactively run your business, and reach new levels of speed and performance with Striim's change data capture (CDC) for real-time ETL.
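To illustrate the automated initial-load-plus-CDC pattern described above, here is a hedged TQL sketch of the load phase; the wizard generates something equivalent automatically, and every name, path, and property value below is an illustrative assumption rather than the generated output.

-- Phase 1: snapshot the existing rows into BigQuery.
CREATE APPLICATION OrdersInitialLoad;

CREATE SOURCE OrdersSnapshot USING DatabaseReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'jdbc:oracle:thin:@dbhost:1521:ORCL',
  Tables: 'STORE.ORDERS'
)
OUTPUT TO OrdersLoadStream;

CREATE TARGET OrdersToBigQuery USING BigQueryWriter (
  ServiceAccountKey: '/etc/striim/bq-key.json',
  ProjectId: 'my-gcp-project',
  Tables: 'STORE.ORDERS,sales_dataset.orders'
)
INPUT FROM OrdersLoadStream;

END APPLICATION OrdersInitialLoad;

-- Phase 2 (not shown): a companion application reads the same tables with a log-based CDC reader
-- such as OracleReader and writes to the same BigQueryWriter target, so changes made during and
-- after the snapshot are applied continuously.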
", "metadata": {"source": "https://www.striim.com/product/bigquery/", "title": "Striim for BigQuery | Striim", "description": "Striim for BigQuery offers real-time data streaming and integration that's optimized for BigQuery's interface. Grab a demo today and see how easy it is!", "language": "en-US"}} {"page_content": " Striim for Databricks | Striim. Striim for Databricks: fully managed real-time data streaming for Databricks. Striim for Databricks offers real-time data integration and streaming as a fully managed SaaS solution optimized for Databricks. Key stats: the #1 fastest Oracle CDC on Earth, an uptime guarantee, and billions of events per day. Stream real-time data to Databricks: Striim reduces your time to insights with fully automated pipeline creation and management. See it in Action: Production Database to Databricks. Striim for Databricks is built to power analytics by loading data to Databricks with maximum performance, simplicity, and ease of use.
Get more with Striim for Databricks. Deliver all your data to Databricks with sub-second latency: heterogeneous sources can be streamed in real time, with high-performance parallel Striim writes to Databricks for maximum write throughput. Scalability and throughput in a single click: infinitely scale as your business expands, without any additional planning or cost to execute, so you save time and money. Get started quickly and easily: create your first data pipeline in minutes with a no-code solution optimized for Databricks users. Accelerate time to value with an automated solution: at setup, the automatic initial load and CDC save time, while automated error handling keeps your data pipelines running smoothly. Keep total cost of ownership low without affecting efficiency: Striim's fully managed infrastructure helps you keep TCO low with event-based billing and pay-as-you-go, consumption-based pricing. "Striim gives us a single source of truth across domains and speeds our time to market delivering a cohesive experience across different systems." Neel Chinta, IT Manager at Macy's. Resources: Build a Real-Time Streaming Data Lakehouse with Striim and Databricks (for a deeper look into Striim and Databricks, read the ebook). Building a Real-Time Lakehouse with Data Streaming (to learn more about moving multiple terabytes of data with low latency, watch a recent webinar hosted by Databricks and Striim). A brief overview of the Data Lakehouse (for an overview of the data lakehouse, read our blog). Usage-based pricing that scales with your business needs: scale your consumption up and down as needed. Whether you're processing billions of events per hour or on standby for new events, Striim meters exactly what you use. Explore more use cases: Data Modernization, Real-Time Operations, Data Fabric, Data Mesh, Digital Customer Experience, Real-Time Analytics. Ready to put your data to action? Striim is a unified data integration and streaming platform that enables real-time analytics across every facet of your operations. Keep data flowing from legacy solutions, proactively run your business, and reach new levels of speed and performance with Striim's change data capture (CDC) for real-time ETL.
", "metadata": {"source": "https://www.striim.com/product/databricks/", "title": "Striim for Databricks | Striim", "description": "Striim for Databricks offers real-time data streaming and integration that's optimized for Databricks. Grab a demo today and see how easy it is!", "language": "en-US"}} {"page_content": " Striim for Snowflake | Striim
Striim for Snowflake: fully managed real-time data streaming for Snowflake. Striim for Snowflake offers real-time data integration and streaming as a fully managed SaaS solution optimized for Snowflake. Key stats: the #1 fastest Oracle CDC on Earth, an uptime guarantee, and billions of events per day. Stream data in real time to Snowflake: Striim reduces your time to insights with fully automated pipeline creation and management. See it in Action: Oracle to Snowflake. Striim for Snowflake is built to power analytics by loading data to Snowflake with maximum performance, simplicity, and ease of use. Get more with Striim Cloud. Deliver all your data to Snowflake with sub-second latency: heterogeneous sources can be streamed in real time, with high-performance parallel Striim writes to Snowflake for maximum write throughput. Scalability and throughput in a single click: infinitely scale as your business expands, without any additional planning or cost to execute, so you save time and money. Get started quickly and easily: create your first data pipeline in minutes with a no-code solution optimized for Snowflake users. Accelerate time to value with an automated solution: at setup, the automatic initial load and CDC save time, while automated error handling keeps your data pipelines running smoothly. Keep total cost of ownership low without affecting efficiency: Striim's fully managed infrastructure helps you keep TCO low with event-based billing and pay-as-you-go, consumption-based pricing. "The choice to use Snowflake was part of our platform's evolution. We needed Striim to complete the vision." Rajesh Raju, Director of Data Engineering at Ciena. Resources: CDC to Snowflake (CDC to Snowflake is quickly becoming the preferred method of loading real-time data from transactional databases to Snowflake, without impacting source systems). Ciena Case Study (Ciena replicates 100 million events to Snowflake per day with Striim's powerful, autonomous data pipelines). Webinar: Introducing Striim for Snowflake (join us for a webinar on how to supercharge your decision-making and insights with Striim's new low-code, automated solution for real-time data streaming to the Snowflake Data Cloud). Usage-based pricing that scales with your business needs: scale your consumption up and down as needed.
Whether you're processing billions of events per hour or on standby for new events, Striim meters exactly what you use. Explore more use cases: Data Modernization, Real-Time Operations, Data Fabric, Data Mesh, Digital Customer Experience, Real-Time Analytics. Ready to put your data to action? Striim is a unified data integration and streaming platform that enables real-time analytics across every facet of your operations. Keep data flowing from legacy solutions, proactively run your business, and reach new levels of speed and performance with Striim's change data capture (CDC) for real-time ETL. ", "metadata": {"source": "https://www.striim.com/product/snowflake/", "title": "Striim for Snowflake | Striim", "description": "Striim for Snowflake offers real-time data integration and streaming as a fully managed SaaS solution optimized for Snowflake.", "language": "en-US"}}
View Case Study Webinar: Introducing Striim for Snowflake Join us for a webinar on how to supercharge your decision-making and insights with Striim\u2019s new low-code, automated solution for real-time data streaming to the Snowflake Data Cloud.\u00a0 Read More Usage-based pricing that scales with your business needs Scale up and down your consumption as needed. Whether your processing billions of events per hour or on stand-by for new events, Striim meters exactly what you use. View Pricing Explore more Use Cases Data Modernization Real Time Operations Data Fabric Data Mesh Digital Customer Experience Real-Time Analytics Ready to put your data to action? Striim is a unified data integration and streaming platform that enables real-time analytics across every facet of your operations. Keep data flowing from legacy solutions, proactively run your business, and reach new levels of speed and performance with Striim\u2019s change data capture (CDC) for real-time ETL. View a Demo Free Trial \u00d7 \u00d7 Loading... Products Striim Platform Striim Cloud Striim for BigQuery Google Cloud Microsoft Azure Databricks Snowflake AWS Products Striim Platform Striim Cloud Striim for BigQuery Google Cloud Microsoft Azure Databricks Snowflake AWS Data Mesh Real Time Operations Data Modernization Digital Customer Experience Data Fabric Real-Time Analytics Industries Data Mesh Real Time Operations Data Modernization Digital Customer Experience Data Fabric Real-Time Analytics Industries Documentation Blog Recipes Resources Videos Support Events Community Documentation Blog Recipes Resources Videos Support Events Community Customers Partners Pricing Connectors Compare Contact Customers Partners Pricing Connectors Compare Contact Company Newsroom Careers Ethics Hotline Company Newsroom Careers Ethics Hotline Copyright\u00a92012-2023 Striim\u00a0| Legal |\u00a0Privacy Policy We're Hiring Linkedin Facebook Twitter Youtube Rss Products Use Cases Data Modernization Operational Analytics Customer 360 Data Mesh Multi-Cloud Data Fabric Digital Customer Experience Industries Connectors Resources test Company Products Use Cases Data Modernization Operational Analytics Customer 360 Data Mesh Multi-Cloud Data Fabric Digital Customer Experience Industries Connectors Resources test Company ", "metadata": {"source": "https://www.striim.com/product/snowflake/#", "title": "Striim for Snowflake | Striim", "description": "Striim for Snowflake offers real-time data integration and streaming as a fully managed SaaS solution optimized for Snowflake.", "language": "en-US"}} {"page_content": " Striim on AWS | Striim Products Striim Cloud Striim Platform Striim for BigQuery Striim For Databricks Striim for Snowflake Striim CloudA fully managed SaaS solution that enables infinitely scalable unified data integration and streaming. Striim PlatformOn-premise or in a self-managed cloud to ingest, process, and deliver real-time data. Striim for BigQuery Striim for Databricks Striim for Snowflake Pricing Pricing that is just as flexible as our products Learn More Solutions Striim on AWS Striim Cloud Striim and Microsoft Azure Databricks and Striim Striim and Snowflake Financial Services Retail and CPG Striim Solutions for Healthcare and Pharmaceuticals Striim Solutions for Travel, Transportation, and Logistics Striim Solutions for Manufacturing and Energy Striim Solutions for Telecommunications Striim Technology Striim Media TECHNOLOGIES AWSDeliver real-time data to AWS, for faster analysis and processing. 
{"page_content": " Striim on AWS | Striim STRIIM AND AWS A Fully Managed Data Streaming Platform on AWS. Striim combines change data capture, streaming SQL, and streaming data delivery to move data from source to decision in real time: change data capture to data lakes and data warehouses on AWS, rapid setup of real-time data streams, and smart data pipelines that break down data silos. Fully Managed Data Streaming with Unprecedented Speed and Simplicity: Striim moves real-time data from virtually any data source, including enterprise databases via log-based change data capture (CDC), other cloud environments, log files, applications, messaging systems, and IoT devices, into AWS. AWS customers can rapidly build real-time data pipelines to Amazon Redshift, Amazon S3, Amazon RDS for Oracle, Amazon RDS for SQL Server, Amazon RDS for MySQL, Amazon Aurora, and Amazon Kinesis to keep critical cloud workloads up to date. With real-time data synchronization, Striim also helps AWS customers move data from legacy databases to Amazon RDS or Aurora with zero downtime and zero data loss, enabling an immediate switchover to the AWS environment. Striim Cloud on AWS: build smart data pipelines on AWS in minutes with out-of-the-box support for top AWS targets such as S3, Kinesis, Redshift, and MSK. Data Streaming to Amazon Redshift with Striim: secure, reliable, and scalable real-time data pipelines for unstructured, semi-structured, and structured data into Amazon Redshift. Industry's Fastest Oracle CDC to AWS RDS, S3, Databricks, Kinesis, MSK, and other AWS platforms: leverage the industry's fastest, cloud-scale Oracle CDC as a fully managed service on AWS to stream real-time data to all your AWS platforms. What's New - News: Striim Teams with Amazon Web Services to Continuously Deliver Data to Amazon Redshift; News: Striim Bolsters Zero Downtime Migration Solution to Amazon Web Services with Real-Time Data Divergence Monitoring. Resources - Video: Moving Data to Amazon Web Services in Real Time; Video: Rapid Adoption of AWS Using Streaming Data Integration with CDC; Blog: Streaming Data Integration to AWS; Blog: Real-Time AWS Cloud Migration Monitoring: 3-Minute Demo.", "metadata": {"source": "https://www.striim.com/partners/striim-and-aws/", "title": "Striim on AWS | Striim", "description": "Build smart data pipelines on AWS in minutes with out-of-the-box support for top AWS targets like S3, Kinesis, Redshift, MSK and more.", "language": "en-US"}}
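The CDC-to-AWS pipelines described above are defined in Striim's TQL scripting language (or built visually with the web UI wizards). A minimal sketch of an Oracle-to-Amazon-S3 pipeline follows; all connection values, credentials, bucket and table names are placeholders, and the exact OracleReader and S3Writer property names can differ by release, so check the adapter reference before using anything like this.

CREATE APPLICATION OracleToS3;

-- Log-based CDC from Oracle; every value here is an illustrative placeholder.
CREATE SOURCE OracleCDC USING OracleReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'oracle-prod.example.com:1521:ORCL',
  Tables: 'SALES.ORDERS'
)
OUTPUT TO OrdersStream;

-- Land the change events in an S3 bucket as JSON objects.
CREATE TARGET OrdersToS3 USING S3Writer (
  bucketname: 'example-landing-zone',
  foldername: 'orders',
  objectname: 'orders.json',
  accesskeyid: 'YOUR_ACCESS_KEY_ID',
  secretaccesskey: 'YOUR_SECRET_ACCESS_KEY'
)
FORMAT USING JSONFormatter ()
INPUT FROM OrdersStream;

END APPLICATION OracleToS3;

Swapping the target adapter for, say, a Redshift or Kinesis writer (with that adapter's own properties) gives the other AWS pipelines mentioned on the page.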
{"page_content": " Google Cloud and Striim | Striim STRIIM AND GOOGLE CLOUD Making Real-Time a Reality. Unify your data in Google Cloud with real-time data analytics; whatever the reason for your move, Striim will help you get there faster. Connect with popular cloud apps that help you run your business, power Google BigQuery with fresh data, and break down data silos with smart data pipelines. Streamlining the cloud and digital transformation: Striim and Google Cloud provide a real-time solution that enables cloud and digital transformation while reducing complexity in system architectures and improving access to data. Striim continuously loads real-time data to analytics systems such as Google BigQuery from across the enterprise with minimal impact on data sources, giving customers timely, pre-processed data from on-premises or cloud sources. Striim Cloud on Google Cloud: with Striim Cloud, a fully managed unified data integration and streaming SaaS platform that connects clouds, data, and applications, customers can realize the benefits of Google Cloud with unprecedented speed and simplicity. Striim Cloud moves and augments data while reducing latency and enhancing your ability to make informed decisions. Oracle to BigQuery with Striim: BigQuery is Google's serverless, highly scalable, multi-cloud data warehouse designed for business agility. With its petabyte scale and low-cost analytics, companies are increasingly moving to Google BigQuery to run timely, fast SQL queries, so the ability to move real-time data to BigQuery via change data capture (CDC) is essential, and this is where Striim excels. Learn how to use Striim to move data from Oracle to Google BigQuery using CDC and how to build dashboards to visualize the data. Buy Striim solutions straight from the Google Cloud Marketplace: add Striim solutions directly from Google Cloud Marketplace to quickly build data pipelines and stream trillions of events every day to Google BigQuery. What's New - Blog: Striim Cloud on Google Cloud; News: Real-Time Data Integration and Streaming Leader Introduces Striim Cloud on Google Cloud; News: Real-Time Data Streaming Leader Striim Achieves Google Cloud Ready - BigQuery Designation; News: Striim Deepens Strategic Partnership with Google Cloud to Expand Database Migrations for Google Cloud Customers; Blog: Online Enterprise Database Migration to Google Cloud. Resources - Demo: How to Migrate Transactional Databases to AlloyDB; Reference Architecture: Oracle to BigQuery; Demo: Moving Oracle to BigQuery in real time; Video: Real-Time Hotspot Detection for Transportation with Striim and BigQuery; Demos: Striim Migration Service to Google Cloud; Hands-on Lab: Online Data Migration to BigQuery using Striim.", "metadata": {"source": "https://www.striim.com/google-cloud-and-striim/", "title": "Google Cloud and Striim | Striim", "description": "Striim and Google Cloud provide a real time solution that enables cloud and digital transformation, while reducing complexity in system architectures and providing improved access to data.", "language": "en-US"}}
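The Oracle-to-BigQuery movement described above follows the standard Striim pattern: an OracleReader source feeding a BigQueryWriter target in TQL. A minimal sketch, assuming placeholder project, service-account key path, and table mapping (property names are indicative and should be checked against the BigQuery Writer reference for your release):

CREATE APPLICATION OracleToBigQuery;

-- Capture inserts, updates, and deletes from the source tables via log-based CDC.
-- Connection values and table names are illustrative placeholders.
CREATE SOURCE OracleCDC USING OracleReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'oracle-prod.example.com:1521:ORCL',
  Tables: 'SALES.ORDERS'
)
OUTPUT TO OrdersStream;

-- Deliver the change stream to a BigQuery dataset (placeholder project and key file).
CREATE TARGET OrdersToBigQuery USING BigQueryWriter (
  ServiceAccountKey: '/opt/striim/keys/bq-service-account.json',
  ProjectId: 'example-analytics-project',
  Tables: 'SALES.ORDERS,sales_dataset.orders'
)
INPUT FROM OrdersStream;

END APPLICATION OracleToBigQuery;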
{"page_content": " Striim and Microsoft Azure | Striim STRIIM AND MICROSOFT AZURE Faster Analytics, Faster Results. Striim and Microsoft Azure accelerate time-to-insight wherever your data resides; Striim is Microsoft Fabric ready and can help you achieve your goals. Striim Cloud offers a fully managed service on Azure, available on Azure Marketplace; unify data in Azure Synapse and Microsoft Fabric in real time, and break down data silos with smart data pipelines. Demos: Microsoft Fabric and Striim: Car Dealership; Microsoft Fabric and Striim: Baseball League. Learn how American Airlines powers global TechOps with Striim and Microsoft Azure: in 2022, American Airlines announced a strategic partnership with Microsoft Azure to transform operations and analytics in the cloud. The American Airlines TechOps team took on the internal mandate to modernize and accelerate their operations in the cloud, deploying a real-time data hub with Striim and Azure to ensure a seamless, real-time operation at massive scale. Read the case study to learn more. A partnership made for the enterprise: with Striim and Microsoft Azure, enterprises can accelerate cloud adoption and digital transformation by making critical business data available in real time. With continuous, streaming data integration from on-premises and cloud enterprise sources to Azure analytics tools like Synapse and Power BI, users get up-to-the-second operational visibility. Striim for Oracle and Salesforce to Microsoft Fabric: data in Oracle and Salesforce is the lifeblood of many organizations, yet not all of it is used as fully or effectively as it could be. Striim Cloud on Microsoft Azure captures events as they occur; transforms, filters, and enriches data with an in-memory SQL-based engine; then delivers it to the Azure cloud for scalable, real-time, low-cost analytics without affecting the source database. Watch Microsoft and Striim show how quickly data can be moved to Synapse in this Azure Webinar Series episode. Striim Cloud, a fully managed service on Azure: achieve the kind of sustained agility that lets you pivot and adapt in real time, add layers of intelligence to your applications, and unlock fast, predictive insights from data wherever it resides; that's the power of Striim with Azure Synapse. Buy Striim solutions straight from the Azure Marketplace: add Striim Cloud and Striim Platform directly from Azure Marketplace to quickly build data pipelines and stream trillions of events every day to Azure Synapse Analytics and Power BI for real-time analytics and time-sensitive operational issues. What's New - Blog: Three Benefits of Azure Cosmos DB; Blog: Striim Now Offers Native Integration With Microsoft Azure Cosmos DB. Resources - Demo: Striim Link to Synapse for Oracle and Salesforce data (Azure Webinar Series episode); Checklist: Ease your cloud database migration to Microsoft Azure; Blog: Why CDC to Azure is Essential for Cloud Adoption; Blog: Oracle Change Data Capture - An Event-Driven Architecture for Cloud Adoption; Blog: Striim Azure Synapse Analytics Marketplace Offering Install Guide (external site).", "metadata": {"source": "https://www.striim.com/partners/striim-and-microsoft-azure/", "title": "Striim and Microsoft Azure | Striim", "description": "Striim and Microsoft Azure accelerate time-to-insight wherever your data resides.", "language": "en-US"}}
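The "in-memory SQL-based engine" mentioned above corresponds to TQL continuous queries (CQs) that run over streams between a source and a target. A minimal sketch of a filtering CQ over a hypothetical typed stream, shown here under the assumption that a source adapter populates OrdersIn and a writer (for example an Azure Synapse target) consumes BigOrders:

-- Hypothetical event type and streams; names are illustrative only.
CREATE TYPE OrderType (
  orderId java.lang.Integer,
  amount  java.lang.Double,
  region  java.lang.String
);
CREATE STREAM OrdersIn OF OrderType;
CREATE STREAM BigOrders OF OrderType;

-- Continuous query: filter events in memory before delivery to the target.
CREATE CQ FilterBigOrders
INSERT INTO BigOrders
SELECT orderId, amount, region
FROM OrdersIn
WHERE amount > 1000;

In a full application this CQ would sit between a CDC source and an Azure writer, and the SELECT clause could also enrich or reshape events before delivery.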
{"page_content": " Striim for Databricks | Striim Striim for Databricks: fully managed real-time data streaming for Databricks. Striim for Databricks offers real-time data integration and streaming as a fully managed SaaS solution optimized for Databricks. Stream real-time data to Databricks: Striim reduces your time to insights with fully automated pipeline creation and management. See it in action: Product Database to Databricks. Striim for Databricks is built to power analytics by loading data to Databricks with maximum performance, simplicity, and ease of use. Get more with Striim for Databricks: deliver all your data to Databricks with sub-second latency; heterogeneous sources can be streamed in real time, with high-performance parallel writes to Databricks for maximum write throughput. Scalability and throughput in a single click: scale as your business expands, without additional planning or cost, saving time and money. Get started quickly and easily: create your first data pipeline in minutes with a no-code solution optimized for Databricks users. Accelerate time to value with an automated solution: automatic initial load and CDC save setup time, while automated error handling keeps your data pipelines running smoothly. Keep total cost of ownership low without affecting efficiency: Striim's fully managed infrastructure keeps TCO low with event-based billing and pay-as-you-go, consumption-based pricing. "Striim gives us a single source of truth across domains and speeds our time to market delivering a cohesive experience across different systems." - Neel Chinta, IT Manager at Macy's. Resources - Build a Real-Time Streaming Data Lakehouse with Striim and Databricks (ebook); Building a Real-Time Lakehouse with Data Streaming (webinar hosted by Databricks and Striim on moving multi-terabyte workloads with seconds of latency); A brief overview of the Data Lakehouse (blog). Usage-based pricing that scales with your business needs: scale your consumption up and down as needed. Whether you're processing billions of events per hour or on standby for new events, Striim meters exactly what you use.", "metadata": {"source": "https://www.striim.com/product/databricks/", "title": "Striim for Databricks | Striim", "description": "Striim for BigQuery offers real-time data streaming and integration that's optimized for BigQuery's interface. Grab a demo today and see how easy it is!", "language": "en-US"}}
{"page_content": " Real Time ETL to Snowflake Data Warehouse STRIIM AND SNOWFLAKE Fulfilling the Promise of the Snowflake Data Cloud. With real-time data streaming, Striim centralizes global data sources in Snowflake: stable, scalable, and secure data integration; faster time to decision with real-time data; and smart data pipelines that break down data silos. High performance in real time: Snowflake's vision of a Data Cloud eliminates data silos and lets organizations seamlessly unify, analyze, and share their data, but legacy approaches that move data into Snowflake in batches can negate the speed and flexibility Snowflake provides. Striim provides an avenue to realizing Snowflake's vision with real-time data streaming from disparate sources, on-premises or in the cloud, into the Snowflake platform. For database sources, Striim offers high-performance, non-intrusive change data capture to stream data to Snowflake in real time. Striim: a Snowflake Select Partner: as a Select Technology Partner, Striim delivers fast, seamless real-time data integration to Snowflake, accelerating your ability to perform data-driven analytics in the cloud. You can make better, faster business decisions, course-correct current data integration models, and modernize analytical and database platforms by simplifying data architectures and orchestration. By integrating cloud and on-premises databases, Striim and Snowflake together offer one-stop data integration for improved business intelligence and decision making. From production databases to Snowflake: the amount and variety of data enterprises must manage has increased significantly, and many legacy implementations cannot keep up. Snowflake separates compute from storage, allowing it to scale where Oracle can't, whether on-premises or in the cloud. If you decide to move your data to Snowflake, let Striim, a specialist in continuous, real-time database replication pipelines, help you make the most of your investment by migrating Oracle to Snowflake with change data capture. What's New - Blog: Striim Announces Strategic Partnership with Snowflake to Drive Cloud-Based Data-Driven Analytics; News: Striim Named a Snowflake Select Technology Partner; Blog: Announcing Striim on Snowflake Partner Connect; Blog: Real-Time Data Integration for Snowflake with Striim on Partner Connect. Resources - Tutorial: Migrate and Replicate Data from SQL Server to Snowflake with Striim; Blog: CDC to Snowflake; Blog: Oracle to Snowflake - Migrate data to Snowflake with Change Data Capture.", "metadata": {"source": "https://www.striim.com/partners/striim-and-snowflake/", "title": "Real Time ETL to Snowflake Data Warehouse", "description": "Data migration and replication to Snowflake from on-prem or cloud data warehouses, relational and noSQL databases, Kafka, files/logs, and more.", "language": "en-US"}}
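An Oracle-to-Snowflake CDC pipeline of the kind described above is, at its simplest, an OracleReader source wired to a SnowflakeWriter target in TQL. A minimal sketch, assuming placeholder credentials, Snowflake account, and table mapping (writer property names vary by release, so treat this as a sketch to check against the Snowflake Writer reference):

CREATE APPLICATION OracleToSnowflake;

-- Non-intrusive, log-based CDC from the production Oracle database.
-- All connection values and table names are illustrative placeholders.
CREATE SOURCE OracleCDC USING OracleReader (
  Username: 'striim',
  Password: '********',
  ConnectionURL: 'oracle-prod.example.com:1521:ORCL',
  Tables: 'SALES.ORDERS'
)
OUTPUT TO OrdersStream;

-- Continuous delivery of the change stream to Snowflake (placeholder account).
CREATE TARGET OrdersToSnowflake USING SnowflakeWriter (
  ConnectionURL: 'jdbc:snowflake://example_account.snowflakecomputing.com/?db=ANALYTICS',
  username: 'striim',
  password: '********',
  Tables: 'SALES.ORDERS,ANALYTICS.PUBLIC.ORDERS'
)
INPUT FROM OrdersStream;

END APPLICATION OracleToSnowflake;

For a migration, the same topology is typically run first with an initial-load source (Database Reader) and then with OracleReader for ongoing change data capture.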
{"page_content": " Striim pricing. Unbeatable performance and flexible price models. Unbeatable price performance, with flexible models for every business. Striim Developer - a sandbox environment to start building streaming pipelines in the cloud: free for 10 million events/month; streaming SQL and change data capture with a single pane of glass; real-time data delivery to all supported cloud data warehouses, AWS Kinesis, S3, and Kafka; unlimited streaming SQL queries and streams; community support. Data Product Solutions - automated data pipelines to stream data to BigQuery and Snowflake: listed at 4,400/mo with selectable monthly event tiers of 10M, 20M, 100M, 300M, or 1B events, plus compute at $0.685 to $0.75 per vCPU/hour and data transfer at $0.10/GB in and $0.10/GB out. Includes fully automated schema migration, initial load, and streaming CDC to BigQuery, Snowflake, or Databricks; schema evolution and monitoring of data delivery SLAs; easy parallelization for maximum performance; HIPAA and GDPR compliance; and enterprise support. Striim Cloud Enterprise - enterprise real-time data pipelines for maximum uptime and data freshness: listed at 4,400/mo with the same monthly event tiers, plus compute at $0.50 to $0.60 per vCPU/hour and data transfer listed at $0.10 to $0.50/GB in and $0.10/GB out. Includes enterprise-scale streaming SQL pipelines and the industry's fastest change data capture; access to over 150 streaming connectors; fully dedicated and secure compute, storage, and network infrastructure with customer-managed keys; HIPAA and GDPR compliance; and enterprise support. Need a self-hosted platform? Striim supports on-premises and self-hosted cloud deployments; contact us. Usage-based pricing that scales with your business needs: scale your consumption up and down as needed. Whether you're processing billions of events per hour or on standby for new events, Striim meters exactly what you use.", "metadata": {"source": "https://www.striim.com/pricing/", "title": "Striim pricing. Unbeatable performance and flexible price models.", "description": "Unbeatable Price Performance. Flexible Models for Every Business.", "language": "en-US"}}
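To make the consumption model concrete, here is a purely illustrative back-of-the-envelope reading of the listed Data Product Solutions rates (actual pricing is whatever Striim quotes): a pipeline that keeps 2 vCPUs busy around the clock for a 730-hour month at $0.75 per vCPU/hour meters about 2 x 730 x 0.75 = $1,095 of compute, and moving 100 GB in plus 100 GB out at $0.10/GB each adds roughly $20, on top of the monthly subscription for the selected event tier.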
{"page_content": " Comprehensive List of Data Sources and Targets Connectors: Data Sources and Targets. Use Striim to connect hundreds of enterprise sources and targets with real-time data. Sources: Microsoft SQL Server, MongoDB, MySQL, Oracle, PostgreSQL, Salesforce, plus Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon S3, AMQP, Apache Log, AVRO, Azure Database for MySQL, Azure Database for PostgreSQL / Hyperscale, Azure SQL Database, Batch Files, Binary Files, Cisco Netflow, CollectD, Common Event Format (CEF), Delimited Files, DHCP Log, Flume, Free Text Files, Google Cloud Spanner, Google Cloud SQL for MySQL, Google Cloud SQL for PostgreSQL, Google Cloud SQL for SQL Server, HDFS, HPE NonStop Enscribe, HPE NonStop SQL/MP, HPE NonStop SQL/MX, HTTP, JMS, JMX, JSON, Kafka, Log Files, Mail Log, MapR FS, MariaDB, Microsoft Teams, MQTT, Name/Value, OPC UA, Oracle Exadata, Oracle GoldenGate Trail Files, PCAP, ServiceNow, Slack, SNMP, Sys Log, System Files, TCP, Teradata, UDP, WCF, Windows Event Log, XML Files, and Zipped Files. Targets: Azure Synapse Analytics, Databricks, Google BigQuery, Kafka, PostgreSQL, Snowflake, plus AlloyDB, Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon Kinesis, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon Redshift, Amazon S3, AMQP, AVRO, Azure Blob Storage, Azure Cosmos DB, Azure Cosmos DB (Cassandra API), Azure Cosmos DB (MongoDB API), Azure Data Lake Storage, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Event Hubs, Azure HDInsight, Azure SQL Database, Cassandra, Cloudera, CockroachDB, Delimited Files, Google Cloud Pub/Sub, Google Cloud Spanner, Google Cloud SQL for MySQL, Google Cloud SQL for PostgreSQL, Google Cloud SQL for SQL Server, Google Cloud Storage, Hazelcast, HBase, HDFS, Hive (Cloudera & Hortonworks), HPE NonStop SQL/MP, HPE NonStop SQL/MX, Impala, JMS, JSON, Kudu, MapR DB, MapR FS, MapR Streams, MariaDB, Microsoft SQL Server, MongoDB, MQTT, MySQL, Oracle, Parquet, Salesforce, SAP Hana, SingleStore (MemSQL), Template, Teradata, XML Files, and Yellowbrick. All sources by category - Databases: Oracle, Microsoft SQL Server, MySQL, PostgreSQL, MongoDB, HPE NonStop SQL/MX, HPE NonStop SQL/MP, HPE NonStop Enscribe, MariaDB, and others via JDBC. Amazon Web Services: Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon RDS for SQL Server, Amazon Aurora MySQL, Amazon Aurora PostgreSQL, Amazon S3. Big Data: Hive, HDFS. Files: Log Files, System Files, Batch Files. Microsoft Azure: Azure Database for PostgreSQL / Hyperscale, Azure Database for MySQL, Azure Database for MariaDB, Azure Event Hubs, Azure SQL Managed Instance, Azure SQL Database. Supported Data Formats: Delimited Files, JSON, XML, Free Text, Binary, Name/Value, Zipped, AVRO, Oracle GoldenGate Trail Files, Apache Log, Sys Log, Windows Event Logs, Mail Log, SNMP, CollectD, CEF, DHCP Log, WCF, and others. Google Cloud: Cloud SQL for MySQL, Cloud SQL for PostgreSQL, Cloud SQL for SQL Server, Cloud Spanner, BigQuery. Cloud Applications: Salesforce, ServiceNow, Microsoft Teams, Slack. Messaging Systems: Kafka, Flume, JMS, AMQP. Data Warehouses: Oracle Exadata, Teradata. Network Protocols: HTTP, MQTT, PCAP, TCP, UDP. All targets by category - Databases: Oracle, SQL Server, MySQL, PostgreSQL, MariaDB, AlloyDB, CockroachDB. Data Warehouses: Microsoft SQL Server, HPE NonStop SQL/MX, HPE NonStop SQL/MP, MemSQL, Teradata, SAP Hana, SingleStore (MemSQL), and others via JDBC. Cloud Data Services: Databricks, MongoDB, Snowflake, Yellowbrick, Confluent Cloud. Amazon Web Services: Amazon Kinesis, Amazon Redshift, Amazon RDS for MySQL, Amazon RDS for Oracle, Amazon RDS for PostgreSQL, Amazon RDS for SQL Server, Amazon S3, Amazon Aurora MySQL, Amazon Aurora PostgreSQL. Big Data: HBase, Hive, HDFS, Kudu, Impala, Cloudera, Hazelcast, Hortonworks. Microsoft Azure: Azure Blob Storage, Azure Cosmos DB, Azure Data Lake Storage (Gen1 and Gen2), Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, Azure Event Hubs, Azure HDInsight, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics. Google Cloud: Google BigQuery, Cloud Pub/Sub, Cloud SQL for MySQL, Cloud SQL for PostgreSQL, Cloud SQL for SQL Server, Cloud Spanner, Cloud Storage, Google Cloud Dataproc. Messaging Systems: Kafka, JMS, AMQP, MapR Streams. Data Formats: AVRO, Delimited, JSON, Parquet, Template, XML.", "metadata": {"source": "https://www.striim.com/connectors/", "title": "Comprehensive List of Data Sources and Targets", "description": "A comprehensive list of Striim's supported sources and targets. A broad range of out-of-the-box solutions for real-time data movement and processing.", "language": "en-US"}}
Learning Resources, filterable by content type (Blog, Ebooks and Papers, Videos and Podcasts, Recipes and Tutorials, On-demand Webinars), by use case (Analytics, Artificial Intelligence, Change Data Capture, Cloud Integration, Data Fabric / Data Mesh, Data Ingestion, Data Lake, Data Migration, Data Pipeline, Data Replication, Data Transformation, Data Warehouse, DataOps, ETL / ELT, Hybrid Cloud, IoT, Real-time Data, Streaming Data Integration, Striim), and by source/target (Amazon, Azure Synapse, Cassandra, Databricks, dbt, Google Cloud, HPE NonStop, Kafka, Microsoft Azure, MySQL, Oracle, Postgres, Salesforce, Snowflake).
Featured resources:
Driving HOPTEK's AI-powered system with streaming data pipelines (case study)
Unleashing the Power of Striim: Oracle to Snowflake Data Stream with Real-time CDC (on-demand technical webinar)
Real-Time, High Throughput, and Low Latency Data Streaming from Oracle to Snowflake using Snowpipe Streaming (white paper)
A Comprehensive Guide to Migrating On-Premise Oracle Data to Databricks Unity Catalog with Python and Databricks Notebook (recipe)
How American Airlines Powers Global TechOps with a Real-Time Data Hub (case study)
Oracle to Snowflake Initial Load (video tutorial)
Snowflake to Oracle Initial Load (video tutorial)
Introducing Striim for Databricks (demo)
Striim for Snowflake: Stream data in real-time to Snowflake (tutorial)
Microsoft Fabric and Striim (video)
Build Smart, Real-Time Data Pipelines for OpenAI using Striim (recipe)
Striim Real-Time Analytics Intro Recipe (recipe)
Streaming Synthetic Data to Snowflake with Striim (recipe)
Streaming SQL on Kafka with Striim (recipe)
Building a Real-Time Lakehouse with Data Streaming (on-demand webinar)
The Future of Streaming Data: Technology, Use Cases, and Opportunities (on-demand webinar)
Unleash the power of real-time data streaming to Google BigQuery (on-demand webinar)
Streaming Change Data Capture from MongoDB to ADLS Gen2 Parquet (recipe)
", "metadata": {"source": "https://www.striim.com/resources/", "title": "Striim's Latest Data Sheets, Videos and White Papers", "description": "Striim's latest data sheets, videos, white papers, and webinars on Striim's streaming integration software for moving real-time data to the Cloud.", "language": "en-US"}}
{"page_content": "Streaming Data Integration and Operational Intelligence Blog
Recent posts from the Striim Blog:
Data Migration: A Comprehensive Guide to Migrating On-Premise Oracle Data to Databricks Unity Catalog with Python and Databricks Notebook (July 10, 2023)
Artificial Intelligence: Real-Time Data Stories Powering Gen AI & Large Language Models (LLM) (July 10, 2023)
Change Data Capture: When Change Data Capture Wins (July 7, 2023)
Striim: How to Build and Deploy a Custom Striim Image to Google Cloud Platform with HashiCorp Packer (March 22, 2023)
Streaming Data Integration: How to Use Terraform to Automate the Deployment of a Striim Server (February 17, 2023)
Streaming Data Integration: Democratizing Data Streaming with Striim Developer (February 15, 2023)
Analytics: 5 Real-world Examples of Companies Using Striim for Real-Time Data Analytics (February 10, 2023)
Striim: 5 Reasons Low Code Developers Should Join Striim (January 31, 2023)
DataOps: A Guide to Data Contracts (January 4, 2023)
Striim: Introducing the Striim Community and Discord Server (December 21, 2022)
Analytics: Striim Cloud on AWS: Unify your data with a fully managed change data capture and data streaming service (November 30, 2022)
Analytics: How Real-time Healthcare Analytics Helps Improve Patient Care (November 18, 2022)", "metadata": {"source": "https://www.striim.com/blog/", "title": "Streaming Data Integration and Operational Intelligence Blog", "description": "Streaming data integration and operational intelligence. Check out how the streaming BI world is changing!", "language": "en-US"}}
{"page_content": "Events | Striim
Join us for an in-person, virtual, or on-demand event, or browse on-demand webinars and podcasts.
Google Next: August 29-31, 2023, San Francisco
Big Data LDN: September 20-21, 2023, London
What's New in Data? Listen to Striim's podcast. Latest guest: Armon Petrossian, Coalesce Automation
", "metadata": {"source": "https://www.striim.com/company/events/", "title": "Events | Striim", "language": "en-US"}}
{"page_content": "Striim Services and Support
Support & Services: Let Striim's services and support experts bring your data products to life and turn your data project into real-time, measurable outcomes.
24/7 Support for Every Striim Customer. Get the most out of your investment in Striim's software by taking advantage of our services and support. We can help you begin, accelerate, and expand your journey with real-time data. Standard support includes: easy 24/7 access to support for your whole team via our customer support portal; guaranteed response times based on the severity of your issue; and a team of qualified, experienced Striim experts.
Customer Support ensures your success from day one. Deliver measurable business results quickly with a Striim deployment supported by our team of experts. Our packages are built around the Striim onboarding methodology so you can deploy smart data pipelines consistently and confidently. Striim CS Architects will help design solutions for your use cases and choose the right services package, whether that means working directly with our experts, training your team on Striim, or getting new team members up to speed.
Onboarding packages:
Quickstart Package: expert onboarding by a team with deep technical expertise; architecture design and technical implementation consulting; 90-day implementation period, including dedicated training, with add-on packages available.
Training Refresher: address turnover in your organization through refresher training; Striim Platform training plus a full set of bespoke use cases; virtual sessions with unlimited access to all recordings.
Technical Account Management: a key resource who understands Striim and your systems and advises on customer journey and architecture; connects your objectives to resources throughout the Striim organization; supports your strategy and implementation goals.
", "metadata": {"source": "https://www.striim.com/services-support/", "title": "Striim Services and Support", "description": "Let Striim's services and support experts bring your Data Products to life.", "language": "en-US"}}
{"page_content": "Careers | Striim
Powering digital transformation. Innovative enterprises use Striim to monitor business events across any environment, build applications that drive digital transformation, and leverage true real-time analytics to provide a superior experience to their customers.
Our Mission: Striim was founded with the simple goal of helping companies make data useful the instant it's born. Managing large-scale data is a challenge for every enterprise. Real-time, integrated data is a requirement to stay competitive, but modernizing data architecture can be an overwhelming task. We built Striim to handle the volume, complexity, and velocity of enterprise data by connecting legacy systems to modern cloud applications on a scalable platform. Our customers don't have to pause operations to migrate data or juggle different tools for every data source; they simply connect legacy systems to newer cloud applications and get data streaming in a few clicks. Come and be part of the Striim team and mission.
Benefits: competitive salary and pre-IPO stock options; company-wide adoption of distributed-first working practices and asynchronous collaboration; an emphasis on building a great culture of teamwork and growing together; generous paid medical/dental/vision coverage; 401K; medical and dependent FSA.
Our Core Values. One Striim: we strive to support a human-first, employee-second work environment while holding high standards of customer satisfaction and data security. Unlimited Potential: we encourage employee collaboration, access to leadership, transparency, and empathy to promote continuous growth, learning, development, and innovation. Dignity: we hold high standards of ethics, treat our clients, partners, and employees with respect, and support our diverse workforce.
Employee Testimonials.
Dianna Spring, Director of Product Marketing - PLG, HQ - Palo Alto, CA: 'My favorite thing about working at Striim is the excitement around the category-creating solution we are building for our customers every day and the chance to do that alongside some of the brightest minds in the industry. Our Striim core value of supporting a human-first, employee-second work environment is what makes me so grateful to be part of this team. We support working moms, we support DJs, and we support aviators. We all have lives outside of work, and we bring equal amounts of passion to our lives inside and outside of work.'
Samay Gandhi, Software Engineer - UI, HQ - Palo Alto, CA: 'My summer internship at Striim helped me grow as a professional and a developer. Everything was smooth and streamlined from start to finish, allowing me to focus all my energy on work. I was tasked with revamping the homepage interface, which kept me excited and engaged with the product. Since the homepage has direct end-user visibility, I was always motivated to put my best foot forward. Throughout this journey, I was provided with the necessary resources to learn new skills and concurrently apply them. With the guidance and mentorship of my manager, Aswin Yamuzala, I successfully took the project over the finish line for the final showcase. I was impressed with what I had accomplished during my internship at Striim and couldn't wait to come back as a full-time employee to work on other thrilling tasks.'
Ganesh Bushnam, Senior SW Engineer - Adapters, Chennai, India: 'An inclusive culture where everyone is approachable and everyone's opinion and contribution is highly valued. Apart from being a technically strong company with a highly focused leadership team, one of the most important things that makes me feel this is my company every day is that everyone is allowed to contribute in the areas we love and to approach anyone to provide or receive suggestions. Our culture strives to make sure everyone is in a good professional and personal space of mind, and we have the flexibility to manage our personal preferences and get the work done at the same time. We work hard and play hard together.'
We're backed by the best. Striim was launched by executive and technical members of pioneering organizations like GoldenGate Software (acquired by Oracle in 2009), Informatica, Oracle, Embarcadero Technologies, and BEA/WebLogic. As such, Striim is fortunate to be backed by some of the top investors in the world. Learn more about our latest round of funding.", "metadata": {"source": "https://www.striim.com/careers/", "title": "Careers | Striim", "language": "en-US"}}
{"page_content": "Enabling and Accelerating Analytics at Scale in Critical Operations
Our Customers: Striim empowers the world's organizations. Connect your employees, suppliers, and customers and infuse real-time data into every decision and process.
Customer case studies:
Hoptek: Hoptek's SaaS software for the trucking industry enables real-time synchronization with the ever-changing status of trucks on the road. With real-time data, Hoptek's users can gain valuable insights, react swiftly to changing circumstances, and drive efficiency. Striim's pipelines keep Hoptek's AI system constantly fueled with up-to-date information, facilitating accurate predictions and actionable intelligence.
American Airlines: American Airlines uses a real-time data hub consisting of MongoDB, Striim, Azure, and Databricks to ensure a seamless, real-time operation at massive scale. This architecture leverages change data capture from MongoDB to get operational data in real time, processes and models the data for downstream usage, and streams it to consumers in real time.
Ciena: Ciena is a networking systems, services, and software company whose networking solutions support 85 percent of the world's largest communications service providers. Ciena replicates 100 million events to Snowflake per day with Striim's autonomous data pipelines.
Retail: As retailers strive to meet the growing expectations of shoppers, they are turning to Google Cloud and Striim to transform their businesses and tackle opportunities in an increasingly challenging industry.
Blume Global: Blume Global is a leading provider of supply chain technology. Blume selected Striim Platform on Google Cloud to provide live streaming of data from its on-premises Oracle source to a MySQL target on Google Cloud, minimizing disruption and risk.
MineralTree: MineralTree, an innovative Silicon Valley fintech company, provides its customers with rich reports using a modern data stack powered by Snowflake, Striim, dbt, and Looker.
Global financial data provider: A global financial data provider accelerates its strategic journey to the cloud with Striim.", "metadata": {"source": "https://www.striim.com/customers/", "title": "Enabling and Accelerating Analytics at Scale in Critical Operations", "description": "Striim is enabling and accelerating analytics at scale in critical operations across airlines, digital gaming companies, healthcare providers, and global retailers.", "language": "en-US"}}
{"page_content": "Leading Technology Companies and SIs Partner with Striim
Striim Technology Partners: Accelerate with Striim partners. Striim partners with leading technology platform and service providers. Together, we provide comprehensive solutions for real-time data integration and streaming analytics. Find a partner. Have an opportunity you'd like to discuss with Striim? Let's talk.", "metadata": {"source": "https://www.striim.com/partners/", "title": "Leading Technology Companies and SIs Partner with Striim", "description": "Striim is proud to partner with leading technology and service providers to provide the easiest path to real-time data integration and streaming analytics.", "language": "en-US"}}
{"page_content": "Striim Newsroom | Latest Press Releases and News Coverage from Striim
\n\n\n\n\n\n\n\n\n\n\nINDUSTRIES \n\n\n\n\n\nFinancial Services \n\n\n\nRetail & CPG \n\n\n\nHealthcare & Pharma \n\n\n\nTravel, Transport & logistics \n\n\n\nManufacturing & Energy \n\n\n\nTelecommunications \n\n\n\nTechnology \n\n\n\nMedia \n\n\n\n\n\n\n\nPricing\n\nPricing\n \n\n\n\n\n\n\n\nUnbeatable Price Performance.Flexible Models for Every Business. \n\n\n\n\n\n\n\n\nConnectors\n\nData Sources and Targets\n \n\n\n\n\n\n\n\nConnectorsStriim can connect hundreds of source and target combinations. View a complete list. \n\n\n\n\n\n\n\n\nResources\n\nLearning Blog Community Events Support Documentation\n \n\n\n\n\n\n\nLEARN \n\n\n\nBlogRead the latest blogs from our experts\n\n \n\n\n\nLearningSearch all our latest recipes, videos, podcasts, webinars and ebooks\n \n\n\n\n\n\nCONNECT \n\n\n\nEventsFind the latest webinars, online, and face-to-face events \n\n\n\nThe Striim CommunityStay up to date on new product updates & join the discussion. \n\n\n\n\n\nSUPPORT \n\n\n\nSupport & ServicesLet Striim\u2019s services and support experts bring your Data Products to life \n\n\n\nDocumentationFind the latest technical information on our products \n\n\n\n\n\n\n\nCompany\n\nAbout Careers Customers Partners Striim Newsroom Contact\n \n\n\n\n\n\n\nAbout StriimLearn all about Striim, our heritage, leaders and investors \n\n\n\nCareersLooking to work for Striim? Find all the available job options \n\n\n\nCustomersSee how our customers are implementing our solutions \n\n\n\n\n\nPartnersFind out more about Striim's partner network \n\n\n\nNewsroomFind all the latest news about Striim\n\n \n\n\n\nContact UsConnect with the experts at Striim \n\n\n\n\n\n\n\nFree Trial\n\n\n\n\n\nX \n\n\n\n\n\n\n\n\nView a Demo\n\n\n\n\n\n\n\n\n\n\nFree Trial\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nStriim Newsroom\n \n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\t\t\t\tStriim Announces Streaming Integration Platform for Snowflake to Enable Industry Adoption Of Real-Time Data\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tJune 27, 2023\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tStriim Announces Fully Managed Real-Time Streaming and Integration Service for Analytics on the Databricks Lakehouse\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tJune 23, 2023\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tStriim Announces a Fully-Managed Real-Time Enterprise Data Integration Service for Snowflake\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tApril 4, 2023\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tStriim Achieves Google Cloud Ready \u2013 AlloyDB Designation\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tMarch 29, 2023\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tStriim appoints Nadim Antar as SVP and GM of EMEA to accelerate its growth in the region\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tFebruary 16, 2023\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tStriim Introduces Striim Cloud on Amazon Web Services for Real-Time Streaming Data to accelerate AWS Cloud Modernization and Analytics\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tNovember 30, 2022\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tStriim Announces the First High Performance, Fully Managed Real-Time Streaming and Integration Service for Google BigQuery\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tOctober 13, 2022\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tData Integration and Streaming Data Leader Striim adds Former AWS and Salesforce Industry Leaders to its Engineering Team\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tOctober 3, 2022\t\t\n\n\n\n\n\n\n\n\n\n\t\t\t\tReal-Time Streaming Data Leader Striim Continues its Global 
Expansion into the United Kingdom and Europe\t\t\t\n\n\n\t\t\tRead More \u00bb\t\t\n\n\n\n\t\t\tSeptember 21, 2022\t\t\n\n\n\n\n\n\n\u00ab Previous\nPage1\nPage2\nPage3\n\u2026\nPage5\nNext \u00bb \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\u00d7\n\n\n\n\n\n\n\n\n\n\n\n\u00d7\n\nLoading...\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \nProducts\nStriim Platform\nStriim Cloud\nStriim for BigQuery\nGoogle Cloud\nMicrosoft Azure\nDatabricks\nSnowflake\nAWS\n \n\nProducts\nStriim Platform\nStriim Cloud\nStriim for BigQuery\nGoogle Cloud\nMicrosoft Azure\nDatabricks\nSnowflake\nAWS\n \n\n\n\n\n\n\n\nData Mesh\nReal Time Operations\nData Modernization\nDigital Customer Experience\nData Fabric\u200b\nReal-Time Analytics\nIndustries\n \n\nData Mesh\nReal Time Operations\nData Modernization\nDigital Customer Experience\nData Fabric\u200b\nReal-Time Analytics\nIndustries\n \n\n\n\n\n\n\n\nDocumentation\nBlog\nRecipes\nResources\nVideos\nSupport\nEvents\nCommunity\n \n\nDocumentation\nBlog\nRecipes\nResources\nVideos\nSupport\nEvents\nCommunity\n \n\n\n\n\n\n\n\nCustomers\nPartners\nPricing\nConnectors\nCompare\nContact\n \n\nCustomers\nPartners\nPricing\nConnectors\nCompare\nContact\n \n\n\n\n\n\n\n\nCompany\nNewsroom\nCareers\nEthics Hotline\n \n\nCompany\nNewsroom\nCareers\nEthics Hotline\n \n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\nCopyright\u00a92012-2023 Striim\u00a0| Legal |\u00a0Privacy Policy \n\n\n\n\n\n\n\nWe're Hiring\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n \n\n\nLinkedin\n \n\n\n\nFacebook\n \n\n\n\nTwitter\n \n\n\n\nYoutube\n \n\n\n\nRss\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nProducts\nUse Cases\n\nData Modernization\nOperational Analytics\nCustomer 360\nData Mesh\nMulti-Cloud Data Fabric\nDigital Customer Experience\nIndustries\n\n\nConnectors\nResources\n\ntest\n\n\nCompany\n \n\nProducts\nUse Cases\n\nData Modernization\nOperational Analytics\nCustomer 360\nData Mesh\nMulti-Cloud Data Fabric\nDigital Customer Experience\nIndustries\n\n\nConnectors\nResources\n\ntest\n\n\nCompany\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "metadata": {"source": "https://www.striim.com/company/newsroom/", "title": "Striim Newsroom | Latest Press Releases and News Coverage from Striim", "description": "Check out Striim's latest press releases and news coverage, and learn why Striim has become a leader in real-time streaming integration to the cloud.", "language": "en-US"}} {"page_content": " Contact | Striim Products Striim Cloud Striim Platform Striim for BigQuery Striim For Databricks Striim for Snowflake Striim CloudA fully managed SaaS solution that enables infinitely scalable unified data integration and streaming. Striim PlatformOn-premise or in a self-managed cloud to ingest, process, and deliver real-time data. Striim for BigQuery Striim for Databricks Striim for Snowflake Pricing Pricing that is just as flexible as our products Learn More Solutions Striim on AWS Striim Cloud Striim and Microsoft Azure Databricks and Striim Striim and Snowflake Financial Services Retail and CPG Striim Solutions for Healthcare and Pharmaceuticals Striim Solutions for Travel, Transportation, and Logistics Striim Solutions for Manufacturing and Energy Striim Solutions for Telecommunications Striim Technology Striim Media TECHNOLOGIES AWSDeliver real-time data to AWS, for faster analysis and processing. 
Google CloudUnify data on Google Cloud and power real-time data analytics in BigQuery. Microsoft AzureQuickly move data to Microsoft Azure and accelerate time-to-insight with Azure Synapse Analytics and Power BI. DatabricksUnleash the power of Databricks AI/ML and Predictive Analytics. SnowflakeFulfill the promise of the Snowflake Data Cloud with real-time data. INDUSTRIES Financial Services Retail & CPG Healthcare & Pharma Travel, Transport & logistics Manufacturing & Energy Telecommunications Technology Media Pricing Pricing Unbeatable Price Performance.Flexible Models for Every Business. Connectors Data Sources and Targets ConnectorsStriim can connect hundreds of source and target combinations. View a complete list. Resources Learning Blog Community Events Support Documentation LEARN BlogRead the latest blogs from our experts LearningSearch all our latest recipes, videos, podcasts, webinars and ebooks CONNECT EventsFind the latest webinars, online, and face-to-face events The Striim CommunityStay up to date on new product updates & join the discussion. SUPPORT Support & ServicesLet Striim\u2019s services and support experts bring your Data Products to life DocumentationFind the latest technical information on our products Company About Careers Customers Partners Striim Newsroom Contact About StriimLearn all about Striim, our heritage, leaders and investors CareersLooking to work for Striim? Find all the available job options CustomersSee how our customers are implementing our solutions PartnersFind out more about Striim's partner network NewsroomFind all the latest news about Striim Contact UsConnect with the experts at Striim Free Trial X View a Demo Free Trial Get in Touch Customer Support Current Striim customers can submit a technical support ticket via the portal. Visit Striim Support Portal Careers Please visit our careers page for opportunities with Striim. Visit Striim Careers Contact Sales Our Offices Palo Alto (HQ) 575 Middlefield Road Palo Alto, CA 94301 United States +1 650 241 \u00a00680 Chennai The Executive Center 5th Floor, Fortius block Olympia Tech parkSidco Industrial EstateGuindy, Chennai, Tamil Nadu - 600032 Bangalore WeWork Galaxy 43, Residency RdShanthala NagarAshok NagarBengaluru, Karnataka 560025 London 71-73 Carter Lane, London, EC4V 5EQ +44 (20) 45180078 \u00d7 \u00d7 Loading... 
", "metadata": {"source": "https://www.striim.com/contact/", "title": "Contact | Striim", "language": "en-US"}} {"page_content": "<p>In 4.1.2.1E, a new DDL parser is adopted. The following edge DDL cases are now supported:</p>\n<p>1. MySQL \"alter table ... algorithm ... lock ...\", e.g.:<br>ALTER TABLE test1 ADD COLUMN c1 ALGORITHM=INSTANT;<br>Alter table test1 add column c1 algorithm=inplace, lock=none;</p>\n<p>2. MySQL \"create table ... like ...\", e.g.:<br>create table test1 like test2;</p>"} {"page_content": "<h3>Issue:</h3>\n<p>Oracle BLOB datatype to Postgres JSON datatype mapping during INITIAL LOAD fails with the following error:</p>\n<pre><code>ERROR: invalid input syntax for type json Detail: Token \"7B226964223A317D\" is invalid. <br>Where: JSON data, line 1: ...7B226964223A317D unnamed portal parameter $2 = '...'</code></pre>\n<h3>Cause:</h3>\n<p>The BLOB is returned as binary/raw data from the source.</p>\n<h3>Solution:</h3>\n<p>(a) If the source is Oracle 19c or above, use the JSON_SERIALIZE function to convert the value to a string in the QUERY property of DatabaseReader:</p>\n<pre><code>query: 'select c1 , JSON_SERIALIZE (c2 ) as c2 from striim.t_json;'</code></pre>\n<p>Here c2 is the BLOB column.</p>\n<p>(b) For Oracle versions that do not support the JSON_SERIALIZE function, use a Java function in a CQ to convert the hex back to a character array and then to a string:</p>\n<pre><code>SELECT putuserdata(d,<br>'clobString', new String(org.apache.commons.codec.binary.Hex.decodeHex(to_string(data[1]).toCharArray()), \"UTF-8\")) FROM OCNT_Initial_Load_B5_OutputStream d;</code></pre>\n<p>Here data[1] is the BLOB column. This userdata value can then be mapped to the target column using COLUMNMAP.</p>"} {"page_content": "<h3>Issue:</h3>\n<p>ADLSGen2Writer configured with ParquetFormatter fails with the below exception:</p>\n<pre>com.webaction.common.exc.AdapterException: Exception in Parquet Formatter: org.apache.avro.AvroRuntimeException: Not a record schema: null. 
Cause: Not a record schema: null<br>at com.webaction.proc.ParquetFormatter.format(ParquetFormatter.java:323) ~[?:?]<br>at com.webaction.source.lib.rollingpolicy.outputstream.RollOverOutputStream.writeEvent(RollOverOutputStream.java:537) ~[SourceCommons-4.1.2.1.jar:?]<br>at com.striim.io.target.commons.LocalFileWriter.writeEvent(LocalFileWriter.java:443) ~[SourceCommons-4.1.2.1.jar:?]<br>at com.striim.io.target.commons.LocalFileWriter.receiveImpl(LocalFileWriter.java:415) ~[SourceCommons-4.1.2.1.jar:?]<br>at com.striim.io.target.commons.LocalFileWriter.receive(LocalFileWriter.java:397) ~[SourceCommons-4.1.2.1.jar:?]<br>at com.webaction.runtime.components.Target.receive(Target.java:298) ~[Platform-4.1.2.1.jar:?]<br>at com.webaction.runtime.components.Target.receive(Target.java:298) ~[Platform-4.1.2.1.jar:?]<br>at com.webaction.runtime.DistributedRcvr.doReceive(DistributedRcvr.java:234) ~[Platform-4.1.2.1.jar:?]<br>at com.webaction.runtime.DistributedRcvr.onMessage(DistributedRcvr.java:112) ~[Platform-4.1.2.1.jar:?]<br>at com.webaction.jmqmessaging.InprocAsyncSender.processMessage(InprocAsyncSender.java:53) ~[Platform-4.1.2.1.jar:?]<br>Caused by: org.apache.avro.AvroRuntimeException: Not a record schema: null<br>at org.apache.avro.generic.GenericData$Record.&lt;init&gt;(GenericData.java:227) ~[avro-striim11-1.12.0.jar:striim11-1.12.0]<br>at com.striim.formatter.schema.TableWAEventBinaryFormatter.generateRecordFromEvent(TableWAEventBinaryFormatter.java:52) ~[SourceCommons-4.1.2.1.jar:?]<br>at com.webaction.proc.ParquetFormatter.format(ParquetFormatter.java:316) ~[?:?]</pre>\n<p> </p>\n<p><strong>Cause: </strong></p>\n<p>The source is configured with multiple tables and ADLSGen2Writer is configured to write the data for all the tables in the same directory. Striim supports dynamic naming for directories and filenames. However, we cannot specify dynamic naming for the schema filename. Since the same static schema file is used for multiple tables it leads to \"Not a record schema\" exception.</p>\n<pre> schemaFileName: 'STATIC_SCHEMA_FILE'</pre>\n<p><strong> </strong></p>\n<p><strong>Solution:</strong></p>\n<p>Specify dynamic naming for the directory to include a sub-directory for each table. This will generate the schemafile in a separate directory for each table.</p>\n<p>As an example,</p>\n<pre><span>SELECT PutUserdata(<strong>s,'schemaDirectory','/opt/striim/' + TO_STRING(META(s,'TableName')).split('\\\\\\\\.')[1]</strong>,'filename',TO_STRING(META(s,'TableName')).split('\\\\\\\\.')[1]+'_'+ TO_STRING(DYEARS(DNOW())) +'-'+ TO_STRING(DMONTHS(DNOW()))+'-'+ TO_STRING(DDAYS(DNOW())) + '-'+ TO_STRING(DHOURS(DNOW()))+'-'+ TO_STRING(DMINS(DNOW()))+'-'+ TO_STRING(DSECS(DNOW()))) FROM Oracle2ADLSGen2_CDC_dev_OutputStream s;</span></pre>\n<pre>schemaFileName: '%@userdata(<span><strong>schemaDirectory</strong></span>)%'</pre>"} {"page_content": "<h3>Goal</h3>\n<p>The goal is to define a static consumer in Striim's KafkaReader/ KafkaWriter</p>\n<p> </p>\n<h3>Solution</h3>\n<p>The default consumer name created by Striim is dynamic based on the app config like</p>\n<pre><span>&lt;nameSpace&gt;_TARGET_&lt;STRIIM_KAFKA_PRODUCER_NAME&gt;_&lt;TOPIC_NAME&gt;_&lt;EPOCH_TIME&gt; <br><br>e.g. 
admin_TARGET_CDC_KAFKA_WRITER_STRIIM_TOPIC_1674032820535</span></pre>\n<p>If the requirement is to use a static consumer following can be set in the KafkaConfig of reader/ writer</p>\n<p><span>group.id=&lt;any string&gt;</span></p>\n<p><span>eg.,</span></p>\n<pre><code>KafkaConfig: 'request.timeout.ms=60001;session.timeout.ms=60000;group.id=striim', </code></pre>"} {"page_content": "<h3 id=\"h_01H8YAKT2SRQ8DSFQVKS6WFTNH\">Goal:</h3>\n<p>The default cert provided by Striim part of the installed needs to be replaced with either self-signed or custom CA signed cert for HTTPS connection</p>\n<p> </p>\n<h3 id=\"h_01H8YAKWTYVWPEDRGWVPJPN18C\">Steps:</h3>\n<p>The steps given below are for using self-signed cert in non-prod environments to enable HTTPS connection for Striim login page</p>\n<p> </p>\n<p>1) Generate a keystore which is a public-private key pair and it stores each machine's own identity</p>\n<p> </p>\n<p>The keystore file containts the private key of the cert and a pubc cert<br>Ensure that common name (CN) matches exactly with the fully qualified domain name (FQDN) of the server. The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not the malicious one. The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.<br>use CN=localhost or FQDN</p>\n<p> </p>\n<pre>/Users/rajesh/app $ keytool -keystore server.keystore.jks -alias localhost -keyalg RSA -validity 365 -genkey<br>Enter keystore password: <br>Re-enter new password: <br>What is your first and last name?<br>[Unknown]: localhost<br>What is the name of your organizational unit?<br>[Unknown]: support<br>What is the name of your organization?<br>[Unknown]: striim<br>What is the name of your City or Locality?<br>[Unknown]: palo alto<br>What is the name of your State or Province?<br>[Unknown]: ca<br>What is the two-letter country code for this unit?<br>[Unknown]: us<br>Is CN=localhost, OU=support, O=striim, L=palo alto, ST=ca, C=us correct?<br>[no]: yes<br>Enter key password for &lt;localhost&gt;<br>(RETURN if same as keystore password): <br>Warning:<br>The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using \"keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12\".</pre>\n<p> </p>\n<p>2) Creating your own CA</p>\n<p> </p>\n<p>The generated CA is simply a public-private key pair and certificate, and it is intended to sign other certificates.</p>\n<p> </p>\n<pre>$ openssl req -new -x509 -keyout ca-key -out ca-cert -days 365<br><br>Country Name (2 letter code) [AU]:US<br>State or Province Name (full name) [Some-State]:CA<br>Locality Name (eg, city) []:PALO ALTO<br>Organization Name (eg, company) [Internet Widgits Pty Ltd]:STRIIM<br>Organizational Unit Name (eg, section) []:CA-ROOT<br>Common Name (e.g. 
server FQDN or YOUR name) []:CA-ROOT<br>Email Address []:</pre>\n<p> </p>\n<p>3) Export the unsigned cert from the keystore</p>\n<pre>$ keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file<br><br></pre>\n<p> </p>\n<p>4) Sign the cert using the root ca</p>\n<pre>/Users/rajesh/app $ openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:secret<br><br></pre>\n<p> </p>\n<p>5) Import the root ca to key store</p>\n<pre>/Users/rajesh/app $ keytool -keystore server.keystore.jks -alias CA-ROOT -import -file ca-cert<br>Trust this certificate? [no]: yes<br>Certificate was added to keystore</pre>\n<p> </p>\n<p>6) Import the signed cert back to key store</p>\n<pre>/Users/rajesh/app $ keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed<br><br></pre>\n<p> </p>\n<p>7) Import the root ca to java trust store</p>\n<p>The truststore of a client stores all the certificates that the client should trust.</p>\n<pre><br>/Users/rajesh/app $ cd $JAVA_HOME/jre/lib/security<br><br>/Library/Java/JavaVirtualMachines/jdk1.8.0_211.jdk/Contents/Home/jre/lib/security $ sudo keytool -import \\<br>&gt; -keystore $JAVA_HOME/jre/lib/security/cacerts \\<br>&gt; -storepass changeit -noprompt \\<br>&gt; -alias ROOTCA -file /Users/rajesh/app/ca-cert<br>Certificate was added to keystore</pre>\n<p> </p>\n<p>8) Optionally verify the cert from command line</p>\n<pre>$ openssl s_client -debug -connect localhost:9093 -tls1<br>$ openssl s_client -debug -connect localhost:9081<br>$ curl --insecure -v https://localhost:9081 2&gt;&amp;1 | awk 'BEGIN { cert=0 } /^\\* SSL connection/ { cert=1 } /^\\*/ { if (cert) print }'</pre>\n<p> </p>\n<p>9) connect to Striim login page via browser </p>\n<p> </p>\n<p><a href=\"https://&lt;hostname\">https://&lt;hostname</a> or ip or localhost&gt;:9081/</p>\n<p> </p>\n<p> </p>"} {"page_content": "<h3>Issue:</h3>\n<p>On Striim versions v4.1.0.2 and higher following error is seen in DWHWriter (SnowflakeWriter in this example)</p>\n<pre><span>Timestamp '2023-06-06 ??:??:??.000000000' is not recognized</span><br><span>File 'TRIIMUNDERSCOREDEV.SALESFORCE.Contact0.csv.gz', line 2, character 553</span><br><span>Row 2, column \"__STRIIM_ADMIN_PROD_SALESFORCE_INITIALLOAD_SNOWFLAKE_TARGET_CONTACT_STAGEONE\"[\"LASTACTIVITYDATE\":39]</span></pre>\n<p>The COLUMN name , value and error could differ depending the source although the error is seen at the target writer side</p>\n<p> </p>\n<h3>Cause:</h3>\n<p data-renderer-start-pos=\"148\">The issue happens while ingesting the source DATE value to target TIMESTAMP variation. This leads to the timestamp value getting corrupted string like '2023-01-01 ��:��:��.000000000' , '2023-06-06 ??:??:??.000000000' etc.</p>\n<p data-renderer-start-pos=\"552\"> </p>\n<h3 data-renderer-start-pos=\"552\">Soluion:</h3>\n<p data-renderer-start-pos=\"552\">1. Changing target datatype to DATE or equivalent without timestamp will avoid the issue</p>\n<p data-renderer-start-pos=\"552\">or</p>\n<p data-renderer-start-pos=\"552\">2. Striim version 4.2.0 and higher has the fix</p>"} {"page_content": "<h3>Problem : </h3>\n<p>DatabaseReader is slow when it's processing table that contains Long / Long Raw data type . </p>\n<p>E.g : </p>\n<pre><span><br>SYS@lhrdb&gt; desc hshi.test_long<br>Name Null? 
Type<br>----------------------------------------------------- -------- ------------------------------------<br>C1 NUMBER(38)<br>C2 VARCHAR2(100)<br>C3 VARCHAR2(100)<br>C4 LONG<br><br>Table with long data type :</span><br><span> </span><br><span>Total rows : 1 million rows</span><br><span>Average row size : </span><code>801</code><span> bytes</span><br><span>Total table size : 800MB<br></span></pre>\n<p>- The throughput shows 800+ msgs / sec for entire table.</p>\n<p>- By excluding the long / long raw data type , the throughput shows 50k msgs / sec </p>\n<p> </p>\n<p><strong>Solution :</strong></p>\n<p>Add \"<span>useFetchSizeWithLongColumn=true\" into jdbc connection URL </span></p>\n<article id=\"comment-15131390977047\" class=\"sc-i0djx2-0 fwLKxM\" data-test-id=\"omni-log-comment-item\" data-support-suite-trial-tour-aw-id=\"message\" data-support-suite-trial-onboarding-id=\"message\" data-simplified-get-started-tour-id=\"message\">\n<div class=\"sc-i0djx2-2 bNUTLN\">\n<div class=\"sc-1o8vn6d-0 fcCUeL\">\n<div class=\"sc-1m2sbuc-0 daqTDG\" data-test-id=\"omni-log-item-message\">\n<div class=\"sc-1m2sbuc-1 glubzr\" dir=\"auto\">\n<div class=\"sc-1qvpxi4-1 lvXye\">\n<div data-test-id=\"omni-log-message-content\">\n<div class=\"zd-comment\" dir=\"auto\">\n<pre><code>ConnectionURL: 'jdbc:oracle:thin:@localhost:1521/lhrdb?useFetchSizeWithLongColumn=true', </code></pre>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</div>\n</article>\n<p><span>(After adding the option , the throughput can be increased to 50k msgs/ sec . )</span></p>\n<p> </p>\n<p>To set it globally , we can add it into server.sh script : </p>\n<p>$JAVA_SYSTEM_PROPERTIES \\<br><span class=\"wysiwyg-color-red90\">-Doracle.jdbc.useFetchSizeWithLongColumn=true \\</span><br>-cp \"$WA_HOME/conf:$WA_HOME/lib/*:$CLASSPATH\" \\</p>\n<p> </p>\n<p> </p>\n<p> </p>\n<p> </p>"} {"page_content": "<h3>Problem : </h3>\n<p>CDC app: ( PostgreSQL -&gt; PostgreSQL ) failed with below error due to <strong>generated</strong> column(s) : </p>\n<p><em>ERROR: cannot insert a non-DEFAULT value into column \"c3\"</em><br><em>Detail: Column \"c3\" is a generated column..</em></p>\n<p>e.g :</p>\n<pre><code>&lt;Source Table :&gt;<br>test=# create table test_123 (c1 int , c2 character varying(50), c3 character varying(50) generated always as (upper(c2)) stored) ; <br>CREATE TABLE<br><br>&lt;target table:&gt; <br>test=# <br>test=# create table test_123_t (c1 int , c2 character varying(50), c3 character varying(50) generated always as (upper(c2)) stored) ; <br>CREATE TABLE<br><br>ALTER TABLE test_123 ADD PRIMARY KEY (c1);<br><br>ALTER TABLE test_123_t ADD PRIMARY KEY (c1);</code></pre>\n<p> </p>\n<h3>Solution : </h3>\n<p>Exclude generated column(s) from source <span class=\"s1\">PostgreSQLReader</span> </p>\n<pre>Tables: 'public.test_123 <strong>EXCLUDECOLS</strong>(c3);',</pre>\n<p> </p>\n<pre>test=# <br>test=# insert into test_123 values ( 1, 'hao');<br>INSERT 0 1<br>test=# select * from public.test_123_t;<br>c1 | c2 | c3 <br>----+-----+-----<br>1 | hao | HAO<br>(1 row)<br><br>test=# update test_123 set c2='new' where c1=1;<br>UPDATE 1<br>test=# select * from public.test_123_t;<br>c1 | c2 | c3 <br>----+-----+-----<br>1 | new | NEW<br>(1 row)<br><br>test=# delete from test_123;<br>DELETE 1<br>test=# select * from public.test_123_t;<br>c1 | c2 | c3 <br>----+----+----<br>(0 rows)</pre>"} {"page_content": "<h3><span>Goal:</span></h3>\n<p><span>The goal is to provide a working TQL to run CDC against MariaDB older versions like say 
5.5</span></p>\n<h3>\n<span>Solution:</span><span></span>\n</h3>\n<p>1. MariaDB older versions like 5.5 doesn't support GTID (Global Transaction Identifier), Striim CDC application will display an error related to GTID not being enabled for example </p>\n<pre class=\"p1\"><span class=\"s1\">Caused by: com.striim.mariaMysqlCommon.exception.ExternalSQLException: Problem with the configuration of MariaDB:</span><br><span class=\"s1\">Gtid is not enabled for current node</span></pre>\n<p><span>2. Striim's MariadbReader does not support MariaDB 5.5 and versions before that. Please use MysqlReader instead </span><br><span> </span><br><span>3. Before running the CDC app please execute following in the Source MariaDB database</span></p>\n<pre><code>SET GLOBAL server_id = 1;</code></pre>\n<p>And add <code>server_id = 1</code> entry in <code>[mysqld]</code>section of your configuration file and restart the service. <span>If CDC runs off slave instead of primary db set it to 2.</span><span></span></p>\n<p>4. Here is a sample TQL that was tested in Striim <span>version 4.1.0.4 </span></p>\n<pre>CREATE OR REPLACE APPLICATION App_mariaDB_<span>CDC</span>;<br><br>CREATE OR REPLACE SOURCE src_mariaDB USING Global.MysqlReader ( <br>StartPosition: '2023-MAY-03 12:00:00', <br>Tables: 'striim.test', <br>Compression: false, <br>ConnectionURL: 'jdbc:mariadb://localhost:3306/striim', <br>FilterTransactionBoundaries: true, <br>SendBeforeImage: true, <br>Username: 'root', <br>Password: 'secret' ) <br>OUTPUT TO src_mariaDB_out;<br><br>END APPLICATION App_mariaDB_<span>CDC</span>;</pre>\n<p> </p>"} {"page_content": "<h3>Problem : </h3>\n<p>SalesforceReader hit \"Session expired or invalid\" error and app got stuck in Running status. </p>\n<p>In striim.server.log file , there are large number of below errors : </p>\n<pre>2023-03-02 10:21:12,034 @xx_xx_xx_xx @&lt;app_name&gt; -ERROR ParallelExecutor-1 com.webaction.queryexecutor.IncrementalQueryExecutor.executeCQuery (IncrementalQueryExecutor.java:59) Query for Incremental Load [SELECT ... FROM ... WHERE ... failed with error [{\"message\":\"Session expired or invalid\",\"errorCode\":\"INVALID_SESSION_ID\"}] for object ...<br>2023-03-02 10:21:12,089 @xx_xx_xx_xx @&lt;app_name&gt; -ERROR ParallelExecutor-1 com.webaction.queryexecutor.IncrementalQueryExecutor.execute (IncrementalQueryExecutor.java:34) Query for Incremental Load [SELECT ... FROM ... WHERE ... failed with error [{\"message\":\"Session expired or invalid\",\"errorCode\":\"INVALID_SESSION_ID\"}] for object ...</pre>\n<p> </p>\n<h3>Cause : </h3>\n<p><a href=\"https://webaction.atlassian.net/browse/DEV-35148\">DEV-35148</a> SalesforceReader got stuck while IncrementalQueryExecutor hitting \"Session expired or invalid\"</p>\n<p> </p>\n<h3>Solution : </h3>\n<p>Upgrade striim to 4.1.2.1 version which contains the fix . </p>\n<p> </p>"} {"page_content": "<p><strong><span class=\"wysiwyg-font-size-large\">Problem:</span></strong></p>\n<p>After creating a vault, it works fine. However, after restarting the Striim server, it stops working.</p>\n<p> </p>\n<p><strong><span class=\"wysiwyg-font-size-large\">Affected versions: </span></strong></p>\n<p>4.1.x</p>\n<p> </p>\n<p><strong><span class=\"wysiwyg-font-size-large\">Cause:</span></strong></p>\n<p>In versions prior to 4.1, the properties of a Vault MetaObject (ClientSecret, etc) are not encrypted. With encryption introduced in 4.1, upon server restart, the salt is not saved. 
Thus encrypted data could not be decrypted, the error “Client Secret is invalid.” is encountered.</p>\n<p> </p>\n<p><strong><span class=\"wysiwyg-font-size-large\">Workaround in version 4.1:</span></strong></p>\n<p>Drop and recreate the vault.</p>\n<p> </p>\n<p><strong><span class=\"wysiwyg-font-size-large\">Fix:</span></strong></p>\n<p>This will be fixed in version 4.2.0 and up.</p>"} {"page_content": "<h3>Problem : </h3>\n<p>Datatype of \"_id\" got changed from Object to String in replication from CosmosDB to MongoDB</p>\n<p> </p>\n<h3>Cause : </h3>\n<p>Starting from 4.1.2 , we introduced new hidden property _h_useIdAsString. <span>By default ,it’s set to true.</span></p>\n<p><span>In MongoCosmosDBWriter,AtlasMongoWriter</span></p>\n<ul dir=\"auto\">\n<li>When \"_h_useIdAsString\" is set to false in target side, the data type will be preserved.</li>\n<li>When \"_h_useIdAsString\" is set to true in target side, the object data type not be preserved, \"_id\" will be converted to String data type</li>\n</ul>\n<h3><span>Solution : </span></h3>\n<p><span>Use striim version 4.1.2 and higher version and add below hidden property into target (MongoDBWriter) which will preserver the data type . </span></p>\n<pre><code>_h_useIdAsString: 'false',</code></pre>\n<p> </p>"} {"page_content": "<p>Attached pdf file shows how to configure Azure AD as Identity Provider (IDP) for Striim.</p>"} {"page_content": "<p>The attached pdf file shows how to connecting to Salesforce in Striim version 4.1.2 and up, using JWT BEARER token for OAuth 2.0.</p>\n<p> </p>\n<p> </p>"} {"page_content": "<h3>Problem: </h3>\n<p>Oracle Database is running in 19c or higher version.</p>\n<p>OracleReader failed with below error message : </p>\n<p><span>Message: 2034 : Start Failed: SQL Query Execution Error ; ;ErrorCode : 44609;SQLCode : 99999;SQL Message : ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR. ORA-06512: at \"SYS.DBMS_LOGMNR\", line 72 ORA-06512: at line 2 . Component Name: &lt;source_name&gt;. Component Type: SOURCE. Cause: 2034 : Start Failed: SQL Query Execution Error ; ;ErrorCode : 44609;SQLCode : 99999;SQL Message : ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR. ORA-06512: at \"SYS.DBMS_LOGMNR\", line 72 ORA-06512: at line 2</span></p>\n<p> </p>\n<h3>Cause :</h3>\n<p>Starting from Oracle19c , continuous logminer is no longer supported in Oracle database and striim introduced ALM mode ( It's default mode since 3.10 version ) </p>\n<p>In TQL file , below hidden property is used for using classic mode . </p>\n<pre><span>_h_useClassic:true,</span></pre>\n<h3> </h3>\n<h3>Solution : </h3>\n<p>If Oracle Database version is lower than 19c , you can use either classic mode or ALM mode . </p>\n<p>If Oracle Database version is 19c or higher , you must use ALM mode. Make sure \"<span>_h_useClassic:true\" is not set in TQL file. </span></p>\n<p> </p>"} {"page_content": "<p>Q. Does Striim support archived log only mode for Oracle CDC?</p>\n<p>A. Yes, this may be achieved in one of following ways:</p>\n<p>1. Oracle Reader on Physical Dataguard</p>\n<p> see Doc at: https://www.striim.com/docs/platform/en/oracle-database-cdc.html#requirements-117</p>\n<p>2. Oracle Reader on Primary database </p>\n<p>In version 4.1.2 and later, add following hidden property:</p>\n<p>_h_ArchiveLogExclusive : true,</p>\n<p> </p>\n<p>3. 
OJet on Downstream capture database</p>\n<p> see Doc at: https://www.striim.com/docs/platform/en/oracle-database-cdc.html#requirements-117</p>\n<p> </p>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p> Bigquerywriter configured in APPENDONLYMODE with streamingUpload is set to false and it fails with below exception</p>\n<pre><span>Error during BigQuery Integration. Cause: Quota exceeded: Your table exceeded quota for Number of partition modifications to a column partitioned table</span><br><span>I have the table Partitioned by DATETIME field</span></pre>\n<p> </p>\n<p><strong>Cause:</strong></p>\n<p> This is the quota limitation at Bigquery side. It exceeds the number of partition modifications that are allowed by Bigquery. </p>\n<p><strong>Solution:</strong></p>\n<p>Bigquery suggested several ways to avoid the problems that are documented in below link</p>\n<p><a href=\"https://cloud.google.com/bigquery/docs/troubleshoot-quotas\" rel=\"noopener noreferrer\">https://cloud.google.com/bigquery/docs/troubleshoot-quotas</a></p>\n<p>From striim perspective </p>\n<p><strong>Option 1:</strong></p>\n<p>Set the below property in Bigquery writer. This is recommended for APPENDONLY mode.</p>\n<p><strong> streamingUpload: 'true'</strong></p>\n<p> </p>\n<p><strong>Option 2:</strong></p>\n<p>BatchPolicy: 'eventCount:1000000, Interval:300',<strong><br></strong></p>\n<p> </p>"} {"page_content": "<h3>Issue:</h3>\n<p>Following error is seen while setting up cluster rebalance</p>\n<pre><span>-- Processing - SET CLUSTER REBALANCE ON POLICY applicationCount</span><br><span>-&gt; FAILURE</span><br><span>java.util.concurrent.ExecutionException: java.lang.NullPointerException<br><br>-- Processing - SET CLUSTER REBALANCE CONFIG<br>(checkpointAge: '30m',<br>bounceProtectionInterval: '1h')<br>-&gt; FAILURE<br>java.util.concurrent.ExecutionException: java.lang.NullPointerException</span></pre>\n<p> </p>\n<h3>Cause:</h3>\n<p><code class=\"code css-z5oxh7\" data-renderer-mark=\"true\">SET CLUSTER REBALANCE ...</code><span> works by broadcasting to all nodes in the cluster via remote calls and the node that is currently the AppManager executes and replies to the command. </span></p>\n<p><span>This error happens when the command is executed while the nodes are coming up and not fully available.</span></p>\n<p> </p>\n<h3><span>Resolution:</span></h3>\n<p>Re-run the command when all the nodes are available. This can be verified by connecting to the UI of all nodes</p>"} {"page_content": "<h3>Issue:</h3>\n<p>OJet app fails with following error during a fresh run</p>\n<pre>2023-03-02 11:49:46,829<br>Application_Terminated, Medium: WEB, Message: <br>Application admin.Wizard_Oracle_to_Kafka: Application terminated - Message: Failed to 'create' OJetServer 'OJET$ADMIN$WIZARD_ORACLE_TO_KAFKA'.<br>Error 2001 : SQL Query Execution Error ErrorCode: 6502; SQLCode: 65000; SQL Message: ORA-06502: PL/SQL: numeric or value error<br><br>Contact Striim support if the problem persists.. <br>Component Name: OJetServer. <br>Component Type: SourceAdapter. <br><br></pre>\n<p> </p>\n<h3>Cause:</h3>\n<p>The cause of the issue is due to Oracle <span>pl/sql parameter limit of 32k. 
The number of tables resolved</span></p>\n<p><span>part of the capture list exceeds the 32k limit</span></p>\n<p> </p>\n<h3><span>Resolution:</span></h3>\n<p>Split the capture table list in such a way the table names concatenated doesn't exceed the 32k limit</p>"} {"page_content": "<h3>Problem : </h3>\n<p>Failed to start striim and striim.server.log file show below error message : </p>\n<pre><code>2022-12-23 11:09:57,496 @S127_0_0_1 @ -ERROR main com.webaction.runtime.Server.main (Server.java:2480) Striim Server caught unhandled throwable and will shut down<br>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];<br>at org.elasticsearch.cluster.block.ClusterBlocks.indicesBlockedException(ClusterBlocks.java:229) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.admin.indices.mapping.put.TransportPutMappingAction.checkBlock(TransportPutMappingAction.java:81) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.admin.indices.mapping.put.TransportPutMappingAction.checkBlock(TransportPutMappingAction.java:46) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:173) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:164) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:141) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:59) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:391) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1262) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46) ~[elasticsearch-6.8.23.jar:6.8.23]<br>at com.webaction.wactionstore.elasticsearch.WActionStore.setDataType(WActionStore.java:546) ~[Platform-4.1.2-RC1.jar:?]<br>at com.webaction.wactionstore.elasticsearch.WActionStore.setDataType(WActionStore.java:32) ~[Platform-4.1.2-RC1.jar:?]<br>at com.webaction.persistence.WactionStore.getOrCreateWActionStoreType(WactionStore.java:347) ~[Platform-4.1.2-RC1.jar:?]<br>at com.webaction.persistence.WactionStore.getPersistentDataType(WactionStore.java:329) ~[Platform-4.1.2-RC1.jar:?]<br>at com.webaction.persistence.WactionStore.init(WactionStore.java:942) ~[Platform-4.1.2-RC1.jar:?]<br>at 
com.webaction.persistence.WactionStore.get(WactionStore.java:215) ~[Platform-4.1.2-RC1.jar:?]<br>at com.webaction.runtime.BaseServer.createWActionStore(BaseServer.java:288) ~[Platform-4.1.2-RC1.jar:?]<br>at com.webaction.runtime.BaseServer.initExceptionstores(BaseServer.java:687) ~[Platform-4.1.2-RC1.jar:?]<br>at com.webaction.runtime.Server.main(Server.java:2408) ~[Platform-4.1.2-RC1.jar:?]</code></pre>\n<p> </p>\n<h3>Cause :</h3>\n<p><span>When space usage of filesystem where elasticsearch directory located is above 95%, indices are set to read-only-mode only which caused striim to crash during start. </span></p>\n<p> </p>\n<h3><span>Workaround : </span></h3>\n<ul>\n<li><span>Add more storage to / Clear space from filesystem where elasticsearch is located .</span></li>\n<li><span>Delete &lt;striim_home&gt;/elasticsearch/data directory</span></li>\n<li>\n<span>Restart striim </span><span></span>\n</li>\n</ul>"} {"page_content": "<h3>Problem : </h3>\n<p>App crashed with error :</p>\n<p dir=\"auto\">\"org.zeromq.ZMQException: Errno 156384819 : errno 156384819\"</p>\n<h3 dir=\"auto\">Cause : </h3>\n<p dir=\"auto\"><span>ZMQMaxSockets has default value 1024 </span><br><br><span>We are using ZMQ messaging system to handle the communication among the components, if there’s a large number of apps or the apps contain a large number of Streams, it may exceed the limitation of the ZMQ socket amount(1024 by default)</span></p>\n<h3 dir=\"auto\">Solution : </h3>\n<p dir=\"auto\">To check how many sockets are used by Striim process : </p>\n<p dir=\"auto\"><span>lsof -p &lt;striim pid&gt; |grep -i listen|wc -l</span></p>\n<p dir=\"auto\"> </p>\n<p dir=\"auto\"><span>Increase <code>ZMQMaxSockets</code> in startup.properties file and restart the striim : <br></span></p>\n<pre><code>ZMQMaxSockets=10240</code></pre>"} {"page_content": "<h3>Goal</h3>\n<p>The Goal of this doc is to explain on enabling alerts for all apps </p>\n<p> </p>\n<h3>Solution</h3>\n<p>On certain Striim versions prior to 4.1.2 the \"Alert on All apps in all namespaces\" is visible in UI under the \"Alert Manager\" section. However starting version 4.1.2 these are moved to system alerts and no longer visible in the UI but nevertheless <code>HALT</code><span> / </span><code>TERMINATE</code><span> / </span><code>CRASH</code><span> alerts on all apps are enabled by default.</span></p>\n<p> </p>\n<p><span>The list of those alerts (smartalerts as they are called) can be seen using following console command</span></p>\n<pre><span>W (admin) &gt; list smartalerts<br><br>SYSALERTRULE 1 =&gt; Source_Idle<br>SYSALERTRULE 2 =&gt; Application_Halted<br>SYSALERTRULE 3 =&gt; Application_AutoResumed<br>SYSALERTRULE 4 =&gt; Application_RebalanceFailed<br>SYSALERTRULE 5 =&gt; Target_HighLee<br>SYSALERTRULE 6 =&gt; Application_CheckpointNotProgressing<br>SYSALERTRULE 7 =&gt; Target_Idle<br>SYSALERTRULE 8 =&gt; Server_HighCpuUsage<br>SYSALERTRULE 9 =&gt; Application_Backpressured<br>SYSALERTRULE 10 =&gt; Server_HighMemoryUsage<br>SYSALERTRULE 11 =&gt; Application_Terminated<br>SYSALERTRULE 12 =&gt; Server_NodeUnavailable<br>SYSALERTRULE 13 =&gt; Application_Rebalanced</span></pre>\n<p> </p>\n<p><span>These smartalerts are treated as System alerts and sent to UI <strong>alert bell</strong> by default. 
If needing to modify these alerts to send to <code class=\"code\">EMAIL</code>, <code class=\"code\">SLACK</code>, <code class=\"code\">TEAMS</code> following page describes how to modify using<code class=\"code\">ALTER SMARTALERT</code>option.</span><br><span> </span><br><a href=\"https://www.striim.com/docs/platform/en/managing-system-alerts.html\" rel=\"noopener noreferrer\">https://www.striim.com/docs/platform/en/managing-system-alerts.html</a></p>\n<p> </p>"} {"page_content": "<h3>Goal:</h3>\n<p>The goal of this doc is to document the changes needed to support KafkaReader/Writer with different environment setups like SSL, Kerberos and Plaintext</p>\n<p> </p>\n<h3>Solution</h3>\n<p> </p>\n<h3 data-renderer-start-pos=\"58\">Use Kafka SASL (Kerberos) authentication with SSL encryption</h3>\n<p data-renderer-start-pos=\"120\">To use SASL authentication with SSL encryption, include the following properties in your Kafka Reader or Kafka Writer KafkaConfig, adjusting the paths to match your environment and using the passwords provided by your Kafka administrator.</p>\n<pre data-renderer-start-pos=\"120\">KafkaConfigPropertySeparator: ':',<br>KafkaConfigValueSeparator: '==',<br>KafkaConfig:'security.protocol==SASL_SSL:<br>sasl.mechanism==GSSAPI:<br>sasl.jaas.config==com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true doNotPrompt=true serviceName=\"kafka\" client=true keyTab=\"/etc/krb5.keytab\" principal=\"striim@REALM.COM\";:<br>sasl.kerberos.service.name==kafka:<br>ssl.truststore.location==/opt/striim/kafka.truststore.jks:<br>ssl.truststore.password==secret:<br>ssl.keystore.location==/opt/striim/kafka.keystore.jks:<br>ssl.keystore.password==secret:<br>ssl.key.password==secret'</pre>\n<h3 data-renderer-start-pos=\"942\"> </h3>\n<h3 data-renderer-start-pos=\"942\">Use Kafka SASL (Kerberos) authentication without SSL encryption</h3>\n<p data-renderer-start-pos=\"1007\">To use SASL authentication without SSL encryption, include the following properties in your Kafka Reader or Kafka Writer KafkaConfig</p>\n<pre class=\"code-block css-t27nqu\">KafkaConfigPropertySeparator: ':',<br>KafkaConfigValueSeparator: '==',<br>KafkaConfig:'security.protocol==SASL_PLAINTEXT:<br>sasl.mechanism==GSSAPI:<br>sasl.jaas.config==com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true doNotPrompt=true serviceName=\"kafka\" client=true keyTab=\"/etc/krb5.keytab\" principal=\"striim@REALM.COM\";:<br>sasl.kerberos.service.name==kafka'</pre>\n<p> </p>\n<h3 data-renderer-start-pos=\"1525\">Using Kafka SSL encryption without SASL (Kerberos) authentication</h3>\n<p data-renderer-start-pos=\"1592\">To use SSL encryption without SASL authentication, include the following properties in your Kafka stream property set or KafkaReader or KafkaWriter KafkaConfig, adjusting the paths to match your environment and using the passwords provided by your Kafka administrator.</p>\n<pre data-renderer-start-pos=\"1592\">KafkaConfigPropertySeparator: ':',<br>KafkaConfigValueSeparator: '==',<br>KafkaConfig:'security.protocol==SASL_SSL: <br>ssl.truststore.location==/opt/striim/kafka.truststore.jks:<br>ssl.truststore.password==secret:<br>ssl.keystore.location==/opt/striim/kafka.keystore.jks:<br>ssl.keystore.password==secret:<br>ssl.key.password==secret'</pre>"} {"page_content": "<h3>Issue:</h3>\n<p>Striim server startup fails with following error in the striim.server.log</p>\n<pre>2023-02-23 18:58:30,670 @S10_10_60_118 @ -ERROR main com.webaction.runtime.Server.main (Server.java:2427) 
Striim Server caught unhandled throwable and will shut down<br>java.lang.NullPointerException: null<br>at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:273) ~[?:1.8.0_352]<br>at org.elasticsearch.common.io.PathUtils.get(PathUtils.java:60) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at org.elasticsearch.monitor.os.OsProbe.readSysFsCgroupCpuAcctCpuAcctUsage(OsProbe.java:274) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at org.elasticsearch.monitor.os.OsProbe.getCgroupCpuAcctUsageNanos(OsProbe.java:261) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at org.elasticsearch.monitor.os.OsProbe.getCgroup(OsProbe.java:419) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at org.elasticsearch.monitor.os.OsProbe.osStats(OsProbe.java:464) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at org.elasticsearch.monitor.os.OsService.&lt;init&gt;(OsService.java:45) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at org.elasticsearch.monitor.MonitorService.&lt;init&gt;(MonitorService.java:45) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at org.elasticsearch.node.Node.&lt;init&gt;(Node.java:362) ~[elasticsearch-5.6.16.jar:5.6.16]<br>at com.webaction.wactionstore.elasticsearch.PluginConfigurableNode.&lt;init&gt;(PluginConfigurableNode.java:15) ~[Platform-4.1.0.1.jar:?]<br>at com.webaction.wactionstore.elasticsearch.WActionStoreManager.connectNodeClient(WActionStoreManager.java:295) ~[Platform-4.1.0.1.jar:?]<br>at com.webaction.wactionstore.elasticsearch.WActionStoreManager.connect(WActionStoreManager.java:837) ~[Platform-4.1.0.1.jar:?]<br>at com.webaction.wactionstore.elasticsearch.WActionStoreManager.getClient(WActionStoreManager.java:809) ~[Platform-4.1.0.1.jar:?]<br>at com.webaction.wactionstore.elasticsearch.WActionStoreManager.isConnected(WActionStoreManager.java:842) ~[Platform-4.1.0.1.jar:?]<br>at com.webaction.wactionstore.elasticsearch.WActionStoreManager.getNames(WActionStoreManager.java:399) ~[Platform-4.1.0.1.jar:?]<br>at com.webaction.wactionstore.WActionStores.startup(WActionStores.java:57) ~[Platform-4.1.0.1.jar:?]<br>at com.webaction.runtime.Server.main(Server.java:2333) [Platform-4.1.0.1.jar:?]</pre>\n<p> </p>\n<h3>Cause:</h3>\n<p>This is a file system level issue and check for readSysFsCgroupCpuAcctCpuAcctUsage fails when cpu,cpuacct are missing entries</p>\n<pre># ls -l /sys/fs/cgroup<br># cat /proc/self/cgroup<br># mount -t cgroup<br># grep cgroup /proc/self/mountinfo</pre>\n<h3> </h3>\n<h3>Fix:</h3>\n<p>It was resolved mounting the needed options</p>\n<pre><code># as root<br>mount -t cgroup -o rw,nosuid,nodev,noexec,relatime,cpu,cpuacct cgroup /sys/fs/cgroup/cpu,cpuacct</code></pre>"} {"page_content": "<h3>Scenario : </h3>\n<p>Source table has blob column which stores zip file and target is <span>Azure Blob storage. </span></p>\n<p><span>User wants to fetch the data from blob column only and write it to Azure blob with original zip file format. 
</span></p>\n<p> </p>\n<h3><span>Solution : </span></h3>\n<p><span>Use - BinaryDataFormatter ( See attached file ) which will decode HEX value to the binary array and will write it to the destination.</span></p>\n<p><span>1) Download attached file BinaryDataFormatter.scm and place it into &lt;striim_home&gt;/modules directory </span></p>\n<p><span>2) Restart the striim server(s) </span></p>\n<p><span>3) Example ( It applies to any File Based Writers) : </span><span></span></p>\n<p><span>Source Data : </span></p>\n<pre><span>[oracle@oracle19c tmp]$ cat test.log<br>abc<br><br>[oracle@oracle19c tmp]$ zip test.zip test.log<br>adding: test.log (stored 0%<br><br>SQL&gt;<br></span><br>SYS@lhrdb&gt; conn hshi/hshi<br>Connected.<br><span>SYS@lhrdb&gt; create table hshi.table_blob(c1 int , c2 blob);<br></span><br>HSHI@lhrdb&gt; DECLARE<br>oNew BLOB;<br>oBFile BFILE;<br>BEGIN<br>oBFile := BFILENAME('DUMP_HAO', 'test.zip');<br>DBMS_LOB.OPEN(oBFile, DBMS_LOB.LOB_READONLY);<br>DBMS_LOB.createtemporary(oNew,TRUE);<br>DBMS_LOB.LOADFROMFILE(oNew, oBFile, dbms_lob.lobmaxsize);<br>DBMS_LOB.CLOSE(oBFile);<br>INSERT INTO table_blob VALUES (1, oNew );<br>dbms_lob.freetemporary(oNew); <br>END;<br>/ 2 3 4 5 6 7 8 9 10 11 12 13<br><br>PL/SQL procedure successfully completed.<br><br>HSHI@lhrdb&gt; select * from hshi.table_blob;<br><br>C1 C2<br>---------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------<br>1 504B03040A0000000000C6463A564E818847040000000400000008001C00746573742E6C6F67555409000334CFD16376C6F36375780B00010431D400000431D400006162630A504B01021E030A000000</pre>\n<p><span><br>TQL file : </span></p>\n<pre><code>CREATE APPLICATION TRAN_BLOB;<br><br>CREATE OR REPLACE TYPE cq_Type (<br> data java.lang.Object,<br> userdata java.util.HashMap);<br><br>CREATE OR REPLACE SOURCE src_oracle USING Global.DatabaseReader ( <br> Password: 'striim', <br> FetchSize: 100, <br> ConnectionURL: 'jdbc:oracle:thin:@localhost:1521/lhrdb', <br> adapterName: 'DatabaseReader', <br> QuiesceOnILCompletion: false, <br> Tables: 'hshi.table_blob;', <br> DatabaseProviderType: 'Oracle', <br> Password_encrypted: 'false', <br> Username: 'striim' ) <br>OUTPUT TO src_output;<br><br>CREATE CQ cq_userdata <br>INSERT INTO cq_out <br>SELECT putuserdata(s,'ID',s.data[0],'FIELD_SEP','_','DATE_TIME',TO_STRING(DNOW(),'yyyy_MM_dd_HH_mm')) FROM src_output s;<br><br>CREATE OR REPLACE TARGET FileWriter_t USING Global.FileWriter ( <br> flushpolicy: 'EventCount:1,Interval:10s', <br> rolloveronddl: 'true', <br> encryptionpolicy: '', <br> adapterName: 'FileWriter', <br> rolloverpolicy: 'EventCount:1,Interval:30s', <br> filename: '%@userdata(ID)%_%@userdata(FIELD_SEP)%_%@userdata(DATE_TIME)%.zip' ) <br>FORMAT USING Global.BinaryDataFormatter ( <br> handler: 'com.striim.formatters.BinaryDataFormatter', <br> formatterName: 'BinaryDataFormatter', <br> DecodeBinary: 'true', <br> DataBlobFieldIndex: '1' ) <br>INPUT FROM cq_out;<br><br>END APPLICATION TRAN_BLOB;</code></pre>\n<p><span>Generated file : </span></p>\n<pre><span>-rw-r--r-- 1 haoshi staff 170 Feb 20 12:11 1_2023_02_20_12_11.00.zip<br><br>haoshi@Haos-MacBook-Pro Striim % unzip 1_2023_02_20_12_11.00.zip<br>Archive: 1_2023_02_20_12_11.00.zip<br>extracting: test.log <br>haoshi@Haos-MacBook-Pro Striim % cat test.log<br>abc<br><br></span></pre>\n<p> </p>\n<p><code>DataBlobFieldIndex</code> is the index of BLOB column ( starting with 0 ) . Only one BLOB column per table is supported . 
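<p>To make the indexing concrete for the sample table above (a restatement of the TQL already shown, not new configuration): event fields are numbered from 0 in source column order, so c1 is field 0 and the BLOB column c2 is field 1, which is why the formatter is configured as follows.</p>
<pre><code>FORMAT USING Global.BinaryDataFormatter ( 
    DecodeBinary: 'true', 
    DataBlobFieldIndex: '1' )</code></pre>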
</p>\n<p><img src=\"https://support.striim.com/hc/article_attachments/12557684202647\" alt=\"mceclip2.png\"></p>"} {"page_content": "<h3 data-pm-slice=\"1 1 []\">Question:</h3>\n<p data-pm-slice=\"1 1 []\">MySQL by default will not allow you enter in a 0 date or invalid date like this 0000-00-00 or 0000-13-33 but you can set an environment variable to allow you to do this </p>\n<p data-pm-slice=\"1 1 []\"> </p>\n<pre data-pm-slice=\"1 1 []\">SET sql_mode = 'allow_invalid_dates';<br><span style=\"color: #739eca; font-weight: bold;\">CREATE</span> <span style=\"color: #739eca; font-weight: bold;\">TABLE</span> <span style=\"color: #9e9e9e;\">waction</span>.<span style=\"color: #c1aa6c; font-weight: bold;\">time</span>(<span style=\"color: #9e9e9e;\">COL3</span> <span style=\"color: #c1aa6c; font-weight: bold;\">TIMESTAMP</span>)<span style=\"color: #eecc64;\">;<br></span><span style=\"color: #739eca; font-weight: bold;\">insert</span> <span style=\"color: #739eca; font-weight: bold;\">into</span> <span style=\"color: #c1aa6c; font-weight: bold;\">time</span> <span style=\"color: #739eca; font-weight: bold;\">values</span> (<span style=\"color: #cac580;\">'0000-13-33 00:00:00'</span>)<span style=\"color: #eecc64;\">;</span></pre>\n<div style=\"background-color: #2f2f2f; padding: 0px 0px 0px 2px;\">\n<div style=\"color: #aaaaaa; background-color: #2f2f2f; font-family: ' Menlo' font-size:12pt; white-space: nowrap;\"></div>\n</div>\n<p data-pm-slice=\"1 1 []\">This is an example of an initial load app from mysql(source)---&gt; postgres(target)</p>\n<p data-pm-slice=\"1 1 []\"> </p>\n<p data-pm-slice=\"1 1 []\">Inserting that invalid zero date will fail the app like below</p>\n<pre data-pm-slice=\"1 1 []\"><span class=\"s1\">com.striim.exception.checked.AdapterExternalException: Message: DatabaseReader could<br>not execute query, and failed<span class=\"Apple-converted-space\"> </span>on table \"waction\".\"time\" having the following column <br>values [0000-00-00 00:00:00]due to the following reason: Zero date value prohibited. 
<br></span></pre>\n<p class=\"p1\"> </p>\n<p class=\"p1\"><span class=\"wysiwyg-font-size-large\"><strong><span class=\"s1\">Answer:</span></strong></span></p>\n<p class=\"p1\"><font size=\"2\">Mysql provides JDBC options like below to mask such values and this can be used in the ConnectionURL on source reader</font></p>\n<pre class=\"p1\"><span class=\"wysiwyg-font-size-medium\"><span class=\"s1\"><span>jdbc:mysql://localhost:3306/waction?zeroDateTimeBehavior=round</span></span></span><br><br>or<br><br><span class=\"wysiwyg-font-size-medium\"><span class=\"s1\"><span>jdbc:mysql://localhost:3306/waction?zeroDateTimeBehavior=convertToNull</span></span></span></pre>\n<p>Attached is TQL and a few screenshots of how the test case was carried out .</p>\n<p class=\"p1\"> </p>"} {"page_content": "<h3>Goal:</h3>\n<p>The goal is to help with the config changes needed to support ActiveDirectoryPassword Authentication for SQL Server databases</p>\n<p> </p>\n<h3>Solution:</h3>\n<p><span>SQL Server 2022 (16.x) introduces support for <a href=\"https://learn.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication\" data-linktype=\"absolute-path\">Azure Active Directory (Azure AD) authentication</a></span></p>\n<p>methods using Azure AD identities:</p>\n<ul>\n<li><strong>Azure Active Directory Password</strong></li>\n<li>Azure Active Directory Integrated</li>\n<li>Azure Active Directory Universal with Multi-Factor Authentication</li>\n<li>Azure Active Directory access token</li>\n</ul>\n<p><strong>Striim version 4.1.2+</strong> currently supports only the first option highlighted and following are the steps needed on the Striim server to support ActiveDirectoryPassword Authentication for SQL Server databases</p>\n<p> </p>\n<p>1. Keep following jars ready</p>\n<pre>a. <a href=\"https://repo1.maven.org/maven2/com/sun/mail/javax.mail/1.6.1/javax.mail-1.6.1.jar\" target=\"_self\">javax.mail-1.6.1.jar</a><br>b. <a href=\"https://repo1.maven.org/maven2/com/google/code/gson/gson/2.8.0/gson-2.8.0.jar\" target=\"_self\">gson-2.8.0.jar</a><br>c. <a href=\"https://repo1.maven.org/maven2/com/microsoft/azure/adal4j/1.6.4/adal4j-1.6.4.jar\" target=\"_self\">adal4j-1.6.4.jar</a><br>d. <a href=\"https://repo1.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/6.5/oauth2-oidc-sdk-6.5.jar\" target=\"_self\">oauth2-oidc-sdk-6.5.jar</a></pre>\n<p>2. stop the striim server and make following changes</p>\n<pre>a. remove or move &lt;striim home&gt;/lib/gson2.8.9.jar to &lt;path&gt;/striim-backup<br>b. remove or move &lt;striim home&gt;/lib/adal4j-1.0.0.jar to &lt;path&gt;/striim-backup<br>c. remove or move &lt;striim home&gt;/lib/oauth2-oidc-sdk-9.7.jar to &lt;path&gt;/striim-backup</pre>\n<pre>d. Add Step 1 jars (4 of them) to &lt;striim home&gt;/lib/</pre>\n<p>3. start striim server</p>\n<p>4. 
Test the adapter</p>\n<ul>\n<li>Snippet for the changes in username/ connection URL to support ActiveDirectoryPassword for CDC using MSSqlReader</li>\n</ul>\n<p> </p>\n<pre>CREATE OR REPLACE SOURCE mssql_read USING Global.MSSqlReader (<br>Username: 'striim.user@azureteststriim.onmicrosoft.com',<br>ConnectionURL:<br>'jdbc:sqlserver://striim2.database.windows.net:1433;authentication=ActiveDirectoryP<br>assword;hostNameInCertificate=*.database.windows.net;loginTimeout=100;',<br>..</pre>\n<ul>\n<li>Snippet for the changes in username/ connection URL to support ActiveDirectoryPassword for IL using DatabaseReader</li>\n</ul>\n<p> </p>\n<pre>CREATE OR REPLACE SOURCE ReadFromMSSQLADIL USING Global.DatabaseReader(<br>Username: 'striim.user@azureteststriim.onmicrosoft.com',<br>ConnectionURL:<br>'jdbc:sqlserver://striim2.database.windows.net:1433;authentication=ActiveDirectoryP<br>assword;hostNameInCertificate=*.database.windows.net;loginTimeout=100;',<br>..</pre>\n<p> </p>"} {"page_content": "<p>Ingesting data from on-prem source to Striim server running on SaaS needs connection to be established from SaaS to on-prem. In most of the cases, the source database is located behind firewall and cannot be accessed directly from SaaS. In such scenarios, we need a way to get connected from SaaS to on-prem database and reverse ssh tunnel is one of the ways. Attached documentation describes how to achieve that.</p>\n<p> </p>\n<p>Limitation : </p>\n<p>Currently , if the Striim Service is stopped and restart ,the jumpbox server will be lost and we have to reconfigure it. </p>\n<p>Please create a support ticket and provide the public key &amp; public IP address of <span>Bastion machine to reconfigure the reverse SSH tunnel. </span></p>\n<p> </p>\n<p>Enhancement request has been tracked under cloud ticket : CLOUD-8729 . </p>\n<p> </p>\n<p> </p>"} {"page_content": "<h2>Problem : </h2>\n<p>DatabaseWriter (MS SQL Server) - Crashed With below error message : </p>\n<p><span>Message: Incorrect syntax near '-'.. Suggested Actions: 1.If you wish to ignore this exception please set IgnorableExceptioncode to errorCode : 102. Component Name: Target_MSSQL. Component Type: TARGET. Cause: Incorrect syntax near '-'.</span></p>\n<p> </p>\n<p>- Database name contains '-' </p>\n<p>- Table(s) has identity column </p>\n<p>- \"<span>enableidentityInsert=true\" has been used already.</span></p>\n<p> </p>\n<h2>Cause : </h2>\n<p><a href=\"https://webaction.atlassian.net/browse/DEV-29758\">DEV-29758</a> Database Writer Support SQL Server Database Name Dashes ( Fixed Version : 4.1.1) </p>\n<p> </p>\n<h2>Solution : </h2>\n<p>Upgrade Striim to 4.1.2 or <a href=\"https://support.striim.com/hc/en-us/articles/229277848-Download-of-Latest-Version-of-Striim\" target=\"_self\">latest</a></p>"} {"page_content": "<h3>MSJet : </h3>\n<p>MSJet utilizes a file reader to read from MS SQL Server native replication logs. </p>\n<h3>Requirements : </h3>\n<div data-pm-slice=\"1 1 []\" data-en-clipboard=\"true\"><span>MSJet reads logical changes directly from SQL Server's transaction logs. Unlike MS SQL Reader, MSJet does not require SQL Server's CDC change tables, and CDC is automatically enabled on a per-table basis.</span></div>\n<div></div>\n<div><span>MSJet supports Microsoft SQL Server versions 2016 (SP2), 2017, and 2019 running on 64-bit Windows 10 or Windows Server 2012 or later. 
It is not compatible with SQL Server running on other operating systems or on Windows on ARM.</span></div>\n<div></div>\n<div><span>MSJet must be deployed on a Forwarding Agent on the SQL Server system on a Striim cluster running on 64-bit Windows 10 or Windows Server 2012 or later (Windows on ARM is not supported).</span></div>\n<div></div>\n<div>\n<span>Microsoft Visual C++ 2015-2019 Redistributable (x64) version 14.28.29914 or later (see </span><a href=\"https://docs.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-160#visual-studio-2015-2017-2019-and-2022\" rel=\"noreferrer\" rev=\"en_rl_none\"><span>Visual Studio 2015, 2017, 2019, and 2022</span></a><span>) must be installed in the Windows environment where MSJet is deployed..</span>\n</div>\n<p> </p>\n<h3>Steps : </h3>\n<ol class=\"orderedlist\" type=\"1\">\n<li class=\"listitem\">\n<p>Create a Windows user for use by Striim on the SQL Server host (the Windows system that hosts the SQL Server instance containing the databases to be read).</p>\n</li>\n<li class=\"listitem\">\n<p>Grant that user local Administrator privileges on the SQL Server host.</p>\n</li>\n<li class=\"listitem\">\n<p>Log in as that user and install a Forwarding Agent on the SQL Server host (see<span></span><a class=\"xref linktype-fork\" title=\"Striim Forwarding Agent installation and configuration\" href=\"https://www.striim.com/docs/platform/en/striim-forwarding-agent-installation-and-configuration.html\"><span class=\"xreftitle\">Striim Forwarding Agent installation and configuration</span></a>).</p>\n</li>\n<li class=\"listitem\">\n<p id=\"UUID-78d6fc72-6366-8cc7-d7b1-b2db9eb1cd8a_para-idm13273318483014\">If Microsoft Visual C++ 2015-2019 Redistributable (x64) version 14.28.29914 or later is not already available on the SQL Server host, install or upgrade it (see<span></span><a class=\"link\" href=\"https://docs.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-160#visual-studio-2015-2017-2019-and-2022\" target=\"_blank\" rel=\"noopener\">Visual Studio 2015, 2017, 2019, and 2022</a>).</p>\n</li>\n<li id=\"UUID-78d6fc72-6366-8cc7-d7b1-b2db9eb1cd8a_listitem-idm13215076949140\" class=\"listitem\">\n<p>In SQL Server, enable change data capture on each database to be read using the following commands, which require the sysadmin role:</p>\n<div class=\"paligocode-wrapper\">\n<pre class=\"programlisting hljs sql\"><span class=\"hljs-keyword\">USE</span> &lt;<span class=\"hljs-keyword\">database</span><span class=\"hljs-keyword\">name</span>&gt;\nEXEC sys.sp_cdc_enable_db</pre>\n<button class=\"btn btn-xs btn-primary\" title=\"Copy to clipboard\"></button>\n</div>\n</li>\n<li class=\"listitem\">\n<p>Stop the Capture and Cleanup jobs on each of those databases (see<span></span><a class=\"link\" href=\"https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/administer-and-monitor-change-data-capture-sql-server\" target=\"_blank\" rel=\"noopener\">Administer and Monitor Change Data Capture (SQL Server)</a>). 
This will stop SQL Server from writing to its CDC change tables, which MSJet does not require.</p>\n</li>\n<li id=\"UUID-78d6fc72-6366-8cc7-d7b1-b2db9eb1cd8a_listitem-idm13215077192804\" class=\"listitem\">\n<p><span class=\"emphasis\"><em>If using Windows authentication</em></span>, skip this step.</p>\n<p><span class=\"emphasis\"><em>If using SQL Server authentication</em></span>, create a SQL Server user for use by MSJet.</p>\n<p>For more information, see Microsoft's<span></span><a class=\"link\" href=\"https://docs.microsoft.com/en-us/sql/relational-databases/security/choose-an-authentication-mode\" target=\"_blank\" rel=\"noopener\">Choose an Authentication Mode</a><span></span>and the notes for MSJet's Integrated Security property in<span></span><a class=\"xref linktype-fork\" title=\"MSJet properties\" href=\"https://www.striim.com/docs/platform/en/msjet-properties.html\"><span class=\"xreftitle\">MSJet properties</span></a>.</p>\n</li>\n<li id=\"UUID-78d6fc72-6366-8cc7-d7b1-b2db9eb1cd8a_listitem-idm13215077588950\" class=\"listitem\">\n<p>Grant the SQL Server user (if using SQL Server authentication) or the Windows user (if using Windows authentication) the<span></span><code class=\"code\">db_owner</code><span></span>role for each database to be read using the following commands, which require the sysadmin role:</p>\n<div class=\"paligocode-wrapper\">\n<pre class=\"programlisting hljs sql\"><span class=\"hljs-keyword\">USE</span> &lt;<span class=\"hljs-keyword\">database</span><span class=\"hljs-keyword\">name</span>&gt;\nEXEC sp_addrolemember @rolename=db_owner, @membername=&lt;<span class=\"hljs-keyword\">user</span><span class=\"hljs-keyword\">name</span>&gt;</pre>\n<button class=\"btn btn-xs btn-primary\" title=\"Copy to clipboard\"></button>\n</div>\n</li>\n<li class=\"listitem\">\n<p>If you have not previously performed a full backup on each of the databases to be read, do so now (<a class=\"link\" href=\"https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/full-database-backups-sql-server\" target=\"_blank\" rel=\"noopener\">Full Database Backups (SQL Server)</a>). E.g : <span>Backup database HAO_MSSQL to disk = 'C:\\temp\\hao_mssql.bak' with format; </span></p>\n</li>\n<li class=\"listitem\">\n<p>Configure the following stored procedure to run every five minutes on each database that will be read. This will retain the logs read by this adapter for three days. If that is more than necessary or not enough, you may increase the<span></span><code class=\"code\">retentionminutes</code><span></span>variable. 
Note that the longer you retain the logs, the more disk space will be required by SQL Server.</p>\n<pre>declare @retentionminutes int = (3 * 24 * 60) --3 days in minute granularity<br><br>declare @trans table (begt binary(10), endt binary(10))<br>insert into @trans exec sp_repltrans<br><br>select dateadd(minute, -@retentionminutes, getdate())<br><br>declare @firstlsn binary(10) = null<br>declare @lastlsn binary(10) = null<br>declare @firstTime datetime<br>declare @lasttime datetime<br><br>select top (1) @lastTime = (select top(1) [begin time] <br>from fn_dblog(stuff(stuff(convert(char(24), begt, 1), 19, 0, ':'), 11, 0, ':'), default)),<br>@lastlsn = begt<br>from @trans<br>order by begt desc<br><br>--All transactions are older than the retention, no further processing required,<br>--everything can be discarded<br>if (@lasttime &lt; dateadd(minute,-@retentionminutes, getdate()))<br>begin<br>EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1 <br>end<br>else<br>begin<br>--see if anything can be discarded<br>select top (1) @firstTime = (select top(1) [begin time] <br>from fn_dblog(stuff(stuff(convert(char(24), begt, 1), 19, 0, ':'), 11, 0, ':'), default)),<br>@firstlsn = isnull(@firstlsn, begt)<br>from @trans<br>order by begt asc<br><br>if (@firsttime &lt; dateadd(minute, -@retentionminutes, getdate()))<br>begin<br>--Since only full VLogs can be truncated we really only need to check the earliest LSN <br>--for every Vlog's date<br>select @firstlsn = substring(max(t.lsns), 1, 10), <br>@lastlsn = substring(max(t.lsns), 11, 10)<br>from (select min(begt + endt) as lsns <br>from @trans group by substring(begt, 1, 4)) as t<br>where (select top(1) [begin time] <br>from fn_dblog(stuff(stuff(convert(char(24), t.lsns, 1), 19, 0, ':'), 11, 0, ':'), default)<br>where Operation = 'LOP_BEGIN_XACT') &lt; dateadd(minute, -@retentionminutes, getdate())<br><br>exec sp_repldone @xactid = @firstlsn, @xact_seqno = @lastlsn, @numtrans = 0, @time = 0,<br>@reset = 0 <br>end<br>end</pre>\n</li>\n</ol>\n<h3>Limitations : </h3>\n<p>- Currently, each MSJET adapter requires an agent. To support multiple MSJet adapter , customer has to start multiple agents. </p>\n<p>In future , one agent should support multiple MSJet adapters , one MSJet adapter should support multiple DBs. </p>\n<p><a href=\"https://webaction.atlassian.net/browse/PFR-371\">https://webaction.atlassian.net/browse/PFR-371</a> </p>\n<p>- <span>Tables with XML columns are not supported.</span></p>\n<p><span>- </span><span>Reading from secondary databases is not supported.</span></p>\n<p><span>- </span><span>Reading from backups is supported only if they are accessible only in the location where they were taken.</span></p>\n<p> </p>\n<h3>References :</h3>\n<p><a href=\"https://www.striim.com/docs/platform/en/sql-server-setup-for-msjet.html\">https://www.striim.com/docs/platform/en/sql-server-setup-for-msjet.html</a> </p>"} {"page_content": "<p> </p>\n<h3 id=\"01GWA6DFDN6A5GP6SYNK9AR4KZ\"><strong>FETCH SIZE </strong></h3>\n<p>The fetch size is the number of rows that MSSQLReader can fetch at a time. 
With the default value of 0, the fetch size is controlled by the SQL Server driver; optionally this value can be changed.</p>\n<p>Lower values will reduce memory usage, while higher values may increase performance.<br>It corresponds to the JDBC setFetchSize call explained <a href=\"https://learn.microsoft.com/en-us/sql/connect/jdbc/reference/setfetchsize-method-sqlserverresultset?view=sql-server-ver16\">here</a>.</p>\n<p>It is recommended to leave this at the default of 0 unless otherwise instructed.</p>\n<p> </p>\n<h3 id=\"01GWA6KCVQV212EZB2QY5ZD9QF\"><strong>POLLING INTERVAL</strong></h3>\n<p>The polling interval property has different meanings at the database and at the Striim server.</p>\n<p> </p>\n<p><strong>Database</strong></p>\n<p><strong>polling_interval</strong><br>Number of seconds between log scan cycles. polling_interval is bigint with a default of NULL, which indicates no change for this parameter. polling_interval is valid only for capture jobs when continuous is set to 1.</p>\n<p>If the polling interval is set to 300 (5 minutes), a log scan will be carried out every five minutes, for instance at 9:00, 9:05, 9:10, and 9:15. If insert, update, or delete operations were made at 9:01, 9:03, and 9:04, the scan at 9:05 will fetch all new operations since the last scan (at 9:00), so the 9:01, 9:03, and 9:04 operations are all picked up at 9:05.</p>\n<p> </p>\n<p><strong>Striim</strong></p>\n<p><strong>polling_interval</strong><br>Time to wait between fetches; may be specified in seconds only. By default the thread will progressively sleep from 0 to 5 seconds before checking back for records.</p>"} {"page_content": "<p>1. Where are the Striim server logs?</p>\n<p>They are under the &lt;installation_home&gt;/logs/ directory. For an RPM installation, by default, it is /opt/striim/logs/. By default, up to 10 old server logs are kept in the directory, and the default maximum size is 1GB, which can be configured in ./conf/log4j.server.properties.</p>\n<p>When uploading a log to a support ticket, please compress it first, as files of up to 50MB can be attached to tickets.</p>\n<p>2. How to get a TQL file?</p>\n<p>(1) From the UI, on the app monitor main page, click the 3 dots for the app and export.</p>\n<p>(2) From the console, to export multiple apps: <a href=\"https://www.striim.com/docs/archive/410/platform/en/export.html\">https://www.striim.com/docs/archive/410/platform/en/export.html</a></p>\n<p>3. How to change to new license keys?</p>\n<ul>\n<li>Make a backup of the ./conf/startUp.properties file.</li>\n<li>Modify the file with the new product key and license key values (assuming company and cluster names are not changed).</li>\n<li>Schedule a time to restart the Striim node.</li>\n<li>For a multiple-node cluster, the change has to be made on all the nodes; 
then stop all the nodes first, before starting them.</li>\n</ul>\n<p> </p>"} {"page_content": "<p>Table mapping syntax for MSSQL Reader and MSSQL CDC.</p>\n<p>MSSQL Reader<br>table: &lt;database name&gt;.&lt;schema name&gt;.&lt;table name&gt;</p>\n<p> </p>\n<p>Note: For initial load we have to include database name in the table property.</p>\n<p><br>MSSQL CDC<br>table: &lt;schema name&gt;.&lt;table name&gt;</p>\n<p> </p>\n<p>if we specify incorrect table mapping will leads to discards all the records in the target.</p>\n<p> </p>\n<p><strong>EXAMPLE TQL for MSSQL READER.</strong></p>\n<p>CREATE APPLICATION test_001;</p>\n<p>CREATE OR REPLACE SOURCE scr_test_mssql_reader USING Global.DatabaseReader ( <br>DatabaseProviderType: 'Default', <br>FetchSize: 100, <br><strong>Tables: 'qatest.dbo.emp;',</strong> <br>Username: 'qatest', <br>adapterName: 'DatabaseReader', <br>QuiesceOnILCompletion: false, <br>Password_encrypted: 'true', <br>Password: 'c2X8NwMubgwPoFJC51peNg==', <br>ConnectionURL: 'jdbc:sqlserver://localhost:1433;DatabaseName=qatest' ) <br>OUTPUT TO str_test_mssql_reader;</p>\n<p>CREATE OR REPLACE TARGET tgt_test_mssql_reader USING Global.DatabaseWriter ( <br>Password: 'fco0eyTDCZinZcUsWcUaFw==', <br>ConnectionRetryPolicy: 'retryInterval=30, maxRetries=3', <br>BatchPolicy: 'EventCount:10,Interval:5', <br>CommitPolicy: 'EventCount:10,Interval:5', <br>CheckPointTable: 'CHKPOINT', <br>Password_encrypted: 'true', <br>CDDLAction: 'Process', <br><strong>Tables: 'qatest.dbo.emp,str.emp;',</strong> <br>ConnectionURL: 'jdbc:oracle:thin:@localhost:1521:orcl', <br>StatementCacheSize: '50', <br>DatabaseProviderType: 'Default', <br>Username: 'str', <br>PreserveSourceTransactionBoundary: 'false', <br>adapterName: 'DatabaseWriter' ) <br>INPUT FROM str_test_mssql_reader;</p>\n<p>END APPLICATION test_001;</p>\n<p> </p>\n<p> </p>\n<p><strong>EXAMPLE TQL for MSSQL CDC</strong>.<br>CREATE APPLICATION TEST;</p>\n<p>CREATE OR REPLACE SOURCE scr_db_name_in_table USING Global.MSSqlReader ( <br>TransactionSupport: false, <br>PollingInterval: 5, <br>FetchTransactionMetadata: false, <br>Compression: false, <br>connectionRetryPolicy: 'timeOut=30, retryInterval=30, maxRetries=3', <br>Password_encrypted: 'true', <br>Password: 'c2X8NwMubgwPoFJC51peNg==', <br>ConnectionURL: 'jdbc:sqlserver://localhost:1433;DatabaseName=qatest', <br><strong>Tables: 'dbo.emp;',</strong> <br>StartPosition: 'NOW', <br>adapterName: 'MSSqlReader', <br>ConnectionPoolSize: 10, <br>cdcRoleName: 'STRIIM_READER', <br>DatabaseName: 'qatest', <br>Username: 'qatest', <br>FetchSize: 0, <br>IntegratedSecurity: false, <br>FilterTransactionBoundaries: true, <br>SendBeforeImage: true, <br>AutoDisableTableCDC: false ) <br>OUTPUT TO str_db_name_in_table;</p>\n<p>CREATE OR REPLACE TARGET tgt_db_name_in_table USING Global.DatabaseWriter ( <br>Password: 'fco0eyTDCZinZcUsWcUaFw==', <br>ConnectionRetryPolicy: 'retryInterval=30, maxRetries=3', <br>BatchPolicy: 'EventCount:10,Interval:5', <br>CommitPolicy: 'EventCount:10,Interval:5', <br>CheckPointTable: 'CHKPOINT', <br>Password_encrypted: 'true', <br>CDDLAction: 'Process', <br><strong>Tables: 'dbo.emp,str.emp;',</strong> <br>ConnectionURL: 'jdbc:oracle:thin:@localhost:1521:orcl', <br>StatementCacheSize: '50', <br>DatabaseProviderType: 'Default', <br>Username: 'str', <br>PreserveSourceTransactionBoundary: 'false', <br>adapterName: 'DatabaseWriter' ) <br>INPUT FROM str_db_name_in_table;</p>\n<p>END APPLICATION TEST;</p>"} {"page_content": "<p>Recently, a security vulnerability has been identified on Apache 
Commons Arbitrary Code Execution (ACE) .<br>CVE-2022-42889<br>This document details the Striim Versions affected, as well as recommended actions.</p>\n<p><strong><span class=\"wysiwyg-font-size-large\">Impacted functionality:</span></strong><br>Up to Striim version 4.1.0.2, commons-text-1.9.jar is included in the libraries. However, the impacted methods are not used by Striim. <br><br><strong><span class=\"wysiwyg-font-size-large\">Conclusion/Resolution:</span></strong></p>\n<ol>\n<li>This vulnerability does not impact Striim.</li>\n<li>Concerned customers, or customers with a security mandate to remove this library, may manually replace lib/commons-text-1.9.jar with a copy of commons-text-1.10.0.jar or later downloaded from here <a href=\"https://commons.apache.org/proper/commons-text/download_text.cgi\" target=\"_blank\" rel=\"noopener\" data-saferedirecturl=\"https://www.google.com/url?q=https://commons.apache.org/proper/commons-text/download_text.cgi&amp;source=gmail&amp;ust=1668046984152000&amp;usg=AOvVaw3Iu2ip1Npc3M0lhaTijJHB\"><font color=\"#0052CC\"><span> https://commons.apache.org/<wbr></wbr>proper/commons-text/download_<wbr></wbr>text.cgi </span></font></a>\n</li>\n<li>Starting with Striim version 4.1.0.3, Commons Text library is upgraded to version 1.10.0 that contains the fix for CVE-2022-42889.</li>\n</ol>"} {"page_content": "<h3><span class=\"wysiwyg-font-size-large\">Problem : </span></h3>\n<p>OracleReader app is stuck due to network outage / after database restart.</p>\n<p> </p>\n<h3>Cause : </h3>\n<p>jdbc client doesn't detect the disconnection from server . </p>\n<p> </p>\n<h3>Workaround : </h3>\n<p>Add <span>(ENABLE=BROKEN) into connection string in one of the following ways:</span></p>\n<pre><code>ConnectionURL: '<span>jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=10.0.0.26))(CONNECT_DATA=(SERVICE_NAME=oraprod)))</span>'</code></pre>\n<p>or</p>\n<p>Starting from striim version 4.2.0 , use below hidden property in the TQL file : </p>\n<pre class=\"code-block css-h39bcz\">_h_EnableBroken : 'true'</pre>\n<p>Then you can use below syntax : </p>\n<pre><code>ConnectionURL: '<span>jdbc:oracle:thin:@//10.0.0.26:1521/oraprod?ENABLE=BROKEN'</span></code></pre>\n<p> </p>"} {"page_content": "<p>Problem:</p>\n<p dir=\"auto\">Oracle CDC errors out with logging error</p>\n<p dir=\"auto\">Message: Please enable PK column logging for required tables i.e. :- ALTER TABLE \"OPTRN001\".\"EBIS\".\"INT\\_MESSAGE\\_ERROR\\_LOG\" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS; OR ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS; . Suggested Actions: 1.Minimal Supplemental logging is not enabled. Execute the below command to enable Minimal supplemental logging : ALTER DATABASE ADD SUPPLEMENTAL LOG DATA. Component Name: ora_cdc_source Component Type: SOURCE. Cause: Please enable PK column logging for required tables i.e. :- ALTER TABLE \"PDB1\".\"TABLE1\".\"INT\\_MESSAGE\\_ERROR\\_LOG\" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS; OR ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;</p>\n<p dir=\"auto\"> </p>\n<p dir=\"auto\">Cause:</p>\n<p dir=\"auto\"><span>This is privilege missing issue. </span><br><span>For Oracle CDB environment , Striim connects to the CDB database. 
If the user cannot get table level supplemental logging information from CDB_LOG_GROUPS , it just returns the error even though the supplemental logging is enabled.</span></p>\n<p dir=\"auto\"> </p>\n<p dir=\"auto\"> </p>\n<p dir=\"auto\"><span>Solution:</span></p>\n<p dir=\"auto\"><span>Setting the container_data as documented at: <a href=\"https://www.striim.com/docs/archive/410/platform/en/creating-an-oracle-user-with-logminer-privileges.html\">https://www.striim.com/docs/archive/410/platform/en/creating-an-oracle-user-with-logminer-privileges.html</a></span></p>\n<p dir=\"auto\"><span>e.g.,</span></p>\n<p dir=\"auto\"><span>alter user c##striim set container_data = (cdb$root, pdb1) container=current;</span></p>"} {"page_content": "<p>For RDBMS databases, it is common to have open transactions - sometime the transaction may last for long time. This may impact Striim production, specifically on Oracle databases as source.</p>\n<p><span class=\"wysiwyg-font-size-x-large\"><strong>I. Oracle Reader</strong></span></p>\n<p><strong><span class=\"wysiwyg-font-size-large\">1. Restarting App may Go Back to Old SCN.</span></strong></p>\n<p>Striim app restart will be from its checkpoint. When there is open transaction, the checkpoint restarting position will be at or prior to the beginning of the oldest open transaction.</p>\n<p>For example, if a transaction lasts for 10 hours, when app is restarted, it will restart from archived logs 10hrs ago. The app will be in recovery stage until processing to the previous high water mark position.</p>\n<p> </p>\n<p>The open transactions in process by Striim can be viewed from console:</p>\n<pre><code class=\"language-plaintext\">(1) mon &lt;oracleReader&gt;;<br></code></pre>\n<div data-pm-slice=\"1 1 []\" data-en-clipboard=\"true\"><span>Sample output from Mon to find oldest open transaction</span></div>\n<div data-pm-slice=\"1 1 []\" data-en-clipboard=\"true\"><span></span></div>\n<pre data-pm-slice=\"1 1 []\" data-en-clipboard=\"true\"><span>│ O</span><strong><span>ldest Open Transactions │ [{\"5.21.21991\":{\"# of Ops\":2,\"CommitSCN\":\"null\",\"Sequence #\":\"1\",\"StartSCN\":\"60601569\",\"Rba </span></strong><strong><span> block #\":\"3847\",\"Thread #\":\"1\",\"TimeStamp\":\"2023-03-02T00:47:10.000+05:30\"}}] </span></strong></pre>\n<p> </p>\n<p><code class=\"language-plaintext\">2) SHOW &lt;oraclereader or ojetReader&gt; OPENTRANSACTIONS</code></p>\n<p> Sample output of show opentransactions command</p>\n<pre data-pm-slice=\"1 1 []\" data-en-clipboard=\"true\"><span>╒═════════════════╤════════════╤════════════╤═════════════════╤═════════════╤════════════╤════════════════════════════════╕</span><br><span>│ Transaction ID │ # of Ops │ Sequence # │ StartSCN │ Rba block # │ Thread # │ TimeStamp |</span><br><span>├─────────────────┼────────────┼────────────┼─────────────────┼─────────────┼────────────┼─────────────────────────────── |</span><br><span>│ 5.21.21991 │ 2 │ 1 │ 60601569 │ 3847 │ 1 │ 2023-03-02T00:47:10.000+05:30 │</span><br><span>│ 7.13.22231 │ 2 │ 1 │ 60603611 │ 3848 │ 1 │ 2023-03-02T00:58:46.000+05:30 │</span><br><span>└─────────────────┴────────────┴────────────┴─────────────────┴─────────────┴────────────┴────────────────────────────────┘</span></pre>\n<p>(3) Following sql lists current open transaction from oracle db:</p>\n<pre><code class=\"language-plaintext\">select * from\n(\nselect t.START_SCN SCN, t.start_time,t.inst_id,s.sid, s.serial# ,s.machine , s.sql_id,s.username,s.last_call_et,t.XIDUSN||'.'||t.XIDSLOT||'.'||t.XIDSQN 
XID\nfrom gv$transaction t, gv$session s\nwhere t.addr=s.taddr\nand t.inst_id=s.inst_id\norder by t.START_SCN desc\n)\nwhere rownum &lt;11;</code></pre>\n<p>If the transaction is commited/rolledback you may not find the transaction at db level. However OracleReader considers it as open transaction until it processes the archive log file containing the commit/rollback.</p>\n<p> </p>\n<p>(4) The Striim app checkpoint may be seen from console:</p>\n<pre><code class=\"language-plaintext\">describe &lt;app&gt;;</code></pre>\n<p> </p>\n<p>As a best practice, the long open transactions may be checked regularly and to decide if they can be killed from DB level, or ignored from Striim side.</p>\n<p> </p>\n<p>In Striim version 4.1.1 and up, ALM mode oracle reader may discard open transaction manually from console.</p>\n<p>e.g.,</p>\n<p>transaction id: 10.24.2983513</p>\n<pre><code class=\"language-plaintext\">DISCARD TRANSACTION &lt;oracleReader&gt; '10.24.2983513';</code></pre>\n<p>Please note that this may cause data loss, and is NOT recommended. Striim support may be contacted.</p>\n<p> </p>\n<p><span class=\"wysiwyg-font-size-large\"><strong>2. Increase Memory Usage.</strong></span></p>\n<p>For OracleReader (not OJet), the open transactions will be temporarily hold at Striim side, until they are committed or rolled back.</p>\n<p>This may increase the memory usage. By default, spilling to disk is enabled, to avoid using too much memory.</p>\n<p> </p>\n<p><span class=\"wysiwyg-font-size-large\"><strong>3. Initial Load -&gt; CDC</strong></span></p>\n<p>(1) get oldest transaction start scn</p>\n<pre><code class=\"language-plaintext\">select min(start_scn) from gv$transaction;</code></pre>\n<p>(2) start IL</p>\n<p>(3) when IL complete, start cdc app from SCN obtained from (1).</p>\n<p> </p>\n<p>if starting cdc app from a point after the SCN of (1), there may be data loss.</p>\n<p> </p>\n<p><span class=\"wysiwyg-font-size-large\"><strong>4. Quiesce may Fail with OracleReader Having Open Transaction</strong></span></p>\n<p>Quiesce will send all the pending events to target, and set checkpoint to next position.</p>\n<p>With open transaction, the checkpoint cannot be moved. Thus, quiesce may fail.</p>\n<p> </p>\n<p><span class=\"wysiwyg-font-size-x-large\"><strong>II. OJet</strong></span></p>\n<p>Ojet also captures the transactions from Oracle redo/archived logs. Besides higher performance, it handles the open transactions differently from Oracle Reader.</p>\n<p><span class=\"wysiwyg-font-size-large\"><strong>1. Where is long open transactions temporarily stored, and how to tune it?</strong></span></p>\n<p>For Oracle Reader, they are stored on Striim side, while OJet stores them on Oracle database side. This is the reason why some database parameters have to be tuned, mainly</p>\n<p>(1) streams_pool_size (oracle init parameter)</p>\n<p>(2) MAX_SGA_SIZE for OJet (this is NOT Oracle init parameter for database SGA)</p>\n<p>e.g., to set it at 15GB,</p>\n<pre><code>OJetConfig: '{\\\"APPLY\\\":[\\\"MAXSGASIZE:15000\\\"]}',</code></pre>\n<p>When the open transactions fill up the memory, they may be spilled to the disk (temporary table inside the database). 
Following console command may help to check if spilling to disk happens:</p>\n<p>Console&gt; show ojet3_cdb_src status;</p>\n<pre>╒══════════════════════╤══════════════════════╤══════════════════════╤══════════════════════╤══════════════════════════════════════════╤══════════════════════╤══════════════════════════════╕<br>│ ServerStatus │ Enqueue │ Dequeue │ CaptureStatus │ CaptureState │ SpillCount │ Error │<br>├──────────────────────┼──────────────────────┼──────────────────────┼──────────────────────┼──────────────────────────────────────────┼──────────────────────┼──────────────────────────────┤<br>│ OJET$ADMIN$OJET3_CDB │ Q$ADMIN$OJET3_CDB_SR │ Q$ADMIN$OJET3_CDB_SR │ C$ADMIN$OJET3_CDB_SR │ C$ADMIN$OJET3_CDB_SRC is waiting for │ None │ None │<br>│ _SRC is atached │ C is enabled │ C is enabled │ C is enabled │ more transactions │ │ │<br>└──────────────────────┴──────────────────────┴──────────────────────┴──────────────────────┴──────────────────────────────────────────┴──────────────────────┴──────────────────────────────┘</pre>\n<p>In this example, there is no SpillCount.</p>\n<p><strong><span class=\"wysiwyg-font-size-large\">2. How to display the open transactions?</span></strong></p>\n<p>This is similar to Oracle Reader with \"SHOW &lt;ojet&gt; OPENTRANSACTIONS;' console command.</p>\n<p><span class=\"wysiwyg-font-size-large\"><strong>3. How do open transactions affect restarting a Striim app with OJet?</strong></span></p>\n<p>(1) Stop -&gt; Restart app, without undeploying the app</p>\n<p>As the open transactions are stored inside the database, restarting here will pick up from the stopped position, instead of going back to the beginning of the open transactions (unlike Oracle Reader). Thus it is unlikely to hit 'missing archived log' error.</p>\n<p>(2) Stop -&gt; undeploy -&gt; deploy -&gt; restart</p>\n<p>There are two factors in this scenario:</p>\n<p> (2A) Checkpoint: recovery checkpoint of the OJet and app.</p>\n<p> (2B) dictionary build point</p>\n<p> At restart, the app will find the recovery checkpoint SCN first, then find the most recent dictionary build SCN that is prior to the recover checkpoint.</p>\n<p> For example, we have dictionary built at following SCNs: 100, 200, 300, and the recovery checkpoint is at 250. the required archived logs will be from SCN 200 and up.</p>\n<p><strong><span class=\"wysiwyg-font-size-large\">4. Why do I need to build the dictionary regularly (e.g., every 4 hours)?</span></strong></p>\n<p> From 3(2), we know that when an app is restarted after undeployment, it will be started from a most recent dictionary build point (that should also be prior to the recovery checkpoint). Regular dictionary build will reduce the recovery time in this scenario. Please note that simple stop and restart (without undployment) will not require a dictionary build.</p>\n<p><strong><span class=\"wysiwyg-font-size-large\">5. 
What is the best procedure when I plan to make a change and undeploy an OJet app?</span></strong></p>\n<p>(1) build a dictionary and get its SCN</p>\n<pre>execute DBMS_LOGMNR_D.BUILD( options =&gt; DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);</pre>\n<pre>set pages 999<br>\nalter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';\n<br>select thread#,sequence#, name,first_change# dictionary_begin, dictionary_end , first_time FROM v$archived_log\nWHERE (dictionary_begin = 'YES' or dictionary_end ='YES')\nAND standby_dest = 'NO' AND name IS NOT NULL AND status = 'A'\norder by first_change# ;</pre>\n<p>(2) monitor the OJet app and make sure its recovery checkpoint SCN is larger than the above SCN</p>\n<pre>describe &lt;app&gt;;\n<br>mon &lt;ojet_source_name&gt;;</pre>\n<p>(3) stop and undeploy the app, then make the planned changes</p>\n<p>(4) restart the app.</p>\n<p> </p>"} {"page_content": "<p>This may happen in the following conditions:</p>\n<p>1. When cDDL is not enabled, a DDL change may cause the error.</p>\n<p>e.g.,</p>\n<pre><code class=\"language-plaintext\">Oracle.SRC_GG_READER_CBS_RECOVERY_GG_SOURCE_Type.EXTERNAL_APP of type {java.lang.String} but it is suppose to be {null} Colume type mismatch, can not proceed</code></pre>\n<p>Solution: Drop the related type and start the app from a point after the DDL change.</p>\n<p>2. When the source Oracle DB contains hidden columns (such as virtual columns).</p>\n<p>e.g.,</p>\n<pre><code>Caused by: java.lang.RuntimeException: com.webaction.common.exc.ColumnTypeMismatchException: Expected <code class=\"language-plaintext\">Oracle.SRC_GG_READER_CBS_RECOVERY_GG_SOURCE_Type</code>.SYS_STS70GHVUZ7ILB5BK2I_IX8R_C of type {java.lang.String} but it is suppose to be {null} Colume type mismatch, can not proceed</code></pre>\n<p>Here the column names start with SYS_STS.</p>\n<p>Solution: use EXCLUDEHIDDENCOLUMNS to exclude them from the OGG extract process (not Striim).<br> (For details see Oracle Doc ID 2292517.1)</p>"} {"page_content": "<h4>Step 1. Click on the '+' symbol next to Alerts, as shown in the screenshot below.</h4>\n<p><img src=\"https://support.striim.com/hc/article_attachments/8491630423959/mceclip0.png\" alt=\"mceclip0.png\" width=\"714\" height=\"421\"></p>\n<h4>Step 2. The New Alert screen will appear on the right side, as shown below.</h4>\n<p><img src=\"https://support.striim.com/hc/article_attachments/8491909914263/mceclip0.png\" alt=\"mceclip0.png\"></p>\n<h4>Step 3. Alerts can be created on an Application, Server, CQ, Source, Stream, or Target. Select the component on which to create the alert.</h4>\n<h4>Step 4. Give an appropriate name for the alert.</h4>\n<h4>Step 5. Select the Alert Condition and give the time interval for the Snooze After Alert option.</h4>\n<h4>Step 6. 
Select the Alert Type (Email or In App) and Click on the save button on the right side of the bottom.</h4>\n<h3>Note : We can also follow the above steps to recreate the alert which is dropped already.</h3>\n<p>Refer to the attachment : Recreate-alerts_CPU_Memory.mp4.zip</p>"} {"page_content": "<p><strong>Problem:</strong><br>Striim app reads from GGTrails and writes to BQ.</p>\n<p>it hit following error often:<br>com.google.cloud.bigquery.BigQueryException: Access Denied: BigQuery BigQuery: Permission denied while globbing file pattern.</p>\n<p><strong>Cause:</strong><br>The \"Permission denied while globbing file pattern.\" was occurring because the service accounts on our cerner GGTrail striim instances needed “storage.objects.get” permission in the GCP.</p>\n<p><strong>Solution:</strong><br>Grant the permission to the service account.</p>"} {"page_content": "<h3>Issue:</h3>\n<h4>Environment:</h4>\n<p>Oracle Database 19c Standard Edition: version 19.13.0.0.0<br>DB Character Set : KO16MSWIN949</p>\n<p><span>OracleReader failed with following message</span></p>\n<pre><span>2022-07-01 01:07:50,768 @SPROD @acdc.CDC_DEV_Ora_to_Mysql_001 -ERROR StartSources-CDC_DEV_Ora_to_Mysql_001 -ERROR com.webaction.runtime.components.Source.start (Source.java:362)<br>Message: User does not have privileges to query system tables.<br>Component Name: CDC_DEV_V1_OracleSource. Component Type: SOURCE.<br>Cause: User does not have privileges to query system tables</span></pre>\n<p><span>Checking the permission via the UI Wizard shows following,</span></p>\n<pre><span>-ERROR pool-7-thread-3 com.striim.schema.conversion.SchemaConverter.fetchTablesWildcard <br>(SchemaConverter.java:274) Exception while fetching the tables for the schema SBB using wildcard %</span><br><span>java.sql.SQLException: Non supported character set (add orai18n.jar in your classpath): KO16MSWIN949</span><br><span>at oracle.sql.CharacterSetUnknown.failCharsetUnknown(CharacterSetFactoryThin.java:240) <br></span></pre>\n<p> </p>\n<h3>Resolution:</h3>\n<p>The issue is due to missing Oracle library <span>orai18n-19.3.0.0.jar</span></p>\n<h5 id=\"Q12\" role=\"heading\">\n<span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; font-size: 15px;\">Please download the required JDBC jar (</span><code class=\"ocode ocode-initialized\" style=\"font-size: 15px;\">orai18n.jar</code><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; font-size: 15px;\"> in this case) </span><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; font-size: 15px;\">from the Oracle Technology Network</span><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; font-size: 15px;\"> </span><a style=\"background-color: #ffffff; font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; font-size: 15px;\" tabindex=\"0\" href=\"https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html\" aria-hidden=\"false\">JDBC Download Page</a><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; font-size: 15px;\">.</span>\n</h5>\n<p><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; font-size: 15px;\">Add the jar to &lt;Striim home&gt;/lib and restart Striim server</span></p>"} {"page_content": "<pre><span>Customer report below 
error</span><br><br><span>SQL Reader fails with Caused by: com.webaction.source.mssqlcommon.MSSqlExternalException: 2533 : Invalid StartPosition format. StartPosition can be specified as NOW or it can be a specific time in the format 'TIME: YYYY-MM-DD hh:mm:ss:nnn' (for example, TIME:2014-10-03 13:32:32.917) or SQL Server log sequence number (for example, LSN:0x00000A85000001B8002D). But the value specified: lsn:000F11890000FEB00001<br>at com.webaction.source.mssqlcommon.UtilityValidateParams.validateAndInitalizePositionParam(UtilityValidateParams.java:255)<br>at com.webaction.source.mssqlcommon.UtilityValidateParams.initialiseParams(UtilityValidateParams.java:276)<br>at com.webaction.proc.MSSqlReader_1_0.init(MSSqlReader_1_0.java:166)<br>at com.webaction.runtime.components.Source.start(Source.java:343)<br>... 3 more<br>Command START failed so application admin.MCSQuickStaging_SQL_SERVER_TEST_CDC is put in HALTED state<br>com.striim.exception.checked.AdapterExternalException: Message: 2535 : <strong>User Role is not appropriate to proceed</strong>. Possible User Role be db_owner or STRIIM_READER or a custom role with needed privilege. <strong>cdcRoleName specified is not the actual user role or the user does not have any role</strong>. Suggested Actions: 1.. Component Name: SRC_MCSQuickStaging_CDC_TEST. Component Type: SOURCE. Cause: 2535 : User Role is not appropriate to proceed. Possible User Role be db_owner or STRIIM_READER or a custom role with needed privilege. cdcRoleName specified is not the actual user role or the user does not have any role<br>at com.striim.exception.checked.AdapterExternalException.fromBuilder(AdapterExternalException.java:22)<br>at com.striim.exception.checked.builders.AdapterExternalExceptionBuilder.build(AdapterExternalExceptionBuilder.java:12)<br>at com.webaction.errorhandling.detailer.ExternalExceptionDetailer.toPlatformException(ExternalExceptionDetailer.java:16)<br>at com.webaction.errorhandling.transitioner.BaseExceptionTransitioner.transform(BaseExceptionTransitioner.java:49)<br>at com.webaction.proc.BaseProcess.transitionException(BaseProcess.java:597)<br>at com.webaction.runtime.components.Source.start(Source.java:347)<br>at com.webaction.runtime.components.Flow.startSources(Flow.java:603)<br>at com.webaction.runtime.components.Flow$3.run(Flow.java:1761)<br>at java.lang.Thread.run(Thread.java:748)<br>Caused by: com.webaction.source.mssqlcommon.MSSqlExternalException: 2535 : <strong>User Role is not appropriate to proceed</strong>. <strong>Possible User Role be db_owner or STRIIM_READER or a custom role with needed privilege</strong>. cdcRoleName specified is not the ac</span><br><br><span>\"componentType\" : \"SOURCE\" , \"exception\" : \"com.striim.exception.checked.AdapterInternalException\" , \"message\" : \"Message: 2505 : Failure in executing queries Exception;Thread :4; Cause:null; Message : null. Suggested Actions: 1.. Component Name: SRC_NetworkTracker_CDC_TEST. Component Type: SOURCE. Cause: 2505 : Failure in executing queries Exception;Thread :4; Cause:null; Message : null\" , \"relatedEvents\" : \"[]\"</span></pre>\n<h4><strong><span class=\"wysiwyg-font-size-large\">Cause:</span></strong></h4>\n<p>The cdc enabled with different role and role used in TQL is different.<br><br><span class=\"wysiwyg-font-size-large\"><strong>Solution:</strong></span></p>\n<p>1. First we need to find the Role used to enable the CDC</p>\n<pre>select * from cdc.change_tables</pre>\n<p>2. 
For example If the role is STRIIM_CDC then Give the STRIIM_CDC role to Striim_user.</p>\n<pre>exec sp_addrolemember @rolename=STRIIM_CDC, @membername=Striim_user; </pre>\n<p>3. In the TQL, change cdc role name to STRIIM_CDC.</p>\n<p> </p>"} {"page_content": "<h3>Goal:</h3>\n<p>How to measure the heap memory usage of Striim JVM process</p>\n<p> </p>\n<h3>Solution:</h3>\n<p>Java native tools (jcmd, jmap) can be used to measure the memory footprint of any JVM process.</p>\n<p>These tools are installed by default part of Oracle JDK and can be optionally installed for OpenJDK</p>\n<p> </p>\n<p>a) # jcmd &lt;<a href=\"https://support.striim.com/hc/en-us/articles/4407528999703-How-to-find-the-Striim-Server-PID-Process-ID\" target=\"_self\">striim pid</a>&gt; GC.heap_info</p>\n<pre>bash-4.4$ jcmd 6274 GC.heap_info<br>6274:<br>garbage-first heap total 24707072K, used 18183036K<br>region size 8192K, 862 young (7061504K), 16 survivors (131072K)<br>Metaspace used 220621K, capacity 254357K, committed 254976K, reserved 256000K</pre>\n<p>In the example above 24.7 gb (24707072K) is the current heap size and 18.1 gb (18183036K) is the used size. The used size includes the memory that is garbage eligible and is not freed yet.</p>\n<p> </p>\n<p>b) # jmap -histo:live &lt;<a href=\"https://support.striim.com/hc/en-us/articles/4407528999703-How-to-find-the-Striim-Server-PID-Process-ID\" target=\"_self\">striim pid</a>&gt; </p>\n<pre> num #instances #bytes class name<br>----------------------------------------------<br>1: 207620 5811238552 [B<br>2: 839297 631091520 [Ljava.lang.Object;<br>...<br><br>Total 12500522 7279096832</pre>\n<p>The output shows the object allocations (count and size) in detailed view. Currently Striim is using</p>\n<p>7gb (7279096832) of the 18.1 gb (18183036K) and rest is eligible for automatic garbage collection by the GC process.</p>"} {"page_content": "<p>Question:</p>\n<p>I have an app with mssqlReader to capture sql server cdc. When a table had DDL change, mssqlReader will fail. How can I recover?</p>\n<p> </p>\n<p>Answer:</p>\n<p>First, if DDL capturing is required, please consider Striim MSJet to replace MSSQLReader.</p>\n<p>For the original question, <span>sql server cdc is using cdc tables that is static.</span><br><span>When you add a column to base table, the cdc table is NOT changed. so we are hitting the mismatch between base and cdc tables.</span><br><span> </span><br><span>To recover the Striim app, the cdc table may be disabled/reenabled.</span><br><span>example to disable and enable cdc on table level:</span><br><span> </span><br><span>schema=dbo</span><br><span>tablename=s1</span><br><span> </span><br><span>disable</span><br><span>EXEC sys.sp_cdc_disable_table </span><br><span>@source_schema = N'dbo', </span><br><span>@source_name = N's1', </span><br><span>@capture_instance = N'dbo_s1'</span><br><span> </span><br><span>enable</span><br><span>EXEC SYS.sp_cdc_enable_table @SOURCE_SCHEMA = dbo, @SOURCE_NAME = s1 , @ROLE_NAME = 'WEBACTION_READER'</span><br><span> </span><br><span>after disabling, make sure the cdc shadow table is gone. after reenable it and with dml changes, the new cdc table contains the new columns.</span><br><span>in my above example, the cdc table will be:</span><br><span>cdc.dbo_s1_CT</span><br><span> </span><br><span>after that, recreating the app and start it from a point after reenabling the cdc. 
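<p>As a quick way to verify the \"make sure the cdc shadow table is gone\" step above (a minimal sketch assuming the dbo.s1 example; substitute your own schema and capture instance), you can query SQL Server's CDC catalog:</p>\n<pre>-- hypothetical verification, reusing the dbo_s1 capture instance from the example above<br>select capture_instance, source_object_id<br>from cdc.change_tables<br>where capture_instance = 'dbo_s1';<br>-- no rows returned means the old change table (cdc.dbo_s1_CT) is gone and CDC can be re-enabled</pre>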
</span></p>"} {"page_content": "<p><span class=\"wysiwyg-underline wysiwyg-font-size-large\"><strong>Problem:</strong></span></p>\n<p>My Striim monitoring is not updating (or may be very slow). How to troubleshooting and reset the monitoring?</p>\n<p><span class=\"wysiwyg-underline wysiwyg-font-size-large\"><strong>Diagnosis:</strong></span></p>\n<p>1. from console:</p>\n<p>W ()&gt; mon all<br>W ()&gt; status Global.MonitoringSourceApp<br>W ()&gt; status Global.MonitoringProcessApp<br>W ()&gt; select * from Global.MonitoringStream1; -- if no return, then hanging; if event is old, then likely slow.</p>\n<p>2. gather following information and submit support ticket.</p>\n<p>(1) switch user to striim before running following commands</p>\n<p>- jstack &lt;server_pid&gt; &gt;&gt; jstack_$(date +\"%m-%d-%Y\"-%T).txt (repeat 3 times with 10 sec interval<br>- top (output)</p>\n<p><br>(2) at console, turn on trace for a 3 minutes, and get striim.server.log and *debug* logs under &lt;striim home&gt;/logs/</p>\n<pre>W () &gt;SET LOGLEVEL = {'monitor' : 'debug'};<br><br>wait for 3 mins<br><br>W () &gt; SET LOGLEVEL = {'monitor' : 'off'};</pre>\n<p><span class=\"wysiwyg-underline wysiwyg-font-size-large\"><strong>Restarting Monitoring:</strong></span></p>\n<p>1. restarting the monitoring apps</p>\n<p>stop application Global.MonitoringSourceApp;<br>stop application Global.MonitoringProcessApp;</p>\n<p>stop System$Alerts.AlertingApp;</p>\n<p>undeploy application Global.MonitoringSourceApp;<br>undeploy application Global.MonitoringProcessApp;</p>\n<p class=\"p1\">undeploy application System$Alerts.AlertingApp;</p>\n<p class=\"p1\">deploy application Global.MonitoringSourceApp with global.monitoringsourceflow on all in default, global.MonitoringSourceFlowAgent on all in DefaultAgentMonitoring;<br>deploy application Global.MonitoringProcessApp;</p>\n<p class=\"p1\">deploy application System$Alerts.AlertingApp;</p>\n<p class=\"p1\">start application Global.MonitoringSourceApp;<br>start application Global.MonitoringProcessApp;</p>\n<p class=\"p1\">start application System$Alerts.AlertingApp;</p>\n<p class=\"p1\"> </p>\n<p>2. if #1 does not help, restart striim node</p>\n<p>- stop/undeply and drop the apps</p>\n<p>drop application Global.MonitoringSourceApp cascade;<br>drop application Global.MonitoringProcessApp cascade;</p>\n<p>- stop striim server on all nodes in the cluster<br>- delete the \"data\" folder under elasticsearch in striim home on all nodes in the cluster<br>- start striim cluster</p>"} {"page_content": "<h3>Question</h3>\n<p>The memory usage of Striim seen from \"top\" is not the same as the usage seen in Striim's WebUI Monitor page</p>\n<h3> </h3>\n<h3>Answer</h3>\n<p><span>Java memory usage seen in the OS is not the same as the Striim's metric shown in the WebUI</span><span></span></p>\n<p>Say the OS has 256gb and MEM_MAX is set to 200gb</p>\n<ul dir=\"auto\">\n<li>Unix OS will show the total memory requested (malloc'ed) by a process (java). In this case the Striim config (MAX_HEAP) allows up to 200GB for the Java process of the 256GB total system available RAM. </li>\n<li>Java will typically request memory from the OS as pressure exceeds 50% of current heap allocation (i.e. 32 -&gt; 64GB when 16GB utilized internal to the java process). </li>\n<li>Internal to Java's process memory space there are several memory areas that will fluctuate in usage depending on the demands of what is happening within Striim and the data workloads. 
When Garbage Collection (GC) kicks in this will lower the internal memory usage of the Java process (heap usage), but won't lower the OS memory allocation (this is by design of Java).</li>\n</ul>\n<p><span> </span><br><span>We can get the internal view of the Java memory usage with tools like VisualVM or Java command line utilities (jstat -gc etc), and the external view of the Java process through tools like top.</span><br><br><span>This <a href=\"https://www.betsol.com/blog/java-memory-management-for-java-virtual-machine-jvm/#Java_JVM_Memory_Structure\" target=\"_self\">article</a> can help you understand (and with complexity) the internal workings of Java's memory management. </span><br><br></p>"} {"page_content": "<p>By default, for source database date/timestamp values, they are captured as joda datetime that contains timezone. e.g., <span>2022-02-10 10:00:00.000-05:00.</span></p>\n<p><span>When source date/timestamp does not have TZ details, it is preferred to review the TZ in captured values. Attached UDF may help to achieve that:</span></p>\n<p><span>1. copy the UDF jar file to ./lib/ directory and restart the cluster</span></p>\n<p><span>2. add a CQ like following:</span></p>\n<p> </p>\n<pre><code>SELECT \ncom.udf.functions.ObjectConversionUDF.convert(w, \"org.joda.time.DateTime\", \"org.joda.time.LocalDateTime\") \nFROM source_cdc_out w;</code></pre>\n<p> </p>\n<p>to further confirm the conversion, another CQ may be used to check it.</p>\n<p>e.g., column 1 is DATE</p>\n<p>SELECT <br>data[1].getClass().getName()<br>FROM above_stream;</p>\n<p>Outputs example:</p>\n<p>(1) without conversion:</p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>data1getClassgetName: \"</span><span class=\"s1\">org.joda.time.DateTime</span><span class=\"s1\">\"</span></p>\n<p class=\"p1\"><span class=\"s1\">(2) with conversion:</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>data1getClassgetName: \"org.joda.time.LocalDateTime\"</span></p>\n<p> </p>"} {"page_content": "<p>When Striim server crashed with hazelcast time out error, please check following first:</p>\n<p>1. if there was network issue between nodes</p>\n<p>2. if there was enough resource (cpu and memory) in nodes.</p>\n<p> </p>\n<p>By default the timeout setting is 180 seconds, which may be adjusted as following:</p>\n<p> </p>\n<p><span>example, change from default 180 sec to 360 sec:</span><br><span> </span><br><span>1. server - modify a line</span><br><span>(1) startup.properties:</span><br><span>from: ClusterHeartBeatTimeout=180</span><br><span>to: ClusterHeartBeatTimeout=360</span><br><span> </span><span> </span><br><span>2. agent.sh - add a line</span><br><span>from: -Dhazelcast.event.queue.capacity=\"2000000\" \\</span><br><span>to: -Dhazelcast.event.queue.capacity=\"2000000\" \\</span><br><span> -Dhazelcast.client.heartbeat.timeout=360000 \\</span></p>"} {"page_content": "<h3>Question:</h3>\n<p>A sample mon metric from GGTrailReader shows a <span>RECORD_POSTION like below. 
What does these values mean ?</span></p>\n<p><span>RECORD_POSTION: </span>0,1648826544000,33563289,33563661,0,33563698,true,R1000003337</p>\n<h3> </h3>\n<h3>Answer:</h3>\n<p>The metric values mean following,</p>\n<div class=\"zd-indent\">\n<p dir=\"auto\"> </p>\n</div>\n<div class=\"zd-indent\">\n<p dir=\"auto\">seekPosition :0<br>creationTime :1648826544000 (epoch value)<br>recordBeginOffset :33563289<br>recordEndOffset :33563661<br>recordLength :0<br>bytesRead :33563698<br>recovery :true<br>sourceName :R1000003337 (trail being read)</p>\n</div>\n<div class=\"zd-indent\">\n<p dir=\"auto\"> </p>\n</div>\n<div class=\"zd-indent\">\n<p dir=\"auto\">Here the last read position is 33563289 - 33563661 which are RBA position (relative byte address) in the trail being read</p>\n</div>"} {"page_content": "<p>Problem:</p>\n<p>after upgrading the striim from 3.10.x to 4.0.x, the server could not be started. in server log, it ended with:</p>\n<p>...</p>\n<p><span>(NodeStartUp.java:496) Will search for initializer object with name: BIDMC_NONPROD in memory.</span><br><span>2022-03-09 18:28:28,996 @ @ -INFO main com.webaction.runtime.NodeStartUp.startingUpWithStartUpFile (NodeStartUp.java:498) Interfaces found in startup file:</span><br><span>2022-03-09 18:28:29,061 @ @ -WARN main com.webaction.metaRepository.MetaDataDBOps.loadByName (MetaDataDBOps.java:912) Tried loading by name, attempt 1 failed (total attempts: 3), waiting for 10 seconds before next operation...</span><br><span>2022-03-09 18:28:39,081 @ @ -WARN main com.webaction.metaRepository.MetaDataDBOps.loadByName (MetaDataDBOps.java:912) Tried loading by name, attempt 2 failed (total attempts: 3), waiting for 10 seconds before next operation...</span></p>\n<p> </p>\n<p><span>Cause:</span><span></span></p>\n<p><span>The upgrade scripts were run as superuser postgre, instead of the user used by Striim. 
Then the striim login user cannot access the newly created tables.</span></p>\n<p> </p>\n<p><span>Solution:</span></p>\n<p><span>run the update scripts with the same postgres login user that is used by striim.</span></p>"} {"page_content": "<p>This is supported in Striim version 3.10.3.7 and 4.0.4.2, and up.</p>\n<p> </p>\n<ol class=\"ak-ol\" data-indent-level=\"1\">\n<li>\n<p data-renderer-start-pos=\"132\">Edit or alter the Database Writer properties (see ALTER and RECOPMPILE) and add the following (if using Flow Designer, omit the quotes):</p>\n<div class=\"code-block gkbfow-0 cdKoEL\"><span class=\"prismjs css-1xfvm4v\" data-code-lang=\"\" data-ds--code--code-block=\"\"><code><span class=\"\">VendorConfiguration: 'EnableiDentityinsert=true;EnableiDentityupdate=true'</span></code></span></div>\n<p data-renderer-start-pos=\"346\">If a value for Vendor Configuration is already specified, add a semicolon to the end and append <code class=\"code css-9z42f9\" data-renderer-mark=\"true\">EnableiDentityinsert=true;EnableiDentityupdate=true</code>.</p>\n</li>\n<li>\n<p data-renderer-start-pos=\"498\">Create the checkpoint table in the target.</p>\n<div class=\"code-block gkbfow-0 cdKoEL\"><span class=\"prismjs css-1xfvm4v\" data-code-lang=\"\" data-ds--code--code-block=\"\"><code><span class=\"\">CREATE TABLE CHKPOINT (\n</span> id VARCHAR(100) PRIMARY KEY NOT NULL,\n sourceposition IMAGE, pendingddl NUMERIC, ddl TEXT);\n</code></span></div>\n</li>\n<li>\n<p data-renderer-start-pos=\"670\">Install the JTDS driver.</p>\n<ol class=\"ak-ol\" data-indent-level=\"2\">\n<li>\n<p data-renderer-start-pos=\"698\">Download <code class=\"code css-9z42f9\" data-renderer-mark=\"true\">jtds-1.3.1-dist.zip</code> from <a class=\"sc-1ko78hw-0 fiVZLH\" title=\"https://sourceforge.net/projects/jtds/files/jtds/1.3.1/\" href=\"https://sourceforge.net/projects/jtds/files/jtds/1.3.1/\" data-renderer-mark=\"true\">sourceforge.net/projects/jtds/files/jtds/1.3.1</a> and unzip it.</p>\n</li>\n<li>\n<p data-renderer-start-pos=\"796\">Copy <code class=\"code css-9z42f9\" data-renderer-mark=\"true\">jtds-1.3.1.jar</code> to the <code class=\"code css-9z42f9\" data-renderer-mark=\"true\">striim/lib</code> directory of every Striim server that will write to Sybase and restart the servers (see <a class=\"sc-1ko78hw-0 fiVZLH\" title=\"https://www.striim.com/docs/en/starting-and-stopping-striim.html\" href=\"https://www.striim.com/docs/en/starting-and-stopping-striim.html\" data-renderer-mark=\"true\">Starting and stopping Striim</a>).</p>\n</li>\n<li>\n<p data-renderer-start-pos=\"956\">Copy <code class=\"code css-9z42f9\" data-renderer-mark=\"true\">jtds-1.3.1.jar</code> to the <code class=\"code css-9z42f9\" data-renderer-mark=\"true\">agent/lib</code> directory of every Striim Forwarding Agent that will write to Sybase and restart the agents (see <a class=\"sc-1ko78hw-0 fiVZLH\" title=\"https://www.striim.com/docs/en/starting-and-stopping-striim.html\" href=\"https://www.striim.com/docs/en/starting-and-stopping-striim.html\" data-renderer-mark=\"true\">Starting and stopping Striim</a>).</p>\n</li>\n</ol>\n</li>\n</ol>"} {"page_content": "<h3>Goal:</h3>\n<p>A sample json object in MongoDB document has collections like below</p>\n<p>[{<br>\"_id\": {<br>\"$oid\": \"6038d629b1a1e937438ac430\"<br>},<br>\"idType\": [<br>{<br>\"value\": \"143300\",<br>\"schemeID\": \"paymentTransactionId\"<br>}<br>],</p>\n<p>,{<br>\"_id\": {<br>\"$oid\": \"6008d629b1a1e937438ac000\"<br>},</p>\n<p>The \"idType\" may or may not be present for all 
_id types and requirement is to capture an array element (say schemeID) for those idType's that are not null and print null for the ones missing</p>\n<h3>Solution:</h3>\n<p>a) import the attached sample json</p>\n<p>b) use iterator like below</p>\n<pre>CREATE OR REPLACE CQ CQFindArray <br>INSERT INTO test_empty_iterator_out <br>SELECT data.get('_id').get('$oid') as ID, <br>case when data.get('idType') is not null then data.get('idType') <br>else TO_JSON_NODE(java.util.Arrays.asList(\"\").toArray()) <br>end AS idType <br>FROM stream1;<br><br>CREATE OR REPLACE CQ ExtractElements <br>INSERT INTO empty_nestedobject_out <br>SELECT s.ID as ID, r.get('schemeID') as idType FROM test_empty_iterator_out s, <br>iterator(s.idType) r;</pre>"} {"page_content": "<h2><strong>Problem:</strong></h2>\n<p>After upgrading to Striim 4.0.x, and created an app with oracleReader. I specified the TransactionBufferDiskLocation property from default ./striim/LargeBuffer to another directory with more disk space. when starting the app, it hit error:</p>\n<p> </p>\n<p class=\"p1\"><span class=\"s1\">2022-03-14 17:16:56,585 @S192_168_56_1 @admin.ora1 -WARN BaseServer_WorkingThread-7 com.webaction.runtime.components.FlowComponent.notifyAppMgr (FlowComponent.java:306) received exception from component :ora1_src, of exception type : com.striim.exception.checked.AdapterInternalException</span></p>\n<p class=\"p1\"><span class=\"s1\">com.striim.exception.checked.AdapterInternalException: Message: java.lang.IllegalStateException: Can't append to a read-only chronicle. Component Name: ora1_src. Component Type: SOURCE. Cause: java.lang.IllegalStateException: Can't append to a read-only chronicl</span></p>\n<p> </p>\n<h2><strong>Cause and Solution</strong></h2>\n<p>The user that started Striim node must have write permission on the specified TransactionBufferDiskLocation. If not, it will hit above error. Solution is to change permission at OS level.</p>"} {"page_content": "<h3>\n<br><strong>Symptoms :</strong>\n</h3>\n<p>OracleReader missing records when Extended statistics are gathered.</p>\n<p> </p>\n<p>Below cases are generally observed:</p>\n<p>- missing inserts<br>- missing updates</p>\n<p>and the striim.server.log shows messages like below</p>\n<pre><br>2022-01-25 08:49:39,504 @S10_0_0_107 @admin.ORCL_CDC -INFO com.striim.alm.parser.StriimParser.invalidStatus() : Ignoring invalid sqlRedo received for table : HSHI.TEST123 sqlRedo : insert into \"HSHI\".\"TEST123\"(\"COL 1\",\"COL 2\") values (HEXTORAW('c102'),HEXTORAW('c102')) at scn : 32884985</pre>\n<p> </p>\n<h3><strong>Root Cause :</strong></h3>\n<p><br>Extended statistics creates virtual columns in source table(s) which is considered as DDL and affects logmining in online catalog mode. Extensions are created automatically as part of gathering statistics based on usage of columns in the predicates in the workload.</p>\n<p> </p>\n<p><strong>Things to be checked :</strong></p>\n<p><br>If AUTO_STAT_EXTENSIONS is set to \"ON\" at the database level using the below query</p>\n<p>select dbms_stats.get_prefs('AUTO_STAT_EXTENSIONS') from dual;</p>\n<p><br>SYS_STSxxx might be created automatically when customer gathered statistics using Analyze table &lt;schema_name&gt;.&lt;table_name&gt; compute statistics;</p>\n<pre>select table_name, extension_name, extension from dba_stat_extensions where table_name = '&lt;&gt;'</pre>\n<h3>\n<br><strong>Workaround :</strong>\n</h3>\n<p><br>1. 
To disable extended statistics gathering related jobs</p>\n<pre><br># Disable the jobs with hidden parameter<br>alter system set \"_optimizer_enable_extended_stats\"=FALSE scope=both;</pre>\n<p>as this is a hidden parameter, please check with OracleSupport/ DBA team about its consequences.</p>\n<p> </p>\n<p>2. Oracle Doc ID 1964223.1</p>\n<p><br>In Oracle 12.1, column group statistics are created automatically as part of adaptive query optimization. In 12.2, the default is changed to OFF.</p>\n<pre># For other versions, it may be changed using the below query<br>SQL&gt; alter system set OPTIMIZER_ADAPTIVE_FEATURES=false;</pre>"} {"page_content": "<h2 data-pm-slice=\"1 1 []\"><strong>Problem:</strong></h2>\n<p data-pm-slice=\"1 1 []\">I have Oracle CDC -&gt; BQWriter in merge mode. The target BQ table has partition. From time to time I saw duplicates in target table.</p>\n<h2 data-pm-slice=\"1 1 []\"><strong>Example:</strong></h2>\n<p>Table has columns<br>CNTR_NO 100 (PK column)<br>EDW_CREATE_DT: 2021-sep-22 10:00:00 (partition column)</p>\n<p>After initial load source/ target matches</p>\n<p>On source do the following,</p>\n<p>1. delete the row – 100, 2021-sep-22 23:14:10.907000<br>2. insert again with different partition column value – 100, 2021-sep-22 23:18:26.579000</p>\n<p>After 1st iteration rows will match however after reprocessing above 2nd time we will INSERT the DELETE and target will have two records</p>\n<p>100, 2021-sep-22 23:14:10<br>100, 2021-sep-22 23:18:26</p>\n<p>The issue happens only under following conditions,</p>\n<p>a) BQ target has partition enabled and pruning is in use<br>b) batch policy includes both dmls together<br>c) merge/ optimized merge is in use</p>\n<p> </p>\n<p>Sample results</p>\n<p><u>Without partition pruning</u></p>\n<table data-number-column=\"false\" data-layout=\"default\" data-autosize=\"false\" data-table-local-id=\"849b9600-1f84-47ff-bae3-1c7c27db4737\">\n<tbody>\n<tr>\n<td class=\"pm-table-cell-content-wrap\">\n<p>BKG_NO</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>CNTR_NO</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>CRE_DT</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>EDW_CREATE_DT</p>\n</td>\n</tr>\n<tr>\n<td class=\"pm-table-cell-content-wrap\">\n<p>striim</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>100</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>2021-09-23T22:16:34</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>2021-09-22T23:14:10.907000</p>\n</td>\n</tr>\n</tbody>\n</table>\n<p><u>with partition pruning</u></p>\n<table data-number-column=\"false\" data-layout=\"default\" data-autosize=\"false\" data-table-local-id=\"8fd33ead-349a-48b7-89ea-4db1033298f4\">\n<tbody>\n<tr>\n<td class=\"pm-table-cell-content-wrap\">\n<p>BKG_NO</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>CNTR_NO</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>CRE_DT</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>EDW_CREATE_DT</p>\n</td>\n</tr>\n<tr>\n<td class=\"pm-table-cell-content-wrap\">\n<p>striim</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>100</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>2021-09-23T22:16:34</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>2021-09-22T23:17:06.240000</p>\n</td>\n</tr>\n<tr>\n<td class=\"pm-table-cell-content-wrap\">\n<p>striim</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>100</p>\n</td>\n<td class=\"pm-table-cell-content-wrap\">\n<p>2021-09-23T22:16:34</p>\n</td>\n<td 
class=\"pm-table-cell-content-wrap\">\n<p>2021-09-22T23:18:26.579000</p>\n</td>\n</tr>\n</tbody>\n</table>\n<p> </p>\n<h2><strong>Causes:</strong></h2>\n<p>Since version 3.10.3.6, Striim BQWriter automatically detects partition column and prunes based on it. One condition is that this column value should be not be changed. in above example, the column value changed, so it may cause duplicates.</p>\n<h2> </h2>\n<h2><strong>Solutions:</strong></h2>\n<p>1. use KEYCOLUMNS to include pk columns AND partition column, in BQWriter. (Here, the partition column value can NOT be changed. e.g., may use creation_date, but not last_update_date)</p>\n<p>2. <span>use _h_PartitionFilterType: 'NO_PARTITION' in BQWriter, to disable partition pruning.</span></p>\n<p><span>#1 is recommended, as #2 may have performance impact.</span></p>"} {"page_content": "<h2>Problem Description : </h2>\n<p>App with BigQuery writer crashed with : </p>\n<pre>2022-03-07 17:38:05,826 @xxx @NULL_APPNAME -ERROR com.striim.bigquery.BigQueryIntegrationTask.execute() : Caught Exception in Integration task - For TargetTable:{xxx.xxx} , BatchSequence:{0} , WriteMode:{APPENDONLY}<br><strong>com.google.cloud.bigquery.BigQueryException: Quota exceeded: Your table exceeded quota for imports or query appends per table. For more information, see https://cloud.google.com/bigquery/docs/troubleshoot-quotas</strong><br> at com.google.cloud.bigquery.Job.reload(Job.java:419)<br> at com.google.cloud.bigquery.Job.waitFor(Job.java:252)<br> at com.striim.bigquery.fileupload.TransferCSVDataToBQ.transferToBQ(TransferCSVDataToBQ.java:157)<br> at com.striim.bigquery.fileupload.BigQueryFileIntegrationTask.transferData(BigQueryFileIntegrationTask.java:49)<br> at com.striim.bigquery.BigQueryIntegrationTask.execute(BigQueryIntegrationTask.java:109)<br> at com.striim.bigquery.fileupload.BigQueryFileIntegrationTask.execute(BigQueryFileIntegrationTask.java:43)<br> at com.striim.dwhwriter.integrator.IntegrationTask.call(IntegrationTask.java:78)</pre>\n<h2>Cause : </h2>\n<p>BigQuery writer is using Load method and Interval is set to 30 seconds in BatchPolicy :</p>\n<pre> BatchPolicy: 'eventCount:100000, Interval:30',<br> streamingUpload: 'false', </pre>\n<p>By default , BigQuery quota \"<strong>Table operations per day\" </strong>is set to 1500 and it cannot be changed from console. ( To make the change , you need to contact the google support team ) .</p>\n<p><a href=\"https://cloud.google.com/bigquery/quotas#standard_tables\">https://cloud.google.com/bigquery/quotas#standard_tables</a></p>\n<p>Since the Interval is set to 30 seconds, the load job will be executed 2880 times (24 x 60 x 2 ) which exceeds the google imposed limitation . 
</p>\n<p> </p>\n<h2>Troubleshoot : </h2>\n<p>To check how many jobs have been created against the table from particular project per day , run below query : </p>\n<div>\n<pre>SELECT<br>TIMESTAMP_TRUNC(creation_time, DAY),<br>job_type,<br>count(1)<br>FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT<br>#Adjust time<br>WHERE creation_time &gt; \"2022-03-01 11:00:00\"<br>AND destination_table.project_id = \"striim-support\"<br>AND destination_table.dataset_id = \"Hao\"<br>AND destination_table.table_id = \"TEST\"<br>GROUP BY 1, 2<br>ORDER BY 1 DESC</pre>\n<div>To check all the <span>quota-related errors in the past one day , you could run : </span>\n</div>\n</div>\n<pre class=\"lang-sql\" dir=\"ltr\" translate=\"no\"><code dir=\"ltr\"><span class=\"kwd\">SELECT</span><span class=\"pln\"><br> job_id</span><span class=\"pun\">,</span><span class=\"pln\"><br> creation_time</span><span class=\"pun\">,</span><span class=\"pln\"><br> error_result<br></span><span class=\"kwd\">FROM</span><span class=\"pln\"> </span><span class=\"pun\">`</span><span class=\"pln\">region-us</span><span class=\"pun\">`.</span><span class=\"pln\">INFORMATION_SCHEMA</span><span class=\"pun\">.</span><span class=\"pln\">JOBS_BY_PROJECT<br></span><span class=\"kwd\">WHERE</span><span class=\"pln\"> creation_time </span><span class=\"pun\">&gt;</span><span class=\"pln\"> TIMESTAMP_SUB</span><span class=\"pun\">(</span><span class=\"kwd\">CURRENT_TIMESTAMP</span><span class=\"pun\">,</span><span class=\"pln\"> INTERVAL </span><span class=\"lit\">1</span><span class=\"pln\"> DAY</span><span class=\"pun\">)</span><span class=\"pln\"> </span><span class=\"kwd\">AND</span><span class=\"pln\"><br> error_result</span><span class=\"pun\">.</span><span class=\"pln\">reason </span><span class=\"kwd\">IN</span><span class=\"pln\"> </span><span class=\"pun\">(</span><span class=\"str\">'rateLimitExceeded'</span><span class=\"pun\">,</span><span class=\"pln\"> </span><span class=\"str\">'quotaExceeded'</span><span class=\"pun\">)</span></code></pre>\n<p><span>The </span><code dir=\"ltr\" translate=\"no\">REGION_NAME</code><span> part should be replaced with the region name including the </span><code dir=\"ltr\" translate=\"no\">region-</code><span> prefix. 
For example, </span><code dir=\"ltr\" translate=\"no\">region-us</code><span> , </span><code dir=\"ltr\" translate=\"no\">region-asia-south1</code><span>.</span></p>\n<p> </p>\n<p><em>Note</em>: For JOBS_BY_PROJECT, we need <strong>bigquery.jobs.listAll</strong> privilege for the project.</p>\n<p> </p>\n<h2>Solution : </h2>\n<p>1) Increase the Interval to 90 seconds.</p>\n<p>2) Use Streaming API instead.</p>\n<p><a href=\"https://cloud.google.com/bigquery/quotas#streaming_inserts\">https://cloud.google.com/bigquery/quotas#streaming_inserts</a></p>\n<p><a href=\"https://www.striim.com/docs/en/bigquery-writer.html\">https://www.striim.com/docs/en/bigquery-writer.html</a></p>\n<p>3) Contact Google team to increase the limitation of \"Table operations per day\".</p>\n<p> </p>"} {"page_content": "<p> </p>\n<h3>Goal:</h3>\n<p>The goal of this note is to generate AccessToken/ AuthToken using Salesforce REST API URL</p>\n<p> </p>\n<h3>Solution:</h3>\n<p> </p>\n<p>Get an access token using the Salesforce REST API URL like below</p>\n<pre>curl https://login.salesforce.com/services/oauth2/token -d \"grant_type=password\" \\<br>-d \"client_id=&lt;your consumer key&gt;\" -d \"client_secret=&lt;your consumer secret&gt;\" \\<br>-d \"username=&lt;your username&gt;\" -d \"password=<strong>&lt;your password&gt;&lt;security token&gt;</strong>\"</pre>\n<p>eg.,</p>\n<p>Login URL: <a href=\"https://d4w000007ocl6uag-dev-ed.my.salesforce.com/\">https://d4w000007ocl6ug-dev-ed.my.salesforce.com/</a></p>\n<p>username: <a href=\"mailto:dulcie@jjchoosetp.com\">striimuser@choosetp.com</a></p>\n<p>password: secret</p>\n<p>Consumer Key: 7dAZkQ6yXHb3HTGwwDjg</p>\n<p>Consumer Secret: N6fvtkj5TVyM3tAF7J9d</p>\n<p>Security Token: CyDbLCJyXRtWwycMVu</p>\n<p> </p>\n<p>$ curl https://d4w000007ocl6ug-dev-ed.my.salesforce.com/services/oauth2/token -d \"grant_type=password\" -d \"client_id=7dAZkQ6yXHb3HTGwwDjg\" -d \"client_secret=N6fvtkj5TVyM3tAF7J9d\" -d \"username=striimuser@choosetp.com\" -d \"password=<strong>secret</strong>CyDbLCJyXRtWwycMVu\"</p>\n<p> </p>\n<h3><span class=\"wysiwyg-underline\">Output</span></h3>\n<pre>{\"access_token\":\"<strong>00D4W000007OCl6!ARsAQIZw_HVoFuP4R1gjeBAcS6auON3202oFmaQMVqw</strong><strong>LY4k3edn6x<br>VrionvV3ZWg8pEQ</strong>\",\"instance_url\":\"https://d4w000007ocl6ug-dev-ed.my.salesforce.com\",\"id\"<br>:\"https://login.salesforce.com/id/00D4W000007OCl6AU/0054W00000BJMfeQAH\",\"token_type\":<br>\"Bearer\",\"issued_at\":\"1644878623042\",\"signature\":\"luNMqWqht8NMa0fNcWv068tbXjC2eJTYcY3t<br>PLnrE10=\"}</pre>"} {"page_content": "<p> </p>\n<h4><span class=\"wysiwyg-underline\"><strong>Issue Description:</strong></span></h4>\n<p>Following error is seen starting SalesforceReader</p>\n<pre>2022-02-14 18:01:29,456 @S192_168_1_217 @admin.app_salesforce -ERROR <br>com.webaction.runtime.components.Source.start() : <br>JSON exception while receiving access token {JSONObject[\"access_token\"] not found.}</pre>\n<h4><span class=\"wysiwyg-underline\"><strong>Cause: </strong></span></h4>\n<p>securityToken used in SalesForceReader is incorrect</p>\n<p> </p>\n<h4>\n<span class=\"wysiwyg-underline\"><strong>Fix: </strong></span> </h4>\n<p>Replace with the current securityToken or reset the securityToken as shown in the attachment. 
An email is sent to the registered id with the Security token (case-sensitive) </p>\n<p> </p>\n<p>Sample TQL</p>\n<pre>CREATE APPLICATION app_salesforce;<br><br>CREATE OR REPLACE SOURCE src_salesforce USING Global.SalesForceReader ( <br>autoAuthTokenRenewal: true, <br>securityToken: 'update with the current securityToken', <br>adapterName: 'SalesForceReader', <br>pollingInterval: '1 min', <br>Password_encrypted: 'true', <br>Username: 'replace with login name', <br>securityToken_encrypted: 'true', <br>mode: 'Incremental', <br>Password: 'replace with login password', <br>sObject: 'Account', <br>apiEndPoint: 'https://d4w000007ocl6uag-dev-ed.my.salesforce.com', <br>consumerSecret: 'replace with consumerSecret', <br>consumerKey: 'replace with consumerKey', <br>connectionRetryPolicy: 'retryInterval=30, maxRetries=3', <br>consumerSecret_encrypted: 'true' ) <br>OUTPUT TO src_salesforce_out;<br><br>END APPLICATION app_salesforce;</pre>"} {"page_content": "<p> </p>\n<p>Striim supports reading and writing from/to Oracle Autonomous DB with DatabaseReader/DatabaseWriter/IncrementalReader (but not Oracle CDC). Oracle Autonomous DB requires only Mutual TLS (mTLS) Authentication.</p>\n<p>1. What type of jdbc URL does Striim support?<br>(1) supported type:</p>\n<pre> <br>jdbc:oracle:thin:@hostname:ip:sid<br>jdbc:oracle:thin:@hostname:ip/service_name</pre>\n<p> </p>\n<p>(2) not supported types:</p>\n<p>(A) complicated description<br>e.g.,</p>\n<pre><br>jdbc:oracle:thin:@(description=(address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))(connect_data=(service_name=dbname_high.oraclecloud.com))(security=(ssl_server_cert_dn=\"CN=adb.oraclecloud.com,OU=OracleUS,O=Oracle Corporation,L=Redwood City,ST=California,C=US\")))</pre>\n<p><br>(B) easy URL with TNS_ADMIN<br>e.g.,</p>\n<pre><br>jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/Users/test/wallet_dbname</pre>\n<p> </p>\n<p>2. Which jdbc thin driver is required?<br>ojdbc8.jar or later version for Oracle DB 18.3 or later</p>\n<pre>$ java -jar ojdbc8.jar <br>Oracle 18.3.0.0.0 JDBC 4.2 compiled with javac 1.8.0_171 on Tue_Jun_26_11:06:40_PDT_2018<br>#Default Connection Properties Resource<br>#Thu Feb 03 14:12:12 PST 2022</pre>\n<p> </p>\n<p>3. Are Wallet files from Oracle Autonomous DB required?<br>Yes, they are required.<br>- download the wallet zip file and move to Striim installation host<br>- unzip the file, and make sure Striim OS user has access to those files</p>\n<p>4. How to config Striim DBReader/DBWriter?<br>(1) in URL, use the format in 1(1).<br>e.g.,</p>\n<pre><br>jdbc:oracle:thin:@adb.us-ashburn-1.oraclecloud.com:1522/g980299e7246d8e_db202201311433_high.adb.oraclecloud.com</pre>\n<p>(2) config SSLConfig property, with wallet files and passwords:<br>e.g.,<br>- JKS</p>\n<pre> 'javax.net.ssl.trustStore=/opt/striim/wallet/truststore.jks;javax.net.ssl.trustStorePassword=secret123;javax.net.ssl.keyStore=/opt/striim/wallet/keystore.jks;javax.net.ssl.keyStorePassword=secret123;javax.net.ssl.trustStoreType=JKS;javax.net.ssl.keyStoreType=JKS', </pre>\n<p>- P12</p>\n<pre>javax.net.ssl.trustStore=/opt/striim/wallet/ewallet.p12;javax.net.ssl.trustStoreType=pkcs12;javax.net.ssl.trustStorePassword=secret123;javax.net.ssl.keyStore=/opt/striim/wallet/ewallet.p12;javax.net.ssl.keyStorePassword=secret123</pre>\n<p> </p>\n<p>sample tql files are attached.</p>\n<p>5. Can I test the connection with a simple Java code to confirm the connection works from the host to DB?</p>\n<p>Yes. 
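<p>For reference, a minimal standalone test class along these lines could look like the sketch below. This is hypothetical, not the actual template (which is linked next); it assumes the class is invoked as <code>Conn &lt;jdbc-url&gt; &lt;username&gt; &lt;password&gt;</code>, matching the test command shown further down, that ojdbc8.jar is on the classpath, and that the connecting user can query <code>v$version</code>:</p>
<pre>import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical minimal JDBC connection test; args: &lt;jdbc-url&gt; &lt;username&gt; &lt;password&gt;.
// SSL/wallet settings are passed separately as -Djavax.net.ssl.* system properties.
public class Conn {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2]);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select banner from v$version")) {
            while (rs.next()) {
                // Prints the database banner, e.g. "Oracle Database 19c Enterprise Edition ..."
                System.out.println(rs.getString(1));
            }
        }
    }
}</pre>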
You may download the Conn.java template from</p>\n<p><a href=\"https://support.striim.com/hc/en-us/articles/115015682968-How-to-test-Oracle-JDBC-connection-string-\">https://support.striim.com/hc/en-us/articles/115015682968-How-to-test-Oracle-JDBC-connection-string-</a></p>\n<p> and test as following.</p>\n<p>- compile:</p>\n<pre><br>java -cp .:./ojdbc8.jar:$STRIIM/lib/* Conn.java</pre>\n<p>- test</p>\n<pre>$ java -cp .:./ojdbc8.jar:$STRIIM/lib/* -Djavax.net.ssl.trustStore=/opt/striim/wallet/ewallet.p12 -Djavax.net.ssl.trustStoreType=PKCS12 -Djavax.net.ssl.trustStorePassword=secret123 -Djavax.net.ssl.keyStore=/opt/striim/wallet/ewallet.p12 -Djavax.net.ssl.keyStorePassword=secret123 Conn \"jdbc:oracle:thin:@(description=(address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com))(connect_data=(service_name=g980299e7246d8e_db202201311433_high.adb.oraclecloud.com))))\" admin password<br><br>....<br>....<br>AArray = [B@29d80d2b<br>AArray = [B@7486b455<br>Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production</pre>\n<p><br>6. Common Errors:</p>\n<p>(1) ErrorCode : 17002;SQLCode : 08006;SQL Message : IO Error: The Network Adapter could not establish the connection.</p>\n<p>potential causes:</p>\n<ul>\n<li>wrong ojdbc8.jar (the one for 18.3 works, default one for 12.2 fails)</li>\n<li>wrong hostname</li>\n<li>wrong oracle listener port</li>\n<li>wrong wallet password</li>\n<li>wrong wallet file (1) does not exist; (2) no permission or no access</li>\n</ul>\n<p> </p>\n<p>(2) ErrorCode : 12514;SQLCode : 08006;SQL Message : Listener refused the connection with the following error: ORA-12514, TNS:listener does not currently know of service requested in connect descriptor</p>\n<p>potential cause:</p>\n<ul>\n<li>wrong oracle db service name</li>\n</ul>\n<p> </p>\n<p>(3) ErrorCode : 1017;SQLCode : 72000;SQL Message : ORA-01017: invalid username/password;</p>\n<p>potential cause:</p>\n<ul>\n<li>wrong oracle db username/password</li>\n</ul>\n<p> </p>"} {"page_content": "<p>The output data type for sources that use change data capture readers is WAEvent (https://www.striim.com/docs/en/waevent-contents-for-change-data.html). 
This article presents examples of extracting, modifying, and filtering based on WAEvent, in CQ.</p>\n<p>OracleReader will be sued as example, while other sources will be similar.</p>\n<p>Source Table:<br>Create table USER1.S4 (col1 number primary key, col2 varchar2(100), varchar3 date);</p>\n<p>Sample DML: updae s4 set b=5 where a=1;</p>\n<pre>WAEvent{<br>data: [\"1\",\"5\"]<br>metadata: {\"RbaSqn\":\"1203\",\"AuditSessionId\":\"12425889\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"605129607\",\"SQLRedoLength\":61,\"BytesProcessed\":null,\"OperationCode\":3,\"ParentTxnID\":\"7.10.18031\",\"SRC_CON_NAME\":null,\"SessionInfo\":\"UNKNOWN\",\"RecordSetID\":\" 0x0004b3.0000bb23.0010 \",\"DBCommitTimestamp\":1636053206000,\"COMMITSCN\":605129608,\"SEQUENCE\":\"1\",\"Rollback\":\"0\",\"STARTSCN\":\"605129607\",\"Status\":\"0\",\"SegmentName\":\"S4\",\"OperationName\":\"UPDATE\",\"TimeStamp\":1636078406000,\"RbaBlk\":47907,\"SSN\":\"0\",\"TxnUserID\":\"USER1\",\"SegmentType\":\"TABLE\",\"TableName\":\"USER1.S4\",\"Serial\":\"1061\",\"TxnID\":\"7.10.18031\",\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1636078406000,\"OperationType\":\"DML\",\"ROWID\":\"AAAZRoAAEAAAAurAAA\",\"DBTimeStamp\":1636053206000,\"TransactionName\":null,\"SCN\":\"605129607\",\"Session\":\"10\"}<br>userdata: null<br>before: [\"1\",null]<br>dataPresenceBitMap: \"Aw==\"<br>beforePresenceBitMap: \"AQ==\"<br>typeUUID: {\"uuidstring\":\"01ec4307-c007-c711-bc26-6ad15785e819\"}<br>};</pre>\n<p> </p>\n<p>1. Extraction (the output will be user type, and no longer WAEvent type):<br>(1) get part of waevent, e.g., data, and metadata</p>\n<pre><br>select data, metadata from cdc_stream;</pre>\n<p>(2) get COL1 and table name<br>(A)) based on index</p>\n<pre><br>select x.data[0] col1, meta(x,'TableName').toString() tableName from cdc_stream x;</pre>\n<p>(B)) based on column name</p>\n<pre><br>select getdata(x, \"COL1\") col1, meta(x,'TableName').toString() tableName from cdc_stream x;</pre>\n<p>2. Modify<br>(1) convert all the operation to insert<br>(A)) use Strii function</p>\n<pre><br>select ChangeOperationToInsert(x) from cdc_stream x;</pre>\n<p>(B)) use Java put function to change the metadata field value (from UPDATE to INSERT)</p>\n<pre><br>CREATE OR REPLACE CQ update_cq<br>insert into WAYBILLPRICELINEUPDATES_STREAM<br>select <br>data as data,<br>-- metadata.put(\"OperationName\",\"INSERT\").toString() as metadata,<br>CASE WHEN metadata.put(\"OperationName\",\"INSERT\").toString()!=\"NULL\"<br>THEN metadata <br>ELSE metadata<br>END as metadata,<br>USERDATA as userdata,<br>NULL as before,<br>dataPresenceBitMap as dataPresenceBitMap,<br>NULL as beforePresenceBitMap,<br>typeUUID as typeUUID<br>from cdc_stream<br>WHERE META(cdc_stream, \"OperationName\").toString()=\"UPDATE\";</pre>\n<p>(2) change data value<br>(A) with 'replacedata'</p>\n<pre><br>CREATE OR REPLACE CQ ora_replace_cq <br>INSERT INTO ora_replace_cq_stream<br>SELECT replacedata(o,'b','replaced_value') FROM cdc_stream o;<br><br></pre>\n<p>(B) with 'modify'</p>\n<pre><br>CREATE OR REPLACE CQ ora_mask_cq <br>INSERT INTO ora_mask_cq_stream<br>SELECT * FROM cdc_stream<br>modify (data[1] = \"modified_data\");<br><br></pre>\n<p>(C) change DATE value to 1901-01-01 for the values prior to year 1900.</p>\n<pre><br>CREATE OR REPLACE CQ ora1_cq <br>INSERT INTO admin.ora1_cq_stream <br>SELECT CASE WHEN to_string(to_date(GETDATA(x,\"C\")),'yyyy') &lt; '1900' <br>THEN replacedata(x,\"C\",to_date('1901-01-01','yyyy-MM-dd')) <br>ELSE x END <br>FROM cdc_stream x; </pre>\n<p><br>3. 
Filter</p>\n<p>(1) get (col1=1 and tablename=USER1.S1) OR (col1=2 and tablename=USER1.S2)</p>\n<p> </p>\n<pre>CREATE OR REPLACE CQ ora_102_cq <br>INSERT INTO ora_102_cq_stream<br>SELECT * FROM cdc_stream o<br>where <br>(to_double(o.data[0]) =1<br>and <br>META(o,\"TableName\").toString() == 'USER1.S1'<br>)<br>or<br>(to_double(o.data[0]) =2<br>and <br>META(o,\"TableName\").toString() == 'USER1.S2'<br>);<br><br></pre>\n<p>(2) filter based on data field value:</p>\n<pre><br>CREATE OR REPLACE CQ ora1_cq <br>INSERT INTO admin.ora1_cq_stream <br>SELECT * FROM cdc_stream o <br>where GETDATA (o, \"B\") is not null ;</pre>\n<p>(3) to get only one table:</p>\n<pre><br>CREATE OR REPLACE CQ ora1_cq <br>INSERT INTO admin.ora1_cq_stream <br>SELECT * FROM cdc_stream o <br>where meta (o, \"TableName\").toString() = 'USER1.T4' ;</pre>"} {"page_content": "<p>Question:</p>\n<p>How to know the get the template for a source and target adapter and create an application using RESTAPI</p>\n<p> </p>\n<p>Answer</p>\n<p>Step 1: </p>\n<p>1. Get the authorization token.</p>\n<p><a href=\"https://www.striim.com/docs/en/getting-a-rest-api-authentication-token.html\">https://www.striim.com/docs/en/getting-a-rest-api-authentication-token.html</a></p>\n<p>eg:</p>\n<pre class=\"programlisting hljs csharp\">curl -X POST -d<span class=\"hljs-string\">'username=admin&amp;password=******'</span> http:<span class=\"hljs-comment\">//localhost:9080/security/authenticate</span></pre>\n<p> </p>\n<p>2. Use the following syntax to generate template definition. Below example is for MongoDBReader to S3Writer</p>\n<pre><em>curl --location --request POST 'http://localhost:9080/api/v2/applications/templates/definition' \\</em><br><em>--header 'authorization: STRIIM-TOKEN 01ec72ad-957d-a1b1-abd9-b6df740b4c40' \\</em><br><em>--header 'content-type: application/json' \\</em><br><em>--data-raw '{</em><br><em>\"sourceAdapter\" : \"MongoDBReader\",</em><br><em>\"parser\" : null,</em><br><em>\"targetAdapter\" : \"S3Writer\",</em><br><em>\"formatter\" : null</em><br><em>}'</em></pre>\n<p><em>Output will be like the following </em></p>\n<pre><span class=\"wysiwyg-color-green120\"><em>{<br>\"templateId\": null,<br>\"sourceParameters\": [<br>{<br>\"name\": \"connectionUrl\",<br>\"parameterType\": \"STRING\",<br>\"required\": true<br>},<br>{<br>\"name\": \"collections\",<br>\"parameterType\": \"STRING\",<br>\"required\": true<br>},<br>{<br>\"name\": \"excludeCollections\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"mode\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"userName\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"password\",<br>\"parameterType\": \"SECURESTRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"authType\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"authDB\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"readPreference\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"startTimestamp\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"sslEnabled\",<br>\"parameterType\": \"BOOL\",<br>\"required\": false<br>},<br>{<br>\"name\": \"connectionRetryPolicy\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>}<br>],<br>\"targetParameters\": [<br>{<br>\"name\": \"accesskeyid\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": 
\"secretaccesskey\",<br>\"parameterType\": \"SECURESTRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"bucketname\",<br>\"parameterType\": \"STRING\",<br>\"required\": true<br>},<br>{<br>\"name\": \"objectname\",<br>\"parameterType\": \"STRING\",<br>\"required\": true<br>},<br>{<br>\"name\": \"foldername\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"uploadpolicy\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"compressiontype\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"clientconfiguration\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"ParallelThreads\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"PartitionKey\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"objecttags\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"region\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"rolloveronddl\",<br>\"parameterType\": \"BOOL\",<br>\"required\": false<br>}<br>],<br>\"parserParameters\": [],<br>\"formatterParameters\": [<br>{<br>\"name\": \"jsonobjectdelimiter\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"jsonMemberDelimiter\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"EventsAsArrayOfJsonObjects\",<br>\"parameterType\": \"BOOL\",<br>\"required\": false<br>},<br>{<br>\"name\": \"charset\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>},<br>{<br>\"name\": \"members\",<br>\"parameterType\": \"STRING\",<br>\"required\": false<br>}<br>]<br>}</em></span></pre>\n<p> </p>\n<p><span class=\"wysiwyg-color-black\">3. Create the application properties using the above template. If required is True, then property should be present for the app creation. 
Note the app creation should include Template defintion</span></p>\n<p> </p>\n<div>\n<pre><span class=\"wysiwyg-color-black\">{</span><br><br><span class=\"wysiwyg-color-black\"> \"templateDefinition\" : {</span><br><span class=\"wysiwyg-color-black\"> \"sourceAdapter\" : \"MongoDBReader\",</span><br><span class=\"wysiwyg-color-black\"> \"parser\": null,</span><br><span class=\"wysiwyg-color-black\"> \"targetAdapter\" : \"S3writer\",</span><br><span class=\"wysiwyg-color-black\"> \"formatter\": \"JSONFormatter\"</span><br><span class=\"wysiwyg-color-black\">},</span><br><span class=\"wysiwyg-color-black\">\"applicationName\": \"admin.customerDB_API\",</span><br><span class=\"wysiwyg-color-black\">\"sourceParameters\": {</span><br><span class=\"wysiwyg-color-black\">\"Password_encrypted\":\"false\",</span><br><span class=\"wysiwyg-color-black\">\"Password\":\"w@ct10n\",</span><br><span class=\"wysiwyg-color-black\">\"authDB\":\"admin\",</span><br><span class=\"wysiwyg-color-black\">\"sslEnabled\": false,</span><br><span class=\"wysiwyg-color-black\">\"Username\":\"himachal\",</span><br><span class=\"wysiwyg-color-black\">\"collections\":\"customerDB.$\",</span><br><span class=\"wysiwyg-color-black\">\"connectionRetryPolicy\":\"retryInterval=30, maxRetries=3\", \"ConnectionURL\":\"localhost:27017\",</span><br><span class=\"wysiwyg-color-black\">\"mode\":\"InitialLoad\",</span><br><span class=\"wysiwyg-color-black\">\"readPreference\":\"primaryPreferred\",</span><br><span class=\"wysiwyg-color-black\">\"authType\":\"Default\"</span><br><span class=\"wysiwyg-color-black\">},</span><br><br><span class=\"wysiwyg-color-black\">\"targetParameters\": {</span><br><span class=\"wysiwyg-color-black\">\"objectname\": \"%@metadata(CollectionName)%.json\",</span><br><span class=\"wysiwyg-color-black\">\"bucketname\":\"ap140783-striim-dev\",</span><br><span class=\"wysiwyg-color-black\">\"uploadpolicy\":\"eventcount:10000,interval:5m\",</span><br><span class=\"wysiwyg-color-black\">\"handler\":\"com.webaction.proc.JSONFormatter\",</span><br><span class=\"wysiwyg-color-black\">\"jsonMemberDelimiter\":\"\\n\",</span><br><span class=\"wysiwyg-color-black\">\"members\":\"data\",</span><br><span class=\"wysiwyg-color-black\">\"EventsAsArrayOfJsonObjects\":\"true\",</span><br><span class=\"wysiwyg-color-black\">\"formatterName\":\"JSONFormatter\",</span><br><span class=\"wysiwyg-color-black\">\"jsonobjectdelimiter\":\"\\n\"</span><br><span class=\"wysiwyg-color-black\">}</span><br><span class=\"wysiwyg-color-black\">}</span></pre>\n<p>Output would be like the following</p>\n<pre><span class=\"wysiwyg-color-green110\">{</span><br><span class=\"wysiwyg-color-green110\">\"namespace\": \"admin\",</span><br><span class=\"wysiwyg-color-green110\">\"name\": \"customerDB_API\",</span><br><span class=\"wysiwyg-color-green110\">\"status\": \"CREATED\",</span><br><span class=\"wysiwyg-color-green110\">\"links\": [</span><br><span class=\"wysiwyg-color-green110\">{</span><br><span class=\"wysiwyg-color-green110\">\"rel\": \"self\",</span><br><span class=\"wysiwyg-color-green110\">\"allow\": [</span><br><span class=\"wysiwyg-color-green110\">\"GET\",</span><br><span class=\"wysiwyg-color-green110\">\"DELETE\"</span><br><span class=\"wysiwyg-color-green110\">],</span><br><span class=\"wysiwyg-color-green110\">\"href\": \"/api/v2/applications/admin.customerDB_API\"</span><br><span class=\"wysiwyg-color-green110\">},</span><br><span class=\"wysiwyg-color-green110\">{</span><br><span class=\"wysiwyg-color-green110\">\"rel\": 
\"deployment\",</span><br><span class=\"wysiwyg-color-green110\">\"allow\": [</span><br><span class=\"wysiwyg-color-green110\">\"POST\",</span><br><span class=\"wysiwyg-color-green110\">\"DELETE\"</span><br><span class=\"wysiwyg-color-green110\">],</span><br><span class=\"wysiwyg-color-green110\">\"href\": \"/api/v2/applications/admin.customerDB_API/deployment\"</span><br><span class=\"wysiwyg-color-green110\">},</span><br><span class=\"wysiwyg-color-green110\">{</span><br><span class=\"wysiwyg-color-green110\">\"rel\": \"sprint\",</span><br><span class=\"wysiwyg-color-green110\">\"allow\": [</span><br><span class=\"wysiwyg-color-green110\">\"POST\",</span><br><span class=\"wysiwyg-color-green110\">\"DELETE\"</span><br><span class=\"wysiwyg-color-green110\">],</span><br><span class=\"wysiwyg-color-green110\">\"href\": \"/api/v2/applications/admin.customerDB_API/sprint\"</span><br><span class=\"wysiwyg-color-green110\">}</span><br><span class=\"wysiwyg-color-green110\">]</span><br><span class=\"wysiwyg-color-green110\">}</span></pre>\n</div>"} {"page_content": "<p><span>1. Problem:</span><br><span>spanner connection failed:</span></p>\n<pre>com.webaction.exception.TableNotFoundException: Exception occurred while processing table com.webaction.exception.TableNotFoundException: Could not fetch Table Metadata for table {scott.CUSTOMER}: {com.google.cloud.spanner.SpannerException: UNAVAILABLE: com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception}</pre>\n<p><span>2. Troubleshooting:</span><br><span>with a standalone java code (attached), it will test spanner db connection.</span></p>\n<p><span>syntax: java -jar SpannerConnectionUtil.jar &lt;projectId&gt; &lt;instanceId&gt; &lt;ServiceAccountKeyPath&gt; &lt;dbName&gt;<br><br>expected example:<br></span></p>\n<pre>$ java -jar ./SpannerConnectionUtil.jar striim-id db123 /Users/xxx/u01/scripts/Striim/tql/cloud/gcp/striim-support/striim-support-286429beb74d.json db1 <br>Fetched current date: 2021-07-28<br>Connection was successfully established to database {db1}</pre>\n<p>in problem env, it was shown:</p>\n<pre>[striim@rofsii801a ~]$ java -jar /opt/striim/bin/SpannerConnectionUtil.jar myid mydb /opt/striim/creds/SBKey.json db1<br>Jul 28, 2021 2:17:07 PM io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader load<br>INFO: /tmp/libio_grpc_netty_shaded_netty_transport_native_epoll_x86_644546169618518636265.so exists but cannot be executed even when execute permissions set; check volume for \"noexec\" flag; use -Dio.grpc.netty.shaded.io.netty.native.workdir=[path] to set native working directory separately.<br>error:</pre>\n<p><span>It is not allowed to execute a file in /tmp.</span><br><br><span>3. Solution:</span><br><span>Removed the exec restriction from /tmp on STRIIM servers.</span></p>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p>rpm Installation fails with</p>\n<pre>[root@localhost oracle]# rpm -ivh striim-node-4.0.4.3-Linux.rpm <br>Verifying... ################################# [100%]<br>Preparing... 
################################# [100%]<br>Updating / installing...<br>1:striim-node-4.0.4.3-1 ################################# [100%]<br>error: unpacking of archive failed on file /etc/init/striim-node.conf;61d8436b: cpio: Digest mismatch<br>error: striim-node-4.0.4.3-1.x86_64: install failed</pre>\n<p>If --nofiledigest option is used, installation will succeed.</p>\n<p><strong>[root@localhost oracle]# rpm -ivh --nofiledigest striim-node-4.0.4.3-Linux.rpm</strong><br>Verifying... ################################# [100%]<br>Preparing... ################################# [100%]<br>package striim-node-4.0.4.3-1.x86_64 is already installed<br><br>But the sksconfig.sh/ start of striim server fails with below error</p>\n<pre>bin/sksConfig.sh <br>Please enter the KeyStore password: ******<br>Creating the KeyStore.<br>java.security.KeyStoreException: JCEKS not found<br>at java.security.KeyStore.getInstance(KeyStore.java:851)<br>at com.webaction.security.KeyStoreImpl.createKeyStore(KeyStoreImpl.java:109)<br>at com.webaction.runtime.JKSOperations.create(JKSOperations.java:33)<br>at com.webaction.runtime.GenerateServerConfig.resolveKeyStore(GenerateServerConfig.java:137)<br>at com.webaction.runtime.GenerateServerConfig.&lt;init&gt;(GenerateServerConfig.java:65)<br>at com.webaction.runtime.GenerateServerConfig.main(GenerateServerConfig.java:476)<br>Caused by: java.security.NoSuchAlgorithmException: JCEKS KeyStore not available<br>at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)<br>at java.security.Security.getImpl(Security.java:725)<br>at java.security.KeyStore.getInstance(KeyStore.java:848)<br>... 5 more</pre>\n<p><br><strong><span class=\"wysiwyg-underline\">Cause</span>:</strong></p>\n<p><br>FIPS is enabled. <br>To check if FIPS is enabled use either of the following command</p>\n<pre><br>[root@localhost oracle]# fips-mode-setup --check<br><br>FIPS mode is enabled.<br>[root@localhost oracle]#<br><br>[root@localhost oracle]# cat /proc/sys/crypto/fips_enabled<br>1<br><br>0 is disabled and 1 is enabled.</pre>\n<p><strong><span class=\"wysiwyg-underline\">Solution</span>:</strong></p>\n<p><br>Disable the FIPS on the server and reboot the operating system.</p>\n<p>fips-mode-setup --disable</p>\n<p>and rerun sksconfig.sh and restart the striim server.</p>\n<p>Note: If you disable fips-mode and run the rpm install of striim server(<span>rpm -ivh striim-node-</span><span class=\"phrase\">4.0.4</span><span>-Linux.rpm)</span>. 
Then --nofiledigest option is not required.</p>\n<p> </p>\n<p><strong>Note:</strong> Enabaling FIPS=1 kernel option for some versions will disable required Cryptographic algorithms required by RPM and Striim used to verify software package integrity and to secure store passwords or encrypt data traffic over networks.</p>\n<p>It is advised to consult with your CISO or Security Architecture teams to ensure the OS is utilizing cryptographic standards suitable for every day business use and meeting Striim's minimum Cryptographic requirements.</p>\n<p> </p>\n<p><span class=\"wysiwyg-underline\"><strong>Reference:</strong></span></p>\n<p>For SysAdmins look for more information on FIPS and options on CentOS / RHEL 7 or 8 please see:<br>RHEL 7 :<span> </span><a href=\"https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/chap-federal_standards_and_regulations\" rel=\"nofollow noreferrer\">https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/chap-federal_standards_and_regulations</a></p>\n<p>RHEL 8 :<span> </span><a href=\"https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening\" rel=\"nofollow noreferrer\">https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening</a></p>\n<p> </p>"} {"page_content": "<h3>Environment:</h3>\n<p>Windows with Striim running as a Service</p>\n<h3>Issue Description:</h3>\n<p>Server logs shows one or more of the following errors and it is seen while configuring apps</p>\n<pre><span class=\"\">2021-09-07 22:38:02,402 @S10_253_202_10 @ -ERROR com.webaction.ser.EncryptDecrypt.initializeCiphers()<br>Failed to create cipher for AES/CBC/PKCS5Padding </span><span>java.lang.NullPointerException <br></span><span>at com.webaction.metaRepository.MetadataRepository.getSecurityInfoByUUID(MetadataRepository.java)</span></pre>\n<p>or</p>\n<pre><span class=\"\">2021-09-08 06:34:28,400 @S192_168_1_6 @ -ERROR com.webaction.web.api.MonitoringAPI.getMonitoringStatFromStore() : Failed to get Monitoring Data </span><span>com.webaction.runtime.monitor.<br>MonitoringServerException: Monitoring server is not running</span></pre>\n<p>or</p>\n<pre>Unable to encrypt using salt. Reason - Failed to create cipher for encryption.</pre>\n<p> </p>\n<h3>Cause:</h3>\n<p><span>The issue was essentially the persist property was not set when starting as a Windows Service. </span></p>\n<p><span>This issue doesn't happen when striim is started as a process since server.bat sets the persist property=true by default. </span></p>\n<p> </p>\n<h3>Fix:</h3>\n<p><span>The fix for this issue would be available in Striim version 4.0.5</span></p>\n<p> </p>\n<h3><span>Workaround:</span></h3>\n<p> </p>\n<p>For any striim version &lt; 4.0.5, the following workaround resolves the issue.</p>\n<p>1. Run following from existing striim version in Tungsten console</p>\n<pre>stop application Global.MonitoringSourceApp;<br>stop application Global.MonitoringProcessApp;<br>undeploy application Global.MonitoringSourceApp;<br>undeploy application Global.MonitoringProcessApp;<br>drop application Global.MonitoringSourceApp cascade;<br>drop application Global.MonitoringProcessApp cascade;</pre>\n<p>2. Stop striim &amp; derby windows service<br><br>3. 
Uninstall the services from command prompt (powershell is preferred)</p>\n<pre>PS:&gt; &lt;striim home&gt;\\conf\\windowsService\\yajsw_server\\bat&gt;uninstallService.bat<br>PS:&gt; &lt;striim home&gt;\\conf\\windowsService\\yajsw_derby\\bat&gt;uninstallService.bat</pre>\n<p><br>4. Delete the folders yajsw_derby &amp; yajsw_server from &lt;striim home&gt;\\conf\\windowsService path</p>\n<p><br>These two will be created again when running the setupWindowsService.ps1<br><br>5. Delete the data in &lt;striim home&gt;\\elasticsearch if present.<br><br>6. Edit &lt;striim home&gt;\\conf\\windowsService\\wrapper.conf.server and add following lines at the end</p>\n<pre>wrapper.java.additional.11 = -Dcom.webaction.config.persist=True<br>wrapper.java.additional.12 = -Dcom.webaction.config.enable-monitor=True</pre>\n<p>7. Setup the service again (powershell is preferred)</p>\n<pre>PS:&gt; &lt;striim home&gt;\\conf\\windowsService&gt; .\\setupWindowsService.ps1</pre>\n<p><br>8. Start the derby &amp; striim service</p>"} {"page_content": "<p>Recently, a series of security vulnerabilities have been identified on Apache Log4j2.</p>\n<p>CVE-2021-44228</p>\n<p>CVE-2021-45046</p>\n<p>CVE-2021-45105</p>\n<p>CVE-2021-44832</p>\n<p>This document details the Striim Versions affected, as well as recommended actions.</p>\n<p> </p>\n<p data-renderer-start-pos=\"3\"><strong data-renderer-mark=\"true\">Impacted functionality:-</strong></p>\n<ol>\n<li data-renderer-start-pos=\"32\">Striim Version 4.0.3 and later -\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li data-renderer-start-pos=\"69\">Apache Log4j2 JNDI features do not protect against attacker controlled LDAP and other JNDI related endpoints.</li>\n<li data-renderer-start-pos=\"182\">Specifically for Striim there are some features from log4j2.14.1 which are used namely the “Lookups” to inject extra information in log stream when the logs rollover.</li>\n<li data-renderer-start-pos=\"355\">This vulnerability is fixed in log4j version 2.16.0</li>\n</ul>\n</li>\n</ul>\n</li>\n<li data-renderer-start-pos=\"412\">Striim between 3.7.5 and 3.10.3.7 -\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li data-renderer-start-pos=\"446\">Striim uses log4j 1.2.17 in versions 3.10.3.7 and earlier.</li>\n<li data-renderer-start-pos=\"509\">One known impact is with the JMS Appender via log4j-jms. </li>\n<li data-renderer-start-pos=\"574\">The JMS appender is not used in the Striim codebase hence there is no known impact in versions 3.10.3.7 and earlier versions of Striim.</li>\n<li data-renderer-start-pos=\"713\">For releases between 3.10.x, there is a log4j 2.7 dependency in core and api, <span>which is required by one of the embedded components. This affects</span> Striim.</li>\n</ul>\n</li>\n</ul>\n</li>\n<li>Striim version 3.10.3.8\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>\n<span class=\"s1\">log4j-core-2.7.jar and </span><span class=\"s1\">log4j-api-2.7.jar are replaced by </span><span class=\"s1\">log4j-core-2.16.0.jar and </span><span class=\"s1\">log4j-api-2.16.0.jar, which handle the CVE-2021-44228, but not CVE-2021-45046.</span>\n</li>\n</ul>\n</li>\n</ul>\n</li>\n<li>CVE-2021-45105 Alert:\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>Striim does use the Context Map (MDC) and plugin values, but we do not use it as a lookup in our shipped log4j configuration files. Thus, the potential issue here does not affect us. i.e, it is fine to use log4j version 2.16.0 (no need to replace them with log4j version 2.17.0). 
</li>\n<li data-renderer-start-pos=\"1904\">The vulnerability is detected in the sub-system of log4j2 which causes uncontrolled recursion from self-referential lookups. Striim does not use self referential lookups in code or configuration.</li>\n<li data-renderer-start-pos=\"2103\">Recommendation for customers using log4j 2.16.0 is to check configuration files (log4j.server/console/agent.properties) for any customizations to the default LayoutPattern, LayoutHeader that Striim ships with and remove them.</li>\n<li data-renderer-start-pos=\"2332\"><strong data-renderer-mark=\"true\">E.g of LayoutHeader , LayoutPattern in default Striim log4j2 configs :- </strong></li>\n<li data-renderer-start-pos=\"2407\">This is a lookup but it’s not affected.</li>\n<li data-renderer-start-pos=\"2449\">appender.ServerFileAppender.layout.header=${StriimHeader:logHeader}</li>\n<li data-renderer-start-pos=\"2518\">The following @%X{ServerToken}, @%X{AppName} are MDC hash values but are not lookups.</li>\n<li data-renderer-start-pos=\"2607\">appender.ServerFileAppender.layout.pattern=%d @%X{ServerToken} @%X{AppName} -%p %t %C.%M (%F:%L) %m%n</li>\n</ul>\n</li>\n</ul>\n</li>\n<li>CVE-2021-44832 Alert (fixed in 2.17.1 jars):\n<ol>\n</ol>\n<ul>\n<li>This does not affect Striim, but it will be considered in future patchset.</li>\n</ul>\n</li>\n</ol>\n\n\n<p data-renderer-start-pos=\"625\"><strong data-renderer-mark=\"true\">Conclusion/Resolution :</strong></p>\n<ol>\n<li data-renderer-start-pos=\"654\">For Striim version 4.0.3 and later - Use log4j 2.16.0 or up (current latest one is 2.17.1)</li>\n</ol>\n<p class=\"wysiwyg-indent4\"><span>Steps to follow - </span></p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li data-renderer-start-pos=\"883\">Download log4j 2.16.0 or up (current latest one is 2.17.1) jars from: <a href=\"https://search.maven.org/search?q=org.apache.logging.log4j\" target=\"_self\">Maven Central </a> or <a href=\"https://logging.apache.org/log4j/2.x/download.html\" target=\"_self\">Apache</a>\n</li>\n<li data-renderer-start-pos=\"883\">Follow regular procedure to safely stopping the Striim server.</li>\n<li data-renderer-start-pos=\"966\">Replace, in each node and agent, &lt;Striim_home&gt;/lib/log4j*2.14.1*.jar with following 3 downloaded jars above:\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li data-renderer-start-pos=\"966\">log4j-api-2.17.1.jar</li>\n<li data-renderer-start-pos=\"966\">log4j-jcl-2.17.1.jar</li>\n<li data-renderer-start-pos=\"966\">log4j-core-2.17.1.jar</li>\n</ul>\n</li>\n</ul>\n</li>\n<li><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">Start the server/agent.</span></li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n<p class=\"wysiwyg-indent1\">2. 
<span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">For Striim versions between 3.7.5 and 3.10.3.8 - </span></p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>\n<span class=\"s1\">For log4j-1.2.17.jar, it is</span><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\"> not believed to be impacted, and please do NOT replace this jar.</span>\n</li>\n<li>\n<span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">For </span><span class=\"s1\">log4j-core-2.7.jar and </span><span class=\"s1\">log4j-api-2.7.jar (or </span><span class=\"s1\">og4j-core-2.16.0.jar and </span><span class=\"s1\">log4j-api-2.16.0.jar in version 3.10.3.8)</span><span class=\"s1\">, please replace them with related 2.17.1 jars, in each node/agent:</span>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>log4j-api-2.17.1.jar</li>\n<li>log4j-core-2.17.1.jar</li>\n</ul>\n</li>\n</ul>\n</li>\n<li><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">Start the server/agent.</span></li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n<p class=\"wysiwyg-indent1\">3. Patches</p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">3.10.3.8A includes 2.17.0 jars.</span></li>\n<li><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\"><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">4.0.4.3A includes 2.17.1 jars (log4j-1.2.17.jar is removed). <br></span></span></li>\n</ul>\n</li>\n</ul>\n</li>\n</ul>\n<p data-renderer-start-pos=\"625\"><strong data-renderer-mark=\"true\">Additional :</strong></p>\n<ol>\n<li data-renderer-start-pos=\"654\">For bundled Kafka (./Kafka/libs/log4j-1.2.17.jar), it has no impact if this Kafka is not started and used. Starting in version 4.1.0.2 and later, the log4j jars are upgraded to version 2.17.2.</li>\n</ol>\n<p class=\"wysiwyg-indent4\"> </p>\n<p data-renderer-start-pos=\"1179\"> </p>\n<p data-renderer-start-pos=\"1269\"><strong>Reference/Sources :</strong></p>\n<ul>\n<li data-renderer-start-pos=\"1269\"><span data-inline-card=\"true\" data-card-url=\"https://logging.apache.org/log4j/2.x/security.html\"><span class=\"loader-wrapper\"><a class=\"sc-jbKcbu fAfaTz\" tabindex=\"0\" href=\"https://logging.apache.org/log4j/2.x/security.html\" target=\"_self\" rel=\"undefined\" data-testid=\"inline-card-resolved-view\">Log4j – Apache Log4j Security Vulnerabilities</a></span></span></li>\n</ul>"} {"page_content": "<h4><span class=\"wysiwyg-font-size-large\"><strong><span class=\"wysiwyg-color-black\">Problem:</span></strong></span></h4>\n<p>App with OracleReader hits following error. 
Restarting the app normally works.</p>\n<pre>2021-12-07 13:26:21,211 -ERROR com.webaction.cdcProcess.layer.txncache.TxnCacheLayer.hasNext() : <br><strong>Unable to enrich the following partial WAEvent</strong> : <br>{\"_id\":null,\"OperationName\":\"<strong>UNSUPPORTED</strong>\",...}}<br>at com.striim.cdcProcess.partialWAEventProcessor.dataFetcher.PartialWAEventDataFetcher.getEnrichedWAEvent(PartialWAEventDataFetcher.java:33)<br>at com.striim.cdcProcess.partialWAEventProcessor.PartialWAEventUnsupportedHandler.getEnrichedWAEvent(PartialWAEventUnsupportedHandler.java:19)<br>at com.striim.cdcProcess.partialWAEventProcessor.PartialWAEventProcessor.getEnrichedWAEvent(PartialWAEventProcessor.java:38)<br>at com.striim.alm.txncache.OracleTxnCacheLayer.<strong>enrichOperation</strong>(OracleTxnCacheLayer.java:99)<br>at com.webaction.cdcProcess.layer.txncache.TxnCacheLayer.hasNext(TxnCacheLayer.java:132)<br>at com.webaction.cdcProcess.Layer.hasNext(Layer.java:50)<br>at com.striim.alm.typeHandler.TypeHandlerLayer.hasNext(TypeHandlerLayer.java:43)<br>at com.webaction.proc.<strong>OracleALM</strong>_1_0.receiveImpl(OracleALM_1_0.java:171)<br>at com.webaction.proc.OracleReader_1_0.receiveImpl(OracleReader_1_0.java:322)<br>at com.webaction.proc.BaseProcess.receive(BaseProcess.java:169)<br>at com.webaction.proc.SourceProcess.receive(SourceProcess.java:117)<br>at com.webaction.runtime.components.Source.run(Source.java:155)<br>at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)<br>at java.util.concurrent.FutureTask.run(FutureTask.java:266)<br>at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)<br>at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)<br>at java.lang.Thread.run(Thread.java:748)</pre>\n<h4>\n<br><strong><span class=\"wysiwyg-font-size-large\">Cause:</span></strong>\n</h4>\n<p>This is Logminer related problem (Oracle <span>Doc ID 1228844.1) </span>: when starting from certain SCN, the record output is UNSUPPORTED, while if starting from a different SCN, logminer content output becomes valid (such as INSERT/UPDATE/DELETE).<br><br><span class=\"wysiwyg-font-size-large\"><strong>Solution:</strong></span></p>\n<p>To bypass the logminer limitation Striim retries the read when hitting UNSUPPORTED operation which in most cases have resolved the issue. This change is in Striim versions 4.0.3 and higher.</p>\n<p>If the source oracle DB is prior to 19c, a workaround is to use classic mode with hidden property (_h_useCalssic) in oracleReader. 
e.g.,</p>\n<pre><br>CREATE OR REPLACE SOURCE ORC_TEST USING Global.OracleReader ( <br>TransactionBufferDiskLocation: '.striim/LargeBuffer', <br><strong>_h_useClassic: true,</strong><br>adapterName: 'OracleReader', <br>OutboundServerProcessName: 'WebActionXStream', <br>DDLCaptureMode: 'All', <br>Compression: false, <br>ReaderType: 'LogMiner', <br>connectionRetryPolicy: 'timeOut=30, retryInterval=30, maxRetries=3', <br>Password_encrypted: 'true', <br>SupportPDB: false, <br>QuiesceMarkerTable: 'QUIESCEMARKER', <br>QueueSize: 4096, <br>CommittedTransactions: true, <br>XstreamTimeOut: 600, <br>DictionaryMode: 'OnlineCatalog', <br>Username: 'striim', <br>Tables: 'USER1.TABLE1', <br>TransactionBufferType: 'Memory', <br>FetchSize: 10000, <br>ConnectionURL: 'jdbc:oracle:thin:@192.168.00.1:1529:orcl', <br>Password: 'AICvBi+6fcOI+W+xqAYs5A==', <br>TransactionBufferSpilloverSize: '10MB', <br>FilterTransactionBoundaries: true, <br>SendBeforeImage: true ) <br>OUTPUT TO orc_app_demo;</pre>"} {"page_content": "<h2><strong>Question:</strong></h2>\n<p>I have oracle cdc to oracle setup, and want to mask one column with MD5 hash value. How to do that?</p>\n<p> </p>\n<h2><strong>Answer:</strong></h2>\n<p>Following is an example.</p>\n<p><span>1. Source and target table DDLs: (make sure target column name is long enough to handle MD5 value):</span><br><span>source: </span><br><span>create table s3(id number primary key, name varchar2(100), FIRST_NAME varchar2(100), MIDDLE_NAME varchar2(100), PHONE_NUMBER varchar2(100));</span><br><br><span>target:</span><br><span>create table s3(id number primary key, name varchar2(100), FIRST_NAME varchar2(100), MIDDLE_NAME varchar2(100), PHONE_NUMBER varchar2(100));</span><br><br><span>2. Striim CQ conversion:</span></p>\n<pre>SELECT replacedata(o,'NAME',org.apache.commons.codec.digest.DigestUtils.md5hex(to_string(GETDATA(o,'NAME')))) FROM ora2_src_stream o</pre>\n<p><span>3. Insert a row in target:</span><br><span>insert into s3 values (1, 'DOE','JOHN','R','123-456-7890');</span><br><span>commit;</span><br><br><span>4. Check the row values in source and target:</span><br><br></p>\n<pre>SQL&gt; select * from s3;<br><br> ID<br>----------<br>NAME<br>--------------------------------------------------------------------------------<br>FIRST_NAME<br>--------------------------------------------------------------------------------<br>MIDDLE_NAME<br>--------------------------------------------------------------------------------<br>PHONE_NUMBER<br>--------------------------------------------------------------------------------<br> 1<br><span class=\"wysiwyg-color-red\">DOE</span><br>JOHN<br>R<br>123-456-7890<br><br><br>SQL&gt; select * from t3;<br><br> ID<br>----------<br>NAME<br>--------------------------------------------------------------------------------<br>FIRST_NAME<br>--------------------------------------------------------------------------------<br>MIDDLE_NAME<br>--------------------------------------------------------------------------------<br>PHONE_NUMBER<br>--------------------------------------------------------------------------------<br> 1<br><span class=\"wysiwyg-color-red\">85d05fd9229df84c06f2cbc6267e4fd7</span><br>JOHN<br>R<br>123-456-7890</pre>\n<p><br><span>5. Confirm:</span><br><span>use any online MD5 tool (e.g. 
</span><a href=\"https://www.md5hashgenerator.com/\" rel=\"noreferrer\">https://www.md5hashgenerator.com/</a><span>), you may see that</span><br><span>'DOE' MD5 hash values is: 85d05fd9229df84c06f2cbc6267e4fd7</span></p>\n<p> </p>\n<p>used tql file is attached.</p>\n<p> </p>"} {"page_content": "<div class=\"accept-terms\">\n<div style=\"text-align: center;\">\n<h2>\n<span>You must accept the Striim License Agreement </span><span>to download the software.</span>\n</h2>\n<h3><a title=\"https://support.striim.com/hc/en-us/articles/360038194454-Striim-Version-Support-Policy\" href=\"https://support.striim.com/hc/en-us/articles/360038194454-Striim-Version-Support-Policy\"> Extended Version Support Policy</a></h3>\n<textarea style=\"width: 500px; height: 300px;\"> \nIMPORTANT: Please read this End User License Agreement (“Agreement”) before clicking the “accept” button, installing, configuring and/or using the Software (as defined below) that accompanies or is provided in connection with this Agreement. By clicking the “Accept” button, installing, configuring and/or using the Software, you and the entity that you represent (“Customer”) agree to be bound by this Agreement with Striim, Inc. (“Striim”). You represent and warrant that you have the authority to bind such entity to these terms. If Customer does not unconditionally agree to all of the terms of this Agreement, use of the Software is strictly prohibited.\n\nTO THE EXTENT CUSTOMER HAS SEPARATELY ENTERED INTO AN END USER LICENSE AGREEMENT WITH STRIIM COVERING THE SAME SOFTWARE, THE TERMS AND CONDITIONS OF SUCH END USER LICENSE AGREEMENT SHALL SUPERSEDE THIS AGREEMENT IN ITS ENTIRETY.\n\nThis Agreement includes and incorporates by reference the following documents:\n\nStandard Terms and Conditions\nExhibit A – Support and Maintenance Addendum\nOrder Forms (as defined below)\n\nThe Agreement includes the documents listed above and states the entire agreement between the parties regarding its subject matter and supersedes all prior and contemporaneous agreements, terms sheets, letters of intent, understandings, and communications, whether written or oral. All amounts paid by Customer under this Agreement shall be non-refundable and non-recoupable, unless otherwise provided herein. Any pre-printed terms in any Order Forms, quotes, or other similar written purchase authorization that add to, or conflict with or contradict, any provisions in the Agreement will have no legal effect. The provisions of this Agreement may be amended or waived only by a written document signed by both parties.\n\nSTANDARD TERMS AND CONDITIONS\n1.\tDEFINITIONS\n1.1 “CPU” means a single central processing unit of a Customer System, with one or more Cores. \n1.2 “Core” means each of the independent processor components within a single CPU.\n1.3 “Customer” means that person or entity listed on the Order Form.\n1.4 “Customer System” means one or more computer system(s) that is: (a) owned or leased by Customer or its Subsidiary; and (b) within the possession and control of Customer or its Subsidiary.\n1.5 “Documentation” means the standard end-user technical documentation, specifications, materials and other information Striim supplies in electronic format with the Software or makes available electronically. 
Advertising and marketing materials are not Documentation.\n1.6 “Effective Date” has the same meaning as used in the Order Form.\n1.7 “Error” means a reproducible failure of the Software to perform in substantial conformity with its Documentation.\n1.8 “Intellectual Property Rights” means copyrights, trademarks, service marks, trade secrets, patents, patent applications, moral rights, contractual rights of non-disclosure or any other intellectual property or proprietary rights, however arising, throughout the world.\n1.9 “Order Form” means the order form executed by Customer substantially in the form set forth on Exhibit B.\n1.10 “Product Use Environment” means the environment, including without limitation the number of Cores or Sources and Targets identified in an Order Form. \n1.11 “Product Use Environment Upgrade” means the addition of any additional Cores or Sources and Targets.\n1.12 “Release” means any Update or Upgrade if and when such Update or Upgrade is made available to Customer by Striim pursuant to Exhibit A. In the event of a dispute as to whether a particular Release is an Upgrade or an Update, Striim’s published designation will be dispositive.\n1.13 “Software” means the software that Striim provides to Customer or its Subsidiary (in object code format only) as identified on the Order Form, and any Releases thereto if and when such Releases are made available by Striim. \n1.14 “Sources and Targets” means the source and target systems of the data being analyzed.\n1.15 “Subsidiary” means with respect to Customer, any person or entity that (a) is controlled by Customer, where “control” means ownership of fifty percent (50%) or more of the outstanding voting securities (but only as long as such person or entity meets these requirements) and (b) has a primary place of business in the United States.\n1.16 “Update” means, if and when available, any Error corrections, fixes, workarounds or other maintenance releases to the Software provided by Striim to Customer.\n1.17 “Upgrade” means, if and when available, new releases or versions of the Software, that materially improve the functionality of, or add material functional capabilities to the Software. “Upgrade” does not include the release of a new product for which there is a separate charge. If a question arises as to whether a release is an Upgrade or a new product, Striim’s determination will prevail.\n1.18 “Use” means to cause a Customer System to execute any machine-executable portion of the Software in accordance with the Documentation or to make use of any Documentation, Releases, or related materials in connection with the execution of any machine-executable portion of the Software.\n1.19 “User” means an employee of Customer or its Subsidiary or independent contractor to Customer or its Subsidiary that is working for Customer or its Subsidiary and has been authorized by Customer or its Subsidiary to Use the Software. \n2.\tGRANT AND SCOPE OF LICENSE\n2.1 Software License. Subject to the terms and conditions of this Agreement, during the term specified on the Order Form, Striim hereby grants Customer and its Subsidiaries a non-exclusive, non-transferable (except as provided under Section 12.6), non-sublicensable license for Users to install (if Customer elects to self-install the Software), execute and Use the Software supplied to Customer hereunder, solely within the Product Use Environment on a Customer System and use the Documentation, solely for Customer’s or its Subsidiaries’ own internal business purposes. 
Customer shall be solely responsible for all acts or omissions of its Subsidiaries and any breach of this Agreement by a Subsidiary of Customer shall be deemed a breach by Customer.\n2.2 License Restrictions. Customer shall not: (a) Use the Software except as expressly permitted under Section 2.1; (b) separate the component programs of the Software for use on different computers; (c) adapt, alter, publicly display, publicly perform, translate, create derivative works of, or otherwise modify the Software; (d) sublicense, lease, rent, loan, or distribute the Software to any third party; (e) transfer the Software to any third party (except as provided under Section 12.6); (f) reverse engineer, decompile, disassemble or otherwise attempt to derive the source code for the Software, except as permitted by applicable law; (g) remove, alter or obscure any proprietary notices on the Software or Documentation; or (h) allow third parties to access or use the Software, including any use in any application service provider environment, service bureau, or time-sharing arrangements. No portion of the Software may be duplicated by Customer, except as otherwise expressly authorized in writing by Striim. Customer may, however, make a reasonable number of copies of the machine-readable portion of the Software solely for back-up purposes, provided that such back-up copy is used only to restore the Software on a Customer System, and not for any other use or purpose. Customer will reproduce on each such copy all notices of patent, copyright, trademark or trade secret, or other notices placed on such Software by Striim or its suppliers. \n2.3 License Keys. Customer acknowledges that the Software may require license keys or other codes (“Keys”) in order for Customer to install and/or Use the Software. Such Keys may also control continued access to, and Use of, the Software, and may prevent the Use of the Software on any systems except a Customer System. Customer will not disclose the Keys or information about the Keys to any third party. Customer shall not Use any Software except pursuant to specific Keys issued by Striim that authorizes such Use.\n3.\tPROPRIETARY RIGHTS. Customer acknowledges and agrees that the Software, including its sequence, structure, organization, source code and Documentation contains valuable Intellectual Property Rights of Striim and its suppliers. The Software and Documentation are licensed and not sold to Customer, and no title or ownership to such Software, Documentation, or the Intellectual Property Rights embodied therein passes as a result of this Agreement or any act pursuant to this Agreement. The Software, Documentation, and all Intellectual Property Rights therein are the exclusive property of Striim and its suppliers, and all rights in and to the Software and Documentation not expressly granted to Customer in this Agreement are reserved. Striim owns all rights, title, and interest to the Software and Documentation. Nothing in this Agreement will be deemed to grant, by implication, estoppel or otherwise, a license under any existing or future patents of Striim, except to the extent necessary for Customer to Use the Software and Documentation as expressly permitted under this Agreement.\n4.\tCONFIDENTIALITY\n4.1 Confidential Information. 
Each party (the “Disclosing Party”) may during the term of this Agreement disclose to the other party (the “Receiving Party”) non-public information regarding the Disclosing Party’s business, including technical, marketing, financial, employee, planning, and other confidential or proprietary information, that (1) if in tangible form, is clearly marked at the time of disclosure as being confidential, or (2) if disclosed orally or visually, is designated at the time of disclosure as confidential, or (3) is reasonably understood to be confidential or proprietary information, whether or not marked. (“Confidential Information”). Without limiting the generality of the foregoing, the Software and the Documentation constitute Striim’s Confidential Information and Customer Data constitutes Customer's Confidential Information.\n4.2 Protection of Confidential Information. The Receiving Party will not use any Confidential Information of the Disclosing Party for any purpose not permitted by this Agreement, and will disclose the Confidential Information of the Disclosing Party only to employees or contractors of the Receiving Party who have a need to know such Confidential Information for purposes of this Agreement and are under a duty of confidentiality no less restrictive than the Receiving Party’s duty hereunder. The Receiving Party will protect the Disclosing Party’s Confidential Information from unauthorized use, access, or disclosure in the same manner as the Receiving Party protects its own confidential or proprietary information of a similar nature and with no less than reasonable care.\n4.3 Exceptions. The Receiving Party’s obligations under Section 4.2 with respect to Confidential Information of the Disclosing Party will terminate to the extent such information: (a) was already known to the Receiving Party at the time of disclosure by the Disclosing Party; (b) is disclosed to the Receiving Party by a third party who had the right to make such disclosure without any confidentiality restrictions; (c) is, or through no fault of the Receiving Party has become, generally available to the public; or (d) is independently developed by the Receiving Party without access to, or use of, the Disclosing Party’s Confidential Information. In addition, the Receiving Party will be allowed to disclose Confidential Information of the Disclosing Party to the extent that such disclosure is (i) approved in writing by the Disclosing Party, (ii) necessary for the Receiving Party to enforce its rights under this Agreement in connection with a legal proceeding; or (iii) required by law or by the order or a court of similar judicial or administrative body, provided that the Receiving Party notifies the Disclosing Party of such required disclosure promptly and in writing and cooperates with the Disclosing Party, at the Disclosing Party’s reasonable request and expense, in any lawful action to contest or limit the scope of such required disclosure.\n4.4 Return of Confidential Information. The Receiving Party will either return to the Disclosing Party or destroy all Confidential Information of the Disclosing Party in the Receiving Party’s possession or control and permanently erase all electronic copies of such Confidential Information promptly upon the written request of the Disclosing Party or the termination of this Agreement, whichever comes first. Upon request, the Receiving Party will certify in writing that it has fully complied with its obligations under this Section 4.4.\n4.5 Confidentiality of Agreement. 
Neither party will disclose the terms of this Agreement to anyone other than its attorneys, accountants, and other professional advisors under a duty of confidentiality except (a) as required by law, or (b) pursuant to a mutually agreeable press release, or (c) in connection with a proposed merger, financing, or sale of such party’s business.\n5.\tADDITIONAL ORDERS; DELIVERY; INSTALLATION\n5.1 Additional Orders. Subject to the terms and conditions of this Agreement, Customer or a Subsidiary of Customer may place orders with Striim for renewals to Software licenses, additional licenses to the Software and/or support and maintenance or training services, including but not limited to Product Use Environment Upgrades (collectively “Additional Products and Services”) by contacting Striim and executing another Order Form with Striim for the Additional Products and Services.\n5.2 Delivery and Installation. Striim will install the Software on a Customer System unless Customer elects to self-install, in which case Striim will deliver the Software and its related Documentation electronically to Customer and Customer will be solely responsible for installing the Software on its Customer System (“Delivery”). Customer will receive all Updates and Upgrades from Striim under this Agreement by electronic delivery. Customer shall promptly provide to Striim all information that is necessary to enable Striim to transmit electronically all such items to Customer. Customer acknowledges that certain internet connections and hardware capabilities are necessary to complete electronic deliveries, and agrees that Customer personnel will receive electronic deliveries by retrieving the Software placed by Striim on a specific Striim controlled server. Customer acknowledges that the electronic deliveries may be slow and time-consuming depending upon network traffic and reliability. In furtherance of the purpose of the electronic deliveries, Striim will not deliver to Customer, and Customer will not accept from Striim, any Software or Documentation deliverable under this Agreement in any tangible medium including, but not limited to, CD-ROM, tape or paper. Customer will be deemed to have unconditionally and irrevocably accepted the Software and related Documentation upon Delivery. \n6.\tSUPPORT; TRAINING.\n6.1 Support and Maintenance. Support and maintenance services provided by Striim (if any) for the Software will be subject to the timely and full payment of all support fees as set forth in an Order Form and will be subject to the terms and conditions of Exhibit A (Support and Maintenance Addendum) to this Agreement. Other than as expressly provided in Exhibit A, this Agreement does not obligate Striim to provide any support or maintenance services. For the avoidance of doubt, Striim has the right to suspend any and all support and maintenance services if Customer has not made timely and full payment of all support and maintenance fees as set forth in an Order Form.\n6.2 Training. Striim shall have no obligation to provide training of Customer personnel regarding Use of the Software unless Customer purchases training services from Striim, as specified in the relevant Order Form, which training services will be provided, based on Striim’s then-current training services policy. Customer must purchase training services from Striim if Customer elects to self-install the Software.\n7.\tTERM AND TERMINATION\n7.1 Term. 
The term of this Agreement will begin on the Effective Date and continue in force until this Agreement is terminated in accordance with Section 7.2. The term of the Software license shall be as set forth on the Order Form. \n7.2 Termination of Agreement. Each party may terminate this Agreement for material breach by the other party which remains uncured thirty (30) days after delivery of written notice of such breach to the breaching party. Notwithstanding the foregoing, Striim may immediately terminate this Agreement and all licenses granted hereunder if Customer breaches Section 2 hereof. The foregoing rights of termination are in addition to any other rights and remedies provided in this Agreement or by law. \n7.3 Effect of Termination. Upon termination of this Agreement (or termination of any license granted hereunder), all rights of Customer to Use the Software (or under the relevant license) will cease and: (a) all license rights granted under this Agreement will immediately terminate and Customer shall promptly stop all Use of the Software; (b) Striim’s obligation to provide support for the Software will terminate; (c) Customer shall erase all copies of the Software from Customer’s computers, and destroy all copies of the Software and Documentation on tangible media in Customer’s possession or control or return such copies to Striim; and (d) upon request by Striim, Customer shall certify in writing to Striim that that it has returned or destroyed such Software and Documentation. \n7.4 Survival. Sections 1, 3, 4, 7.3, 7.4, 8, 9, 10 (only for claims arising based on Use of the Software prior to termination of the applicable license), 11, and 12 will survive the termination of this Agreement.\n8.\tFEES. Customer shall pay Striim the fees as set forth on the applicable Order Form. Striim shall send invoices to Customer based on the invoice schedules set forth on the applicable Order Form. All payments shall be made in U.S. dollars. Unless otherwise specified in the applicable Order Form, Customer will pay all fees payable to Striim within thirty (30) days following the receipt by Customer of an invoice from Striim. Late payments will accrue interest at the rate of one and one-half percent (1.5%) per month, or if lower, the maximum rate permitted under applicable law. Striim reserves the right to increase fees each calendar year with thirty (30) days prior written notice to Customer. Additional payment terms may be set forth in the applicable Order Form. All fees are exclusive of any sales, use, excise, import, export or value-added tax, levy, duty or similar governmental charge which may be assessed based on any payment due hereunder, including any related penalties and interest (“Taxes”). Customer is solely responsible for all Taxes resulting from transactions under this Agreement, except Taxes based on Striim’s net income. Customer will indemnify and hold Striim harmless from (a) the Customer’s failure to pay (or reimburse Striim for the payment of) all such Taxes; and (b) the imposition of and failure to pay (or reimburse Striim for the payment of) all governmental permit fees, license fees, customs fees and similar fees levied upon delivery of the Software or Documentation which Striim may incur in respect of this Agreement or any other fees required to be made by Customer under this Agreement, together with any penalties, interest, and collection or withholding costs associated therewith.\n9.\tLIMITED WARRANTY\n9.1 Software Warranty. 
Striim warrants to, and for the sole benefit of, Customer that, subject to Section 9.2, any Software, as delivered by Striim and properly installed and operated within the Product Use Environment and used as permitted under this Agreement and in accordance with the Documentation, will perform substantially in accordance with the Documentation for ninety (90) days from the date of Delivery. Customer’s exclusive remedy and Striim’s sole liability for breach of this warranty is for Striim, at its own expense, to replace the Software with a version of the Software that corrects those Errors that Customer reports to Striim during such warranty period. Any Error correction provided will not extend the original warranty period. \n9.2 Exclusions. Striim will have no obligation under this Agreement to correct, and Striim makes no warranty with respect to, Errors related to: (a) improper installation of the applicable Software; (b) changes that Customer has made to the applicable Software; (c) Use of the applicable Software in a manner inconsistent with the Documentation and this Agreement; (d) combination of the applicable Software with third party hardware or software not conforming to the operating environment specified in the Documentation; or (e) malfunction, modification, or relocation of Customer’s servers.\n9.3 Disclaimer. EXCEPT AS PROVIDED IN SECTION 9.1, STRIIM HEREBY DISCLAIMS ALL WARRANTIES WHETHER EXPRESS, IMPLIED OR STATUTORY WITH RESPECT TO THE SOFTWARE, DOCUMENTATION, INSTALLATION SERVICES, SUPPORT SERVICES, TRAINING SERVICES AND ANY OTHER PRODUCTS OR SERVICES PROVIDED TO CUSTOMER UNDER THIS AGREEMENT, INCLUDING WITHOUT LIMITATION ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NON-INFRINGEMENT, AND ANY WARRANTY AGAINST INTERFERENCE WITH CUSTOMER’S ENJOYMENT OF THE SOFTWARE, DOCUMENTATION, INSTALLATION SERVICES, SUPPORT SERVICES, AND ANY OTHER PRODUCTS OR SERVICES PROVIDED TO CUSTOMER UNDER THIS AGREEMENT. \n10.\tPROPRIETARY RIGHTS INDEMNITY\n10.1 Striim’s Obligation. Subject to the terms and conditions of Section 10, Striim will defend at its own expense any suit or action brought against Customer by a third party to the extent that the suit or action is based upon a claim that the Software infringes such third party’s United States copyrights or misappropriates such third party’s trade secrets recognized as such under the Uniform Trade Secrets Act or such other similar laws, and Striim will pay those costs and damages finally awarded against Customer in any such action or those costs and damages agreed to in a monetary settlement of such claim, in each case that are specifically attributable to such claim. However, such defense and payments are subject to the conditions that: (a) Striim will be notified promptly in writing by Customer of any such claim; (b) Striim will have sole control of the defense and all negotiations for any settlement or compromise of such claim; and (c) Customer will cooperate and, at Striim’s request and expense, assist in such defense. THIS SECTION 10.1 STATES STRIIM’S ENTIRE LIABILITY AND CUSTOMER’S SOLE AND EXCLUSIVE REMEDY FOR ANY INTELLECTUAL PROPERTY RIGHT INFRINGEMENT AND/OR MISAPPROPRIATION.\n10.2 Alternative. 
If Customer’s or its Subsidiaries’ Use of Software is prevented by injunction or court order because of infringement, or should any Software be likely to become the subject of any claim in Striim’s opinion, Customer will permit Striim, at the sole discretion of Striim and no expense to Customer, to: (i) procure for Customer and its Subsidiaries the right to continue using such Software in accordance with this Agreement; or (ii) replace or modify such Software so that it becomes non-infringing while providing substantially similar features. Where (i) and (ii) above are not commercially feasible for Striim, the applicable licenses will immediately terminate and Striim will refund pro rated fees for the remainder of the term to End User. \n10.3 Exclusions. Striim will have no liability to Customer or any of its Subsidiaries for any claim of infringement or misappropriation to the extent based upon: (a) Use of the Software not in accordance with this Agreement or the Documentation; (b) the combination of the applicable Software with third party hardware or software not conforming to the operating environment specified in Documentation; (c) Use of any Release of the Software other than the most current Release made available to Customer; or (d) any modification of the Software by any person other than Striim. Customer will indemnify Striim against all liability, damages and costs (including reasonable attorneys’ fees) resulting from any such claims.\n10.4 Required Updates. In the event the Software become subject to a claim or in Striim’s opinion is likely to be subject to a claim, upon notice from Striim to Customer that required updates are available, Customer agrees to download and install such updates to the Software onto Customer Systems within five (5) business days (the “Required Update Period”). At the end of any Required Update Period, Customer’s and its Subsidiaries’ right and license to Use all prior versions of the Software shall automatically terminate and Striim shall have no liability for any Use of the prior versions of the Software occurring after the Required Update Period.\n11.\tLIMITATION OF LIABILITY. IN NO EVENT WILL STRIIM BE LIABLE TO CUSTOMER OR ANY OTHER PARTY FOR ANY SPECIAL, PUNITIVE, INDIRECT, INCIDENTAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATED TO THIS AGREEMENT UNDER ANY LEGAL THEORY, INCLUDING, BUT NOT LIMITED TO, LOSS OF DATA, LOSS OF THE USE OR PERFORMANCE OF ANY PRODUCTS, LOSS OF REVENUES, LOSS OF PROFITS, OR BUSINESS INTERRUPTION, EVEN IF STRIIM KNOWS OF OR SHOULD HAVE KNOWN OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT WILL STRIIM’S TOTAL CUMULATIVE LIABILITY ARISING OUT OF OR RELATED TO THIS AGREEMENT EXCEED THE TOTAL AMOUNT OF FEES RECEIVED BY STRIIM FROM CUSTOMER UNDER THIS AGREEMENT DURING THE TWELVE (12) MONTHS IMMEDIATELY PRECEDING SUCH CLAIM. THIS SECTION 11 WILL APPLY EVEN IF AN EXCLUSIVE REMEDY OF CUSTOMER UNDER THIS AGREEMENT HAS FAILED OF ITS ESSENTIAL PURPOSE.\n12.\tGENERAL\n12.1 Audit Rights. During the term of this Agreement and for two (2) years thereafter, Striim or its representatives, may upon at least ten (10) days’ written notice, inspect and audit records, Customer Systems, and premises of Customer during normal business hours to verify Customer’s compliance with this Agreement. \n12.2 Notices. 
All notices, consents and approvals under this Agreement must be delivered in writing by courier, by facsimile or by certified or registered mail (postage prepaid and return receipt requested) to the other party at the address set forth above, and will be effective upon receipt or three (3) business days after being deposited in the mail as required above, whichever occurs sooner. Either party may change its address by giving notice of the new address to the other party.\n12.3 Relationship of Parties. The parties hereto are independent contractors. Nothing in this Agreement will be deemed to create an agency, employment, partnership, fiduciary or joint venture relationship between the parties. \n12.4 Publicity. Striim may use Customer’s name and a description of Customer’s Use of the Software for investor relations and marketing purposes.\n12.5 Compliance with Export Control Laws. The Software may contain encryption technology controlled under U.S. export law, the export of which may require an export license from the U.S. Commerce Department. Customer will comply with all applicable export control laws and regulations of the U.S. and other countries. Customer will defend, indemnify, and hold harmless Striim from and against all fines, penalties, liabilities, damages, costs and expenses (including reasonable attorneys’ fees) incurred by Striim as a result of Customer’s breach of this Section 12.5.\n12.6 Assignment. Customer may not assign or transfer, by operation of law, merger or otherwise, any of its rights or delegate any of its duties under this Agreement (including, without limitation, its licenses for the Software) to any third party without Striim’s prior written consent. Any attempted assignment or transfer in violation of the foregoing will be null and void. Striim may assign its rights or delegate its obligations under this Agreement. \n12.7 Governing Law and Venue. This Agreement will be governed by the laws of the State of California, excluding any conflict of law provisions that would require the application of the laws of any other jurisdiction. The United Nations Convention on Contracts for the International Sale of Goods shall not apply to this Agreement. Any action or proceeding arising from or relating to this Agreement must be brought exclusively in a federal or state court located in Santa Clara, California. Each party irrevocably consents to the personal jurisdiction and venue in, and agrees to service of process issued by, any such court. Notwithstanding the foregoing, either party may bring an action or suit seeking injunctive relief to protect its Intellectual Property Rights or Confidential Information in any court having jurisdiction.\n12.8 Force Majeure. Any delay in or failure of performance by either party under this Agreement, other than a failure to pay amounts when due, will not be considered a breach of this Agreement and will be excused to the extent caused by any occurrence beyond the reasonable control of such party.\n12.9 Remedies. Except as provided in Sections 9 and 10 of this Agreement, the parties’ rights and remedies under this Agreement are cumulative. Customer acknowledges that the Software contains valuable trade secrets and proprietary information of Striim, that any actual or threatened breach of Section 2 (Grant and Scope of License) or Section 4 (Confidentiality) will constitute immediate, irreparable harm to Striim for which monetary damages would be an inadequate remedy, and that injunctive relief is an appropriate remedy for such breach. 
If any legal action is brought to enforce this Agreement, the prevailing party will be entitled to receive its attorneys’ fees, court costs, and other collection expenses, in addition to any other relief it may receive.\n12.10 Waiver; Severability. Any waiver or failure to enforce any provision of this Agreement on one occasion will not be deemed a waiver of any other provision or of such provision on any other occasion. If any provision of this Agreement is adjudicated to be unenforceable, such provision will be changed and interpreted to accomplish the objectives of such provision to the greatest extent possible under applicable law and the remaining provisions will continue in full force and effect. \n12.11 Order of Precedence; Construction. The provisions of the standard terms and conditions will prevail regardless of any inconsistent or conflicting provisions on any Order Forms. The Section headings of this Agreement are for convenience and will not be used to interpret this Agreement. As used in this Agreement, the word “including” means “including but not limited to.” \n \nEXHIBIT A\n\n\nSUPPORT AND MAINTENANCE POLICY\n\nTHE TERMS AND CONDITIONS IN THIS ADDENDUM APPLY TO THE SUPPORT AND MAINTENANCE SERVICES PROVIDED BY STRIIM TO CUSTOMER (IF ANY). SUBJECT TO CUSTOMER’S PAYMENT OF THE APPLICABLE SUPPORT AND MAINTENANCE FEES, STRIIM WILL PROVIDE THE SUPPORT AND MAINTENANCE SERVICES DESCRIBED IN THIS ADDENDUM.\n1.\tDEFINITIONS. For purposes of this Addendum, the following terms have the following meanings. Capitalized terms not defined in this Addendum have the meanings described in the Agreement.\n1.1\t“Response Time” means the period of time between (a) Customer’s registration of an Error pursuant via Striim’s online ticketing system in accordance with Section 2.3 (Error Correction); and (b) the commencement of steps to address the Error in accordance with this Addendum by Striim.\n1.2\t “Support Services” means the support and maintenance services described in Section 2 (Support Services) to be performed by Striim pursuant to this Addendum. \n2.\tSUPPORT SERVICES \n2.1\tForm of Support. Striim will provide Support Services by means set forth in the following table, subject to the conditions regarding availability or response times with respect to each such form of access as set forth in the table. Support Services will consist of answering questions regarding the proper Use of, and providing troubleshooting assistance for, the Software. \nFORM OF SUPPORT\tAVAILABILITY\nTelephonic support +1 (650) 241-0680 or such other phone number as Striim may provide from time to time)\t8 am to 7 pm Pacific Time, Mon. – Fri. (excluding Striim Holidays)\nEmail Support (support@Striim.com or such other email address as Striim may provide from time to time)\t24 x 7 x 365\nWeb-based Support (http://www.Striim.com/ or such other URL as Striim may provide from time to time)\t24 x 7 x 365\n\n2.2\tSeverity Levels. If Customer identifies an Error and would like such Error corrected, Customer will promptly report such Error in writing to Striim, specifying (a) the nature of the Error; (b) the circumstances under which the Error was encountered, including the processes that were running at the time that the Error occurred; (c) technical information for the equipment upon which the Software was running at the time of the Error; (d) the steps, if any, that Customer took immediately following the Error; and (e) the immediate impact of the Error upon Customer’s ability to operate the Software. 
Upon receipt of any such Error report, Striim will evaluate the Error and classify it into one of the following Severity Levels based upon the following severity classification criteria:\nSEVERITY LEVEL\tSEVERITY CLASSIFICATION CRITERIA\nSeverity 1\tError renders continued Use of the Software commercially infeasible\nSeverity 2\tError prevents a critical function of the Software from operating in substantial accordance with the Documentation.\nSeverity 3\tError prevents a major non-critical function of the Software from operating in substantial accordance with the Documentation.\nSeverity 4\tError adversely affects a minor function of the Software or consists of a cosmetic nonconformity, error in Documentation, or other problem of similar magnitude.\n\n2.3\tError Correction. Striim will use commercially reasonable efforts to provide a correction or workaround to all reproducible Errors that are reported in accordance with Section 2.2 (Severity Levels) above. Such corrections or workarounds may take the form of Updates, procedural solutions, correction of Documentation errors, or other such remedial measures as Striim may determine to be appropriate. Striim will also endeavor to affect the following Response Times for each of the following categories of Errors. \nSEVERITY LEVEL\tRESPONSE TIME\nSeverity 1\tOne (1) Hour during M-F; two (2) hours on weekends\nSeverity 2\tTwo (2) Hours M-F; four (4) hours on weekends\nSeverity 3\tFour (4) business days\nSeverity 4\tSeven (7) business days\n\n3.\tMAINTENANCE\n3.1\tUpdates. Customer will be entitled to obtain and Use all Updates and Upgrades that are generally released during the term of this Addendum provided that Customer has paid the applicable support and maintenance fees. Striim may make such Updates and Upgrades available to Customer through electronic download. The provision of any Update or Upgrade to Customer will not operate to extend the original warranty period on the Software.\n3.2\tIntellectual Property. Upon release of an Update or Upgrade to Customer, such Update or Upgrade will be deemed to be “Software” within the meaning of the Agreement, and subject to payment by Customer of the applicable support and maintenance fees, Customer will acquire license rights to Use such Update or Upgrade in accordance with the terms and conditions of the Agreement. There are no express or implied licenses in this Addendum, and all rights are reserved to Striim.\n4.\tCUSTOMER RESPONSIBILITIES AND EXCLUSIONS\n4.1\tCustomer Responsibilities. As a condition to Striim’s obligations under this Addendum, Customer will provide the following:\n(a)\tGeneral Cooperation. Customer will cooperate with Striim to the extent that such cooperation would facilitate Striim’s provision of Support Services hereunder. Without limiting the foregoing, at Striim’s request, Customer will (i) provide Striim with reasonable access to appropriate personnel, records, network resources, maintenance logs, physical facilities, and equipment; (ii) refrain from undertaking any operation that would directly or indirectly block or slow down any maintenance service operation; and (iii) comply with Striim’s instructions regarding the Use and operation of the Software.\n(b)\tData Backup. Customer agrees and acknowledges that Striim’s obligations under this Addendum are limited to the Software, and that Striim is not responsible for the operation and general maintenance of Customer’s computing environment. 
Striim will not be responsible for any losses or liabilities arising in connection with any failure of data backup processes. \n(c)\tSpecific Customer Assistance Requests. Customer may request that, in providing support services hereunder, Striim directly access Customer’s production systems, either by logging in using Customer’s access credentials and/or through a remote (e.g., WebEx) session initiated by Customer. Striim is not responsible for any effect on, loss of, or damage to, Customer’s technology systems or data from Striim’s attempt to address trouble tickets from within Customer’s production environment, nor is Striim agreeing to any Customer-prescribed security requirements as a condition of such access. Customer also may request that, in providing support services hereunder, Striim receive Customer data from one or more specific transactions for the purpose of attempting to re-create errors. Customer will provide only such data that Customer may legally provide to Striim, in compliance with Customer’s contractual obligations to third parties. Striim does not promise any level of protection with respect to such data other than as required under the applicable confidentiality provisions in effect between Striim and Customer, even if such data in Customer’s possession is subject to additional legal requirements, and does not warrant that such data will not be lost or compromised. With respect to either of the foregoing scenarios, Striim will require that such request be documented in the support ticketing system and confirmed by Customer in writing, and at its discretion may decline to (as the case may be) access the production system or receive Customer’s transaction data. The provisions of this paragraph supersede any conflicting provision in this Addendum or in the underlying agreement between the parties.\n4.2\tExclusions. Notwithstanding anything to the contrary in this Addendum, Striim will have no obligation to provide any Support Services to Customer to the extent that such Support Services arise from or relate to any of the following: (a) any modifications or alterations of the Software by any party other than Striim or Striim’s subcontractors; (b) any Use of the Software in a computing environment not meeting the system requirements set forth in the Documentation, including hardware and operating system requirements; (c) any issues arising from the failure of the Software to interoperate with any other software or systems, except to the extent that such interoperability is expressly mandated in the applicable Documentation; (d) any breakdowns, fluctuations, or interruptions in electric power or the telecommunications network; (e) any Error that is not reproducible by Striim; or (f) any violation of the terms and conditions of this Agreement, including any breach of the scope of a license grant. In addition, Customer agrees and acknowledges that any information relating to malfunctions, bugs, errors, or vulnerabilities in the Support Services constitutes Confidential Information of Striim, and Customer will refrain from using such information for any purpose other than obtaining Support Services from Striim, and will not disclose such information to any third party.\n5.\tTERM AND TERMINATION \n5.1\tTerm. 
As long as Customer timely pays, as applicable, the annual fees for a term license or the support and maintenance fees applicable for a perpetual license as set forth on the applicable Order Form, the term of this Addendum will commence upon the original date of Delivery of the applicable Software and continue during the term of the Agreement, unless earlier terminated in accordance with this section. \n5.2\tTermination. This Addendum will automatically terminate upon the termination of Customer’s license to the Software set forth in the Agreement. In addition, each party will have the right to terminate this Addendum immediately upon written notice if the other party materially breaches this Addendum and fails to cure such breach within thirty (30) days after written notice of breach by the non-breaching party. Sections 1 (Definitions), 5.2 (Termination), 5.3 (Lapsed Support), 6 (Warranty), and any payment obligations accrued by Customer prior to termination or expiration of this Addendum will survive such termination or expiration. \n5.3\tLapsed Support. For a period of twelve (12) months after any lapse of Support Services through the termination or expiration of this Addendum (other than Striim’s termination for Customer’s breach), Customer subsequently may elect to reinstate such Support Services for such Software upon the terms and conditions set forth in this Agreement; provided, however, that (a) such Support Services have not been discontinued by Striim; (b) the Agreement continues to be in effect; and (c) Customer pays to Striim an amount equal to all of the fees that would have been due to Striim had the Support Services been provided under this Agreement during the entire period of such lapse.\n6.\tWARRANTY. Striim warrants that the Support Services will be performed with at least the same degree of skill and competence normally practiced by consultants performing the same or similar services. Customer’s sole and exclusive remedy, and Striim’s entire liability, for any breach of the foregoing warranty shall be for Striim to reperform, in a conforming manner, any nonconforming Support Services that are reported to Striim by Customer in writing within thirty (30) days after the date of completion of such Services. \nEXCEPT AS EXPRESSLY SET FORTH IN THE PRECEDING PARAGRAPH, THE SUPPORT SERVICES AND ALL MATERIALS FURNISHED TO CUSTOMER UNDER THIS ADDENDUM ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND. 
WITHOUT LIMITING THE FOREGOING, EXCEPT AS SET FORTH IN THIS SECTION, STRIIM DISCLAIMS ANY AND ALL REPRESENTATIONS AND WARRANTIES, GUARANTEES, AND CONDITIONS, WHETHER EXPRESS, IMPLIED, OR STATUTORY, WITH RESPECT TO THE SUPPORT SERVICES AND ANY MATERIALS FURNISHED HEREUNDER, INCLUDING THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NONINFRINGEMENT, ACCURACY, AND QUIET ENJOYMENT.\n\n\n \n </textarea><br><button id=\"accept-terms\" style=\"border: 2px solid #00A7E5; background: #00A7E5; color: #fff;\">I accept the Striim License Agreement</button>\n</div>\n</div>\n<div class=\"download-links\" style=\"display: none;\">\n<p>Here are the latest version of Striim GA software</p>\n<p><em><span class=\"wysiwyg-font-size-large\">TGZ installation packages 3.10.3.8D</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/Striim_3.10.3.8D.tgz\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/Striim_3.10.3.8D.tgz\">Striim TGZ Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/Striim_Agent_3.10.3.8D.tgz\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/Striim_Agent_3.10.3.8D.tgz\">Striim Agent Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">RPM Installation Packages 3.10.3.8D</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-dbms-3.10.3.8D-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-dbms-3.10.3.8D-Linux.rpm\">Linux RPM DBMS Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-node-3.10.3.8D-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-node-3.10.3.8D-Linux.rpm\">Linux RPM Node Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-samples-3.10.3.8D-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-samples-3.10.3.8D-Linux.rpm\">Linux RPM Sample Applications Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-agent-3.10.3.8D-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-agent-3.10.3.8D-Linux.rpm\">Linux RPM Agent Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">DEB Installation Package 3.10.3.8D</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-dbms-3.10.3.8D-Linux.deb\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-dbms-3.10.3.8D-Linux.deb\">Linux Debian DBMS Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-node-3.10.3.8D-Linux.deb\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-node-3.10.3.8D-Linux.deb\">Linux Debian Node Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-samples-3.10.3.8D-Linux.deb\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-samples-3.10.3.8D-Linux.deb\">Linux Debian Sample Applications Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-agent-3.10.3.8D-Linux.deb\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/striim-agent-3.10.3.8D-Linux.deb\">Linux Debian Agent Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">NSK/NSX Package 3.10.3.8</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8/E31038\" 
href=\"https://striim-downloads.striim.com/Releases/3.10.3.8/E31038\">NSK Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8/X4008\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8/X31038\">NSX Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">EventPublish API 3.10.3.8D</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/Striim_EventPublishAPI_3.10.3.8D.zip\" href=\"https://striim-downloads.striim.com/Releases/3.10.3.8D/Striim_EventPublishAPI_3.10.3.8D.zip\">EventPublish API</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">Striim User Guide</span></em></p>\n<p>Please get the pdf version of the user guide thru the WebUI. Click help -&gt; Documentation (PDF)</p>\n<p> </p>\n<h2><em>Previous Versions of Striim</em></h2>\n<p>For downloading previous versions of Striim, please open a ticket to Striim support.</p>\n</div>\n<p>\n<script src=\"https://code.jquery.com/jquery-3.4.1.min.js\" integrity=\"sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo=\" crossorigin=\"anonymous\"></script>\n<script>// <![CDATA[\n$(document).ready(function(){\n \t$(\"#accept-terms\").click(function(){\n \t$('.accept-terms').hide();\n $('.download-links').show();\n })\n \n })\n// ]]></script>\n</p>"} {"page_content": "<p> </p>\n<p>Besides using UI to manage Application Groups , you can also manage via Tungsten Command line tool</p>\n<p> </p>\n<p><strong>To create group : </strong><br><br><span>W (admin) &gt; create application group new_group_test;</span></p>\n<pre><span>Processing - create application group new_group_test</span><br><span>-&gt; SUCCESS</span><span></span></pre>\n<p> </p>\n<p><strong>To list the application groups: </strong><br><br><span>W (admin) &gt; list application groups;</span></p>\n<pre><span>Processing - list application groups</span><br><span>GROUP 1 =&gt; admin.Hao_Group</span><br><span>GROUP 2 =&gt; admin.new_group_test</span><br><span>GROUP 3 =&gt; admin.HAO_NEW</span><br><span>GROUP 4 =&gt; admin.Default_Group</span></pre>\n<p> </p>\n<p><span><strong>To move the application to different group : </strong><br><br>W (admin) &gt; alter application group new_group_test add admin.postgresql_il;</span></p>\n<pre><span>Processing - alter application group new_group_test add admin.postgresql_il<br>-&gt; SUCCESS</span></pre>\n<p> </p>"} {"page_content": "<p> </p>\n<h3>What is striimDiagUtility:</h3>\n<p data-renderer-start-pos=\"588\">StriimDiagUtility collects information about the Striim Server health via the Health API call. The utility also collects details at application level like point in time stats and some historical stats looking back x hours ( not greater than 24hours ).</p>\n<p data-renderer-start-pos=\"846\">The utility is multi-threaded to the degree of number of available logical cores.This helps in finishing report collection faster for Striim Server running large number of applications. This is currently supported only on the Striim server and not on Striim Agent. 
<strong>This is available from Striim version 3.10.3.6 onwards.</strong></p>\n<p> </p>\n<h3>What does the utility collect</h3>\n<p>The following are the reports and the contents of the files that can be gathered via the utility.</p>\n<p> </p>\n<table border=\"1\">\n<tbody>\n<tr>\n<th><strong>Report</strong></th>\n<th><strong>Content</strong></th>\n</tr>\n<tr>\n<td>\n<p>AppDetails.out</p>\n</td>\n<td>\n<p>Point-in-time details per application:</p>\n<ol>\n<li>\n<p>mon</p>\n</li>\n<li>\n<p>checkpoint</p>\n</li>\n<li>\n<p>lee application</p>\n</li>\n</ol>\n</td>\n</tr>\n<tr>\n<td>\n<p>AppReports.out</p>\n</td>\n<td>\n<p>Historical report per application:</p>\n<ol>\n<li>\n<p>mon report (configurable to x hours)</p>\n</li>\n<li>\n<p>checkpoint history</p>\n</li>\n<li>\n<p>lee stats (currently not integrated)</p>\n</li>\n</ol>\n</td>\n</tr>\n<tr>\n<td>\n<p>ServerStack&lt;TimeStamp&gt;.out</p>\n</td>\n<td>\n<p>Contains the server stack trace for all participating Striim servers in the cluster.</p>\n</td>\n</tr>\n<tr>\n<td>\n<p>ServerObjects.out</p>\n</td>\n<td>\n<p>Full Server Health Object collected. Serves as the one place to look up:</p>\n<ol>\n<li>\n<p>Cluster topology.</p>\n</li>\n<li>\n<p>WActionStore health.</p>\n</li>\n<li>\n<p>Cache health and a host of other server objects.</p>\n</li>\n</ol>\n</td>\n</tr>\n<tr>\n<td>\n<p>All | &lt;namespace.Application&gt;.out</p>\n</td>\n<td>\n<p>Exported TQL text, based on the option selected: ALL | &lt;namespace.ApplicationName&gt;.</p>\n</td>\n</tr>\n<tr>\n<td>\n<p>CustomReport.out</p>\n</td>\n<td>\n<p>Output from the call to Striim/tools/bin/customApp.tql.</p>\n</td>\n</tr>\n<tr>\n<td>\n<p>conf/*.properties</p>\n</td>\n<td>\n<p>All properties under the Striim/conf directory.</p>\n</td>\n</tr>\n<tr>\n<td>\n<p>striim.server.log | striim.server.log.*</p>\n</td>\n<td>\n<p>The latest striim.server.log, plus all or none of the rolled-over striim.server.log.* files, depending on whether latest or all log collection is selected.</p>\n</td>\n</tr>\n<tr>\n<td>\n<p>&lt;clustername&gt;&lt;customername&gt;&lt;datetimestamp&gt;.tgz</p>\n</td>\n<td>\n<p>Archive containing all of the above report files.
This is produced under Striim/tools directory.</p>\n</td>\n</tr>\n</tbody>\n</table>\n<p> </p>\n<h3>How to execute the script on Striim node</h3>\n<p>The striimDiagUtility comes part of the striim install bundle and can be found under &lt;striim home&gt;/tools</p>\n<p><strong>Note: The script needs to be run as the striim install user </strong></p>\n<p>Switch user to striim before running the script</p>\n<p><strong>$ sudo su - striim</strong></p>\n<pre>$ sudo su - striim<br>$ cd &lt;striim home&gt;/tools<br>$ bin/striimDiagUtility.sh -h<br><br>usage: striimDiagUtility<br><br>-h,--help Prints out Usage and some more help<br>-j,--jstack &lt;arg&gt; Default 4, 30secs apart.Customize by specifying<br>freq,period in secs<br>-l,--serverlogs &lt;arg&gt; Default only latest striim.server.log, specify<br>all if all logs needed<br>-p,--password &lt;arg&gt; Required option of password to connect as user<br>for striim cluster<br>-t,--clusterUrl &lt;arg&gt; Required option of clusterUrl to connect Striim<br>and Rest API<br>-u,--username &lt;arg&gt; Required option of user,password to connect<br>striim cluster<br>Usage: striimDiagUtility.sh -u &lt;striim user&gt; -p &lt;password&gt; -t &lt;clusterurl:port&gt; <br>-j &lt;# of stacktraces&gt;,&lt;interval between in secs&gt; <br>-l &lt;latest striim.server.log | all | none &gt;</pre>\n<p class=\"p1\"><span class=\"s1\">To gather jstack for 3 times, 30 seconds apart and also include the latest server log the command would be</span></p>\n<pre>$ cd &lt;striim home&gt;/tools<br>$ bin/striimDiagUtility.sh -u admin -p admin -t localhost:9080 -j 3,30 -l latest<br><br>/Users/rajesh/app/Striim_31036/tools<br>/Users/rajesh/app/Striim_31036/tools/..<br>Archiving Server logs and properties,Application Details and Server Details.</pre>\n<p class=\"p1\">The output file would be generated under &lt;striim home&gt;/tools</p>\n<pre class=\"p1\">$ ls *.gz<br>-rw-r--r-- 1 rajesh staff Sep 27 08:56 Striim_testStriim2021-09-27T08:56:10.910.tar.gz<br>-rw-r--r-- 1 rajesh staff Sep 27 15:50 Striim_testStriim2021-09-27T15:50:53.486.tar.gz<br>rajesh@MACOS /Users/rajesh/app/Striim_31036/tools $ </pre>\n<h3>Additional steps needed for RPM installation</h3>\n<p>If striim is installed using TGZ build (or using .zip on windows) no additional step is needed. 
For environments where striim is installed using RPM build following steps are needed </p>\n<p>1) copy the dependent libraries from &lt;striim home&gt;/lib to &lt;striim home&gt;/tools</p>\n<pre>$ sudo su - striim<br>$ cd &lt;striim home&gt;/tools<br>$ cp ../lib/commons-io-* .<br>$ cp ../lib/<span>commons-cli-1* .<br>$ cp ../lib/commons-compress-1* .<br>$ cp ../lib/json-2* .<br>$ cp ../lib/log4j-1* .</span></pre>\n<p class=\"p1\">2) modify the striimDiagUtility.sh to remove <strong>$TOOLS_HOME/../lib </strong>from classpath</p>\n<pre> -cp \"<strong>$TOOLS_HOME/../lib</strong>:$TOOLS_HOME/conf:$JAVA_HOME/lib/*:$TOOLS_HOME/*\" com.webaction.diagbundle.StriimDiagUtility $*</pre>\n<p class=\"p1\">to</p>\n<pre> -cp \"$TOOLS_HOME/conf:$JAVA_HOME/lib/*:$TOOLS_HOME/*\" com.webaction.diagbundle.StriimDiagUtility $*<span></span></pre>\n<p>3) Make sure JAVA_HOME is set and tools.jar exist in $JAVA_HOME/lib</p>"} {"page_content": "<h3>Goal:</h3>\n<p>The goal of this note is to configure the BigQueryWriter adapter properties to reduce <span>crashes due to various Google BQ API (internal HTTP requests, Load/Streaming/Query job API) errors </span>faced during integration with Google BigQuery target</p>\n<h3>Background:</h3>\n<p><span style=\"font-weight: 400;\">Striim’s BigQueryWriter uses Bigquery’s Java client version “</span><span style=\"font-weight: 400;\">google-cloud-bigquery-1.127.0” as of Striim version 3.10.3.6. </span></p>\n<p><span style=\"font-weight: 400;\">Striim supports two ways to deliver data into Google BigQuery namely:</span><span style=\"font-weight: 400;\"></span></p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\">\n<span style=\"font-weight: 400;\"><span>Using </span><a title=\"https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-csv\" href=\"https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-csv\" target=\"_blank\" rel=\"noopener\"><span>Load API</span></a> : Incoming data is buffered locally as a CSV file. One CSV file is created per target table and uploaded to </span><span style=\"font-weight: 400;\">BigQuery table </span><span style=\"font-weight: 400;\">once the upload condition is met (Batch Policy)</span>\n</li>\n<li style=\"font-weight: 400;\" aria-level=\"1\">\n<span style=\"font-weight: 400;\"><span>Using </span><a title=\"https://cloud.google.com/bigquery/streaming-data-into-bigquery\" href=\"https://cloud.google.com/bigquery/streaming-data-into-bigquery\" target=\"_blank\" rel=\"noopener\"><span>Streaming API</span></a> : Incoming data is buffered in memory and multiple such buffers are maintained (one buffer per target BigQuery table). The c</span><span style=\"font-weight: 400;\">ontents of the memory buffers is delivered to BigQuery tables </span><span style=\"font-weight: 400;\">once the upload condition is met (Batch Policy)</span><span style=\"font-weight: 400;\"></span>\n</li>\n</ol>\n<p> </p>\n<p><span style=\"font-weight: 400;\">There could be errors faced during the integration process due to various reasons like quota issues, google internal errors etc. 
A typical <strong>json payload</strong> looks like below</span></p>\n<pre><span style=\"font-weight: 400;\">{<br><strong>\"code\" : 500</strong>,<br>\"errors\" : [ {<br>\"domain\" : \"global\",<br>\"message\" : \"A retriable error could not be retried due to Extensible Stubs <br>memory limits\",<br>\"<strong>reason\" : \"backendError\"</strong><br>} </span></pre>\n<h3>Solution:</h3>\n<p><span style=\"font-weight: 400;\">Google suggests retrying these errors and this is configurable using BigQueryWriter properties</span></p>\n<h4>a) <strong>ConnectionRetryPolicy</strong>\n</h4>\n<p><span style=\"font-weight: 400;\">This defines the <span>BQ client API's </span>retry configuration. A typical configuration would look like below and the values suggested below can be modified with the guidance from Striim and Google Support</span></p>\n<pre><span style=\"font-weight: 400;\">ConnectionRetryPolicy:'totalTimeout=600,initialRetryDelay=10,retryDelayMultiplier=2.5,<br>maxRetryDelay=400,maxAttempts=5,jittered=True,initialRpcTimeout=10,<br>rpcTimeoutMultiplier=2.0, maxRpcTimeout=30'</span></pre>\n<h4>b) <strong>Retryable error codes</strong> (json formatted exceptions)</h4>\n<p>We can specify either the http error <span style=\"font-weight: 400;\"><strong>\"code\"</strong></span> (like 500, 503 etc) from the <strong>json payload</strong><br>or the string literal <span style=\"font-weight: 400;\">\"<strong>reason\" </strong>from the <strong>json payload </strong></span>(like backendError, internalError etc) <br>or both.</p>\n<p>For possible error codes, refer this page : <a href=\"https://cloud.google.com/bigquery/docs/error-messages\">https://cloud.google.com/bigquery/docs/error-messages</a></p>\n<p>A typical configuration would look like below</p>\n<pre><span>_h_RetriableErrorCodes:'backendError,badgateway,badRequest,accessDenied,internalError,jobinternalerror,400,410,500,502,503,400'<br></span></pre>\n<p>If there is a failure in <span>uploading or merging</span> a batch and the exception contains a retryable error code internally <span>BigQueryWriter will retry the batch based on the retry parameters specified in ConnectionRetryPolicy</span>.</p>\n<p>Note : Only a few error codes(backendError, internalError etc) are encouraged to be a part of \"<span>_h_RetriableErrorCodes</span>\" property. Error codes like \"notFound\" or \"quotaExceeded\" cannot be retried as an example</p>\n<h4>c) <strong>Retryable error text</strong> (non-json formatted exceptions)</h4>\n<p>Starting with version 4.0.4.3 <span>malformed exception messages can be retried as following,</span></p>\n<pre><span><code class=\"code css-9z42f9\" data-renderer-mark=\"true\">_h_RetriableErrorText : '502 Bad Gateway, 500 Internal Error'</code></span><span></span></pre>\n<p><span>The default values for both the hidden properties are</span></p>\n<pre><span><code class=\"code css-9z42f9\" data-renderer-mark=\"true\">_h_RetriableErrorCodes : 'jobratelimitexceeded',</code><br><code class=\"code css-9z42f9\" data-renderer-mark=\"true\">_h_RetriableErrorText : '502 Bad Gateway',</code></span></pre>\n<p><span>Apart from above, all exceptions that are marked retryable by BigQuery will be retried. </span><br><span>i.e, all 5xx errors will be retried.</span></p>"} {"page_content": "<h3>Goal:</h3>\n<p>The goal of this note is to document logdump commands needed to troubleshoot</p>\n<p>issues reported against GGTrailReader/ TrailParser</p>\n<h3>Scenarios</h3>\n<p>1. 
To search records based on SCN (system change number) of Oracle source db</p>\n<p>Say the SCN of interest is 100</p>\n<p data-renderer-start-pos=\"616\">logdump&gt;open &lt;traifile seqno&gt;<br>logdump&gt;ghdr on<br>logdump&gt;detail data on<br>logdump&gt;ggstoken detail<br>logdump&gt;filter include ggstoken logcsn &gt;= 100<br>logdump&gt;n</p>\n<p data-renderer-start-pos=\"616\">2. To find the minimum and maximun SCN in a trail</p>\n<p data-renderer-start-pos=\"616\">logdump&gt;open &lt;traifile seqno&gt;<br>logdump&gt;file header detail on<br>logdump&gt; n</p>\n<pre data-renderer-start-pos=\"616\">TokenID x3a ':' FirstCSN Info x00 Length 129 <br>0e31 3630 3239 3637 3938 3434 3837 3500 0000 0000 | .16029679844875..... <br>TokenID x3b ';' LastCSN Info x00 Length 129 <br>0e31 3630 3239 3639 3035 3939 3132 3800 0000 0000 | .16029690599128..... </pre>\n<p data-renderer-start-pos=\"616\">3. To search for a given value </p>\n<p data-renderer-start-pos=\"616\">The column value can be searched by hex or string</p>\n<p data-renderer-start-pos=\"616\">logdump&gt;open &lt;traifile seqno&gt;<br>logdump&gt;ghdr on<br>logdump&gt; detail data on<br>logdump&gt; filter include hex /3139 3637/<br>logdump&gt; n</p>\n<pre data-renderer-start-pos=\"616\">Column 0 (x0000), Len 8 (x0008) <br>0000 0400 3139 3637 | ....1967 </pre>\n<p data-renderer-start-pos=\"616\">logdump&gt; filter include string \"shopping\"<br>logdump&gt; n</p>\n<pre data-renderer-start-pos=\"616\">Column 6 (x0006), Len 35 (x0023) <br>0000 1f00 2f73 686f 7070 696e 672f 705f 696d 672f | ..../shopping/p_img/ <br>3130 302f 3030 3938 365f 322e 6a70 67 | 100/00986_2.jpg </pre>\n<p data-renderer-start-pos=\"616\">The complete output is omitted for brevity</p>\n<p data-renderer-start-pos=\"616\">4. To search by table name, operation type</p>\n<p data-renderer-start-pos=\"616\">logdump&gt; open &lt;trail sequence number&gt;<br>logdump&gt; ghdr on<br>logdump&gt; detail data on<br>logdump&gt; filter inc filename SBB.SALE_M_BER<br>logdump&gt; filter inc rectype Insert<br>logdump&gt; filter match all<br>logdump&gt; n<br>logdump&gt; filter clear</p>\n<p data-renderer-start-pos=\"616\"> </p>\n<p data-renderer-start-pos=\"616\"> </p>\n<p data-renderer-start-pos=\"616\"> </p>"} {"page_content": "<h3>Issue:</h3>\n<p> Postgres table has a primary key and if an update is performed on non primary key column, then PostgresReader captures it as pkupdate. The mon metrics shows table level statistics as PKUPDATES. If target is a DatabaseWriter, the update uses all columns in SET/ WHERE clause leading to slow performance.</p>\n<p>Below is the update statement on the DatabaseWriter (In this example Target is Oracle)</p>\n<pre>UPDATE \"STRIIM\".\"ACCOUNTS\" SET \"USER_ID\" = :1 , \"USERNAME\" = :2 , \"PASSWORD\" = :3 , \"EMAIL\" = :4 , \"CREATED_ON\" = :5 , \"LAST_LOGIN\" = :6 <br><strong>where \"USER_ID\" = :7 and \"USERNAME\" = :8 and \"PASSWORD\" = :9 and \"EMAIL\" = :10 and \"CREATED_ON\" = :11 and \"LAST_LOGIN\" = :12</strong> </pre>\n<p> </p>\n<h3>Cause:</h3>\n<p> Replica identity is set to FULL at the table level on source Postgres. When replica identity is set to FULL all the columns will be logged in before image and update on non-pk column is considered as primary key update. 
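</p>\n<p>To list every table in the database that currently has replica identity FULL, a query like the following can be used (a sketch; it assumes PostgreSQL 9.5 or later for the regnamespace cast):</p>\n<pre>SELECT relnamespace::regnamespace AS schema_name, relname AS table_name<br>FROM pg_class<br>WHERE relkind = 'r'<br>AND relreplident = 'f';</pre>\n<p>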
The replica identity can be set at table level and valid values are {DEFAULT | USING INDEX <tt class=\"REPLACEABLE c2\">index_name</tt> | FULL | NOTHING}</p>\n<p>To Find the replica identity configuration on Postgres table do following</p>\n<pre><span> \\</span><span class=\"il\">d+</span><span> &lt;Schema&gt;.&lt;Table_name&gt;;</span><br><br><span>eg:</span><br><br>Column | Type | Collation | Nullable | Default | Storage | Stats target | Description <br>------------+-----------------------------+-----------+----------+-------------------------------------------+----------+--------------+-------------<br>user_id | integer | | not null | nextval('accounts_user_id_seq'::regclass) | plain | | <br>username | character varying(50) | | not null | | extended | | <br>password | character varying(50) | | not null | | extended | | <br>email | character varying(255) | | not null | | extended | | <br>created_on | timestamp without time zone | | not null | | plain | | <br>last_login | timestamp without time zone | | | | plain | | <br>Indexes:<br>\"accounts_pkey\" PRIMARY KEY, btree (user_id)<br>\"accounts_email_key\" UNIQUE CONSTRAINT, btree (email)<br>\"accounts_username_key\" UNIQUE CONSTRAINT, btree (username)<br><strong>Replica Identity: FULL</strong><br><br><span class=\"s1\">If it does not show Replica identity it is considered as default.</span><br><br><span class=\"s1\">or </span><br><br><span class=\"s1\">SELECT CASE relreplident<br>WHEN 'd' THEN 'default'<br>WHEN 'n' THEN 'nothing'<br>WHEN 'f' THEN 'full'<br>WHEN 'i' THEN 'index'<br>END AS replica_identity<br>FROM pg_class<br>WHERE oid = 'employee'::regclass; <br></span><br><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>replica_identity<span class=\"Apple-converted-space\"> </span></span><br><span class=\"s1\"> ------------------</span><br><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>FULL</span><br><br><br><br></pre>\n<p>As per <a href=\"https://www.postgresql.org/docs/9.4/sql-altertable.html\">https://www.postgresql.org/docs/9.4/sql-altertable.html</a><br><br><em><strong><tt class=\"LITERAL\">REPLICA IDENTITY </tt></strong></em><br><em>This form changes the information which is written to the write-ahead log to identify rows which are updated or deleted. This option has no effect except when logical replication is in use. <tt class=\"LITERAL\">DEFAULT</tt> (the default for non-system tables) records the old values of the columns of the primary key, if any. <tt class=\"LITERAL\">USING INDEX</tt> records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked <tt class=\"LITERAL\">NOT NULL</tt>. <tt class=\"LITERAL\">FULL</tt> records the old values of all columns in the row. <tt class=\"LITERAL\">NOTHING</tt> records no information about the old row. (This is the default for system tables.) In all cases, no old values are logged unless at least one of the columns that would be logged differs between the old and new versions of the row.</em></p>\n<h3> </h3>\n<h3>Solution:</h3>\n<p> Set the replica identity from FULL to default</p>\n<p>eg:</p>\n<pre class=\"p1\"><em><span class=\"s1\">alter table public.accounts replica identity default;</span></em></pre>\n<p class=\"p1\"><span class=\"s1\">Once you change the replicat identity to default. 
The DatabaseWriter will use only key column(s) in WHERE clause with SET clause containing all the columns.</span></p>\n<p class=\"p1\"><span class=\"s1\">eg:</span></p>\n<pre class=\"p1\"><span class=\"s1\">UPDATE \"STRIIM\".\"</span><span class=\"s2\"><strong>ACCOUNTS</strong></span><span class=\"s1\">\" SET<span class=\"Apple-converted-space\"> </span>\"USERNAME\" = :1<span class=\"Apple-converted-space\"> </span>, \"PASSWORD\" = :2<span class=\"Apple-converted-space\"> </span>, \"EMAIL\" = :3<span class=\"Apple-converted-space\"> </span>, \"CREATED_ON\" = :4<span class=\"Apple-converted-space\"> </span>, \"LAST_LOGIN\" = :5 <br><strong><span class=\"Apple-converted-space\"> </span>where \"USER_ID\" = :6 <span class=\"Apple-converted-space\"> </span></strong></span></pre>\n<p> </p>\n<p> </p>\n<p> </p>"} {"page_content": "<h3>Description:</h3>\n<p><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">Incremental Batch Reader w</span><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">orks like DatabaseReader but has two additional properties, Check Column and Start Position, which allow you to specify that reading will begin at a user-selected position. To specify the starting point, the table(s) to be read must have a column containing either a timestamp or a sequential number. The most common use case is for populating data warehouses.</span></p>\n<p>More details are available here <a href=\"https://www.striim.com/docs/en/incremental-batch-reader.html\">https://www.striim.com/docs/en/incremental-batch-reader.html</a></p>\n<p> </p>\n<h3>Solution:</h3>\n<p>To be able to read multiple tables in parallel use following properties and set the value of 'n' to the number of tables part of the Tables list.</p>\n<pre><br>ConnectionpoolSize :'n',<br><br>Where n is the number of active connections.<br><br>By default, the value of n is 1. This is the reason you could only see one table <br>being read at a time. Please increase it. This property will allow tables to be read <br>parallel without waiting for a connection. 
<br><br>ThreadpoolSize :'n',<br><br>Use this to specify how many tables you want to read in parallel.<br><br>If ThreadPoolSize is 3, then three tables can be read parallel at the same time.<br>If ConnectionpoolSize is 3, the three tables can obtain the connection and start <br>publishing the events without waiting.</pre>\n<p> </p>"} {"page_content": "<p dir=\"auto\"> </p>\n<p dir=\"auto\"><span class=\"wysiwyg-underline\"><strong>Issue:</strong></span></p>\n<p dir=\"auto\">Application is configured to read from GGtrails using filereader with GGtrailparser and it has following CQ.</p>\n<pre dir=\"auto\"><span>CREATE OR REPLACE CQ CQ_FILTER_DATE</span><br><span>INSERT INTO STRM_CQ_FILTER_DATE</span><br><span>SELECT * FROM strm_ggfilereader g</span><br><span>WHERE to_string(meta(g,\"TableName\")) == 'SBB.SBB_DOWN_LINE'</span><br><span>and</span><br><span>(</span><br><span>TO_STRING(META(g, 'OperationName')) in ('INSERT','UPDATE')</span><br><span>or</span><br><span>( TO_STRING(META(g, 'OperationName')) == 'DELETE' and DSUBTRACT(DNOW(),</span><br><span>DDAYS(3)) &gt; TO_DATE(data[0]))</span><br><span>);</span></pre>\n<p dir=\"auto\">App fails with below error</p>\n<pre><span>2021-09-02 17:17:48,707 @S172_22_39_173 @atomy.GGFR_to_AWS_Kinesis_CDC_007 -ERROR com.webaction.runtime.components.CQTask.receive() : Problem running CQ CQ_preProcess_NONE_SBBDOWNLINE for event TE([{\"_id\":null,\"timeStamp\":1630570668704,\"originTimeStamp\":0,\"key\":null,\"sourceUUID\":null,\"data\":[[2021,3,24,0,0,0,0],\"TEST\",\"12437348\",\"1344\",\"0\",\"215\",\"0\",\"330\",\"2000\",[2021,3,24,14,14,50,0]],\"metadata\":{\"TableID\":1,\"TableName\":\"PAL.CUSTOMER\",\"TxnID\":\"0.10.33434\",\"OperationName\":\"DELETE\",\"FileName\":\"et000000000\",\"FileOffset\":12074162,\"TimeStamp\":1616563185000,\"Oracle ROWID\":\"\",\"CSN\":\"56773021527\",\"RecordStatus\":\"VALID_RECORD\"},\"userdata\":null,\"before\":null,\"dataPresenceBitMap\":\"fwc=\",\"beforePresenceBitMap\":\"AAA=\",\"typeUUID\":{\"uuidstring\":\"01ec0bc5-158a-e611-9184-00155dfe42d6\"}}] [] &lt;empty range&gt;) :</span><br><strong>java.lang.IllegalArgumentException: Passed Parameter of type org.joda.time.LocalDateTime cannot be converted to DateTime</strong><br><strong>at com.webaction.runtime.BuiltInFunc.TO_DATE(BuiltInFunc.java:374)</strong><br><span>at QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.runImpl(QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.java)</span><br><span>at com.webaction.runtime.components.CQSubTask.processBatch(CQSubTask.java:118)</span><br><span>at com.webaction.runtime.components.CQSubTask.processAdded(CQSubTask.java:136)</span><br><span>at com.webaction.runtime.components.CQSubTask.processNotAggregated(CQSubTask.java:173)</span><br><span>at QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.run(QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.java)</span><br><span>at com.webaction.runtime.components.CQTask.receive(CQTask.java:347)</span><br><span>at com.webaction.runtime.DistributedRcvr.doReceive(DistributedRcvr.java:245)</span><br><span>at com.webaction.runtime.DistributedRcvr.onMessage(DistributedRcvr.java:110)</span><br><span>at com.webaction.jmqmessaging.InprocAsyncSender.processMessage(InprocAsyncSender.java:52)</span><br><span>at 
com.webaction.jmqmessaging.AsyncSender$AsyncSenderThread.run(AsyncSender.java:124)</span><br><br>Java.lang.IllegalArgumentException: Passed Parameter of type org.joda.time.LocalDateTime cannot be converted to DateTime<br>at com.webaction.runtime.BuiltInFunc.TO_DATE(BuiltInFunc.java:374)<br>at QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.runImpl(QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.java)<br>at com.webaction.runtime.components.CQSubTask.processBatch(CQSubTask.java:118)<br>at com.webaction.runtime.components.CQSubTask.processAdded(CQSubTask.java:136)<br>at com.webaction.runtime.components.CQSubTask.processNotAggregated(CQSubTask.java:173)<br>at QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.run(QueryExecPlan_atomy_CQ_preProcess_NONE_SBBDOWNLINE_atomy_GGFR_CDC_PreProcessed_Stream_007_g.java)<br>at com.webaction.runtime.components.CQTask.receive(CQTask.java:347)<br>at com.webaction.runtime.DistributedRcvr.doReceive(DistributedRcvr.java:245)<br>at com.webaction.runtime.DistributedRcvr.onMessage(DistributedRcvr.java:110)<br>at com.webaction.jmqmessaging.InprocAsyncSender.processMessage(InprocAsyncSender.java:52)<br>at com.webaction.jmqmessaging.AsyncSender$AsyncSenderThread.run(AsyncSender.java:124)</pre>\n<p><span class=\"wysiwyg-underline\"><strong>Cause:</strong></span></p>\n<p> GGTrail date formats are written in localdateformat. So it returns the value of type <span>org.joda.time.LocalDateTime. When you try to convert LocalDateTime to DateTime using TO_DATE function. It throws an error </span></p>\n<p> </p>\n<p><span class=\"wysiwyg-underline\"><strong>Solution:</strong></span></p>\n<p>Convert the localdatetime to string and then convert string to date.</p>\n<p>eg:</p>\n<p> TO_DATE(data[0].toString())</p>\n<p>So the CQ would be </p>\n<pre>SELECT * FROM GGFR_CDC_PreProcessed_Stream_007 g<br>WHERE to_string(meta(g,\"TableName\")) == 'SBB.SBB_DOWN_LINE'<br>and<br>(<br> TO_STRING(META(g, 'OperationName')) in ('INSERT','UPDATE')<br> or<br> ( TO_STRING(META(g, 'OperationName')) == 'DELETE' and DSUBTRACT(DNOW(),DDAYS(3)) &gt; TO_DATE(data[0].toString())));</pre>"} {"page_content": "<p>Striim <span>Product Support requests end users to gather jstack/ heap dump etc which needs the Striim server process ID as an argument. There are several ways to get this information and this note documents the options available currently</span></p>\n<p>Note: Following commands needs to be executed as the Striim owner</p>\n<p>1. <strong>ps</strong> command which is used to list the currenlty running processes</p>\n<pre class=\"p1\"><span class=\"s1\">ps ax | grep -i \"com.webaction.runtime.Server\" | grep java | grep -v grep | awk '{print $1}'</span></pre>\n<p>2. java process status command which lists the JVM's running currently</p>\n<pre><strong>jps</strong></pre>\n<pre>$ jps<br><br>18771 Kafka<br>30089 Jps<br>18540 QuorumPeerMain<br>26941 derbyrun.jar<br>26958 Server<br><br>$ jps| grep -i Server| awk '{print $1}'</pre>\n<p>Look for the \"Server\" pid in the list</p>\n<p>Note: This command requires the installation of<strong> java-1.8.0-openjdk-devel-1.8.0*</strong> package to work in openJDK environments</p>"} {"page_content": "<p><span>Striim Product Support requests commonly require diagnostics during incident responses to expedite identification of causes for a product behavior. 
The goal of this article is provide a quick reference check list for common symptoms and the diagnostic artifacts a Striim Product Support Engineers may request during a support follow up interaction. Please bookmark this article for future reference as we will continue to add-on and update this information with additional tips and tools for quick reference</span></p>\n<p> </p>\n<h1>App crash</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>&lt;striim home&gt;/logs/striim.server.log</li>\n<li>TQL of the app</li>\n</ul>\n</li>\n</ul>\n<h1>App Hang</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>&lt;striim home&gt;/logs/striim.server.log</li>\n<li>console ; mon &lt;app/ source/ target&gt; -- repeated twice in 5 minute interval</li>\n<li>console; show &lt;app&gt; checkpoint history -- repeated twice in 5 minute interval</li>\n<li>console; describe &lt;app&gt; -- repeated twice in 5 minute interval</li>\n<li>OS: CPU/Memory usage</li>\n<li>jstack outputs 3 times with 10 second interval</li>\n<li>TQL of the app</li>\n</ul>\n</li>\n</ul>\n<h1>Oracle CDC Performance/Slowness</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>console ; mon &lt;source/stream/target&gt; -- repeated twice in 5 minute interval</li>\n<li>Check oracle redo generating rate. if more than 1TB/day, get dry-run etst result.</li>\n</ul>\n</li>\n<li style=\"list-style-type: none;\"> https://support.striim.com/hc/en-us/articles/360003577053-How-to-measure-Oracle-Logminer-Read-Rate- \n<ul>\n<li>OS: CPU/Memory usage</li>\n<li>Oracle: alert log</li>\n<li>TQL of the app</li>\n</ul>\n</li>\n</ul>\n<h1>OJet CDC Performance/Slowness</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>DB</li>\n</ul>\n</li>\n<li style=\"list-style-type: none;\">\n<pre>col MEMORY_ALLOCATED_KNLASG HEADING 'Used MB'<br>col STRMPOOL_SIZE_KNLASG HEADING 'Total Allocated MB'<br>select MEMORY_ALLOCATED_KNLASG/1024/1024 as MEMORY_ALLOCATED_KNLASG , <br>STRMPOOL_SIZE_KNLASG/1024/1024 as STRMPOOL_SIZE_KNLASG from x$knlasg;<br><br>select capture_name,sga_used/(1024*1024) as used_mb, sga_allocated/(1024*1024) <br>as alloced_mb,total_messages_captured as msgs_captured, total_messages_enqueued <br>as msgs_enqueued from gv$xstream_capture order by capture_name;<br><br>col APPLY_name for a30<br>SELECT r.inst_id,ap.APPLY_NAME,<br>DECODE(ap.APPLY_CAPTURED,<br>'YES','Captured LCRS',<br>'NO','User-Enqueued','UNKNOWN') APPLY_CAPTURED,<br>SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,<br>r.SGA_USED/1024/1024 sga_used_mb,<br>r.sga_allocated/1024/1024 sga_allocated_mb<br>FROM gV$XSTREAM_APPLY_READER r, gV$SESSION s, DBA_APPLY ap<br>WHERE r.SID = s.SID AND<br>r.SERIAL# = s.SERIAL# AND<br>r.inst_id = s.inst_id AND<br>r.APPLY_NAME = ap.APPLY_NAME order by ap.apply_name;</pre>\n<ul>\n<li>console\n<pre>show &lt;source_name&gt; status ; <br>show &lt;source_name&gt; status details ;<br>show &lt;source_name&gt; memory ; <br>show &lt;source_name&gt; memory details;</pre>\n</li>\n</ul>\n</li>\n</ul>\n<h1>Striim Monitor Metrics</h1>\n<ul>\n<li>console; mon all</li>\n<li>console; status Global.MonitoringSourceApp</li>\n<li>console; status Global.MonitoringProcessApp</li>\n<li>console: select * from <span>Global.MonitoringStream1; -- check if the return event is old or not.</span>\n</li>\n<li>jstack outputs 3 times with 10 second interval</li>\n<li>Resource usage: CPU, Memory</li>\n<li>turn on trace for a 3 minutes, and get server and debug logs under ./logs/</li>\n</ul>\n<p><code class=\"code css-9z42f9\" data-renderer-mark=\"true\">SET 
LOGLEVEL = {'monitor' : 'debug'};</code></p>\n<p>wait for 3 min</p>\n<p><code class=\"code css-9z42f9\" data-renderer-mark=\"true\">SET LOGLEVEL = {'monitor' : 'off'};</code></p>\n<h1>Striim server cpu increase</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>&lt;striim home&gt;/logs/striim.server.log</li>\n<li>$ top -n 1 -H -p &lt;striim server pid&gt;</li>\n<li>$ jstack &lt;striim server pid&gt; &gt;&gt; jstack_$(hostname).$(date +%F).log</li>\n<li>console; mon &lt;server name&gt;</li>\n<li>console; mon all;</li>\n</ul>\n</li>\n</ul>\n<h1>Striim server Hang (connect to UI/console)</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>&lt;striim home&gt;/logs/striim.server.log</li>\n<li>JFR for 5 minutes. </li>\n</ul>\n</li>\n</ul>\n<p> $ jcmd &lt;striim server pid&gt; VM.unlock_commercial_features<br> $ jcmd &lt;striim server pid&gt; JFR.start duration=300s filename=$(date +%Y%m%d_%H%M).jfr</p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>$ jstack &lt;striim server pid&gt; &gt;&gt; jstack_$(hostname).$(date +%F).log</li>\n</ul>\n</li>\n</ul>\n<h1>Striim server Memory increase</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li><span>For all the running apps, from console: show &lt;namespace&gt;.&lt;appname&gt; memsize;</span></li>\n<li>\n<p>JFR for 5 minutes. </p>\n</li>\n</ul>\n</li>\n</ul>\n<p> $ jcmd &lt;striim server pid&gt; VM.unlock_commercial_features<br> $ jcmd &lt;striim server pid&gt; JFR.start duration=300s filename=$(date +%Y%m%d_%H%M).jfr</p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>$ sar -r</li>\n<li>$ free -m</li>\n<li>&lt;striim home&gt;/logs/striim.server.log</li>\n<li>&lt;striim home&gt;/<span>hs_er*.log</span><span></span>\n</li>\n<li><span>striim_memcheck.sh output (see reference below)</span></li>\n</ul>\n</li>\n</ul>\n<h1>Striim app/component Memory Usage</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>show &lt;app|component&gt; memsize;</li>\n</ul>\n</li>\n</ul>\n<h1>Striim server Startup/Upgrade error</h1>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>startUp.properties</li>\n<li>ls -lrRt /opt/striim &gt;&gt; ls_$(date +\"%m-%d-%Y\").log</li>\n<li>&lt;striim home&gt;/logs/striim-node.log (if present)</li>\n<li>&lt;striim home&gt;/logs/striim.server.log</li>\n<li>MDR dump: (1) derby: targ wactionrepos directory; (2) oracle: export dump; (3) postgres: pg_dump as sql file</li>\n</ul>\n</li>\n</ul>\n<h1>References</h1>\n<div class=\"\">* Java VM Heap Memory Utilization Check Script: <a class=\"\" title=\"https://support.striim.com/hc/en-us/articles/115015683148-What-shall-I-collect-when-I-suspect-there-is-a-memory-leak-in-Striim-\" href=\"https://support.striim.com/hc/en-us/articles/115015683148-What-shall-I-collect-when-I-suspect-there-is-a-memory-leak-in-Striim-\">https://support.striim.com/hc/en-us/articles/115015683148-What-shall-I-collect-when-I-suspect-there-is-a-memory-leak-in-Striim-</a>\n</div>\n<div class=\"\">* How to find the Striim server process id: <a href=\"https://support.striim.com/hc/en-us/articles/4407528999703-How-to-find-the-Striim-Server-PID-Process-ID\">https://support.striim.com/hc/en-us/articles/4407528999703-How-to-find-the-Striim-Server-PID-Process-ID</a>\n</div>\n<div class=\"\"></div>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p>User has used system Health REST API to check the health of apps on striim. When parsed by a tool JSON received from the striim gives errors. 
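</p>\n<p>One quick way to confirm the symptom from a saved copy of the REST response (the file name here is hypothetical) is to parse it and count the appHealthMap entries:</p>\n<pre>$ python3 -c \"import json; h = json.load(open('health.json')); print('appHealthMap entries:', len(h.get('appHealthMap', {})))\"</pre>\n<p>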
The “appHealthMap” map was empty, which caused the error.</p>\n<p>In the JSON output received, appHealthMap is empty:</p>\n<pre>\"cacheHealthMap\": {},<br>\"clusterSize\": 0,<br><strong>\"appHealthMap\": {},</strong></pre>\n<p><strong>Cause:</strong></p>\n<p>There is a known bug: DEV-24126 (Health REST API result has too little information).</p>\n<p><strong>Solution:</strong></p>\n<p>Upgrade to 3.10.3.3 or a later version. To download the latest version, use the note below.</p>\n<p><a href=\"https://support.striim.com/hc/en-us/articles/229277848-Download-of-Latest-Version-of-Striim\" target=\"_blank\" rel=\"noopener\">Download_Striim_Latest_version</a></p>"} {"page_content": "<p>For SQL Server CDC capture, the CDC shadow tables have to be populated first by the capture agent. If the agent is conservative and slow, it may introduce a lag of up to several days.</p>\n<p>For this issue, the solution is to add trace flag 1448 (<a href=\"https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15\">https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15</a>).</p>\n<p>Please contact Microsoft support first, before adding this trace flag.</p>"} {"page_content": "<h3>Issue:</h3>\n<p>BigQuery Writer fails with \"Required field cannot be null\":</p>\n<pre>\"code\" : 400,<br>\"errors\" : [ {<br>\"domain\" : \"global\",<br>\"location\" : \"q\",<br>\"locationType\" : \"parameter\",<br><strong>\"message\" : \"Required field CRE_USR_ID cannot be null\",</strong><br>\"reason\" : \"invalidQuery\"<br>} ],</pre>\n<p><strong>Cause 1:</strong></p>\n<p>When the app is restarted, transactions are replayed from the restart position. The source is Oracle/GGTrailReader, only primary-key supplemental logging is enabled, the target table has several NOT NULL columns, and BigQuery Writer is in MERGE mode with Optimized Merge set to true to apply these partial images. This issue can also occur when full-column logging is not enabled and BigQuery Writer is set to APPENDONLY mode.</p>\n<p>Consider the following example (BigQuery Writer is in MERGE mode with Optimized Merge set to true):</p>\n<p>Tx1: Insert<br>Tx2: Update<br>Tx3: Delete</p>\n<p>Tx1 is integrated into BigQuery and the checkpoint is updated. Tx2 and Tx3 are then integrated and acknowledged, but the checkpoint interval has not been reached, so the restart position is still at Tx2 even though the row no longer exists in the target (it was deleted by Tx3). Now the app is stopped. On restart, replay begins at Tx2, which is an update; because the row was already deleted, BigQuery Writer tries to insert the update operation. Since not all columns were logged on the source, the app fails with \"Required field cannot be null\".</p>\n<p><strong>Cause 2:</strong></p>\n<p>Starting with Striim version 3.10.3.6, a partition feature was implemented to avoid full table scans. This improves BigQuery Writer performance and reduces the cost of full table scans. The partition column should be a column from the source, and supplemental logging should be enabled on the source for this column.</p>
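<p>For example, if the partition column is a source column named LAST_UPDATED on a table SCOTT.ACCOUNTS (both names are hypothetical), supplemental logging for it could be enabled on the Oracle source along these lines:</p>\n<pre>ALTER TABLE SCOTT.ACCOUNTS ADD SUPPLEMENTAL LOG GROUP slg_accounts_part (LAST_UPDATED) ALWAYS;<br>-- or, more broadly, log all columns:<br>ALTER TABLE SCOTT.ACCOUNTS ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;</pre>\n<p>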
</span><br><br><span>In users case partition column was set on a column which an extra column on target populated through CQ using dnow().</span></p>\n<p><span>eg:</span></p>\n<p>CQ: SELECT putUserData(s, 'replication_time',dnow(),'OpType’, META(x, ‘OperationName’))</p>\n<p>Table Mapping in BQ Tables: 'HIMA.ACCOUNTS,himadataset.ACCOUNTS COLUMNMAP(BQ_replication_time=@USERDATA (replication_time))</p>\n<p>The value of BQ_replication_time target table is 2021-08-04T08:00:11.063000 and but merge query is trying to search for latest date dnow() 2021-08-05T12:12:45.860000. So it is not able to find the record and it will try to insert the record. This will result in inserting the record. If all column logging is enabled on source record will inserted or it fails with Required field cannot be null if all column logging is not enabled on source.</p>\n<p><span>This is incorrect configuration.</span><br><br><strong>Solution 1: If you version is pre-3.10.3.6 version. Then upgrade to 3.10.3.6 version.</strong><br><span>Starting from 3.10.3.6 version, during crash or stop of the application, the target acknowledged positions(app checkpoint) are persisted to Metadata repository even though recovery/checkpoint interval has not reached . This will reduce the occurence of the above mentioned issue. </span><br><br><strong>Solution 2:</strong><br><span>Ensure that you are on 3.10.3.6 version and partition column should be a source column with supplemental logging enabled on source and the datatype should be Numeric or Date datatype.</span></p>"} {"page_content": "<h3> </h3>\n<h2>Symptoms:</h2>\n<p>All user clicks and navigation is slower in Striim web UI and/or Striim Server shows high CPU usage.</p>\n<p> </p>\n<h2>Observation:</h2>\n<p>High CPU usage can be caused by various reasons and following are some of the generic cases</p>\n<p> </p>\n<h4>1) check if \"debug\" is enabled on the Striim server</h4>\n<p>Following are ways to confirm it</p>\n<p>a) W (admin)&gt; set;</p>\n<p>The root level should be set to WARN and if debug was set for individual loggers please disable it</p>\n<p>b) Check the hothreads output</p>\n<p>W (admin) &gt; list servers:</p>\n<p>W (admin) &gt; mon &lt;server name&gt;;</p>\n<p>│ 25.5% (127.6ms out of 500ms) cpu usage by thread 'qtp361859039-434566'<br>│ 3/10 snapshots sharing following 23 elements <br>│ org.apache.log4j.Category.getEffectiveLevel(Category.java:442) <br>│ org.apache.log4j.Category.<strong>isDebugEnabled</strong>(Category.java:736)</p>\n<p>c) check &lt;STRIIM HOME&gt;/conf/<span class=\"s1\">log4j.server.properties</span></p>\n<p class=\"p1\"><span class=\"s1\">log4j.rootLogger should be set to <em>INFO</em></span></p>\n<p class=\"p1\"> </p>\n<h4 class=\"p1\"><span class=\"s1\">2) check if \"parallelthreads\" are enabled in the target writers.</span></h4>\n<p class=\"p1\"><span class=\"s1\">With parallelism enabled it is expected to see a increase in the CPU usage. Alternatively multiple writers can be configured to split the load rather than having a single writer with parallelthreads enabled.</span></p>"} {"page_content": "<p>It is common to use postgres as Striim MDR. When installing Striim on GCP, SSL connection to Google Cloud SQL Postgres may be used.<br>This is supported in 3.10.3.6 and up.</p>\n<p>Following are the steps of config and troubleshooting:</p>\n<p> </p>\n<p>1. 
create Google cloud sql postgres instance.<br>- login to gcp<br>- select SQL, create DB instance<br>- select Postgres and related settings.<br>- creating instance (it will take a few min).<br>e.g., in gcp, can be created from web login<br>assuming db=striimrepo</p>\n<p> </p>\n<p>2. create DB user<br>login as superuser, through shell in the instance.<br>pg on gcp, default 'postgres' user is superuser</p>\n<p>create user striim;<br>GRANT ALL PRIVILEGES ON DATABASE striimrepo to striim;<br>\\password striim<br>\\q</p>\n<p> </p>\n<p>3. setup normal connection (enable network access):<br>- search from browser for 'My IP Address'<br>- add it to Connection -&gt; Networking -&gt; Public IP<br>- test with java code and psql</p>\n<p> </p>\n<p>4. test connection:<br>e.g.,<br>psql -d striimrepo -h 12.34.56.78 -U striim</p>\n<p> </p>\n<p>5. install striim tables<br>cd $STRIIM_HOME/conf/<br>psql -d striimrepo -h 12.34.56.78 -U striim -f DefineMetadataReposPostgres.sql</p>\n<p>\\q</p>\n<p> </p>\n<p>6. set up SSL connection<br>- Connection -&gt; Security<br>(1) check \"Allow only SSL connection\"<br>(2) create client certificate<br>(3) download the 3 pem files: server-ca.pem, client-cert.pem, client-key.pem</p>\n<p> </p>\n<p>7. test with psql:<br>psql \"sslmode=verify-ca sslrootcert=server-ca.pem sslcert=client-cert.pem sslkey=client-key.pem hostaddr=12.34.56.78 port=5432 user=Striim dbname=striimrepo\"</p>\n<p>although this works, jdbc connection failed with:<br>Could not read SSL key file /Users/myuser/Downloads/client-key.pem?user=striim.</p>\n<p> </p>\n<p>8. change key from der to pk8<br>openssl pkcs8 -topk8 -outform DER -in client-key.pem -out client-key.pem.pk8 -nocrypt<br>chmod 600 client-key.pem.pk8</p>\n<p> </p>\n<p>9. test jdbc connection:<br>in attached JDBCExample.java file<br>(1) modify connection URL (line 21) to:<br>\"jdbc:postgresql://12.34.56.78:5432/striimrepo?stringtype=unspecified&amp;ssl=true&amp;sslmode=verify-ca&amp;sslrootcert=/Users/myuser/Downloads/server-ca.pem&amp;sslcert=/Users/myuser/Downloads/client-cert.pem&amp;sslkey=/Users/myuser/Downloads/client-key.pem.pk8\", \"striim\", \"password\")) {</p>\n<p> </p>\n<p>(2) compile the java code<br>javac JDBCExample.java<br>(this will generate a file: JDBCExample.class)</p>\n<p> </p>\n<p>(3) test connection.<br>example:<br>$ time java -cp .:&lt;Striim_HOME&gt;/lib/postgresql-42.2.2.jar JDBCExample<br>Connected to the database!<br>....</p>\n<p> </p>\n<p>10. after confirming the SSL connection through jdbc to postgres on gcp, setup postgres MDR user password in Striim keystore:<br>run sysConfig.sh to setup MDR password</p>\n<p> </p>\n<p>11. config in startUp.properties<br>example:<br>MetadataDb=postgres<br>#MetaDataRepositoryLocation=12.34.56.78:5432/striimrepo?stringtype=unspecified<br>MetaDataRepositoryLocation=12.34.56.78:5432/striimrepo?stringtype=unspecified&amp;ssl=true&amp;sslmode=verify-ca&amp;sslrootcert=/Users/myuser/Downloads/server-ca.pem&amp;sslcert=/Users/myuser/Downloads/client-cert.pem&amp;sslkey=/Users/myuser/Downloads/client-key.pem.pk8<br>MetaDataRepositoryDBname=striimrepo<br>MetaDataRepositoryUname=striim</p>\n<p>12. Similar connection URL also works for DBReader and DBWriter on Cloud SQL PG with SSL (example is in attached tql file)</p>"} {"page_content": "<p>Environment :<br>Oracle Database : Any Oracle Version.</p>\n<p>Issue : User password contains special characters ( ! or $ or !$)</p>\n<p><strong>Error: If the password contains special character $. 
It fails with below error</strong></p>\n<pre><em>Striim$ bin/schemaConversionUtility.sh -s=\"oracle\" -d=\"jdbc:oracle:thin:@192.168.1.50:1521/orcl\" -u=\"striim\" -p=\"Stri!12$3\" -b=\"HIMA.EMPLOYEE\" -t=\"postgres\"</em><br><em>-bash: !12: event not found</em><br><br><br><br></pre>\n<p>If it contains only $ and it fails with below error</p>\n<p>Striim$ bin/schemaConversionUtility.sh -s=\"oracle\" -d=\"jdbc:oracle:thin:@192.168.1.50:1521/orcl\" -u=\"striim\" <strong>-p=\"Striim12$3\"</strong> -b=\"HIMA.EMPLOYEE\" -t=\"postgres\"</p>\n<p> </p>\n<pre>Striim$ bin/schemaConversionUtility.sh -s=\"oracle\" -d=\"jdbc:oracle:thin:@192.168.1.50:1521/orcl\" -u=\"striim\" -p=\"Striim12$3\" -b=\"HIMA.EMPLOYEE\" -t=\"postgres\"<br><br>java.sql.SQLException: ORA-01017: invalid username/password; logon denied​<br><br>at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)<br>at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:441)<br>at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:436)<br>at oracle.jdbc.driver.T4CTTIfun.processError(T4CTTIfun.java:1061)<br>at oracle.jdbc.driver.T4CTTIoauthenticate.processError(T4CTTIoauthenticate.java:550)<br>at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:623)<br>at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)<br>at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:499)<br>at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:1279)<br>at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:663)<br>at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:688)<br>at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:39)<br>at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:691)<br>at java.sql.DriverManager.getConnection(DriverManager.java:664)<br>at java.sql.DriverManager.getConnection(DriverManager.java:208)<br>at com.striim.connection.JDBCConnection.connect(JDBCConnection.java:29)<br>at com.striim.connection.StriimConnection.&lt;init&gt;(StriimConnection.java:80)<br>at com.striim.connection.JDBCConnection.&lt;init&gt;(JDBCConnection.java:24)<br>at com.striim.connection.OracleConnection.&lt;init&gt;(OracleConnection.java:8)<br>at com.striim.connection.StriimConnection.getConnection(StriimConnection.java:53)<br>at com.striim.connection.StriimConnection.getConnection(StriimConnection.java:34)<br>at com.striim.schema.conversion.SchemaConverter.initializeConnection(SchemaConverter.java:63)<br>at com.striim.schema.conversion.SchemaConverter.&lt;init&gt;(SchemaConverter.java:51)<br>at com.striim.schema.api.SchemaConversionHelper.convertSchemaWithFk(SchemaConversionHelper.java:236)<br>at com.striim.schemaconversion.utility.SchemaConversionUtility.main(SchemaConversionUtility.java:82)</pre>\n<p> </p>\n<p>Even sqlplus fails with not using single quotes for password contains '$' symbol</p>\n<pre>Striim$ ssh oracle@192.168.1.50<br>oracle@192.168.1.50's password:<br>Last login: Wed Jul 14 22:41:30 2021 from 192.168.1.7<br>-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory<br>-bash-4.2$ . oraenv<br>ORACLE_SID = [orcl] ? orcl<br>The Oracle base has been set to /u01/app/oracle<br>-bash-4.2$ sqlplus striim/Striim12$3<br><br>SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 14 22:43:06 2021<br>Copyright (c) 1982, 2016, Oracle. 
All rights reserved.<br>ERROR:<br>ORA-01017: invalid username/password; logon denied<br>bash-4.2$ sqlplus striim/'Striim12$3'</pre>\n<p> </p>\n<p><strong>Solution:</strong><br>Use single quotes for password command<br>Striim$ bin/schemaConversionUtility.sh -s=\"oracle\" -d=\"jdbc:oracle:thin:@192.168.1.50:1521/orcl\" -u=\"striim\" -p=<strong>'Striim12$3'</strong> -b=\"HIMA.EMPLOYEE\" -t=\"postgres\"</p>"} {"page_content": "<p>Oracle<strong> CDB/PDB ORA-01435: user does not exist: </strong></p>\n<p>Environment :<br>Oracle 12c/18c/19c (CDB/PDB ) (It’s applicable for all CDB database running from version 12c)</p>\n<p> </p>\n<p><span class=\"wysiwyg-underline\"><strong>Issue:</strong></span></p>\n<p>Oracle Reader fails with the below error , though user exists and have necessary permission to perform logminer</p>\n<pre>2021-07-08 15:21:03,810 @S10_204_112_147 @admin.PSP_NPP_ORA2PG_IL2 -WARN com.striim.alm.alm.logminer.LogminerExecutor.start() : Starting logminer at : 146882010608 – 146882010608<br>2021-07-08 15:21:03,823 @S10_204_112_147 @admin.PSP_NPP_ORA2PG_IL2 -INFO com.webaction.appmanager.AppManager.changeApplicationState() : Change Striim Application State: STATUS: admin.PSP_NPP_ORA2PG_IL2<br>2021-07-08 15:21:03,825 @S10_204_112_147 @admin.PSP_NPP_ORA2PG_IL2 -ERROR com.webaction.runtime.components.Source.start() : ORA-01435: user does not exist<br>ORA-06512: at \"SYS.DBMS_LOGMNR\", line 72<br>ORA-06512: at line 2<br><br>java.sql.SQLException: ORA-01435: user does not exist<br>ORA-06512: at \"SYS.DBMS_LOGMNR\", line 72<br>ORA-06512: at line 2<br>at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)<br>at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:446)<br>at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1054)<br><br><strong>2021-07-08 15:21:03,826 @S10_204_112_147 @admin.PSP_NPP_ORA2PG_IL2 -INFO com.webaction.runtime.components.FlowComponent.notifyAppMgr() : exception event created :ExceptionEvent : {</strong><br><strong>\"componentName\" : \"QPSPNPP01_1\" , \"componentType\" : \"SOURCE\" , \"exception\" : \"com.webaction.runtime.components.StriimComponentException\" , \"message\" : \"ORA-01435: user does not exist\\nORA-06512: at \\\"SYS.DBMS_LOGMNR\\\", line 72\\nORA-06512: at line 2\\n\" , \"relatedEvents\" : \"[]\" , \"action\" : \"CRASH\" , \"exceptionType\" : \"AdapterException\" , \"epochNumber\" : -1</strong><br><strong>}</strong></pre>\n<p><span class=\"wysiwyg-underline\"><strong>Cause :</strong></span></p>\n<p>The container database has 4 pluggable database(PDB1,PDB2,PDB3,PDB4) configured and one of the pluggable database (PDB3)in READ ONLY mode &amp; other 3 pluggable databases in READ WRITE.</p>\n<p>The user c##striim was created when one of the PDB databases (PDB3) in READ ONLY mode, so new user c##striim information not available in the PDB3 database.</p>\n<p>which caused the error \"ORA-01435: user does not exist\"</p>\n<p><span class=\"wysiwyg-underline\"><strong>Solution :</strong></span></p>\n<p> When creating a Common user for OracleReader ensure that all user databases are in read-write mode. So that the common user is created and synced across all PDB's.</p>\n<p><strong>Note: PDB$SEED can be is read-only</strong></p>\n<p>This is documented in Oracle Metalink</p>\n<p><em>Oracle doc: Accessing CDB_TABLES View By Newly Created Common User Shows Error \"ORA-01435: user does not exist\"<strong> (Doc ID 2718606.1)</strong></em></p>\n<p><em><strong>Steps required</strong></em></p>\n<p>1. 
Open the read-only PDB in read-write mode at least once. If the PDB is opened in read-write mode once, the user gets created inside the PDB.</p>\n<p>Once the PDB is open, verify if the new common user exists, if not sync the common user by doing the following in the PDB(s):</p>\n<pre><em>SQL&gt; alter session set container=&lt;PDB name&gt;;</em></pre>\n<pre>-- Verify the common user in the PDB:<br><em>SQL&gt; select username from dba_users where username like 'C##%';</em></pre>\n<pre>-- If required sync the common user:<br><em>SQL&gt; execute SYS.DBMS_PDB.SYNC_PDB;</em></pre>\n<pre>-- Verify you can now see the common user in the PDB:<br><em>SQL&gt; select username from dba_users where username like 'C##%';</em></pre>\n<pre>Rerun the query once the common user exists in PDB:<br>SQL&gt; select count(*) from cdb_tables;</pre>\n<p>--&gt; Able to run the OracleReader(CDC) app successfully</p>\n<p><strong>Note: After the new user sync with PDB, we can put the PDB3 in READ-ONLY mode and verify the required privileges given as per the document before starting the striim OracleReader.</strong></p>\n<p><a href=\"https://www.striim.com/docs/archive/3102/en/creating-an-oracle-user-with-logminer-privileges.html\">https://www.striim.com/docs/archive/3102/en/creating-an-oracle-user-with-logminer-privileges.html</a></p>\n<p> </p>"} {"page_content": "<p> </p>\n<h2>Symptom:</h2>\n<p>AzureSQLDWHWriter writing to synapse fails with following error</p>\n<pre>2021-05-12 14:27:53,737 @bawnmspc.CDC_GUS_PK3 -WARN com.webaction.runtime.components.FlowComponent.notifyAppMgr() : received exception from component :CDC_GUS_TGT_PK3, of exception type : com.webaction.common.exc.SystemException<br>com.webaction.common.exc.SystemException: Failure in integration : com.microsoft.sqlserver.jdbc.SQLServerException: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot insert the value NULL into column 'LOGIN_TXT', table 'tempdb.dbo.QTable_1784252a064a46c19cfe8058271cfeb6_58'; column does not allow nulls. INSERT fails. Additional error &lt;2&gt;: ErrorMsg: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The statement has been terminated., SqlState: 01000, NativeError: 3621 <br>at com.striim.dwhwriter.DWHWriter.throwThreadExeption(DWHWriter.java:1254)<br>at com.webaction.concurrency.ExceptionNotifyingThreadPool.afterExecute(ExceptionNotifyingThreadPool.java:58)<br>at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)<br>at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)<br>at java.lang.Thread.run(Thread.java:748)<br>2021-05-12 14:27:53,735 @bawnmspc.CDC_GUS_PK3 -INFO com.striim.dwhwriter.DWHWriter.customPostRollover() : Submitting integration task to the ExecutorService : IntegrationRequest for the file .striim/bawnmspc/CDC_GUS_TGT_PK3/WCWSNPADM.GG_USR/WCWSNPADM.GG_USR_10.csv.gz with event count 0<br>2021-05-12 14:27:53,732 @bawnmspc.CDC_GUS_PK3 -ERROR com.webaction.integration.AzureDWHIntegrationTask.execute() : Exception while integrating into AzureSQLDWH for the target table WCWSNPADM.GG_USR : [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Cannot insert the value NULL into column 'LOGIN_TXT', table 'tempdb.dbo.QTable_1784252a064a46c19cfe8058271cfeb6_58'; column does not allow nulls. INSERT fails. 
Additional error &lt;2&gt;: ErrorMsg: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The statement has been terminated., SqlState: 01000, NativeError: 3621 <br>[Ljava.lang.StackTraceElement;@59610be5</pre>\n<h2>Cause:</h2>\n<p>AzureSQLDWHWriter is set to MERGE mode and source UPDATE event doesn't have value for not null columns causing the error</p>\n<h2>Fix:</h2>\n<p>Enable ALL column supplemental logging on source and disable compression in the source CDC Adapter</p>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p>DatabaseReader with Oracle as source fails with below error</p>\n<pre><br>4-acde48001122\",\"admin.app_ora_db_reader\",\"APPLICATION\"],\"callbackIndex\":706} <br>com.webaction.exception.Warning: Exception(s) leading to CRASH State: <br>{ <br>\"componentName\" : \"src_ora_db_reader\" , \"componentType\" : \"SOURCE\" , \"exception\" : \"java.lang.Exception\" , \"message\" : \"Could not load class for the datatype SYS.XMLTYPE. Required class is oracle/xdb/XMLType\" , \"relatedEvents\" : \"[]\"<br>} <br><br>at com.webaction.runtime.compiler.Compiler.compileActionStmt(Compiler.java:4219) <br>at com.webaction.runtime.compiler.stmts.ActionStmt.execute(ActionStmt.java:36) <br>at com.webaction.runtime.compiler.Compiler.compileStmt(Compiler.java:165) <br>at com.webaction.runtime.QueryValidator$1.execute(QueryValidator.java:178) <br>at com.webaction.runtime.compiler.Compiler.compile(Compiler.java:196) <br>at com.webaction.runtime.QueryValidator.compile(QueryValidator.java:172) <br>at com.webaction.runtime.QueryValidator.compile(QueryValidator.java:163) <br>at com.webaction.runtime.QueryValidator.CreateStartFlowStatement(QueryValidator.java:530) <br>at sun.reflect.GeneratedMethodAccessor471.invoke(Unknown Source) <br>at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) <br>at java.lang.reflect.Method.invoke(Method.java:498) <br>at com.webaction.web.RMIWebSocket$RMIWSMessageExecutor.handleRMIRequest(RMIWebSocket.java:868) <br>at com.webaction.web.RMIWebSocket$RMIWSMessageExecutor.run(RMIWebSocket.java:791) <br>at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) <br>at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) <br>at java.lang.Thread.run(Thread.java:748) <br>2021-07-01 08:40:53,836 @striim_mac_server @ -INFO com.webaction.web.RMIWebSocket$RMIWSMessageExecutor.handleRMIRequest() : Executing CRUDHandler With callback Index 706 In pool-14-thread-1</pre>\n<p>Cause:<br>The table contains SYS.xmltype which is not directly supported by DatabaseReader. 
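</p>\n<p>To confirm which tables and columns are affected before adjusting the app, a data dictionary query of the following form can be used (a sketch; depending on the Oracle version and storage option the data type may be reported as XMLTYPE or SYS.XMLTYPE):</p>\n<pre>SELECT owner, table_name, column_name, data_type<br>FROM all_tab_columns<br>WHERE data_type LIKE '%XMLTYPE%';</pre>\n<p>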
<br><br>Solution:</p>\n<p>Use getClobVal() function in query option to fetch the value of xmltype column instead of using TABLE property<br><br>eg:<br><br></p>\n<table style=\"border-collapse: collapse; width: 100%;\" border=\"1\">\n<tbody>\n<tr>\n<td style=\"width: 50%;\"><strong>Create table</strong></td>\n<td style=\"width: 50%;\"><strong>select output </strong></td>\n</tr>\n<tr>\n<td style=\"width: 50%;\">CREATE TABLE tab2 <br>( <br>empid number not null primary key, <br>name varchar2(100), <br>col_xmltype SYS.XMLTYPE <br>);</td>\n<td style=\"width: 50%;\">200 Sathya &lt;?xml version=\"1.0\"?&gt; <br>&lt;TABLE_NAME&gt;MY_TABLE&lt;/TABLE_NAME&gt; <br><br>100 Himachalapathy &lt;?xml version=\"1.0\"?&gt; <br>&lt;TABLE_NAME&gt;EMPLOYEE&lt;/TABLE_NAME&gt;</td>\n</tr>\n</tbody>\n</table>\n<p><br>In TQL , use the query option</p>\n<p>Query: 'select x.empid,x.name,x.col_xmltype.getclobval() from hima.tab2 x',</p>\n<p>Output to a filewriter will be like the following</p>\n<pre><br>[<br>{<br>\"metadata\":{\"TableName\":\"HIMA.TAB1;\",\"ColumnCount\":3,\"OperationName\":\"SELECT\"},<br>\"data\":{<br>\"EMPID\":\"200\",<br>\"NAME\":\"Sathya\",<br>\"COL_XMLTYPE\":\"&lt;?xml version=\\\"1.0\\\"?&gt;\\n&lt;TABLE_NAME&gt;MY_TABLE&lt;\\/TABLE_NAME&gt;\\n\"<br><br>},<br>\"before\":null,<br>\"userdata\":null<br>},<br>{<br>\"metadata\":{\"TableName\":\"HIMA.TAB1;\",\"ColumnCount\":3,\"OperationName\":\"SELECT\"},<br>\"data\":{<br>\"EMPID\":\"100\",<br>\"NAME\":\"Himachalapathy\",<br>\"COL_XMLTYPE\":\"&lt;?xml version=\\\"1.0\\\"?&gt;\\n&lt;TABLE_NAME&gt;EMPLOYEE&lt;\\/TABLE_NAME&gt;\\n\"<br>},</pre>\n<p>Note:<br>If you are replicating to a target database, then your target databasewriter mapping would be</p>\n<p>QUERY, &lt;Schema&gt;.&lt;table_name&gt;;</p>"} {"page_content": "<p>https://www.striim.com/docs/en/xml-formatter.html</p>\n<p>Scope of this note is to understand the \"Element Tuple\" option provided part of XML Formatter</p>\n<p class=\"p1\">XML Formatter : Formats a writer's output as XML</p>\n<table style=\"border-collapse: collapse; width: 100%;\" border=\"1\">\n<tbody>\n<tr>\n<td style=\"width: 25%;\">Property</td>\n<td style=\"width: 25%;\">Type</td>\n<td style=\"width: 25%;\">default value</td>\n<td style=\"width: 25%;\">notes</td>\n</tr>\n<tr>\n<td style=\"width: 25%;\">Charset</td>\n<td style=\"width: 25%;\">String</td>\n<td style=\"width: 25%;\"> </td>\n<td style=\"width: 25%;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 25%;\"><strong>Element Tuple</strong></td>\n<td style=\"width: 25%;\">String</td>\n<td style=\"width: 25%;\"> </td>\n<td style=\"width: 25%;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 25%;\">\n<p class=\"p1\">Root Element</p>\n</td>\n<td style=\"width: 25%;\">String</td>\n<td style=\"width: 25%;\"> </td>\n<td style=\"width: 25%;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 25%;\">\n<p class=\"p1\">Row Delimiter</p>\n</td>\n<td style=\"width: 25%;\">String</td>\n<td style=\"width: 25%;\">\\n</td>\n<td style=\"width: 25%;\"> </td>\n</tr>\n</tbody>\n</table>\n<p> </p>\n<p><strong>Solution:</strong></p>\n<p>The elementtuple property is used to define the xml structure of a typed event. 
As provided in the document, the format required is:</p>\n<p>Element-name:Attribute1:Attribute2:text=any field that is to be represented as a value,</p>\n<p>Another-Element-name:Another-Attribute1:Another-Attribute2:text=any field that is to be represented as a value.</p>\n<p>Following is a sample usage.</p>\n<pre>create source XMLSource using FileReader (<br>directory:'/Users/striim/Adapters/Sources/XMLParserV2/src/test/resources/',<br>positionByEOF:false,<br>wildcard:'books.xml'<br>) Parse using XMLParserV2 <br>(rootnode:'/books/',<br>) OUTPUT TO XmlStream;<br><br>CREATE TYPE XmlData(<br>authors String,<br>title String,<br>category String,<br>YOP String);<br><br>CREATE STREAM XmlDataStream OF XmlData;<br><br>CREATE CQ XmlToFooDataINSERT INTO XmlDataStream<br>SELECT data.element(\"authors\").asXML(),<br>data.element(\"title\").getText(),<br>data.element(\"category\").getText(),<br>data.element(\"year\").getText()FROM XmlStream;<br><br>CREATE OR REPLACE TARGET Target1 USING FileWriter ( <br>filename: \"/Users/striim/output/TestTql/output.csv\"<br>) format using XMLFormatter (<br>rootelement:'document',<br>elementtuple: 'title:category:YOP:text=title'<br>)<br>INPUT FROM XmlDataStream;<br><br><br></pre>\n<p><strong>Sample Books.xml</strong></p>\n<pre>&lt;!-- &lt;?xml version=\"1.0\" encoding=\"UTF-8\"?&gt; --&gt;<br>&lt;!-- Describes list of books--&gt;<br>&lt;books xmlns=\"http://example.com/books\"&gt; <br>&lt;book language=\"English\"&gt;<br>&lt;authors&gt;<br>&lt;author&gt;Mark Twain&lt;/author&gt;<br>&lt;/authors&gt;<br>&lt;title&gt;&lt;![CDATA[The Adventures of Tom Sawyer]]&gt;&lt;/title&gt;<br>&lt;category&gt;FICTION&lt;/category&gt;<br>&lt;year&gt;1876&lt;/year&gt;<br>&lt;/book&gt;<br>&lt;book language=\"English\"&gt;<br>&lt;authors&gt;<br>&lt;author&gt;Niklaus Wirth&lt;/author&gt;<br>&lt;/authors&gt;<br>&lt;title&gt;&lt;![CDATA[The Programming Language Pascal]]&gt;&lt;/title&gt;<br>&lt;category&gt;PASCAL&lt;/category&gt;<br>&lt;year&gt;1971&lt;/year&gt;<br>&lt;/book&gt;<br>&lt;book language=\"English\"&gt;<br>&lt;authors&gt;<br>&lt;author&gt;O.-J. Dahl&lt;/author&gt;<br>&lt;author&gt;E. W. Dijkstra&lt;/author&gt;<br>&lt;author&gt;C. A. R. 
Hoare&lt;/author&gt;<br>&lt;/authors&gt;<br>&lt;title&gt;&lt;![CDATA[Structured Programming]]&gt;&lt;/title&gt;<br>&lt;category&gt;PROGRAMMING&lt;/category&gt;<br>&lt;year&gt;1972&lt;/year&gt;<br>&lt;/book&gt;<br>&lt;/books&gt;</pre>\n<p><strong>Output:</strong></p>\n<pre>&lt;?xml version=\"1.0\" encoding=\"UTF-8\"?&gt;<br>&lt;document&gt;<br>&lt;title category=\"FICTION\" YOP=\"1876\" &gt; The Adventures of Tom Sawyer &lt;/title&gt;<br>&lt;title category=\"PASCAL\" YOP=\"1971\" &gt; The Programming Language Pascal &lt;/title&gt;<br>&lt;title category=\"PROGRAMMING\" YOP=\"1972\" &gt; Structured Programming &lt;/title&gt;<br>&lt;/document&gt;</pre>\n<p> </p>\n<p>The “text=” part is optional but has to be provided as a static, Usage:</p>\n<p>elementtuple: 'title:category:YOP:text='</p>\n<p>The resultant o/p would be:</p>\n<pre>&lt;?xml version=\"1.0\" encoding=\"UTF-8\"?&gt;<br>&lt;document&gt;<br>&lt;title category=\"FICTION\" YOP=\"1876\" &gt; &lt;/title&gt;<br>&lt;title category=\"PASCAL\" YOP=\"1971\" &gt; &lt;/title&gt;<br>&lt;title category=\"PROGRAMMING\" YOP=\"1972\" &gt; &lt;/title&gt;<br>&lt;/document&gt;</pre>\n<p> </p>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p>We get following errors when starting striim server after upgrading to 3.10.3.5</p>\n<pre>Please go to http://&lt;&gt;:9080 or https://&lt;&gt;:9081 to administer, or use console<br>Exception in thread \"Global:MonitoringStream1:01eb968c-0df5-c311-9ce3-42010a67010d:01ebba10-cf51-0a96-a2ae-42010a670111:Async-Sender\" java.lang.NoSuchMethodError: com.webaction.runtime.BuiltInFunc.matchAlertForMonitoringEvent(Lcom/fasterxml/jackson/databind/JsonNode;)Lcom/fasterxml/jackson/databind/JsonNode;<br>at QueryExecPlan_System$Alerts_monitorInputCQ_Global_MonitoringStream1_s.runImpl(QueryExecPlan_System$Alerts_monitorInputCQ_Global_MonitoringStream1_s.java)<br>at com.webaction.runtime.components.CQSubTask.processBatch(CQSubTask.java:118)<br>at com.webaction.runtime.components.CQSubTask.processAdded(CQSubTask.java:136)<br>at com.webaction.runtime.components.CQSubTask.processNotAggregated(CQSubTask.java:173)<br>at QueryExecPlan_System$Alerts_monitorInputCQ_Global_MonitoringStream1_s.run(QueryExecPlan_System$Alerts_monitorInputCQ_Global_MonitoringStream1_s.java)<br>at com.webaction.runtime.components.CQTask.receive(CQTask.java:347)<br>at com.webaction.runtime.DistributedRcvr.doReceive(DistributedRcvr.java:245)<br>at com.webaction.runtime.DistributedRcvr.onMessage(DistributedRcvr.java:110)<br>at com.webaction.jmqmessaging.InprocAsyncSender.processMessage(InprocAsyncSender.java:52)<br>at com.webaction.jmqmessaging.AsyncSender$AsyncSenderThread.run(AsyncSender.java:124)</pre>\n<p><strong>Solution:</strong></p>\n<p><strong>Step 1</strong>: Connect to Striim server via Tungsten Console and delete the monitoring / system alert apps with following commands. 
You need to execute these commands as \"admin\", one by one, ignoring any errors raised while executing them.</p>\n<pre>stop application Global.MonitoringSourceApp;<br>stop application Global.MonitoringProcessApp;<br>stop application System$Alerts.AlertingApp;<br>undeploy application Global.MonitoringSourceApp;<br>undeploy application Global.MonitoringProcessApp;<br>undeploy application System$Alerts.AlertingApp;<br>drop application Global.MonitoringSourceApp cascade;<br>drop application Global.MonitoringProcessApp cascade;<br>drop application System$Alerts.AlertingApp cascade;</pre>\n<p>Additionally, for versions 4.1.x and higher, drop the following app as well:</p>\n<pre><code>stop System$Notification.NotificationSourceApp;<br>undeploy application System$Notification.NotificationSourceApp;<br>drop application System$Notification.NotificationSourceApp cascade;</code></pre>\n<p><strong>Step 2: </strong>Stop the Striim server and remove the \"data\" directory under &lt;striim home&gt;/elasticsearch</p>\n<p><strong>Step 3</strong>: Start the Striim server back up</p>\n<p><strong>Step 4:</strong> Connect to the UI and re-enter SMTP details if alerts are being used </p>"} {"page_content": "<p><strong><span class=\"wysiwyg-font-size-large\">Problem:</span></strong><br>Use case is FileReader -&gt; Snowflake. The application is running and local files are generated in the ./.striim/ folder, but nothing reaches the Snowflake target. The Snowflake connection goes through a proxy server.</p>\n<p><br><strong><span class=\"wysiwyg-font-size-large\">Solution: for Snowflake Writer with a proxy server:</span></strong></p>\n<p>1. Snowflake requires access to its resources on both HTTP (80) and HTTPS (443) ports.</p>\n<p>2. The proxy server may cache tokens and REST calls, causing unpredictable behavior in Striim.</p>\n<p>3. The proxy server may use an SSL certificate that is not signed by a major authority. Such a certificate is silently rejected by Snowflake without any exception, but the API call will not be processed.</p>\n<p>4. The SnowCD diagnostic tool can be used to narrow down the hosts that are not accessible through the proxy. In this case, access to Google storage URLs was blocked. These hosts should be excluded via:</p>\n<p>-Dhttp.nonProxyHosts=\"*.google.com\"</p>\n<p>5. Proxy settings can be set once in server.sh and reused in multiple applications:</p>\n<p>-Dhttp.useProxy=true<br>-Dhttps.proxyHost=&lt;proxy_host&gt;<br>-Dhttp.proxyHost=&lt;proxy_host&gt;<br>-Dhttps.proxyPort=&lt;proxy_port&gt;<br>-Dhttp.proxyPort=&lt;proxy_port&gt;</p>\n<p>Note the variations in the settings: some are http and some are https. Other combinations failed to work.</p>\n<p>6. Even after all of the above is resolved, Striim behavior can be inconsistent: the application runs once and fails the second time, or delivers only the first batch and then gets stuck. The reason might be that the initial OCSP calls are cached by the proxy and an old token is returned. 
OCSP checking must then be disabled in one of the following ways:</p>\n<p>(1) https://community.snowflake.com/s/article/How-to-turn-off-OCSP-checking-in-Snowflake-client-drivers</p>\n<p>(2) Add the parameter insecureMode=true to the JDBC connection URL</p>"} {"page_content": "<p><strong>Symptom: </strong></p>\n<p>MSSqlReader crashes with an error similar to the following</p>\n<p><strong><span> </span></strong>\"com.webaction.source.mssqlcommon.MSSqlThreadCommon: com.webaction.source.mssqlcommon.MSSqlThreadCommon.processRecord(MSSqlThreadCommon.java:500): 2523 : Table Metadata does not match metadata in ResultSet,Possible cause of this error could be deletion/addition of columns after enabling cdc on that table Table dbo.ContactBase has 183 columns, And its ResultSet has 182\"</p>\n<p> </p>\n<p><strong>Cause:</strong></p>\n<p>This is due to a DDL change (a column addition in this case). The SQL Server change tracking table does not get updated with DDL changes on the base table, hence the difference in column counts between the base table and its ResultSet.</p>\n<p> </p>\n<p><strong>Solution:</strong></p>\n<p>MSSqlReader currently does not support capturing DDL changes. One of the following can be done:</p>\n<p>a) Ignore the column additions; in other words, capture only the original columns, excluding the added ones:</p>\n<p><span>Tables: 'dbo.ContactBase(-kramp_pricatcontact,-kramp_pricatavg);',</span></p>\n<p><span>Note: when excluding a column, make sure the table name matches the exact case of the table name in the database.</span></p>\n<p>OR</p>\n<p><span>b) Disable and re-enable CDC to capture the updated table.</span></p>\n<p><span>If the requirement is to capture the values of the newly added columns as well, then the following needs to be done:</span></p>\n<p><span>-- exclude the table from the CDC app</span></p>\n<p><span>-- drop the TYPE for the table from the Striim command line console</span></p>\n<p><span>-- disable and re-enable CDC for the table in question</span></p>\n<p><span>-- resync the table and include it in the CDC app</span></p>\n<p> </p>"} {"page_content": "<h2>Problem:</h2>\n<p>When using Postgres as the MDR, the following error appears when starting striim-node:</p>\n<p>Server with version:'3.10.3.3' is connecting to a incompatible metadata with version:'null'.</p>\n<p> </p>\n<h2>Causes/Solutions:</h2>\n<p>Cause 1:<br>Postgres is on Azure.<br>The solution is to set MetadataDb to azurepostgres instead of postgres.<br>For details see: https://www.striim.com/docs/en/hosting-striim-s-metadata-repository-on-azure-database-for-postgresql.html</p>\n<p>Cause 2:<br>Postgres tables are under a different schema.<br>e.g., the login user is 'striim' and the schema is 'public'.<br>The solution is to create the tables under the same schema as the login user.<br>CREATE SCHEMA striim;<br>GRANT ALL ON SCHEMA striim TO striim;</p>"} {"page_content": "<p>Problem:</p>\n<p>Starting the app failed with the error:</p>\n<p><span>Connection Not Made: ORA-00604: error occurred at recursive SQL level 1 ORA-01882: timezone region not found. </span></p>\n<p> </p>\n<p><span>Cause:</span></p>\n<p><span>This is likely due to a mismatch between the Oracle version (such as 11g) and the Oracle JDBC driver file in use.</span></p>\n<p> </p>\n<p><span>Solution:</span></p>\n<p><span>1. Check the Striim documentation for the suggested JDBC driver.</span></p>\n<p><span>2. 
If still not working, in ./bin/server,sh, add a JVM setting, and restart the Striim node.<br></span></p>\n<p><span> (if agent is in use, agent.sh needs to be modified also)</span></p>\n<pre>-Doracle.jdbc.timezoneAsRegion=false</pre>\n<p> </p>"} {"page_content": "<p>Problem:</p>\n<p>When doing initial load or CDC from MySql as source, when source column is Tinyint(1), and target is a non-MySQL DB (e.g., Postgres) SMALLINT, the app will crash with error like following:</p>\n<p> </p>\n<p>DatabaseWriterException | cloudsqlwriter<br>Mapping of SourceType {java.lang.Boolean} to TargetType {int2(5)} is not supported for target filed {xxxxxxx}. Source Table {xxxxxxx} Target Table {xxxxxxx}, Column index {8}COLLAPSE<br>Exception event payload:</p>\n<p> </p>\n<p>Cause:</p>\n<p>Tinyint(1) value is captured by jdbc as boolean by default. This caused mismatch at target writer.</p>\n<p> </p>\n<p>Solution:</p>\n<p>In source MySQL adapter (DatabaseReader or MySQLReader), add \"<span>tinyInt1isBit=false\" to the end of</span> connection URL.</p>\n<p>examples:</p>\n<p><span>jdbc:mysql://host:3306/myDB?tinyInt1isBit=false</span></p>\n<p><span>jdbc:mysql://host:3306/myDB?useSSL=false&amp;tinyInt1isBit=false</span></p>\n<p> </p>\n<p> </p>"} {"page_content": "<p> </p>\n<h2>Symptoms</h2>\n<p>Following error is seen using Azure Postgres with SSL enabled as MDR</p>\n<pre>Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException<br>Internal Exception: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry <br>for host \"100.6.178.160\", user \"striim\", database \"postgres\", SSL on<br>Error Code: 0<br>at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:331)<br>at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:326)<br>at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:138)<br>at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162)<br>at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.setOrDetectDatasource(DatabaseSessionImpl.java:204)<br>at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:741)<br>at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:239)<br>at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:685)<br>... 
13 more</pre>\n<h2>Cause</h2>\n<p>Azure firewall rule needs to be added to whitelist the ip address</p>\n<p> </p>\n<h2>Solution</h2>\n<p>Allow access from Connection Security under settings</p>\n<p><img src=\"https://support.striim.com/hc/article_attachments/360102121714/mceclip0.png\" alt=\"mceclip0.png\"></p>"} {"page_content": "<p><span class=\"wysiwyg-underline\"><strong>Symptoms: </strong></span></p>\n<p>PostgresReader Striim CDC app crash with following message</p>\n<pre>-ERROR com.striim.postgresreader.processor.read.LogicalReadProcessor com.striim.postgresreader.processor.read.LogicalReadProcessor.execute (LogicalReadProcessor.java:172) Could not proceed with read because of the exception<br>org.postgresql.util.PSQLException: ERROR: out of memory<br> Detail: Cannot enlarge string buffer containing 1073741243 bytes by 790 more bytes.<br> Where: slot \"striim_slot\", output plugin \"wal2json\", in the change callback, associated LSN 5EAA/AE67EDD0<br> at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)<br> at org.postgresql.core.v3.QueryExecutorImpl.processCopyResults(QueryExecutorImpl.java:1114)<br> at org.postgresql.core.v3.QueryExecutorImpl.readFromCopy(QueryExecutorImpl.java:1033)<br> at org.postgresql.core.v3.CopyDualImpl.readFromCopy(CopyDualImpl.java:41)<br> at org.postgresql.core.v3.replication.V3PGReplicationStream.receiveNextData (V3PGReplicationStream.java:155) at org.postgresql.core.v3.replication.V3PGReplicationStream.readInterna l(V3PGReplicationStream.java:124)<br> at org.postgresql.core.v3.replication.V3PGReplicationStream.read(V3PGReplicationStream.java:70)<br> at com.striim.postgresreader.processor.read.LogicalReadProcessor.processRecords(LogicalReadProcessor.java:80)<br> at com.striim.postgresreader.processor.read.LogicalReadProcessor.execute(LogicalReadProcessor.java:158)<br> at com.striim.postgresreader.processor.ProcessorThread.run(ProcessorThread.java:53)<br> at java.lang.Thread.run(Thread.java:748)<br>2020-09-22 18:36:54,519 @S10_111_12_36 @tokopedia_o2o.tokopedia_o2o -INFO Global:exceptionsStream:5619c7df-2292-4535-bbe5-e376c5f5bc42:5619c7df-2292-4535-bbe5-e376c5f5bc42:Async-Sender com.webaction.exceptionhandling.WAExceptionMgr.receive (WAExceptionMgr.java:63) received an exception object.<br>2020-09-22 18:36:54,519 @S10_111_12_36 @tokopedia_order.tokopedia_order -ERROR com.striim.postgresreader.processor.read.LogicalReadProcessor com.striim.postgresreader.processor.ProcessorThread.handleException (ProcessorThread.java:72) Exception occured in com.striim.postgresreader.processor.read.LogicalReadProcessor<br>org.postgresql.util.PSQLException: ERROR: out of memory</pre>\n<p> </p>\n<p><span class=\"wysiwyg-underline\"><strong>Cause:</strong></span></p>\n<p>Postgres Reader currently uses wal2json format 1 which supports capturing transactions whose size is &lt;=1GB. In the user's case, the transaction size was more than 1GB that caused the app to crash.</p>\n<p><br><span class=\"wysiwyg-underline\"><strong>Workaround:</strong></span></p>\n<p>Commit large transactions such that the transaction size generated is less than 1GB.</p>\n<p> </p>\n<p><strong><span class=\"wysiwyg-underline\">Solution:</span></strong></p>\n<p><strong> </strong></p>\n<p>The wal2json format 2 will handle such large transactions. Striim supports wal2json format from Striim veersion 4.0.3 or higher version using the following property the default is format 1. 
<span>Change </span><code class=\"code\">1</code><span> to </span><code class=\"code\">2</code><span> if you are using wal2json version 2</span></p>\n<p> </p>\n<div>\n<pre> PostgresConfig: '{\\n\\\"ReplicationPluginConfig\\\": {\\n\\t\\t\\\"Name\\\": \\\"WAL2JSON\\\",\\n\\t\\t\\\"Format\\\": \\<strong>\"2\\\"</strong>\\n\\t}\\n}', </pre>\n</div>\n<div></div>\n<div><strong>Note:</strong></div>\n<div><span>Amazon uses version 2. Azure can use either version 1 or 2. Google Cloud SQL uses version 1.</span></div>"} {"page_content": "<p> </p>\n<p><strong>Affected Versions :</strong><br>Striim Version :3.10.3.1, 3.10.3.2<br>Metadata Repository (MDR) is Oracle database using SSL or non-SSL connection to MDR(oracle)</p>\n<p> </p>\n<p><strong>Symptoms :</strong> Striim startup fails with one of the following errors</p>\n<p>\"java.lang.NoClassDefFoundError:Oracle/security/pki/OraclePKIProvider\".</p>\n<p> OR</p>\n<p><span class=\"s1\">NoClassDefFoundError: oracle/security/crypto/asn1/ASN1Object</span></p>\n<p> </p>\n<p><strong>Startup Error:</strong></p>\n<p>xxxx-Pro:Striim velu$ bin/server.sh<br>Starting Striim Server - Version 3.10.3.1 (b657ff25fb)<br>Exception in thread \"main\" java.lang.NoClassDefFoundError: oracle/security/pki/OraclePKIProvider<br>at com.webaction.metaRepository.MetaDataDbFactory.getOurMetaDataDb(MetaDataDbFactory.java:27)<br>at com.webaction.runtime.BaseServer.setMetaDataDbProviderDetails(BaseServer.java:486)<br>at com.webaction.runtime.NodeStartUp.startUp(NodeStartUp.java:133)<br>at com.webaction.runtime.NodeStartUp.&lt;init&gt;(NodeStartUp.java:74)<br>at com.webaction.runtime.Server.startUptheNode(Server.java:720)<br>at com.webaction.runtime.Server.&lt;init&gt;(Server.java:126)<br>at com.webaction.runtime.Server.main(Server.java:2124)<br>Caused by: java.lang.ClassNotFoundException: oracle.security.pki.OraclePKIProvider<br>at java.net.URLClassLoader.findClass(URLClassLoader.java:382)<br>at java.lang.ClassLoader.loadClSSLConfig: 'javax.net.ssl.trustStore=/Users/velu/network/ssl_localvm/cwallet.sso;javax.net.ssl.trustStoreType=SSO;javax.net.ssl.trustStorePassword=oracle123'SSLConfig: 'javax.net.ssl.trustStore=/Users/velu/network/ssl_localvm/cwallet.sso;javax.net.ssl.trustStoreType=SSO;javax.net.ssl.trustStorePassword=oracle123'ass(ClassLoader.java:418)<br>at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)<br>at java.lang.ClassLoader.loadClass(ClassLoader.java:351)<br>... 7 more</p>\n<p class=\"p1\"><span class=\"s1\">xxxx-Pro:Striim velu$<span class=\"Apple-converted-space\"> </span>bin/server.sh</span></p>\n<p class=\"p1\"><span class=\"s1\">Starting Striim Server - Version 3.10.3.1 (b657ff25fb)</span></p>\n<p class=\"p1\"><span class=\"s1\">[EL Severe]: ejb: 2020-12-23 10:57:25.028--ServerSession(671981276)--java.lang.NoClassDefFoundError: oracle/security/crypto/asn1/ASN1Object</span></p>\n<p class=\"p1\"><span class=\"s1\">[EL Severe]: ejb: 2020-12-23 10:57:35.352--ServerSession(2124978601)--java.lang.NoClassDefFoundError: oracle/security/crypto/asn1/ASN1Object</span></p>\n<p class=\"p1\"> </p>\n<p class=\"p1\"> </p>\n<p><strong>Cause :</strong></p>\n<p>Starting 3.10.3.1 Striim supports SSL connection to MDR (oracle). So Striim expects following jars files in $STRIIM_HOME/lib location for <strong>SSL connection</strong></p>\n<p>1. osdt_core.jar,</p>\n<p>2. oraclepki.jar</p>\n<p>3. 
osdt_cert.jar</p>\n<p><br><strong>For non-SSL connection</strong> to MDR(oracle) we require <strong>oraclepki.jar</strong> in $STRIIM_HOME/lib</p>\n<p> </p>\n<p><strong>Solution :</strong><br>Download the zip file that contains 3 jar files(osdt_core.jar,oraclepki.jar,osdt_cert.jar) files attached to this article</p>\n<p><a href=\"https://support.striim.com/hc/article_attachments/360099665194/ora_jar_MDR_connection.zip\">https://striim.zendesk.com/hc/article_attachments/360099665194/ora_jar_MDR_connection.zip</a></p>\n<p>and place in </p>\n<p>$STRIIM_HOME/lib ( e.g /opt/striim/lib )</p>\n<p> Or</p>\n<p>you can copy the (osdt_core.jar,oraclepki.jar,osdt_cert.jar) files from $ORACLE_HOME/jlib/oraclepki.jar to $STRIIM_HOME/lib ( e.g /opt/striim/lib )</p>\n<p> </p>\n<p>After downloading and placing the files in lib directory stop/start the striim server</p>\n<p> </p>\n<p class=\"p1\"> </p>\n<p class=\"p1\"> </p>"} {"page_content": "<p><br>Striim's MSSQLReader needs CDC to be enabled on the database. So the db user requires db_owner privilege to enable CDC on the tables in the database.</p>\n<p>If the security restrictions doesn't allow db_owner permission to be granted to the db user then following steps can be followed to avoid granting db_owner</p>\n<h3><strong>Setup steps for MSSQL</strong></h3>\n<p>1. On \"Microsoft SQL Server Management Studio\"... start the \"SQL Server Agent\"</p>\n<p>2. Enable CDC on MSSQL Database<br>Login to SQLServer with 'sysadmin' privilege execute the following command.</p>\n<pre>USE MyDB <br>GO <br>EXEC sys.sp_cdc_enable_db <br>GO<br>The above command will enabled CDC on SQL Server Database MyDB.</pre>\n<p>3. Enable CDC on a specific table. Login as sysadmin or db_owner privilege, execute the following command. Below command enables CDC on table DEMOOWNER.DEMOTABLE in MyDB.</p>\n<pre><br>USE MyDB <br>GO<br>EXEC SYS.sp_cdc_enable_table @SOURCE_SCHEMA = DEMOOWNER, @SOURCE_NAME = DEMOTABLE , @ROLE_NAME = 'STRIIM_READER'<br>GO</pre>\n<p>4. Grant the striim db user select privileges on the original table and shadow table created e.g. </p>\n<pre>GRANT SELECT ON cdc.DEMOOWNER_DEMOTABLE_CT TO &lt;MSSQLReader_USER&gt;; <br>GRANT SELECT ON DEMOOWNER.DEMOTABLE TO &lt;MSSQLReader_USER&gt;;</pre>\n<p><span class=\"wysiwyg-underline\"><strong>Note:</strong></span> sometimes the database can create different names for the shadow table. Please verify from \"select capture_instance from cdc.change_tables\"</p>\n<p>5. To list all tables that have Change Data Capture (CDC) enabled on a SQL Server database, use the following query:</p>\n<pre>USE Database_Name<br>GO<br>SELECT s.name AS Schema_Name, tb.name AS Table_Name , tb.object_id, tb.type, tb.type_desc, tb.is_tracked_by_cdc FROM sys.tables tb INNER JOIN sys.schemas s on s.schema_id = tb.schema_id<br>WHERE tb.is_tracked_by_cdc = 1</pre>\n<p> <br>6. Edit the Striim app and set AutoDisableTableCDC to false. 
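</p>\n<p>For reference, a minimal MSSqlReader source definition with this property set might look like the following sketch (the connection URL, credentials, and table names are placeholders):</p>\n<pre>CREATE OR REPLACE SOURCE MSSQL_CDC_Source USING Global.MSSqlReader (<br>ConnectionURL: 'jdbc:sqlserver://&lt;host&gt;:1433;DatabaseName=MyDB',<br>Username: '&lt;MSSQLReader_USER&gt;',<br>Password: '&lt;password&gt;',<br>Tables: 'DEMOOWNER.DEMOTABLE',<br>AutoDisableTableCDC: false<br>)<br>OUTPUT TO MSSQL_CDC_Stream;</pre>\n<p>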
</p>\n<p>Also if you want to add new tables to replication, then CDC should be enabled on those tables before including those tables in Striim app.</p>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p><span class=\"wysiwyg-underline\">App topology</span>: Source is Oracle and Target is Cloud MySQL.</p>\n<p>When replicating emoji symbols, DatabaseWriter connecting to mysql target fails with below exception</p>\n<pre>-ERROR com.webaction.Policy.DefaultExecutionPolicy.rollbackBatchOnBatchException() : <br>Error in com.webaction.Policy.DefaultExecutionPolicy::Execute-<br>java.sql.BatchUpdateException: Incorrect string value: '\\xF0\\x9F\\x93\\xA6 ...' for column 'COMMENTS' </pre>\n<p> </p>\n<p><strong>Cause:</strong></p>\n<p>The server character set on mysql side is set to UTF-8. This causes the app to crash with incorrect string value. Source Oracle NLS_CHARACTER_SET is AL32UTF8</p>\n<p> </p>\n<p>Following query on MySQL shows the character set information</p>\n<pre><strong>SHOW VARIABLES WHERE Variable_name LIKE 'character\\_set\\_%' OR Variable_name LIKE 'collation%';</strong><br><span>+--------------------------+--------------------+</span><br><span>| Variable_name | Value |</span><br><span>+--------------------------+--------------------+</span><br><span>| character_set_client | utf8mb4 |</span><br><span>| character_set_connection | utf8mb4 |</span><br><span>| character_set_database | utf8mb4 |</span><br><span>| character_set_filesystem | binary |</span><br><span>| character_set_results | utf8mb4 |</span><br><strong>| character_set_server | utf8 |</strong><br><span>| character_set_system | utf8 |</span><br><span>| collation_connection | utf8mb4_general_ci |</span><br><span>| collation_database | utf8mb4_unicode_ci |</span><br><span>| collation_server | utf8_general_ci |</span><br><span>+--------------------------+--------------------+</span></pre>\n<p> </p>\n<p><strong>Solution:</strong></p>\n<p><span>Change Cloud MySQL <strong>character_set_server</strong> to <strong>utf8mb4 u</strong></span></p>\n<p><br><span>Reference: https://docs.oracle.com/cd/E17952_01/connector-j-en/connector-j-reference-charsets.html</span><br><br><strong>Setting the Character Encoding<br></strong><br><span>The character encoding between client and server is automatically detected upon connection. You specify the encoding on the server using the character_set_server for server versions 4.1.0 and newer, and the character_set system variable for server versions older than 4.1.0. The driver automatically uses the encoding specified by the server. For more information, see Server Character Set and Collation.</span><br><br><span>For example, to use 4-byte UTF-8 character sets with Connector/J, configure the MySQL server with</span><strong><span> </span>character_set_server=utf8mb4,<span> </span></strong><span>and leave characterEncoding out of the Connector/J connection string. 
Connector/J will then autodetect the UTF-8 setting.</span><br><br><strong><img src=\"/attachments/token/EQe6G9p2X6lcCVXf6x3lwQYqj/?name=inline-1076842378.png\" data-original-height=\"490\" data-original-width=\"645\"></strong></p>"} {"page_content": "<h2>Issue:</h2>\n<p>MySQLReader failed with following message</p>\n<p>2020-11-12 21:08:13,538 @S35_235_76_245 @migration.mysql -ERROR StartSources-mysql com.webaction.proc.MySQLReader_1_0.checkMySQLPrivileges (MySQLReader_1_0.java:721) Exception while determining the user's privileges in MySQLjava.sql.SQLException: Operation not allowed for a result set of type ResultSet.TYPE_FORWARD_ONLY.<br>java.sql.SQLException: Operation not allowed for a result set of type ResultSet.TYPE_FORWARD_ONLY.<br>at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:129)<br>at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)<br>at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)<br>at com.mysql.cj.jdbc.result.ResultSetImpl.first(ResultSetImpl.java:584)<br>at com.webaction.proc.MySQLReader_1_0.checkMySQLPrivileges(MySQLReader_1_0.java:666)<br>at com.webaction.proc.MySQLReader_1_0.init(MySQLReader_1_0.java:193)<br>at com.webaction.runtime.components.Source.start(Source.java:344)<br>at com.webaction.runtime.components.Flow.startSources(Flow.java:593)<br>at com.webaction.runtime.components.Flow$2.run(Flow.java:1688)<br>at java.lang.Thread.run(Thread.java:748)</p>\n<p> </p>\n<h2>Cause &amp; Solution:</h2>\n<p>The privileges looked fine</p>\n<pre><span>mysql&gt; grant all on *.* to rladmin;</span><br><br><span>mysql&gt; show grants for current_user();</span><br><br><span>| Grants for rladmin@% |</span><br><br><span>| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE ON *.* TO 'rladmin'@'%' WITH GRANT OPTION |</span></pre>\n<p>This turned out to be a Mysql JDBC Driver issue. </p>\n<p>Replacing <span>mysql-connector-java-8.0.22.jar in the STRIIM_HOME/lib with following resolved the issue</span></p>\n<p><span>mysql-connector-java-5.1.46.jar</span> </p>"} {"page_content": "<p>This article shows the steps of using a UDF to write events to Kafka partitions in round robin.</p>\n<p><span>1. download the attached jar file and put it under ./lib/ directory.</span><br><span>2. restart the striim server, which will include this jar file.</span><br><span>3. in kafkawriter, specify in property KafkaConfig with:</span><br><span>partitioner.class=com.striim.utility.kafka.KafkaRRPartitioner;</span><br><span>(see attached example tql file).</span></p>\n<p> </p>\n<p> </p>\n<p> </p>"} {"page_content": "<h3>Issue:</h3>\n<p>When Striim installed disk is having space issues app(s) crash with following message</p>\n<p class=\"p1\"><span class=\"s1\">com.webaction.common.exc.SystemException: Server S192_168_1_239 has not enough disk space. Only 4.6587215601 GB is left</span></p>\n<p class=\"p1\"> </p>\n<h3 class=\"p1\">Problem:</h3>\n<p class=\"p1\">The number reported here is % and not GB. The error message should have been</p>\n<p class=\"p1\"><span class=\"s1\">Server S192_168_1_239 has not enough disk space. 
Only 4.6587215601 % is left</span></p>\n<p class=\"p1\"><span class=\"s1\">It is calculated as</span></p>\n<pre class=\"p1\"><span class=\"s1\">freespace * 100 / total space</span></pre>\n<p> </p>\n<h3>Solution:</h3>\n<p>The default of <span>MaxAllowedFreeDiskSpacePercent is 5% which means if the free disk space percentage falls below 5% the apps would crash.</span></p>\n<p><span>In the simulated environment total space was 923 GB and free space was 43 GB and thus the error since its below the default of 5%</span></p>\n<pre class=\"p1\">(43*100)/923 = 4.6587215601 %</pre>\n<p><span>This can be modified, if needed, via startUp.properties however adding more space and keeping atleast 100gb of free space is recommended.</span></p>\n<pre><span>MaxAllowedFreeDiskSpacePercent=4</span></pre>\n<p>Note: Starting Striim version 3.10.3 the message has been modified / improved to report following,</p>\n<pre>com.webaction.common.exc.SystemException: Server S192_168_1_50 is low on disk space: <br>minimum is 5.0% but only 4.4% is free. Change this threshold by <br>setting 'striim.node.maxAllowedFreeDiskSpacePercent'.</pre>"} {"page_content": "<p><strong>Issue</strong></p>\n<p>Postgres reader does not capture from the partition table.</p>\n<p>Table </p>\n<p>CREATE TABLE measurement (<br>city_id int not null ,<br>logdate date not null,<br>peaktemp int,<br>unitsales int<br>) PARTITION BY RANGE (logdate);</p>\n<p><br>CREATE TABLE measurement_y2020q1 PARTITION OF measurement<br>FOR VALUES FROM ('2020-01-01 00:00:00') TO ('2020-03-31 23:59:59');</p>\n<p>CREATE TABLE measurement_y2020q2 PARTITION OF measurement<br>FOR VALUES FROM ('2020-04-01 00:00:00') TO ('2020-06-30 23:59:59');</p>\n<p> </p>\n<p><strong>Cause:</strong></p>\n<p> In order to capture from a partition, the partition name must be specified in table property. If you specify the main table data will not be captured. If you specify partial wildcard(<strong><span> Tables: 'public.measurement_y2020%',)</span><br></strong>measurement_y2020%, then postgres reader will capture only from the partitions that are already created at the time of the start of striim application. </p>\n<p> </p>\n<p><strong>Workaround:</strong></p>\n<p> Whenever a new partition is added, stop the application and include the new partition name in table property. But this may lead to data loss, if some records were inserted in newly added partition before restarting striim application.</p>\n<p> </p>\n<p><strong>Solution:</strong></p>\n<p> In order to capture the data without restarting application specify full wildcard for the entire schema. But mapping can be done </p>\n<pre><strong> Tables: 'public.%',</strong></pre>\n<p>If your target database is a single table and not a partition table, in database writer mapping can be done like the following </p>\n<pre><strong> Tables: 'public.measurement_y2020%,pal.measurement;',</strong> </pre>\n<p> </p>\n<p> </p>"} {"page_content": "<p>The Striim server log file (s) generation can be controlled via STRIIM_HOME/conf/<span class=\"s1\">log4j.server.properties file. 
Any changes to this file requires the Striim server to be restarted for the changes to take effect</span></p>\n<p> </p>\n<h3>\n<span class=\"s1\">a) On Striim versions 3.x and before</span><span class=\"s1\"></span>\n</h3>\n<p> </p>\n<h4><span class=\"s1\">-- how to change the number and size of the striim.server.log file (s) that gets generated</span></h4>\n<p><span class=\"s1\">Modify the count of log files (MaxBackupIndex) and size of each log file (MaxFileSize) as shown below</span></p>\n<pre class=\"p1\"><span class=\"s1\">log4j.rootLogger=info, logfile<br># rolling log file based on size/ count</span><br><span class=\"s1\">log4j.appender.logfile=org.apache.log4j.RollingFileAppender</span><br><span class=\"s1\">log4j.appender.logfile.File=logs/striim.server.log</span><span class=\"s1\"><br><strong>log4j.appender.logfile.MaxFileSize=100MB</strong></span><span class=\"s1\"><br><strong>log4j.appender.logfile.MaxBackupIndex=4</strong></span></pre>\n<h4> </h4>\n<h4>-- how to generate one striim.server.log every day</h4>\n<p>following generates one logfile every day regardless of the size using <a href=\"https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html\" target=\"_blank\" rel=\"noopener\">DailyRollingFileAppender</a></p>\n<pre class=\"p1\"><span class=\"s1\">log4j.rootLogger=info, logfile</span><br><span class=\"s1\">#log4j.appender.logfile=org.apache.log4j.RollingFileAppender<br></span># rolling log file daily instead of size/count<br>log4j.appender.logfile=org.apache.log4j.DailyRollingFileAppender<br><span class=\"s1\">log4j.appender.logfile.File=logs/striim.server.log</span><br>#<span class=\"s1\">log4j.appender.logfile.MaxFileSize=1GB<br>log4j.appender.logfile.MaxBackupIndex=14</span></pre>\n<p>Note: With DailyRollingFileAppender the MaxBackup and MaxFileSize are not relevant and thus not needed.</p>\n<p>The log file can tend to grow larger if there is one every day. 
So unless required it is preferred to control the log file growth with size &amp; count rather than having daily logswitch.</p>\n<p> </p>\n<h3>\n<span class=\"s1\">b) On Striim versions 4.x and above</span><span class=\"s1\"></span>\n</h3>\n<h4> </h4>\n<h4><span class=\"s1\">-- how to change the number and size of the striim.server.log file (s) that gets generated</span></h4>\n<p><span class=\"s1\">Modify the count of log files (strategy.max) and size of each log file (policies.size.size) as shown below</span></p>\n<pre class=\"p1\"><span class=\"s1\"># RollingFileAppenders</span><br><span class=\"s1\">appender.ServerFileAppender.name = ServerFileAppender</span><br><span class=\"s1\">appender.ServerFileAppender.type = RollingFile</span><br><span class=\"s1\">appender.ServerFileAppender.fileName=logs/striim.server.log</span><br><span class=\"s1\">appender.ServerFileAppender.filePattern=logs/striim.server.log-%i</span><br><span class=\"s1\">appender.ServerFileAppender.layout.type=PatternLayout</span><br><span class=\"s1\">appender.ServerFileAppender.layout.header=${StriimHeader:logHeader}</span><br><span class=\"s1\">appender.ServerFileAppender.layout.pattern=%d @%X{ServerToken} @%X{AppName} -%p %t %C.%M (%F:%L) %m%n</span><br><span class=\"s1\">appender.ServerFileAppender.policies.type = Policies</span><br><span class=\"s1\">appender.ServerFileAppender.policies.size.type = SizeBasedTriggeringPolicy</span><br><strong><span class=\"s1\">appender.ServerFileAppender.policies.size.size=1GB</span></strong><br><span class=\"s1\">appender.ServerFileAppender.strategy.type = DefaultRolloverStrategy</span><br><strong><span class=\"s1\">appender.ServerFileAppender.strategy.max = 9</span></strong><br><span class=\"s1\">appender.ServerFileAppender.filter.threshold.type= ThresholdFilter</span><br><span class=\"s1\">appender.ServerFileAppender.filter.threshold.level= INFO</span></pre>\n<p> </p>\n<h4>-- how to generate one striim.server.log every day</h4>\n<p>following generates one logfile every day regardless of the size using <code>TimeBasedTriggeringPolicy</code></p>\n<pre><code>appender.ServerFileAppender.fileName=/var/logs/striim.server.log<br>appender.ServerFileAppender.filePattern=/var/logs/striim.server_%d{yyyy-MM-dd}.log<br>appender.ServerFileAppender.policies.time.type = TimeBasedTriggeringPolicy<br>appender.ServerFileAppender.policies.time.interval= 1<br><br><span class=\"s1\"># RollingFileAppenders</span><br><span class=\"s1\">appender.ServerFileAppender.name = ServerFileAppender</span><br><span class=\"s1\">appender.ServerFileAppender.type = RollingFile</span><br><span class=\"s1\">appender.ServerFileAppender.fileName=logs/striim.server.log</span><br><span class=\"s1\">#appender.ServerFileAppender.filePattern=logs/striim.server.log-%i<br><strong>appender.ServerFileAppender.filePattern=logs/striim.server_%d{yyyy-MM-dd}.log</strong></span><br><span class=\"s1\">appender.ServerFileAppender.layout.type=PatternLayout</span><br><span class=\"s1\">appender.ServerFileAppender.layout.header=${StriimHeader:logHeader}</span><br><span class=\"s1\">appender.ServerFileAppender.layout.pattern=%d @%X{ServerToken} @%X{AppName} -%p %t %C.%M (%F:%L) %m%n</span><br><span class=\"s1\">appender.ServerFileAppender.policies.type = Policies<br><strong>appender.ServerFileAppender.policies.time.type = TimeBasedTriggeringPolicy</strong><br><strong>appender.ServerFileAppender.policies.time.interval= 1</strong></span><br><span class=\"s1\">#appender.ServerFileAppender.policies.size.type = 
SizeBasedTriggeringPolicy</span><br><span class=\"s1\">#appender.ServerFileAppender.policies.size.size=1GB</span><br><span class=\"s1\">appender.ServerFileAppender.strategy.type = DefaultRolloverStrategy</span><br><span class=\"s1\">appender.ServerFileAppender.strategy.max = 14</span><br><span class=\"s1\">appender.ServerFileAppender.filter.threshold.type= ThresholdFilter</span><br><span class=\"s1\">appender.ServerFileAppender.filter.threshold.level= INFO</span></code></pre>"} {"page_content": "<p><br><strong>Setup: Oracle to Synapse Initial Load</strong></p>\n<p>CREATE APPLICATION ORA_IL_SYNAPSE;</p>\n<p>CREATE SOURCE SRC_ORA_IL_SYNAPSE USING Global.DatabaseReader ( <br> DatabaseProviderType: 'Default', <br> FetchSize: 100, <br> Username: 'qatest', <br> adapterName: 'DatabaseReader', <br> QuiesceOnILCompletion: false, <br> Password_encrypted: 'true', <br> ConnectionURL: 'jdbc:oracle:thin:@localhost:1521:XE', <br> Tables: 'QATEST.HCAHPS', <br> Password: '&lt;&gt;' ) <br>OUTPUT TO SRC_ORA_IL_SYNAPSE_STREAM;</p>\n<p>CREATE OR REPLACE TARGET TGT_ORA_IL_SYNAPSE USING Global.AzureSQLDWHWriter ( <br> StorageAccessDriverType: 'ABFS', <br> Username: '&lt;&gt;', <br> Password: '&lt;&gt;', <br> Password_encrypted: 'true', <br> AccountAccessKey_encrypted: 'true', <br> ConnectionURL: 'jdbc:sqlserver://&lt;&gt;.database.windows.net:1433;database=dbname;', <br> AccountName: 'gen2adls2', <br> uploadpolicy: 'eventcount:10000,interval:1m', <br> columnDelimiter: '|', <br> Tables: 'QATEST.HCAHPS,dbo.HCAHPS_TEST KEYCOLUMNS(HCAHPS_ID) COLUMNMAP(hcahps_id=HCAHPS_ID)', <br> AccountAccessKey: '&lt;&gt;', <br> Mode: 'MERGE', <br> adapterName: 'AzureSQLDWHWriter' ) <br>INPUT FROM SRC_ORA_IL_SYNAPSE_STREAM;</p>\n<p>END APPLICATION ORA_IL_SYNAPSE;</p>\n<p><strong>Issue: AzureSQLDWHWriter failed with following error</strong></p>\n<p>2020-07-21 14:27:35,083 @admin.ORA_IL_SYNAPSE -ERROR com.webaction.context.DWHContext.createExternalTable() : Creating External table failed.<br> SQL {CREATE EXTERNAL TABLE dbo.striim_admin_TGT_ORA_IL_SYNAPSE_HCAHPS_TEST_EXTERNAL (<br>[hcahps_id] numeric(10,0),<br>[city] varchar(50),<br>[county] varchar(40),<br>[measure_start_dt] date,<br> SeqNo BigInt, <br> isDeleted BIT <br>) WITH (<br> LOCATION = '/.striim/admin/TGT_ORA_IL_SYNAPSE/dbo.HCAHPS_TEST/current/', <br> DATA_SOURCE = striim_admin_TGT_ORA_IL_SYNAPSE_EXT_DATA_SRC, <br> FILE_FORMAT = striim_admin_TGT_ORA_IL_SYNAPSE_FILE_FORMAT, <br> REJECT_TYPE = VALUE,<br> REJECT_VALUE = 0 );}<br>Error message {Managed Service Identity has not been enabled on this server. Please enable Managed Service Identity and try again.} </p>\n<p><strong>Cause &amp; Fix:</strong></p>\n<p><span>AzureSQLDWHWriter failed due to missing database credential. A database credential is not mapped to a server login or database user. 
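</span></p>\n<p>For example, a credential (plus the master key it requires, as noted below) can be created along these lines; the password, credential name, identity, and secret are placeholders and should match how your storage account is secured:</p>\n<pre>CREATE MASTER KEY ENCRYPTION BY PASSWORD = '&lt;strong password&gt;';<br><br>CREATE DATABASE SCOPED CREDENTIAL striim_adls_credential<br>WITH IDENTITY = '&lt;identity_name&gt;', SECRET = '&lt;storage account access key&gt;';</pre>\n<p><span>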
The credential is used by the database to access to the external location anytime the database is performing an operation that requires access.</span></p>\n<pre><span>SELECT * FROM sys.database_scoped_credentials;</span></pre>\n<pre><span><span class=\"hljs-keyword\">CREATE</span> <span class=\"hljs-keyword\">DATABASE</span> <span class=\"hljs-keyword\">SCOPED</span> <span class=\"hljs-keyword\">CREDENTIAL</span> <span class=\"hljs-variable\">credential_name</span> <br><span class=\"hljs-keyword\">WITH</span> <span class=\"hljs-keyword\">IDENTITY</span> = <span class=\"hljs-string\">'identity_name'</span> [ , <span class=\"hljs-keyword\">SECRET</span> = <span class=\"hljs-string\">'secret'</span> ]</span></pre>\n<p><span>Before creating a database scoped credential, the database must have a master key to protect the credential</span></p>"} {"page_content": "<p>In a MSSqlServer to MSSqlServer CDC App following error can be seen from the DatabaseWriter when the Target table has computed columns.</p>\n<p> </p>\n<pre>BatchUpdateException | CDC_Target_MSSql<br>The column \"full_name\" cannot be modified because it is either a <br>computed column or is the result of a UNION operator.COLLAPSE</pre>\n<p> </p>\n<p>A computed column is computed from an expression that can use other columns in the same table. Unless otherwise specified, computed columns are virtual columns that are not physically stored in the table. Their values are recalculated every time they are referenced in a query.</p>\n<p>CREATE TABLE person (<br>id int primary key,<br>first_name varchar(10),<br>last_name varchar(10),<br>full_name as (first_name+last_name)<br>);</p>\n<p>Such columns can be identified from sys.columns using following query</p>\n<pre data-renderer-start-pos=\"355\">SELECT name, <strong>is_computed</strong> FROM sys.columns where <br>Object_ID = Object_ID(N'dbo.person')<br><br>id 0<br>first_name 0<br>last_name 0<br><strong>full_name 1</strong></pre>\n<p> </p>\n<p>Insert into person (id,first_name,last_name,full_name) values (100,'striim','user','striim user');</p>\n<p>Above insert would fail even when run directly in SQL Server with following message</p>\n<pre>Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The column <br>\"full_name\" cannot be modified because it is either a computed column <br>or is the result of a UNION operator.</pre>\n<p> </p>\n<p>The error from the DatabaseWriter can be bypassed by ignoring the computed column. This needs to be done on the Source CDC App (MSSqlReader) so that the computed column values are not passed to the target DatabaseWriter. 
The target SQL Server would compute the column value based on the expression and values are not lost.</p>\n<p> </p>\n<pre>CREATE OR REPLACE SOURCE CDC_Source_MSSql USING Global.MSSqlReader ( <br>ExcludedTables: 'dbo.sys%;dbo.v_%', <br>Tables: 'dbo.%;dbo.person(-full_name);', </pre>\n<p> </p>\n<p>If having more than one computed column comma separate with the negate sign for each of those columns.</p>\n<p>If having explicit table names specify the exclusion list only for those tables like below</p>\n<p> </p>\n<pre>CREATE OR REPLACE SOURCECDC_Source_MSSql USING Global.MSSqlReader ( <br>Tables: 'dbo.sales;dbo.person(-full_name,-another_computed_column);',</pre>"} {"page_content": "<h3> </h3>\n<h3>Goal:</h3>\n<p>This note shows in detail all the steps needed to make a setup using SSL communication between the Oracle Database and a JDBC client.</p>\n<h3>Steps: </h3>\n<p>1] Create directory structure on the Database server to hold the wallet:</p>\n<pre>[oracle@OGGBLR oracle]$ mkdir -p /u01/app/oracle/wallet<br>[oracle@OGGBLR oracle]$<br>[oracle@OGGBLR oracle]$ cd /u01/app/oracle/wallet<br>[oracle@OGGBLR wallet]$ pwd<br>/u01/app/oracle/wallet</pre>\n<p>2] Create the an empty wallet for the Oracle Database server:</p>\n<p><strong>orapki wallet create -wallet ./server_wallet -auto_login -pwd &lt;WALLET PASSWORD&gt;</strong></p>\n<p>This command created a directory named server_wallet and inside the next two files:</p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet create -wallet ./server_wallet -auto_login -pwd oracle123<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>[oracle@OGGBLR wallet]$<br>[oracle@OGGBLR wallet]$ ls -lrt /u01/app/oracle/wallet/server_wallet/*<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:53 /u01/app/oracle/wallet/server_wallet/ewallet.p12.lck<br>-rw-------. 1 oracle oinstall 75 May 12 01:53 /u01/app/oracle/wallet/server_wallet/ewallet.p12<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:53 /u01/app/oracle/wallet/server_wallet/cwallet.sso.lck<br>-rw-------. 1 oracle oinstall 120 May 12 01:53 /u01/app/oracle/wallet/server_wallet/cwallet.sso<br>[oracle@OGGBLR wallet]$</pre>\n<p><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;\">3] Add a self signed certificate into the wallet:</span></p>\n<p><strong>orapki wallet add -wallet ./server_wallet -dn \"CN=`hostname`\" -keysize 2048 -sign_alg sha512 -self_signed -validity 365 -pwd &lt;WALLET PASSWORD&gt;</strong></p>\n<p>Here we are using `hostname` as the CN to get the actual server name in it from the hostname command execution. Also we are using the sha512 signature algorithm specified by the sing_alg parameter.</p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet add -wallet ./server_wallet -dn \"CN=OGGBLR.localdomain\" -keysize 2048 -sign_alg sha512 -self_signed -validity 365 -pwd oracle123<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.</pre>\n<p>4] We can display the content of the wallet as follows:</p>\n<p><strong>orapki wallet display -wallet ./server_wallet</strong></p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet display -wallet ./server_wallet<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. 
All rights reserved.<br><br>Requested Certificates:<br>User Certificates:<br>Subject: CN=OGGBLR.localdomain<br>Trusted Certificates:<br>Subject: CN=OGGBLR.localdomain<br>[oracle@OGGBLR wallet]$ </pre>\n<p>5] Export the server's certificate</p>\n<p><strong>orapki wallet export -wallet ./server_wallet -dn \"CN=`hostname`\" -cert ./server_wallet/server_cert.txt</strong></p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet export -wallet ./server_wallet -dn \"CN=OGGBLR.localdomain\" -cert ./server_wallet/server_cert.txt<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>[oracle@OGGBLR wallet]$ ls -lrt /u01/app/oracle/wallet/server_wallet/*<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:53 /u01/app/oracle/wallet/server_wallet/ewallet.p12.lck<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:53 /u01/app/oracle/wallet/server_wallet/cwallet.sso.lck<br>-rw-------. 1 oracle oinstall 3792 May 12 01:54 /u01/app/oracle/wallet/server_wallet/ewallet.p12<br>-rw-------. 1 oracle oinstall 3837 May 12 01:54 /u01/app/oracle/wallet/server_wallet/cwallet.sso<br>-rw-------. 1 oracle oinstall 986 May 12 01:55 /u01/app/oracle/wallet/server_wallet/server_cert.txt<br>[oracle@OGGBLR wallet]$</pre>\n<p>6] Proceed with the same steps for the client side. Create the an empty wallet for the client server:</p>\n<p><strong>orapki wallet create -wallet ./client_wallet -auto_login -pwd &lt;WALLET PASSWORD&gt;</strong></p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet create -wallet ./client_wallet -auto_login -pwd oracle123<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>[oracle@OGGBLR wallet]$<br>[oracle@OGGBLR wallet]$ ls -lrt /u01/app/oracle/wallet/client_wallet/*<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:55 /u01/app/oracle/wallet/client_wallet/ewallet.p12.lck<br>-rw-------. 1 oracle oinstall 75 May 12 01:55 /u01/app/oracle/wallet/client_wallet/ewallet.p12<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:55 /u01/app/oracle/wallet/client_wallet/cwallet.sso.lck<br>-rw-------. 1 oracle oinstall 120 May 12 01:55 /u01/app/oracle/wallet/client_wallet/cwallet.sso<br>[oracle@OGGBLR wallet]$</pre>\n<p>7] Add a self signed certificate into the wallet:</p>\n<p><strong>orapki wallet add -wallet ./client_wallet -dn \"CN=client\" -keysize 2048 -sign_alg sha512 -self_signed -validity 365 -pwd &lt;WALLET PASSWORD&gt;</strong></p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet add -wallet ./client_wallet -dn \"CN=client\" -keysize 2048 -sign_alg sha512 -self_signed -validity 365 -pwd oracle123<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.</pre>\n<p>[oracle@OGGBLR wallet]$</p>\n<p>8] Display the client's wallet content:</p>\n<p><strong>orapki wallet display -wallet ./client_wallet</strong></p>\n<pre>oracle@OGGBLR wallet]$ orapki wallet display -wallet ./client_wallet<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. 
All rights reserved.<br><br>Requested Certificates:<br>User Certificates:<br>Subject: CN=client<br>Trusted Certificates:<br>Subject: CN=client<br>[oracle@OGGBLR wallet]$</pre>\n<p>9] Export the client's certificate:</p>\n<p><strong>orapki wallet export -wallet ./client_wallet -dn \"CN=client\" -cert ./client_wallet/client_cert.txt</strong></p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet export -wallet ./client_wallet -dn \"CN=client\" -cert ./client_wallet/client_cert.txt<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>[oracle@OGGBLR wallet]$<br>[oracle@OGGBLR wallet]$ ls -lrt /u01/app/oracle/wallet/client_wallet/*<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:55 /u01/app/oracle/wallet/client_wallet/ewallet.p12.lck<br>-rw-rw-rw-. 1 oracle oinstall 0 May 12 01:55 /u01/app/oracle/wallet/client_wallet/cwallet.sso.lck<br>-rw-------. 1 oracle oinstall 3736 May 12 01:56 /u01/app/oracle/wallet/client_wallet/ewallet.p12<br>-rw-------. 1 oracle oinstall 3781 May 12 01:56 /u01/app/oracle/wallet/client_wallet/cwallet.sso<br>-rw-------. 1 oracle oinstall 953 May 12 01:56 /u01/app/oracle/wallet/client_wallet/client_cert.txt<br>[oracle@OGGBLR wallet]$</pre>\n<p>10] Exchange certificates in the wallets:</p>\n<p><br><strong>orapki wallet add -wallet ./server_wallet -trusted_cert -cert ./client_wallet/client_cert.txt -pwd &lt;WALLET PASSWORD&gt;</strong></p>\n<p><br><strong>orapki wallet add -wallet ./client_wallet -trusted_cert -cert ./server_wallet/server_cert.txt -pwd &lt;WALLET PASSWORD&gt;</strong></p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet add -wallet ./server_wallet -trusted_cert -cert ./client_wallet/client_cert.txt -pwd oracle123<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>[oracle@OGGBLR wallet]$<br>[oracle@OGGBLR wallet]$ orapki wallet add -wallet ./client_wallet -trusted_cert -cert ./server_wallet/server_cert.txt -pwd oracle123<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>[oracle@OGGBLR wallet]$</pre>\n<p>11] Display the wallets content:</p>\n<p><strong>orapki wallet display -wallet ./server_wallet</strong></p>\n<p><strong>orapki wallet display -wallet ./client_wallet</strong></p>\n<pre>[oracle@OGGBLR wallet]$ orapki wallet display -wallet ./server_wallet<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>Requested Certificates:<br>User Certificates:<br>Subject: CN=OGGBLR.localdomain<br>Trusted Certificates:<br>Subject: CN=client<br>Subject: CN=OGGBLR.localdomain<br>[oracle@OGGBLR wallet]$<br>[oracle@OGGBLR wallet]$ orapki wallet display -wallet ./client_wallet<br>Oracle PKI Tool : Version 12.1.0.2<br>Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br><br>Requested Certificates:<br>User Certificates:<br>Subject: CN=client<br>Trusted Certificates:<br>Subject: CN=client<br>Subject: CN=OGGBLR.localdomain<br>[oracle@OGGBLR wallet]$</pre>\n<p> </p>\n<h3>Configure Oracle Database to use TCPS with the wallet created above.</h3>\n<p>1] Make sure the listener.ora has similar lines as follows. 
Be very careful with the wallet location and the port used for the TCPS protocol:</p>\n<table style=\"height: 43px;\" width=\"695\">\n<tbody>\n<tr>\n<td style=\"width: 682.8px;\">\n<p>[oracle@OGGBLR admin]$ cat listener.ora<br># listener.ora Network Configuration File: /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora<br># Generated by Oracle configuration tools.</p>\n<p>SSL_CLIENT_AUTHENTICATION = FALSE<br>WALLET_LOCATION =<br>(SOURCE =<br>(METHOD = FILE)<br>(METHOD_DATA =<br>(DIRECTORY = /u01/app/oracle/wallet/server_wallet)<br>)<br>)<br>SID_LIST_LISTENER =<br>(SID_LIST =<br>(SID_DESC =<br>(GLOBAL_DBNAME = ORCL)<br>(ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)<br>(SID_NAME = orcl)<br>)<br>)</p>\n<p>LISTENER =<br>(DESCRIPTION_LIST =<br>(DESCRIPTION =<br>(ADDRESS = (PROTOCOL = TCP)(HOST = OGGHOLBLR)(PORT = 1539))<br>(ADDRESS = (PROTOCOL = TCPS)(HOST = OGGHOLBLR)(PORT = 2484))<br>(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))<br>)<br>)</p>\n<span style=\"font-size: 15px;\"></span>\n</td>\n</tr>\n</tbody>\n</table>\n<p> </p>\n<p>2] Edit the sqlnet.ora file and make sure it has similar lines as follows:</p>\n<table style=\"height: 43px;\" width=\"692\">\n<tbody>\n<tr>\n<td style=\"width: 680.4px;\">\n<p># sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora<br># Generated by Oracle configuration tools.<br>WALLET_LOCATION =<br>(SOURCE =<br>(METHOD = FILE)<br>(METHOD_DATA =<br>(DIRECTORY = /u01/app/oracle/wallet/server_wallet)<br>)<br>)</p>\n<p>SSL_CLIENT_AUTHENTICATION = FALSE<br>SSL_VERSION = 1.0<br>SSL_CIPHER_SUITES = (SSL_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA)<br>SQLNET.AUTHENTICATION_SERVICES = (TCPS,NTS,BEQ)</p>\n<font face=\"-apple-system, system-ui, Segoe UI, Helvetica, Arial, sans-serif\"><span style=\"font-size: 15px;\"> </span></font>\n</td>\n</tr>\n</tbody>\n</table>\n<p>3] Restart the listener</p>\n<p>lsnrctl stop<br>lsnrctl start</p>\n<p> </p>\n<h3><span id=\"kmPgTpl:r1:ot71\" class=\"kmContent\"><strong>Java Client Testing with TCPS</strong></span></h3>\n<p> </p>\n<p>1] Copy the client wallet to the client</p>\n<p>mkdir -p $HOME/SSL/wallets<br>cd $HOME/SSL/wallets<br>scp -r oracle@&lt;HOST&gt;:/home/oracle/SSL/wallets/client_wallet .</p>\n<pre>velus-MacBook-Pro:wallet velu$ scp oracle@192.168.0.118:/u01/app/oracle/wallet/client_wallet/* .<br><br>velus-MacBook-Pro:wallet velu$ ls -lrrt<br><br>-rw------- 1 velu staff 953 May 12 14:28 client_cert.txt<br>-rw------- 1 velu staff 4557 May 12 14:28 cwallet.sso<br>-rw-r--r-- 1 velu staff 0 May 12 14:28 cwallet.sso.lck<br>-rw------- 1 velu staff 4512 May 12 14:28 ewallet.p12<br>-rw-r--r-- 1 velu staff 0 May 12 14:28 ewallet.p12.lck </pre>\n<p>we can proceed to test the connection using a java stand-alone program.<br>For this, we can use the program (ssl-jdbc-demos.zip) provided in Note: 762286.1.</p>\n<p>2] Move to the working directory to place the files for the tests</p>\n<pre>mkdir -p /Users/velu/network/SSL_jdbc</pre>\n<p>3] Put the ssl-jdbc-demos.zip in this directory and unzip the content:</p>\n<pre> velus-MacBook-Pro:ssl-jdbc-demos velu$ ls -lrrt<br>total 7912<br>-rw-r--r--@ 1 velu staff 2365 May 7 15:12 JDBCSSLTester.java<br>-rw-r--r--@ 1 velu staff 211 May 7 15:12 db.properties<br>-rw-r--r--@ 1 velu staff 152 May 7 15:12 test1.properties</pre>\n<p>4] Using JDK 8 ( Check the java version )</p>\n<p> </p>\n<pre>velus-MacBook-Pro:ssl-jdbc-demos velu$ java -version<br>java version \"1.8.0_231\"<br>Java(TM) SE Runtime Environment (build 
1.8.0_231-b11)<br>Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)<br>velus-MacBook-Pro:ssl-jdbc-demos velu$ </pre>\n<p>5] Copy the necessary jar files form the Oracle Client software Home:</p>\n<pre><br>cd /Users/velu/network/SSL_jdbc<br><br>cp $ORACLE_HOME/jlib/osdt_cert.jar .<br>cp $ORACLE_HOME/jlib/osdt_core.jar . <br>cp $ORACLE_HOME/jdbc/ojdbc8.jar .<br>cp $ORACLE_HOME/jlib/oraclepki.jar .<br><br>velus-MacBook-Pro:ssl-jdbc-demos velu$ ls -lrt<br>total 9312<br>-rw-r--r--@ 1 velu staff 2365 May 7 15:12 JDBCSSLTester.java<br>-rw-r--r--@ 1 velu staff 211 May 7 15:12 org_db.properties<br>-rw-r--r--@ 1 velu staff 152 May 7 15:12 test1.properties<br>-rw-r--r--@ 1 velu staff 4036257 May 7 15:14 ojdbc8.jar<br>-rw-r--r-- 1 velu staff 180 May 7 15:18 db.properties<br>-rw-r--r-- 1 velu staff 193803 May 7 15:59 osdt_cert.jar<br>-rw-r--r-- 1 velu staff 276360 May 7 15:59 osdt_core.jar<br>-rw-r--r-- 1 velu staff 234329 May 7 15:59 oraclepki.jar</pre>\n<p>6] Compile the program</p>\n<p> </p>\n<pre>javac -cp .:ojdbc8.jar:oraclepki.jar:osdt_cert.jar:osdt_core.jar JDBCSSLTester.java</pre>\n<p>7] Edit db.properties to specify your database connect string and username/password.</p>\n<p>Note: The connect string should include TCPS</p>\n<p> </p>\n<pre>dbURL=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=&lt;HOST&gt;)(PORT=&lt;TCPS PORT&gt;))(CONNECT_DATA=(SERVICE_NAME=&lt;SERVICE NAME&gt;)))<br>dbuser=&lt;USER&gt;<br>dbpassword=&lt;PASSWORD&gt;<br><br>velus-MacBook-Pro:ssl-jdbc-demos velu$ cat db.properties<br>dbURL=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)\\<br>(HOST=192.168.0.118)(PORT=2484))\\<br>(CONNECT_DATA=(SERVICE_NAME=pdborcl)))<br>dbuser=qatest<br>dbpassword=qatest</pre>\n<p>8] Edit the test1.properties file, which we use to specify our truststore details.</p>\n<pre>javax.net.ssl.trustStore=/home/oracle/SSL/wallets/client_wallet/ewallet.p12<br>javax.net.ssl.trustStoreType=PKCS12<br>javax.net.ssl.trustStorePassword=&lt;WALLET PASSWORD&gt;<br><br>velus-MacBook-Pro:ssl-jdbc-demos velu$ cat test1.properties<br>javax.net.ssl.trustStore=/Users/velu/wallet/ewallet.p12<br>javax.net.ssl.trustStoreType=PKCS12<br>javax.net.ssl.trustStorePassword=oracle123</pre>\n<p>9] Run as follows, passing in the properties file for the truststore details to be read by the program:</p>\n<pre>oracle@client SSL]$ java -cp .:ojdbc7.jar:oraclepki.jar:osdt_cert.jar:osdt_core.jar JDBCSSLTester test1.properties<br><br>velus-MacBook-Pro:ssl-jdbc-demos velu$ java -cp .:ojdbc8.jarr:oraclepki.jar:osdt_cert.jar:osdt_core.jar JDBCSSLTester test1.properties <br>Start: Thu May 07 16:13:15 IST 2020<br>Conncted as DATABASE USER QATEST2<br>Ended: Thu May 07 16:13:16 IST 2020</pre>\n<p> </p>\n<p> </p>"} {"page_content": "<p>Release notes is also here <a href=\"https://www.striim.com/docs/en/release-notes.html\">https://www.striim.com/docs/en/release-notes.html</a></p>\n<h1>Striim® 4.1.2 release notes</h1>\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h2 class=\"title\">Release notes<span id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_UUID-0481c1f0-481f-2a54-88a8-c7680678db01\"></span>\n</h2>\n</div>\n</div>\n</div>\n<p>The following are the release notes for<span> </span><span class=\"phrase\">Striim Platform</span><span> </span><span class=\"phrase\">4.1.2</span>.</p>\n<h3 id=\"idm46587288464880\" class=\"bridgehead\">Known issue in 4.1.2</h3>\n<section id=\"requirements-404\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" 
data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm13361473948742\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Requirements</h3>\n</div>\n</div>\n</div>\n<p>Ubuntu 14.04 LTS and Windows Server 2012, which were certified for previous releases, are not certified for 4.1.2. We strongly recommend that before upgrading to 4.1.2 you upgrade to a certified operating system.</p>\n<p>See<span> </span><a class=\"xref linktype-component\" title=\"System requirements\" href=\"https://www.striim.com/docs/platform/en/system-requirements.html\"><span class=\"xreftitle\">System requirements</span></a>.</p>\n</section>\n<section id=\"changes-that-may-require-modification-of-your-tql-code--workflow--or-environment\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm13361473376084\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Changes that may require modification of your TQL code, workflow, or environment</h3>\n</div>\n</div>\n</div>\n<div id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_itemizedlist-idm13287332654646\" class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>Starting with release 4.1.2, the default alerts for server low memory and high disk usage and application halts and crashes have been replaced by a new set of default alerts. After upgrading to 4.1.2, you will need to redo any customizations you made to the old default alerts (see<span> </span><a class=\"xref linktype-component\" title=\"Managing system alerts\" href=\"https://www.striim.com/docs/platform/en/managing-system-alerts.html\"><span class=\"xreftitle\">Managing system alerts</span></a>). If you previously configured alerts for Slack, after upgrading go to the<span> </span><a class=\"xref linktype-component\" title=\"Alert Manager page\" href=\"https://www.striim.com/docs/platform/en/web-ui-guide.html#alert-manager-page\"><span class=\"xreftitle\">Alert Manager page</span></a><span> </span>and click<span> </span><span class=\"bold\"><strong>Auto Correct Slack</strong></span>.</p>\n</li>\n<li class=\"listitem\">\n<p>Starting with release 4.1.2, BigQuery Writer supports the Storage Write API (see<span> </span><a class=\"link\" href=\"https://cloud.google.com/bigquery/docs/write-api\" target=\"_blank\" rel=\"noopener\">BigQuery &gt; Documentation &gt; Guides &gt; Batch load and stream data with BigQuery Storage Write API</a>).</p>\n<div class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>When you upgrade from an earlier release of Striim, applications using the BigQuery legacy streaming API will be updated to use the Storage Write API. If for any reason you want to use the legacy streaming API, you may switch back (see the notes on Streaming Configuration in<span> </span><a class=\"xref linktype-component\" title=\"BigQuery Writer properties\" href=\"https://www.striim.com/docs/platform/en/bigquery-writer.html#bigquery-writer-properties\"><span class=\"xreftitle\">BigQuery Writer properties</span></a>).</p>\n</li>\n<li class=\"listitem\">\n<p>Applications using the Load API will not be updated.</p>\n</li>\n</ul>\n</div>\n</li>\n<li class=\"listitem\">\n<p>If you installed the Snowflake JDBC driver, remove it before upgrading to 4.1.2. The required driver is now bundled with Striim Server and the old driver could cause conflicts. 
The driver is still required if running Snowflake Writer in a Forwarding Agent.</p>\n</li>\n<li class=\"listitem\">\n<p>Support for Kafka 0.8, 0.9, and 0.10 has been deprecated. Those versions of Kafka Reader and Kafka Writer will still work in release 4.1.2, but support may be removed in a future release.</p>\n</li>\n<li class=\"listitem\">\n<p>Starting with release 4.1.2, the application state CRASH has been renamed TERMINATED. When upgrading from a previous release, alerts (see<span> </span><a class=\"xref linktype-component\" title=\"Creating and managing custom alerts\" href=\"https://www.striim.com/docs/platform/en/creating-and-managing-custom-alerts.html\"><span class=\"xreftitle\">Creating and managing custom alerts</span></a>) with the condition<span> </span><span class=\"bold\"><strong>App crashed</strong></span><span> </span>will automatically be changed to condition<span> </span><span class=\"bold\"><strong>App terminated</strong></span><span> </span>and<span> </span><code class=\"code\">EXCEPTIONHANDLER</code><span> </span>(see<span> </span><a class=\"xref linktype-component\" title=\"Handling exceptions\" href=\"https://www.striim.com/docs/platform/en/handling-exceptions.html\"><span class=\"xreftitle\">Handling exceptions</span></a>)<span> </span><code class=\"code\">CRASH</code><span> </span>actions will automatically be changed to<span> </span><code class=\"code\">STOPPED</code>.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>OJet SSL configuration</strong></span>: Starting with release 4.1.2, the wallet location is specified in the SSL Config property (see<span> </span><a class=\"xref linktype-component\" title=\"OJet properties\" href=\"https://www.striim.com/docs/platform/en/oracle-database-cdc.html#ojet-properties\"><span class=\"xreftitle\">OJet properties</span></a>). It is no longer necessary to set an environment variable on the Striim server, though if you have done that it will still work (though support for this may be discontinued in a future release).</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Upgrading if you use OJet</strong></span>: After upgrading and before running OJet, run<span> </span><code class=\"code\">setupOJet.sh</code><span> </span>again as described in<span> </span><a class=\"xref linktype-fork\" title=\"Running the OJet setup script on Oracle\" href=\"https://www.striim.com/docs/platform/en/oracle-database-cdc.html#running-the-ojet-setup-script-on-oracle\"><span class=\"xreftitle\">Running the OJet setup script on Oracle</span></a><span> </span>(if OJet reads from a single primary database or a logical standby) or<span> </span><a class=\"xref linktype-fork\" title=\"Configuring Active Data Guard to use OJet\" href=\"https://www.striim.com/docs/platform/en/oracle-database-cdc.html#configuring-active-data-guard-to-use-ojet\"><span class=\"xreftitle\">Configuring Active Data Guard to use OJet</span></a><span> </span>(if OJet reads from a downstream database).</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Handling of underscore characters in Tables and Excluded Tables properties</strong></span>: Underscore characters are now always literal underscores. In previous releases, they were sometimes treated as single-character wildcards, in which cases escaping them with a backslash ()<code class=\"code\">\\_</code>) indicated a literal underscore. 
Also, it is no longer necessary to escape backslashes when<span> </span><a class=\"xref linktype-component\" title=\"Using non-default case and special characters in table identifiers\" href=\"https://www.striim.com/docs/platform/en/using-source-and-target-adapters-in-applications.html#using-non-default-case-and-special-characters-in-table-identifiers\"><span class=\"xreftitle\">Using non-default case and special characters in table identifiers</span></a>.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Importing TQL may fail with missing property errors</strong></span>: In previous releases, any reader, parser, writer, or formatter properties that have default values could be omitted from TQL and on import they would be given those default values. In this release, some TQL that imported without error in previous releases may fail on import due when these properties are not specified. The error message will indicate what property you must add for import to work. This may also happen with sample code in the documentation.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Oracle Reader's XStream support has been deprecated</strong></span>.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Salesforce Reader</strong></span><span> </span>: The sObject property has been removed in favor of the new Objects property. If you use the export-import method to upgrade from an earlier Striim release, the value of the sObject property will be copied to the Objects property.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Oracle Reader's DDL capture mode has been deprecated in favor of schema evolution</strong></span>: The DDL support described in \"Including DDL operations in Oracle Reader output\" in the 3.10.3 documentation has been deprecated in Striim 4.x, Oracle Reader's DDL Capture Mode property does not appear in the UI, and creating new applications using this deprecated feature is no longer supported.. Applications created in earlier releases that use this feature are still supported but schema evolution cannot be enabled for those applications. If you wish to migrate from the old feature to schema evolution, you must create a new application.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>New Halt state &amp; alerts</strong></span>: The new HALT status indicates that an application failure is due to an external cause, such as a source or target database being offline. If you have any alerts on application Crash, you should create additional alerts for Halt. For more in formation, see<span> </span><a class=\"xref linktype-component linktextconsumer\" title=\"Sending alerts about servers and applications\" href=\"https://www.striim.com/docs/platform/en/sending-alerts-about-servers-and-applications.html\"><span class=\"xreftitle\">Sending alerts about servers and applications</span></a>.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>An in-place upgrade from Striim 3.x will delete all data persisted to Elasticsearch</strong></span>. To preserve this data, use the export-import method instead. 
See<span> </span><a class=\"xref linktype-component\" title=\"In-place upgrade\" href=\"https://www.striim.com/docs/platform/en/in-place-upgrade.html\"><span class=\"xreftitle\">In-place upgrade</span></a>.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>BigQuery expects time-partitioned tables by<span> </span></strong></span>default: By default, starting in 4.0, BigQuery expects target tables to be partitioned by ingestion time or DATE, DATETIME, or TIMESTAMP columns. See \"Improving performance by partitioning BigQuery tables\" in<span> </span><a class=\"xref linktype-component\" title=\"BigQuery Writer\" href=\"https://www.striim.com/docs/platform/en/bigquery-writer.html\"><span class=\"xreftitle\">BigQuery Writer</span></a>.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Oracle Reader transaction buffer defaults have changed</strong></span>: By default, Oracle Reader now automatically buffers transactions larger than 100MB to disk. See \"Transaction Buffer Type\" in<span> </span><a class=\"xref linktype-component\" title=\"Oracle Reader properties\" href=\"https://www.striim.com/docs/platform/en/oracle-database-cdc.html#oracle-reader-properties\"><span class=\"xreftitle\">Oracle Reader properties</span></a>.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Kafka SASL configuration has changed</strong></span>: Starting in Striim 4.0, SASL properties are specified in the Kafka Config property of Kafka Reader and Kafka Writer. If you connect to a Kafka cluster that uses SASL, see<span> </span><a class=\"xref linktype-component\" title=\"Configuring Kafka for persisted streams\" href=\"https://www.striim.com/docs/platform/en/configuring-kafka-for-persisted-streams.html\"><span class=\"xreftitle\">Configuring Kafka for persisted streams</span></a><span> </span>for details.</p>\n</li>\n<li class=\"listitem\">\n<p><span class=\"bold\"><strong>Legacy SQL Server JDBC drivers</strong></span>: SQL Server 2008 requires an older version of Microsoft's JDBC driver that is not compatible with the most recent SQL Server versions. 
See<span> </span><a class=\"xref linktype-fork\" title=\"Install the Microsoft JDBC Driver for SQL Server 2008 in a Striim server\" href=\"https://www.striim.com/docs/platform/en/installing-third-party-drivers-in-striim-platform.html#install-the-microsoft-jdbc-driver-for-sql-server-2008-in-a-striim-server\"><span class=\"xreftitle\">Install the Microsoft JDBC Driver for SQL Server 2008 in a Striim server</span></a><span> </span>or<span> </span><a class=\"xref linktype-fork\" title=\"Install the Microsoft JDBC Driver in a Forwarding Agent\" href=\"https://www.striim.com/docs/platform/en/striim-forwarding-agent-installation-and-configuration.html#install-the-microsoft-jdbc-driver-in-a-forwarding-agent\"><span class=\"xreftitle\">Install the Microsoft JDBC Driver in a Forwarding Agent</span></a><span> </span>for more information.</p>\n</li>\n</ul>\n<p> </p>\n</div>\n</section>\n<section id=\"customer-reported-issues-fixed-in-release-4-1-2-2\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm4572336071326433640200297882\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Customer-reported issues fixed in release 4.1.2.2</h3>\n</div>\n</div>\n</div>\n<div class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-36151: Docker image supports additional Kubernetes arguments</p>\n</li>\n</ul>\n<p> </p>\n</div>\n</section>\n<section id=\"customer-reported-issues-fixed-in-release-4-1-2-1\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm13361473366342\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Customer-reported issues fixed in release 4.1.2.1</h3>\n</div>\n</div>\n</div>\n<div class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-24253: Database Reader &gt; Database Writer issues with PostgreSQL partitioned table</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-28098: Database Writer SQL Server issue converting nvarchar to numeric</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34632: GGTrailReader \"RMIWebSocket.handleMessageException\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34691: CREATE VAULT logging issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34706: issues with multi-server cluster after 4.1.0.1 &gt; 4.1.0.4 upgrade</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34722: application remains in Stopping state for a long time</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34742: Salesforce Reader recovery checkpoint Issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34801: Mongo CosmosDB Reader &gt; MongoDB Writer application issue after 4.1.0 &gt; 4.1.2 upgrade</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34840: MSSQL Reader \"Table Metadata does not match metadata in ResultSet\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34897: unable to save multiple email addresses for custom alert</p>\n</li>\n</ul>\n<p> </p>\n</div>\n</section>\n<section id=\"customer-reported-issues-fixed-in-release-4-1-2\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm13361473358296\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Customer-reported issues fixed in release 4.1.2</h3>\n</div>\n</div>\n</div>\n<div 
id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_itemizedlist-idm13348003495362\" class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-23078: Oracle Reader ORA-00310 error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-24646: GG Trail Parser checkpoint issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-27396: log contains many unneeded INFO level messages from RMIWebSocket</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-27491: Oracle Reader sending all columns when Compression=True</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-29118: JMX Reader \"javax.management.AttributeNotFoundException\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-29662: MON failure in 4.0.5.1B</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30421: MS SQL Reader Azure Active Directory authentication failure</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30423: DROP failure with JMS Reader</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30605: log4j issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30627: Embed Dashboard button is visible to non-admin user</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30921: Oracle Reader issue reading CLOB</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30940: monitoring stops updating</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31101: S3 Reader error \"Unexpected character ('a' (code 97))\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31299: OJet KeyColumns issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31306: MySQL Reader error \"Binlog Client closed abruptlyBinLog\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31397: System$Notification.NotificationSourceApp issue with multiple servers</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31398: OJet: can't use property variable</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31424: Oracle Reader: LogMiner stops working after DataGuard switchover to physical standby and back</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31525: Oracle Reader error \"Component Name: xxx. Component Type: SOURCE. Cause: null\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31540: OJet issues with ConnectionURL and DownstreamConnectionURL</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31541: OJet issue with database name</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31609: MySQL Reader and MariaDB Xpand Reader checkpointing issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31630: security issue in 4.0.5.1A</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31640: Database Reader Sybase &gt; BigQuery Writer issue with BIT type</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31641: OJet \"Downstream capture: \"missing multi-version data dictionary\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31651: Oracle Reader not using UNIQUE INDEX as key</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31711: OJet \"Failed to reposition by SCN\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31878: OJet \"Could not find column ... 
in cached metadata of table\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32052: OJet issue with BLOB and RAW types</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32083: Databricks Writer \"HiveSQLException:Invalid SessionHandle\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32255: GG Trail Reader \"ColumnTypeMismatchException\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32328: OJet java.lang.NullPointerException in server log after undeploy</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32415: OJet ORA-01013 error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32485: REST API output for DESCRIBE is incomplete</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32501: Mongo CosmosDB Writer is slow</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32503: \"duplicate key value violates unique constraint 'billing_cycle_uk_idx'\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32568: Database Reader with SQL Server &gt; BigQuery RWriter \"\"Invalid datetime string\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32570: Mongo CosmosDB Reader &gt; Mongo CosmosDB Writer changed ObjectID to String</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32582: Oracle Reader &gt; Databricks Writer is missing target acknowledged position</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32595: PostgreSQL Reader wildcard issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32628: Tables property value changed after export-import upgrade from 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32822: Web UI slow in 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32877: MSSQL Reader issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32906: MSSQL Reader issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32915: Database Reader &gt; Database Writer app has NullPointerException error after upgrade from 4.0.5.1.B</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33043: Oracle Reader &gt; Snowflake Writer \"JDBC driver encountered communication error. 
Message: HTTP status=403\" error after auto-resume</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33089: Incremental Batch Reader \"For input string: \"'5475856979'\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33158: Snowflake Writer java.lang.IndexOutOfBoundsException error after upgrading to 4.1.0.2</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33169: EXPORT does not work with REST API</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33175: \"Failed to get monitoring data with invalid Token Exception\" error with 4.0.5.1B</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33286: Oracle Reader &gt; Databricks Writer for Azure Databricks \"cannot resolve '_c0' given input columns\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33336: Oracle Reader &gt; Databricks Writer for Azure Databricks has duplicate records in target</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33409: OJet does not halt when required archived logs are missing</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33492: CPU Utilization 100% with 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33534: SMTP configuration issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33559: CPU Utilization 100% with 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33560: PostgreSQL Reader issue with wal2json 2.4</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33656: Azure Synapse Writer table \"does not exist\" error with wildcard when table name includes underscore</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34073: Databricks Writer \"Integration failed for table\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34314: SalesForce Reader doesn't capture data when using certain valid Start Time values</p>\n</li>\n</ul>\n</div>\n</section>\n<section id=\"resolved-issues\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm13361473351168\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Resolved issues</h3>\n</div>\n</div>\n</div>\n<p>The following previously reported known issue was fixed in this release:</p>\n<div class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-27520: Snowflake Writer can not be used when Striim is running in Microsoft 
Windows.</p>\n</li>\n</ul>\n</div>\n</section>\n<section id=\"known-issues-from-past-releases\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm13361473343552\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Known issues from past releases</h3>\n</div>\n</div>\n</div>\n<div class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-5701: Dashboard queries not dropped with the dashboard or overwritten on import</p>\n<p>When you drop a dashboard, its queries are not dropped. If you drop and re-import a dashboard, the queries in the JSON file do not overwrite those already in Striim.</p>\n<p>Workaround: drop the namespace or LIST NAMEDQUERIES, then manually drop each one.</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-8142: SORTER objects do not appear in the UI</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-8933: DatabaseWriter shows no error in UI when MySQL credentials are incorrect</p>\n<p>If your DatabaseWriter Username or Password values are correct, you will see no error in the UI but no data will be written to MySQL. You will see errors in webaction.server.log regarding DatabaseWriter containing \"Failure in Processing query\" and \"command denied to user.\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-11305: DatabaseWriter needs separate checkpoint table for each node when deployed on multiple nodes</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-17653: Import of custom Java function fails</p>\n<p><code class=\"code\">IMPORT STATIC</code><span> </span>may fail. Workaround: use lowercase<span> </span><code class=\"code\">import static</code>.</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-19903: When DatabaseReader Tables property uses wildcard, views are also read</p>\n<p>Workaround: use Excluded Tables to exclude the views.</p>\n</li>\n</ul>\n</div>\n</section>\n<section id=\"third-party-apis--clients--and-drivers-used-by-readers-and-writers\" class=\"section\" dir=\"ltr\" data-origin-id=\"\" data-legacy-id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_section-idm13361473277428\" data-publication-date=\"2023-04-24\">\n<div class=\"titlepage\">\n<div>\n<div class=\"title\">\n<h3 class=\"title\">Third-party APIs, clients, and drivers used by readers and writers</h3>\n</div>\n</div>\n</div>\n<div id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_itemizedlist-idm13144615012098\" class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>Azure Event Hub Writer uses the azure-eventhubs API version 3.0.2.</p>\n</li>\n<li class=\"listitem\">\n<p>Azure Synapse Writer uses the bundled SQL Server JDBC driver.</p>\n</li>\n<li class=\"listitem\">\n<p>BigQuery Writer uses google-cloud-bigquery version 2.3.3.</p>\n</li>\n<li class=\"listitem\">\n<p>Cassandra Cosmos DB Writer uses cassandra-jdbc-wrapper version 3.1.0</p>\n</li>\n<li class=\"listitem\">\n<p>Cassandra Writer uses cassandra-java-driver version 3.6.0.</p>\n</li>\n<li class=\"listitem\">\n<p>Cloudera Hive Writer uses hive-jdbc version 3.1.3.</p>\n</li>\n<li class=\"listitem\">\n<p>CosmosDB Reader uses Microsoft Azure Cosmos SDK for Azure Cosmos DB SQL API 4.29.0.</p>\n</li>\n<li class=\"listitem\">\n<p>CosmosDB Writer uses documentdb-bulkexecutor version 2.3.0.</p>\n</li>\n<li class=\"listitem\">\n<p>Databricks Writer in AWS uses aws-java-sdk-sts version 1.11.320, aws-java-sdk-s3 version 1.11.320 , and aws-java-sdk-kinesis version1.11.240.</p>\n</li>\n<li 
class=\"listitem\">\n<p>Derby: the internal Derby instance is version 10.9.1.0.</p>\n</li>\n<li class=\"listitem\">\n<p>Elasticsearch: the internal Elasticsearch cluster is version 5.6.4.</p>\n</li>\n<li class=\"listitem\">\n<p>GCS Writer uses the google-cloud-storage client API version 1.106.0.</p>\n</li>\n<li class=\"listitem\">\n<p>Google PubSub Writer uses the google-cloud-pubsub client API version 1.110.0.</p>\n</li>\n<li class=\"listitem\">\n<p>HBase Writer uses HBase-client version 2.4.13.</p>\n</li>\n<li class=\"listitem\">\n<p>Hive Writer and Hortonworks Hive Writer use hive-jdbc version 3.1.3.</p>\n</li>\n<li class=\"listitem\">\n<p>The HP NonStop readers use OpenSSL 1.0.2n.</p>\n</li>\n<li class=\"listitem\">\n<p>JMS Reader and JMS Writer use the JMS API 1.1.</p>\n</li>\n<li class=\"listitem\">\n<p>Kafka: the internal Kafka cluster is version 0.11.0.1.</p>\n</li>\n<li class=\"listitem\">\n<p>Kudu: the bundled Kudu Java client is version 1.13.0.</p>\n</li>\n<li class=\"listitem\">\n<p>Kinesis Writer uses aws-java-sdk-kinesis version 1.11.240.</p>\n</li>\n<li class=\"listitem\">\n<p>MapR DB Writer uses hbase-client version 2.4.10.</p>\n</li>\n<li class=\"listitem\">\n<p>MapR FS Reader and MapR FS Writer use Hadoop-client version 3.3.4.</p>\n</li>\n<li class=\"listitem\">\n<p>MariaDB Reader uses maria-binlog-connector-java-0.2.3-WA1.</p>\n</li>\n<li id=\"UUID-32dfbcee-6da6-df1e-c8c0-d04d987fec70_listitem-idm13347839067734\" class=\"listitem\">\n<p>MariaDB Xpand Reader uses mysql-binlog-connector-java version 0.21.0 and mysql-connector-java version 8.0.27.</p>\n</li>\n<li class=\"listitem\">\n<p>Mongo Cosmos DB Reader, MongoDB Reader, and MongoDB Writer use mongodb-driver-sync version 4.6.0.</p>\n</li>\n<li class=\"listitem\">\n<p>MySQL Reader uses mysql-binlog-connector-java version 0.21.0 and mysql-connector-java version 8.0.27.</p>\n</li>\n<li class=\"listitem\">\n<p>Oracle: the bundled Oracle JDBC driver is ojdbc-21.1.jar.</p>\n</li>\n<li class=\"listitem\">\n<p>PostgreSQL: the bundled PostgreSQL JDBC 4.2 driver is version 42.4.0</p>\n</li>\n<li class=\"listitem\">\n<p>Redshift Writer uses aws-java-sdk-s3 1.11.320.</p>\n</li>\n<li class=\"listitem\">\n<p>S3 Reader and S3 Writer use aws-java-sdk-s3 1.11.320.</p>\n</li>\n<li class=\"listitem\">\n<p>Salesforce Reader uses the Force.com REST API version 53.1.0.</p>\n</li>\n<li class=\"listitem\">\n<p>Salesforce Writer: when Use Bulk Mode is True, uses Bulk API 2.0 Ingest; when Use Bulk Mode is False, uses the Force.com REST API version 53.1.0.</p>\n</li>\n<li class=\"listitem\">\n<p>Snowflake Writer: when Streaming Upload is False, uses snowflake-jdbc version.3.13.15; when Streaming Upload is True, uses Snowflake Ingest SDK 1.0.2-beta.5.</p>\n</li>\n<li class=\"listitem\">\n<p>Spanner Writer uses the google-cloud-spanner client API version 1.28.0 and the bundled JDBC driver is google-cloud-spanner-jdbc version 1.1.0.</p>\n</li>\n<li class=\"listitem\">\n<p>SQL Server: the bundled Microsoft SQL Server JDBC driver is version 7.2.2.</p>\n</li>\n</ul>\n</div>\n</section>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p>Striim app is configured to read from oracle database reader to Filewriter with dsvformatter header property enabled. The app fails with below exception.</p>\n<pre><br><span class=\"wysiwyg-font-size-small\"><em>com.webaction.runtime.components.Target.receive (Target.java:231) Got exception Failure in writing event to the file. . 
Cause: Error while generating header</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.striim.io.target.commons.LocalFileWriter.writeEvent(LocalFileWriter.java:427)</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.striim.io.target.commons.LocalFileWriter.receiveImpl(LocalFileWriter.java:360)</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.webaction.proc.BaseProcess.receive(BaseProcess.java:384)</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.webaction.runtime.components.Target.receive(Target.java:174)</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.webaction.runtime.DistributedRcvr.doReceive(DistributedRcvr.java:245)</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.webaction.runtime.DistributedRcvr.onMessage(DistributedRcvr.java:110)</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.webaction.jmqmessaging.InprocAsyncSender.processMessage(InprocAsyncSender.java:52)</em></span><br><span class=\"wysiwyg-font-size-small\"><em> at com.webaction.jmqmessaging.AsyncSender$AsyncSenderThread.run(AsyncSender.java:136)</em></span></pre>\n<p>When the header property is disabled in DSV Formatter, the app does not crash. When the header is enabled, Database Reader fetches the column list from the database, and the exception above can occur if that metadata is not fetched properly. Prior to the error, the following exception can be noticed:</p>\n<pre><span class=\"wysiwyg-font-size-medium\"><em>2020-04-10 13:23:03,332 @striim_mac_server @admin.STRIIM_UAT_FINACLE_TEST -WARN StartSources-STRIIM_UAT_FINACLE_TEST com.webaction.proc.SourceProcess.createTypesFromTableDef (SourceProcess.java:468) Type creation failed for SourceProcess: admin.STRIIM_UAT_FINACLE_TEST, reason: Problem creating type: admin.STRIIM_UAT_FINACLE_TEST_STRIIM_CUST_FIRC_DETAILS_Type</em></span><br><span class=\"wysiwyg-font-size-medium\"><em>java.lang.RuntimeException: Problem creating type: admin.STRIIM_UAT_FINACLE_TEST_STRIIM_CUST_FIRC_DETAILS_Type</em></span></pre>\n<p><strong>Cause: </strong></p>\n<p>Both ojdbc14.jar and ojdbc8.jar are installed in &lt;Striim_HOME&gt;/lib. The Striim server was using ojdbc14.jar, which is not certified and does not return the column list during the metadata fetch.</p>\n<p>If the column list is not returned properly, the following message is seen:</p>\n<pre>com.webaction.source.cdc.common.TableMD.updateDuplicateColumns (TableMD.java:248) <strong>Original column list : [] Modified List : []</strong></pre>\n<p><strong>Solution:</strong></p>\n<p>Remove ojdbc14.jar and use ojdbc8.jar. The Striim server must be restarted whenever a jar file is removed.<br>With ojdbc8.jar, the metadata fetch should show a message like the following:</p>\n<pre>com.webaction.source.cdc.common.TableMD.updateDuplicateColumns (TableMD.java:248) Original column list : [R_ID, F_NUM, F_ISS_DATE, REMARKS, ENTITY_FLG, D_FLG, L_USER, L_TIME, R_USER, R_TIME, B_ID, I_CODE, F_AMT, TRAN_FLG]<br><br>Modified List : [R_ID, F_NUM, F_ISS_DATE, REMARKS, ENTITY_FLG, D_FLG, L_USER, L_TIME, R_USER, R_TIME, B_ID, I_CODE, F_AMT, TRAN_FLG]</pre>\n<p>If the proper ojdbc version is not used, it can result in unexpected and misleading errors. We have seen issues with Database Writer when an older version (ojdbc7.jar) is used; see the support note below for details. A quick way to confirm which driver the JVM actually loads is shown in the sketch that follows.</p>
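<p>To confirm which Oracle JDBC driver the JVM is actually picking up, a small standalone check such as the one below can help. This is a hypothetical helper, not a Striim utility; the connection URL and credentials are placeholders.</p>
<pre>
// Hypothetical check: prints the JDBC driver name and version the JVM resolves,
// which shows whether ojdbc8.jar or an older jar is the one being loaded.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class DriverVersionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own.
        String url = "jdbc:oracle:thin:@//dbhost:1521/pdborcl";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            DatabaseMetaData md = conn.getMetaData();
            System.out.println("Driver  : " + md.getDriverName() + " " + md.getDriverVersion());
            System.out.println("Database: " + md.getDatabaseProductVersion());
        }
    }
}
</pre>
<p>Running it with the same set of jars the Striim server has in its lib directory (for example, java -cp "&lt;Striim_HOME&gt;/lib/*:." DriverVersionCheck) reports the driver that would be picked up from that classpath.</p>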
<pre><a href=\"https://support.striim.com/hc/en-us/articles/360034740513-Database-Writer-Fails-With-Invalid-Conversion-Requested\">https://support.striim.com/hc/en-us/articles/360034740513-Database-Writer-Fails-With-Invalid-Conversion-Requested</a></pre>"}
{"page_content": "<p>After a node reboot, the Striim server in a cluster fails to start with the following error seen in the terminal, the nohup output, or striim-node.log:</p>\n<p><img src=\"https://support.striim.com/hc/article_attachments/360065332513/mceclip0.png\" alt=\"mceclip0.png\" width=\"810\" height=\"169\"></p>\n<p>Elasticsearch uses port 9300 for TCP. The error happens when the nodes are not able to reach each other on that port. Connectivity is needed both inbound and outbound, and the firewall rules need to be set accordingly.</p>\n<p>If you get \"no route to host\", have your system administrator verify the network and firewall settings:</p>\n<p>$ curl -v \"telnet://&lt;node&gt;:9300\"</p>\n<p>or</p>\n<p>$ telnet &lt;node&gt; 9300</p>\n<p>Note: This needs to be verified across all nodes in the cluster.</p>\n<p>After resolving the firewall issue, do the following:</p>\n<p>1. Stop Striim.</p>\n<p>2. Delete the elasticsearch directory in the Striim home directory.</p>\n<p>3. Start Striim on the chosen primary node first and, once it starts successfully, start the remaining nodes sequentially.</p>"}
{"page_content": "<h2><strong>Problem:</strong></h2>\n<p>I am doing an initial load. The source is SQL Server and the target is Snowflake. Some columns are DATETIME2 in the source and TIMESTAMP_NTZ(9) in the target.<br> The application populates these columns with a default value equal to the type's maximum,<br> e.g., '9999-12-31 23:59:59.9999999'.</p>\n<p>I understand that Striim currently supports only millisecond precision for SQL Server, which is fine. However, Database Reader captures this value and rounds it to \"10000-01-01 00:00:00.000\", which is beyond the year range of TIMESTAMP_NTZ in the target (in fact, it is beyond the date range of most databases, including Oracle).</p>\n<h2><strong>Solution:</strong></h2>\n<p>In Database Reader, set the property:</p>\n<pre>ReturnDateTimeAs: 'string', </pre>"}
{"page_content": "<p><span class=\"wysiwyg-underline\"><strong>Symptoms:</strong></span></p>\n<p>The error message would be one of the following in the Striim server logs:</p>\n<p><span>Caused by: java.util.concurrent.ExecutionException: com.hazelcast.core.<strong>OperationTimeoutException</strong>: CollectionAddOperation got rejected before execution due to not starting within the operation-call-timeout of: 60000 ms. Current time: 2019-10-07 00:18:15.148. Start time: 2019-10-07 00:18:15.147. Total elapsed time: 1 ms. </span></p>\n<p><span>Caused by: com.hazelcast.core.OperationTimeoutException: CollectionAddOperation got rejected before execution due to not starting within the operation-call-timeout of: 60000 ms. Current time:</span></p>\n<p><span>2019-09-30 20:22:53,519 @Server @ -ERROR AppManagerWorkerThread com.webaction.appmanager.AppManagerWorker.run (AppManagerWorker.java:157) got exception <strong>IsLocked</strong>Operation got rejected before execution due to not starting within the <strong>operation-call-timeout</strong> of: <strong>60000</strong> ms. Current time: 2019-09-30 20:22:53.518. Start time: 2019-09-30 20:22:53.517. Total elapsed time: 1 ms. 
</span></p>\n<p><span><strong>OperationTimeoutException</strong> |<br><strong>CollectionGetAllOperation</strong> got rejected before execution due to not starting within the <strong>operation-call-timeout</strong> of: 60000 ms. <br>Current time: 2019-12-19 09:18:18.213. Start time: 2019-12-19 09:17:08.541. Total elapsed time: 69672 ms. <br>Invocation{op=com.hazelcast.collection.impl.collection.operations.CollectionGetAllOperation{serviceName='hz:impl:setService', <br></span></p>\n<p><span class=\"wysiwyg-underline\"><strong>Cause:</strong></span></p>\n<p>When the Striim server is very busy the default hazelcast.operation.call.timeout of <span>60000 ms (1 min) can lead to application manager lock to timeout causing above errors</span></p>\n<p><span class=\"wysiwyg-underline\"><strong>Resolution:</strong></span></p>\n<p><span>Increasing the timeout settings like below to say 300000 ms (5 min) would help if the activity is intermittent.</span></p>\n<p><span>1. Stop Striim server.<br>2. on each node:</span></p>\n<p><span>Add following to startUp.properties</span></p>\n<pre class=\"p1\"><span class=\"s1\">ClusterHeartBeatTimeout=300</span></pre>\n<p><strong>OR</strong></p>\n<p><span>Edit the server.sh script in the &lt;striim_home&gt;/bin directory and add the following line <br> -Dhazelcast.operation.call.timeout.millis=300000 \\<br><br>For example:<br></span></p>\n<pre>${JAVA} \\<br> $JVM_DEBUG_OPTS \\<br> $GC_SETTINGS \\<br>-Dhazelcast.logging.type=\"none\" \\<br>-Dhazelcast.operation.call.timeout.millis=300000 \\</pre>\n<p><span>3. Restart Striim server</span></p>"} {"page_content": "<p><span class=\"wysiwyg-underline\"><strong>Symptoms</strong></span></p>\n<p>BigQueryWriter crash with following error</p>\n<p>2020-01-02 18:00:18,200 @S10_246_48_17 @admin.GGTRAIL_GCP -ERROR admin.TGT_GG_BIGQUERY-0 com.striim.bigquery.BigQueryIntegrationTask.execute (BigQueryIntegrationTask.java:163) Failed to upload data from File: [/u01/rajesh/Striim/TGT_GG_BIGQUERY/V500_TRAIL_TEST.LONG_TEXT/V500_TRAIL_TEST.LONG_TEXT_908.csv.gz] Exception:[com.google.cloud.bigquery.BigQueryException: 410 Gone<br>{<br> \"error\": {<br> \"code\": 500,<br> \"message\": \"An internal error occurred and the request could not be completed.\",</p>\n<p> </p>\n<p><span class=\"wysiwyg-underline\"><strong> Cause:</strong></span></p>\n<p>As per<strong><span> </span><a href=\"https://cloud.google.com/bigquery/docs/error-messages\" rel=\"noreferrer\">https://cloud.google.com/bigquery/docs/error-messages</a><br><br></strong>The error code 500 may be due to network connection issue or server is overloaded.<strong><br></strong></p>\n<table style=\"width: 715px;\">\n<tbody>\n<tr>\n<td style=\"width: 94px;\">backendError</td>\n<td style=\"width: 83px;\">500 or 503</td>\n<td style=\"width: 507px;\">This error returns when there is a temporary server failure such as a network connection problem or a server overload.</td>\n</tr>\n</tbody>\n</table>\n<p> </p>\n<p><span class=\"wysiwyg-underline\"><strong>Solution:</strong></span></p>\n<p>As per <a href=\"https://cloud.google.com/bigquery/quotas\">https://cloud.google.com/bigquery/quotas</a> the maximum number of jobs per table per day is 1000</p>\n<p>The errors can be mitigated by changing the <strong>BatchPolicy </strong>in BigQueryWriter from the default of</p>\n<p><strong>BatchPolicy: 'eventCount:1000000,\\n Interval:60',</strong><span>​</span></p>\n<p>to</p>\n<p><span><strong>BatchPolicy: 'Interval:120',</strong>​</span></p>\n<p>Changing the Upload/ BatchPolicy based on interval only would 
help stay under the limit of 1,000 load jobs per table per day.</p>\n<p>With an interval of 120 seconds, uploads are limited to at most 720 jobs per table per day (86,400 seconds per day / 120 seconds per batch = 720).</p>"}
{"page_content": "<p>Problem:</p>\n<p>We want to replicate the commit SCN and transaction ID of the source transaction to a target table, but Striim File Reader with GG Trail Parser writes NULL values for Transaction ID and CSN.</p>\n<p><strong>Cause 1:</strong></p>\n<p>File Reader was started from a trail file that contains a partial transaction: the first record in the trail is in the middle of a transaction, with TransInd x01.</p>\n<p><strong>Cause 2:</strong><br>The first record in the transaction is not from a table of interest.</p>\n<p>For example, consider a transaction with 4 records:</p>\n<p>First record: Employee table<br>Second record: Dept table<br>Third record: Employee table<br>Fourth record: Dept table</p>\n<p><span class=\"wysiwyg-underline\"><strong>Understanding the GoldenGate Transaction Indicator</strong></span></p>\n<p>TransInd (x00): First record in transaction<br>TransInd (x01): Statement in the middle of a transaction<br>TransInd (x02): Last statement in transaction<br>TransInd (x03): Sole statement in transaction</p>\n<p>Oracle GoldenGate stores the Transaction ID and Commit SCN (CSN) in the ggstokens area of the trail file. This information is present only in the first DML statement in the transaction (TransInd x00) and in a sole DML statement (TransInd x03).<br>It also stores the ROWID for all records.</p>\n<p>Striim File Reader with GG Trail Parser is configured with only the Dept table. Since the ggstokens (CSN, Transaction ID) are written only in the first record, which in this example belongs to the Employee table, GG Trail Parser writes NULL values.</p>\n<p><strong>Workaround:</strong></p>\n<p>Configure the Oracle GoldenGate Extract to write the CSN and Transaction ID as user tokens for all records. Striim GG Trail Parser writes all ggstokens and user tokens to the metadata.</p>\n<p><em>table scott.*, tokens (tk_scn = @GETENV('TRANSACTION','CSN'), tk_xid = @GETENV('TRANSACTION','XID'));</em></p>\n<p>The following is an example SYSOUT of the WAEvent:</p>\n<p>SYSOUT_BCOM: ogg2_cq_stream_Type_1_0{<br> data: [\"-2\",\"-2\"]<br> metadata: {\"TableID\":2,\"TableName\":\"SCOTT.S2\",\"TxnID\":null,<strong>\"tk_scn\":\"589518012\"</strong>,\"OperationName\":\"DELETE\",\"FileName\":\"e1000000078\",\"FileOffset\":2092,<strong>\"tk_xid\":\"1.19.7651\"</strong>,\"TimeStamp\":1575923982000,\"Oracle ROWID\":\"AAAXQBAAEAAAAftAAB\",\"CSN\":null,\"RecordStatus\":\"VALID_RECORD\"}<br> userdata: null<br> before: null<br> dataPresenceBitMap: \"Aw==\"<br> beforePresenceBitMap: \"AA==\"<br> typeUUID: {\"uuidstring\":\"01ea1dd4-a669-6a21-b27c-c2acfac16045\"}<br>};</p>\n<p>In your TQL, you can use COLUMNMAP to map this metadata to target columns:</p>\n<p>Tables: 'SCOTT.BANK_TRANSACTION,impala::default.BANK_TRANSACTION COLUMNMAP(BANK_TRANSACTION_ID=BANK_TRANSACTION_ID,COMMITSCN=@METADATA(TK_CSN),OperationName=@METADATA(OperationName),COMMIT_TIMESTAMP=@METADATA(TimeStamp));'</p>"}
{"page_content": "<p>In-place upgrade is available starting with version 3.9.6.<br> The online documentation (<a href=\"https://www.striim.com/docs/en/in-place-upgrade.html\">https://www.striim.com/docs/en/in-place-upgrade.html</a>) shows the steps for an rpm/deb installation. The same process applies to a tgz installation, with the minor changes described below.<br> <br> The following shows an example of upgrading from 3.9.7.1 to 3.9.8.</p>\n<p>1. Stop the existing Striim server.<br> 2. Stop all Forwarding Agents, if applicable.<br> 3. Install the new version in another directory.<br> 4. 
copy following files/directories<br> (1) on each server node:</p>\n<pre> cp &lt;old&gt;/conf/startUp.properties &lt;new&gt;/conf/<br>\n cp &lt;old&gt;/lib/&lt;customer_added_jar_files&gt; &lt;new&gt;/lib/ \n (e.g., jdbc driver jar files)<br>\n </pre>\n<p>(2) on MDR node only (new installation directory)</p>\n<pre> mv ./wactionrepos ./wactionrepos.backup<br>\n cp -Rp &lt;old&gt;/wactionrepos ./<br>\n </pre>\n<p>5. upgrade MDR<br> Using a client for your metadata repository host, run the appropriate script:<br> for Derby: ./conf/UpgradeMetadataReposDerbyTo398.sql<br> 6. On one server, enter the following<br> (1) if sks.jks snf sksKsy.pwd exist under &lt;new&gt;/conf/ directory, move them somewhere else or delete them.<br> (2) on one server node</p>\n<pre> OS&gt; ./bin/sksConfig.sh<br>\n </pre>\n<p>(3) Copy sks.jks snf sksKsy.pwd from ./conf/ on that node to ./conf/ on all other nodes.<br> 7. Upgrade and start all Forwarding Agents, if related.<br> 8. Start Striim server</p>\n<p><strong>Notes:</strong><br> 1) MDR is not changed within same version (the first three digits, like 3.9.6.1 and 3.9.6.2). However, a hotfix (e.g., 3.9.6.1B vs 3.9.6.1) may contain a MDR change, and in-place upgrade should not be used. Please contact Striim support when upgrading to a hotfix.</p>\n<p>2)for tgz with oracle or postgreSql as MDR, step #4(2) can be skipped, and sql file name in step #5 needs to be changed accordingly.<br> 3) if the new and old versions span more than 2 versions, related sql files should be run in right order.<br> e.g., from 3.9.6 to 3.9.8,<br> Step #5 may need to execute following files:<br> - UpgradeMetadataReposDerbyTo3971.sql<br> - UpgradeMetadataReposDerbyTo398.sql</p>\n<p><strong>List of In-place sql scripts:</strong></p>\n<p>1) 3.9.6:</p>\n<p><a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposDerbyTo3961.sql\" target=\"_self\">Derby_3.9.6.1</a><br> <a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposOracleTo3961.sql\" target=\"_self\">Oracle_3.9.6.1</a><br> <a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposPostgresTo3961.sql\" target=\"_self\">PostgreSql_3.9.6.1</a></p>\n<p>2) 3.9.7: </p>\n<p><a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposDerbyTo3971.sql\" target=\"_self\">Derby_3.9.7.1</a><br> <a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposOracleTo3971.sql\" target=\"_self\">Oracle_3.9.7.1</a><br> <a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposPostgresTo3971.sql\" target=\"_self\">PostgreSql_3.9.7.1</a></p>\n<p>3) 3.9.8:</p>\n<p><a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposDerbyTo398.sql\" target=\"_self\">Derby_3.9.8</a><br> <a href=\"ttps://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposOracleTo398.sql\" target=\"_self\">Oracle_3.9.8</a><br> <a href=\"https://striim-download.s3-us-west-1.amazonaws.com/Striimer_Release/In-place_Upgrade_sql_scripts/UpgradeMetadataReposPostgresTo398.sql\" target=\"_self\">PostgreSql_3.9.8</a></p>\n<p> </p>\n<p> </p>"} {"page_content": "<p>Applicable 
to versions prior to 3.9.10. Starting with version 3.9.10.1, auto-flush is done by the PostgreSQL Reader as long as the app has recovery enabled.</p>\n<p><strong>Problem:</strong></p>\n<p>Striim PostgreSQL Reader is configured against a replication slot, and the replication slot size is continuously growing and occupying disk space.</p>\n<p><strong>Solution</strong></p>\n<p>Logical replication uses a replication slot to reserve WAL logs on the PostgreSQL server and uses the wal2json decoding plugin to capture the events of the tables of interest. A replication slot is not cleared unless the consumer (Striim PostgreSQL Reader) sends feedback to the server with the LSNs that have already been processed. The current Striim PostgreSQL Reader does not send any feedback with the LSN values that are no longer required.</p>\n<p>A utility provided by the Striim development team sends this feedback to the server with the LSN that is no longer required by PostgreSQL Reader.</p>\n<p>Use the steps below to send LSN feedback to the PostgreSQL server.</p>\n<p>1. Download the file <a href=\"https://support.striim.com/hc/article_attachments/360055962873/pg_wal_flush_utility.zip\" target=\"_blank\" rel=\"noopener\">pg_wal_flush_utility.zip</a> and unzip it.</p>\n<p>2. Stop the running Striim application for which the WAL slot is bloating.</p>\n<p>Note: The application should be stopped and should not be started until the flush utility completes, because only one application can connect to a replication slot.</p>\n<p>3. Describe the app and copy the source restart position from the checkpoint information. Copy only the LSN value from the source restart position.<br><br>e.g., from the Tungsten console:</p>\n<p><em>describe &lt;application_name&gt;</em></p>\n<p>Example:<br>CHECKPOINT (<br> ADMIN:SOURCE:PG_CDCD_REDER:2<br> SOURCE RESTART POSITION @<br><span class=\"wysiwyg-color-blue110\"><strong> {LSN[0/15AE460]-SeqNum[2]}</strong></span><br> SOURCE CURRENT POSITION @<br> {LSN[0/15AE508]-SeqNum[2]}<br>)</p>\n<p>4. Use the query below to determine the size of the replication slot:</p>\n<p><em>select slot_name, pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(),restart_lsn)) as replicationSlotLag, active from pg_replication_slots;</em></p>\n<p>5. Run the WAL flush utility, supplying the copied LSN:</p>\n<p><em>export CLASSPATH=pg_walflush.jar:postgresql-42.2.2.jar</em></p>\n<p><em>java -cp $CLASSPATH:. com.striim.postgres.utility.walflush.Main &lt;ConnectionURL&gt; &lt;username&gt; &lt;password&gt; &lt;replication_slot_name&gt; &lt;WAL_FLUSH_LSN&gt;</em></p>\n<p>Example:<br><em>java -cp $CLASSPATH:. com.striim.postgres.utility.walflush.Main jdbc:postgresql://localhost:5432/webaction waction ****** test_slot 0/15AE460</em></p>\n<p>Example output:<br>Establishing connection with Postgres database...<br>Connection established successfully.<br>Starting flush Process for LSN: 0/15AE460<br>Opening PG Stream on test_slot<br>Setting PG Stream flush LSN : [0/15AE460]<br>Forcing update of flush.<br>Flush Completed Successfully<br>Closing Stream and Connections.</p>\n<p>For reference, a sketch of the kind of flush logic such a utility uses appears below; step 6 follows it.</p>
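<p>The following is a rough, illustrative sketch of that flush logic, written against the standard PostgreSQL JDBC driver's replication API. It is an assumption-based illustration, not the source of Striim's pg_walflush.jar; the arguments mirror the utility's command line and are placeholders.</p>
<pre>
// Illustrative sketch (not Striim's utility): open a logical replication stream
// on the slot, report the given LSN as flushed and applied, and force a status
// update so the server can recycle WAL up to that point.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

import org.postgresql.PGConnection;
import org.postgresql.PGProperty;
import org.postgresql.replication.LogSequenceNumber;
import org.postgresql.replication.PGReplicationStream;

public class WalFlushSketch {
    public static void main(String[] args) throws Exception {
        String url = args[0], user = args[1], password = args[2];
        String slot = args[3];                                      // e.g. test_slot
        LogSequenceNumber lsn = LogSequenceNumber.valueOf(args[4]); // e.g. 0/15AE460

        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        PGProperty.REPLICATION.set(props, "database");          // open a replication connection
        PGProperty.ASSUME_MIN_SERVER_VERSION.set(props, "9.4");
        PGProperty.PREFER_QUERY_MODE.set(props, "simple");

        try (Connection conn = DriverManager.getConnection(url, props)) {
            PGConnection pgConn = conn.unwrap(PGConnection.class);
            PGReplicationStream stream = pgConn.getReplicationAPI()
                    .replicationStream()
                    .logical()
                    .withSlotName(slot)
                    .withStartPosition(lsn)
                    .start();
            // Tell the server that everything up to this LSN has been flushed
            // and applied, then push the status update immediately.
            stream.setFlushedLSN(lsn);
            stream.setAppliedLSN(lsn);
            stream.forceUpdateStatus();
            stream.close();
        }
    }
}
</pre>
<p>As with the packaged utility, the Striim application that owns the slot must remain stopped while this runs, since only one consumer can be connected to a replication slot at a time.</p>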
<p>6. Once the utility completes, restart the stopped PG applications.</p>\n<p>Note:</p>\n<p>i) This utility just sends feedback to the PostgreSQL server indicating that the replication slot can be cleared up to a particular LSN. The PostgreSQL server takes care of clearing the replication slot depending on its checkpoint configuration, and Striim has no control over that. The WAL size will not reduce immediately; generally it takes about 5 minutes.</p>\n<p><em>ii) Execute the query below, compare the result with step 4, and verify that the size is reducing.</em></p>\n<p><em>select slot_name, pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(),restart_lsn)) as replicationSlotLag, active from pg_replication_slots;</em></p>"}
{"page_content": "<h2 id=\"01GSXF98DHVY1Y6FRZK0MC16BZ\"><strong>Problem Statement</strong></h2>\n<p>The customer has a multicore machine and wants to run Striim alongside other processes.<br>The recommended method is to use isolation tools such as Docker or VMware.<br>If that is not possible, this article lists a few options using OS-level utilities.</p>\n<h2 id=\"01GSXF98DHHC5JD9815YREQA46\"><strong>Solutions with examples on CentOS</strong></h2>\n<p>1. A license for 2 CPUs will fail on an 8-CPU server.</p>\n<pre>$ bin/server.sh<br>\n Starting Striim Server - Version 3.9.3 (62c4fca069)<br>\n Starting Server on cluster : striim_114<br>\n Interfaces found in startup file : [192.55.21.114]<br>\n Using TcpIp clustering to discover the cluster members<br>\n [192.55.21.114, 192.55.21.76]<br>\n Resolved Cluster Members to join [192.55.21.114, 192.55.21.76]<br>\n DB details : 192.55.21.114:1527 , wactionrepos , waction<br>\n Current node started in cluster : striim_114, with Metadata Repository<br>\n Registered to: striim<br>\n ProductKey: xxxx<br>\n License Key: xxxx<br>\n License expires in 30 days 12 hours 22 minutes 2 seconds<br>\n Servers in cluster:<br>\n [this] S192_55_21_114 [309cb971-03b1-49e4-b1ff-a9b1ead5ce94]\n\n Cannot add this Striim server to the cluster because it would bring the total\n number of CPUs to 8.<br>\n The current license allows up to 2 CPU(s) in the cluster.\n\n</pre>\n<p><br>2. The server has 8 CPUs.</p>\n<pre> $ cat /proc/cpuinfo | grep processor<br>\n processor : 0<br>\n processor : 1<br>\n processor : 2<br>\n processor : 3<br>\n processor : 4<br>\n processor : 5<br>\n processor : 6<br>\n processor : 7\n</pre>\n<p><br>3. 
Options</p>\n<p><strong><span class=\"wysiwyg-underline\">(1) use dynamic method 1: taskset</span></strong></p>\n<p>(A) install<br>Centos:<br>sudo yum install util-linux</p>\n<p>UBUNTU<br>sudo apt-get install util-linux</p>\n<p>Tarball<br>https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.34/</p>\n<p>(B) start by specifying two CPUs (1 and 1 in the example)</p>\n<pre> $ taskset -ac 0,1 bash bin/server.sh<br>\n Starting Striim Server - Version 3.9.3 (62c4fca069)<br>\n Starting Server on cluster : striim_114<br>\n Interfaces found in startup file : [192.55.21.114]<br>\n Using TcpIp clustering to discover the cluster members<br>\n [192.55.21.114, 192.55.21.76]<br>\n Resolved Cluster Members to join [192.55.21.114, 192.55.21.76]<br>\n DB details : 192.55.21.114:1527 , wactionrepos , waction<br>\n Current node started in cluster : striim_114, with Metadata Repository<br>\n Registered to: striim<br>\n ProductKey: xxxx<br>\n License Key: xxxx<br>\n License expires in 30 days 12 hours 23 minutes 26 seconds<br>\n Servers in cluster:<br>\n [this] S192_55_21_114 [de843224-0959-4806-871c-5c2e7bc26ef3]\n\n started.<br>\n Please go to http://192.55.21.114:9080 or https://192.55.21.114:9081 to administer,\n or use console\n</pre>\n<p><br>-a mean pin all the threads too .<br>-c means logical cpu in list or mixed range form</p>\n<p>(C) confirm CPU 0 and 1 are in use for the Striim server process:</p>\n<pre># cat /proc/14037/status |grep -i Cpus_allowed_list\nCpus_allowed_list:\t0-1\n</pre>\n<p><br><strong><span class=\"wysiwyg-underline\">(2) use dynamic method 2: numactl</span></strong><br>(A) install<br>CENTOS<br>sudo yum install numactl</p>\n<p>UBUNTU<br>sudo apt-get install numactl</p>\n<p>(B) list available CPUs<br>$ numactl -H<br>available: 1 nodes (0)<br>node 0 cpus: 0 1 2 3 4 5 6 7<br>node 0 size: 32767 MB<br>node 0 free: 3048 MB<br>node distances:<br>node 0<br>0: 10</p>\n<p>(C) Pin the server process to specific CPUs</p>\n<pre> $ numactl -aC 0,1 bin/server.sh<br>\n Starting Striim Server - Version 3.9.3 (62c4fca069)<br>\n Starting Server on cluster : striim_114<br>\n Interfaces found in startup file : [192.55.21.114]<br>\n Using TcpIp clustering to discover the cluster members<br>\n [192.55.21.114, 192.55.21.76]<br>\n Resolved Cluster Members to join [192.55.21.114, 192.55.21.76]<br>\n DB details : 192.55.21.114:1527 , wactionrepos , waction<br>\n Current node started in cluster : striim_114, with Metadata Repository<br>\n Registered to: striim<br>\n ProductKey: xxxx<br>\n License Key: xxxx<br>\n License expires in 30 days 12 hours 14 minutes 33 seconds<br>\n Servers in cluster:<br>\n [this] S192_55_21_114 [9ba50957-8fbb-4e64-bbe1-059b7fa64dec]\n\n\n started.<br>\n Please go to http://192.55.21.114:9080 or https://192.55.21.114:9081 to administer,\n or use console\n</pre>\n<p><strong><span class=\"wysiwyg-underline\">(3) start with service for rpm installation</span></strong></p>\n<p>This example is for Linux centos 7:</p>\n<p>(<span>In Ubuntu, the folder for Striim service is installed at: </span><span>/lib/systemd/system)</span></p>\n<p>(A) as root, modify /etc/systemd/system/striim-node.service<br>add (or modify) a line under [Service]:</p>\n<pre>CPUAffinity=0 1</pre>\n<p>(B) after saving the change, run as root or sudo:<br>systemctl daemon-reload</p>\n<p>(C) start striim server:<br>systemctl start striim-dbms<br>systemctl start striim-node</p>\n<p>(D) confirm it uses the right CPUs</p>\n<pre> # cat /proc/17675/status |grep -i Cpus_allowed_list<br>\n Cpus_allowed_list: 
0-1\n</pre>\n<p><br>4. Other options and limitations<br>- Other OS level utilities, such as cgroup, nice, may also be used. They are not tested by Striim.<br>- As the process is pinned to specific CPUs, if other process uses the same CPU(s), it may affect the performance of Striim.</p>"} {"page_content": "<h4><strong>Running Dockerized Striim on CentOS 7</strong></h4>\n<h4><strong>Step -1 Installing docker on CentOS 7</strong></h4>\n<p><a href=\"https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-docker-ce\"><span style=\"font-weight: 400;\">https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-docker-ce</span></a></p>\n<ol>\n<li>\n<span style=\"font-weight: 400;\">a) Uninstall old versions</span><span style=\"font-weight: 400;\"><br></span>\n</li>\n</ol>\n<p><strong># Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies.</strong></p>\n<p><span style=\"font-weight: 400;\"></span><span style=\"font-weight: 400;\">$ sudo yum remove docker \\</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> docker-common \\</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> docker-selinux \\</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> Docker-engine</span></p>\n<p><span style=\"font-weight: 400;\">Update yum package manager</span></p>\n<p><span style=\"font-weight: 400;\">$ sudo yum update</span></p>\n<ol>\n<li><span style=\"font-weight: 400;\">b) Install using the repository</span></li>\n</ol>\n<p><span style=\"font-weight: 400;\">Before you install Docker CE for the first time on a new host machine, you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.</span></p>\n<p> </p>\n<p><strong>#Install required packages. 
yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.</strong><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">$ sudo yum install -y yum-utils \\</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> device-mapper-persistent-data \\</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> lvm2</span></p>\n<p><span style=\"font-weight: 400;\">-Use the following command to set up the stable repository.</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">$ sudo yum-config-manager \\</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> --add-repo \\</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> https://download.docker.com/linux/centos/docker-ce.repo</span></p>\n<ol>\n<li><span style=\"font-weight: 400;\">c) Install Docker Community Version</span></li>\n</ol>\n<p><strong># Install the latest version of Docker CE</strong><strong><br></strong><span style=\"font-weight: 400;\">$ sudo yum install docker-ce</span></p>\n<ol>\n<li>\n<span style=\"font-weight: 400;\">d) Start docker service</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">$ sudo systemctl start docker</span>\n</li>\n<li>\n<span style=\"font-weight: 400;\">e) Verify that docker is installed correctly by running the hello-world image</span><span style=\"font-weight: 400;\">.</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">$ sudo docker run hello-world</span>\n</li>\n</ol>\n<p><br><strong>Step - 2 Installing and running Striim </strong></p>\n<p><strong>#Login to Striim Docker registry hosted on Azure Container Registry</strong></p>\n<p><span style=\"font-weight: 400;\">$ sudo docker login striim.azurecr.io</span></p>\n<p><span style=\"font-weight: 400;\">Username : </span><span style=\"font-weight: 400;\">striim</span></p>\n<p><span style=\"font-weight: 400;\">Password: </span><span style=\"font-weight: 400;\">+/+nCHi+AA7pI++Cm++8JF+KsC=/fFXt</span></p>\n<p><strong>#Download the Striim Docker images</strong></p>\n<p><span style=\"font-weight: 400;\">$ sudo docker pull striim.azurecr.io/striim/striim-node</span></p>\n<p><span style=\"font-weight: 400;\">$ sudo docker pull striim.azurecr.io/striim/striim-mdr</span></p>\n<p><strong>Step -3 Install docker compose</strong></p>\n<p><span style=\"font-weight: 400;\">Now that you have Docker installed, let's go ahead and install Docker Compose. First, install python-pip as prerequisite:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">$ sudo yum install epel-release</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">$ sudo yum install -y python-pip</span></p>\n<p><span style=\"font-weight: 400;\">Then you can install Docker Compose:</span></p>\n<p><span style=\"font-weight: 400;\">$ sudo pip install docker-compose</span></p>\n<p><span style=\"font-weight: 400;\">Docker compose is written in python. Installing python package (pip) manager is required to get the tool . </span><span style=\"font-weight: 400;\"></span><span style=\"font-weight: 400;\">Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. 
<p><strong>Step 4 - Get the Docker Compose files</strong></p>\n<p><strong># If git is not installed, install it using the command below</strong></p>\n<pre>$ sudo yum install git</pre>\n<p><strong># Clone the repository</strong></p>\n<pre>$ git clone https://github.com/striim/striim-dockerfiles</pre>\n<p><strong># Change to the cluster subdirectory of the striim-dockerfiles repository to update striim-node.env</strong></p>\n<pre>$ cd striim-dockerfiles/cluster</pre>\n<p>The file contains placeholders for the license key and product key. Update the following variables in striim-node.env using a text editor such as vim:</p>\n<p>Company Name: &lt;Update this with the company name received from Striim&gt;<br>Cluster Name: &lt;Update this with the cluster name received from Striim&gt;<br>Product_key: &lt;Update this with the product key received from Striim&gt;<br>License_key: &lt;Update this with the license key received from Striim&gt;</p>\n<p>The cluster password and administrator password can also be changed in the same striim-node.env file.</p>\n<p><strong>Note - Make sure that you are in the cluster directory (striim-dockerfiles/cluster) before executing the docker-compose command to bring up the Striim cluster.</strong></p>\n<p><strong>Step 5 - Start a two-node Striim cluster in Docker containers</strong></p>\n<p><strong># To bring up a 2-node cluster, issue</strong></p>\n<pre>$ sudo docker-compose up --scale striim-node=2</pre>\n<p><strong># Port 9080 is mapped to a random host port; verify which port it is bound to</strong></p>\n<pre>$ sudo docker-compose port striim-node 9080<br>0.0.0.0:32773    // the port number in your output may differ</pre>\n<p>The Striim web UI can then be accessed at http://&lt;hostname&gt;:&lt;portnumber&gt;, using the port number reported by the last command.</p>\n<p>Verify via the web UI monitoring page that two nodes are present.</p>
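<p>If the web UI does not come up, the container logs usually show why. These are generic Docker Compose commands rather than anything Striim-specific; run them from the same striim-dockerfiles/cluster directory:</p>\n<pre># Show the state of the services defined in docker-compose.yml<br>$ sudo docker-compose ps<br># Follow the startup logs of the Striim node containers (Ctrl+C to stop following)<br>$ sudo docker-compose logs -f striim-node</pre>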
style=\"font-weight: 400;\">$ sudo docker ps</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">Below screenshot shows the three docker running containers</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">dockerfiles_striim_metadatarepo_1 - Contains the derby metadata repository</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">Dockerfiles_striim_node_1 - Striim Node 1</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">Dockerfiles_striim_node_2 - Striim Node 2</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"><br></span></p>\n<p> </p>\n<p><span style=\"font-weight: 400;\"><br></span><strong>Getting inside the container hosting node1</strong></p>\n<p> </p>\n<p><span style=\"font-weight: 400;\"># Using Container-Id of striim-node_1 from the screenshot</span></p>\n<p> </p>\n<p><span style=\"font-weight: 400;\">$ sudo docker exec -it 6f032fd81463 /bin/bash</span></p>\n<p><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"># Logging in to console </span></p>\n<p><span style=\"font-weight: 400;\">Change directory to get in Striim directory</span></p>\n<p> </p>\n<p><span style=\"font-weight: 400;\">$ cd opt/striim</span></p>\n<p><span style=\"font-weight: 400;\">$bin/console.sh -c &lt;CLUSTER_NAME&gt; -u &lt;USERNAME&gt; -p &lt;PASSWORD&gt;</span></p>\n<p><span style=\"font-weight: 400;\">$ bin/console.sh -c </span><span style=\"font-weight: 400;\">CGI_test</span><span style=\"font-weight: 400;\"> -u admin -p adminpass</span></p>\n<p><strong>Docker compose for limiting the cpu cores</strong></p>\n<p><span style=\"font-weight: 400;\">Stop the currently running docker using docker-compose</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">$ sudo docker-compose down</span></p>\n<p> </p>\n<p><span style=\"font-weight: 400;\">Edit the docker-compose.yml file </span></p>\n<p><span style=\"font-weight: 400;\">Add the parameter cpuset in striim-node object</span></p>\n<p><span style=\"font-weight: 400;\">cpuset: </span><span style=\"font-weight: 400;\">\"0-3” //Limits the cores per cpu to 4</span></p>\n<p><span style=\"font-weight: 400;\">cpuset: </span><span style=\"font-weight: 400;\">\"0-2” //Limits the cores per cpu to 3</span></p>\n<p> </p>\n<p><span style=\"font-weight: 400;\">docker-compose.yml</span></p>\n<p><span style=\"font-weight: 400;\">version: </span><span style=\"font-weight: 400;\">'2'</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">services:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> striim-metadatarepo:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> image: </span><span style=\"font-weight: 400;\">\"striim.azurecr.io/striim/striim-mdr\"</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> ports:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - 1527:1527</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> volumes:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - striim-data:/</span><span style=\"font-weight: 400;\">var</span><span style=\"font-weight: 400;\">/striim</span><span style=\"font-weight: 
400;\"><br></span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> striim-node:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> image: </span><span style=\"font-weight: 400;\">\"striim.azurecr.io/striim/striim-node:3.7.4C\"</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> cpuset: </span><span style=\"font-weight: 400;\">\"0-3\"</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> ports:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - </span><span style=\"font-weight: 400;\">\"9080\"</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> depends_on:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - </span><span style=\"font-weight: 400;\">\"striim-metadatarepo\"</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> env_file:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - striim-node.env</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> environment:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> STRIIM_METADATAREPO_ADDR: striim-metadatarepo:1527</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> volumes:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - ./extlib:/opt/striim/extlib</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - striim-data:/</span><span style=\"font-weight: 400;\">var</span><span style=\"font-weight: 400;\">/striim</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> - striim-waction:/opt/striim/data</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\">volumes:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> striim-data:</span><span style=\"font-weight: 400;\"><br></span><span style=\"font-weight: 400;\"> striim-waction:</span></p>\n<p> </p>\n<p><span style=\"font-weight: 400;\">Start the dockerized striim using scale command</span></p>\n<p><span style=\"font-weight: 400;\">$ sudo docker-compose up --scale striim-node=2</span></p>"} {"page_content": "<p>This is for version 3.8.2 or later, and need some manual works. Striim's RPM packages are <span class=\"s1\">relocatable however currently it does require some manual intervention until all the prefixes are in the spec file. 
<p><span class=\"s1\">Note: sudo or root access is needed to execute the commands listed in these steps.</span></p>\n<p>For a clean start, any previous RPM installation can be removed using the following steps:</p>\n<pre>sudo systemctl stop striim-node<br>sudo systemctl stop striim-dbms<br>sudo rpm -e striim-node<br>sudo rpm -e striim-dbms<br>sudo rm -rf /opt/striim<br>sudo rm -rf /var/log/striim<br>sudo rm -rf /etc/systemd/system/multi-user.target.wants/striim-dbms.service<br>sudo rm -rf /etc/systemd/system/multi-user.target.wants/striim-node.service<br>sudo rm -rf /etc/systemd/system/striim-dbms.service<br>sudo rm -rf /etc/systemd/system/striim-node.service</pre>\n<p><strong>1.</strong> Download the RPM builds</p>\n<p>[fzhang@centos-vm Downloads]$ ls -ltr<br>-rw-rw-r-- 1 fzhang fzhang 741429748 Jun 27 05:03 striim-node-3.8.4-Linux.rpm<br>-rw-rw-r-- 1 fzhang fzhang 5160712 Jun 27 05:03 striim-dbms-3.8.4-Linux.rpm</p>\n<p><strong>2.</strong> Install striim-node (the %post errors below are expected and are addressed in the later steps)</p>\n<p>[fzhang@centos-vm Downloads]$ sudo rpm -ivh --relocate /opt/striim=/u01/app/striim striim-node-3.8.4-Linux.rpm<br>Preparing... ################################# [100%]<br>Updating / installing...<br> 1:striim-node-3.8.4-1 ################################# [100%]<br>ln: failed to create symbolic link ‘/opt/striim/logs’: No such file or directory<br>ln: failed to create symbolic link ‘/etc/systemd/system/multi-user.target.wants/striim-node.service’: File exists<br>chown: cannot access ‘/opt/striim’: No such file or directory<br>warning: %post(striim-node-3.8.4-1.noarch) scriptlet failed, exit status 1</p>\n<p><strong>3.</strong> Modify /etc/systemd/system/striim-node.service and change the following line to point at the new location:</p>\n<p>ExecStart=/u01/app/striim/sbin/striim-node start</p>\n<p><strong>4.</strong> Install striim-dbms</p>\n<p>[fzhang@centos-vm Downloads]$ sudo rpm -ivh --prefix /u01/app/striim striim-dbms-3.8.4-Linux.rpm<br>Preparing... 
################################# [100%]<br>Updating / installing...<br> 1:striim-dbms-3.8.4-1 ################################# [100%]<br>ln: failed to create symbolic link ‘/opt/striim/derby’: No such file or directory<br>ln: failed to create symbolic link ‘/opt/striim/wactionrepos’: No such file or directory<br>ln: failed to create symbolic link ‘/opt/striim/logs’: No such file or directory<br>ln: failed to create symbolic link ‘/etc/systemd/system/multi-user.target.wants/striim-dbms.service’: File exists<br>chown: cannot access ‘/opt/striim’: No such file or directory<br>warning: %post(striim-dbms-3.8.4-1.noarch) scriptlet failed, exit status 1</p>\n<p>it still installed the files to /var/ directory</p>\n<p><strong>5.</strong> copy from /var to new striim_home<br>(1) <br>mv /var/striim/derby /u01/app/striim/<br>mv /var/striim/wactionrepos /u01/app/striim/<br>(2) mkdir /u01/app/striim/logs<br>(3) chown -R striim:striim /u01/app/striim</p>\n<p>cleanup<br>(4) rm -rf /var/striim<br> rm -rf /var/log/striim</p>\n<p><strong>6.</strong> modify systemd file (similar to step #3)</p>\n<p>/etc/systemd/system/striim-dbms.service<br>/etc/systemd/system/striim-webconfig.service (if present)<br>/etc/systemd/system/striim-node.service</p>\n<p><strong>7.</strong> setup /u01/app/striim/conf/startUp.properties file, as normal</p>\n<p><strong>8.</strong> modify startup files with the new striim_home</p>\n<p>/u01/app/striim/sbin/striim-dbms<br>/u01/app/striim/sbin/striim-node</p>\n<p>In the given example the modified file would look like below</p>\n<pre>#!/bin/bash<br>#<br># Striim DBMS<br>#<br><br>cd <strong>/u01/app/striim/</strong><br>if [[ \"$1\" == \"start\" ]]; then<br> exec java -Dderby.stream.error.file=<strong>/u01/app/striim/logs/</strong>striim-dbms.log -jar derby/lib/derbyrun.jar server start -h 0.0.0.0 -noSecurityManager<br>else<br> exec java -Dderby.stream.error.file=<strong>/u01/app/striim/logs/</strong>striim-dbms.log -jar derby/lib/derbyrun.jar server shutdown -h 127.0.0.1<br>fi</pre>\n<pre>#!/bin/bash<br>#<br># Striim Cluster Node<br>#<br><br>LOGFILE=<strong>/u01/app/striim/logs</strong>/striim-node.log</pre>\n<p> </p>\n<p><strong>9.</strong> startup</p>\n<p><strong>systemctl daemon-reload</strong></p>\n<p>systemctl start striim-dbms<br>systemctl start striim-node</p>\n<p> </p>"} {"page_content": "<div class=\"page\" title=\"Page 1\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<table style=\"height: 151px; width: 545px;\" border=\"1\">\n<tbody>\n<tr>\n<td style=\"width: 105px;\"><strong>Version</strong></td>\n<td style=\"width: 106px;\">\n<p><strong>GA Date</strong></p>\n</td>\n<td style=\"width: 106px;\"><strong>Primary Support Ends</strong></td>\n<td style=\"width: 106px;\"><strong>Extended Support Ends</strong></td>\n<td style=\"width: 106px;\"><strong>Sustaining Support Ends</strong></td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">\n<p>4.2</p>\n</td>\n<td style=\"width: 106px;\"> JUN 2023</td>\n<td style=\"width: 106px;\"> </td>\n<td style=\"width: 106px;\"> </td>\n<td style=\"width: 106px;\">Indefinite</td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">\n<p>4.1</p>\n</td>\n<td style=\"width: 106px;\"> MAY 2022</td>\n<td style=\"width: 106px;\"> JUN 2023</td>\n<td style=\"width: 106px;\"> MAR 2024</td>\n<td style=\"width: 106px;\">Indefinite</td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">\n<p>4.0</p>\n</td>\n<td style=\"width: 106px;\"> OCT 2021</td>\n<td style=\"width: 106px;\"> MAY 2022</td>\n<td style=\"width: 106px;\"> FEB 2023</td>\n<td style=\"width: 
106px;\">Indefinite</td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">\n<p>3.10</p>\n</td>\n<td style=\"width: 106px;\"> JUL 2020</td>\n<td style=\"width: 106px;\"> OCT 2021</td>\n<td style=\"width: 106px;\"> JUL 2022</td>\n<td style=\"width: 106px;\">Indefinite</td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">\n<p>3.9</p>\n</td>\n<td style=\"width: 106px;\"> JAN 2019</td>\n<td style=\"width: 106px;\"> JUL 2020</td>\n<td style=\"width: 106px;\"> APR 2021</td>\n<td style=\"width: 106px;\">Indefinite</td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">3.8</td>\n<td style=\"width: 106px;\"> JAN 2018</td>\n<td style=\"width: 106px;\">\n<p> JAN 2019</p>\n</td>\n<td style=\"width: 106px;\"> OCT 2019 </td>\n<td style=\"width: 106px;\">Indefinite</td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">3.7</td>\n<td style=\"width: 106px;\"> APR 2017</td>\n<td style=\"width: 106px;\"> JAN 2018</td>\n<td style=\"width: 106px;\"> OCT 2018</td>\n<td style=\"width: 106px;\">Indefinite</td>\n</tr>\n<tr>\n<td style=\"width: 105px;\">3.6</td>\n<td style=\"width: 106px;\"> JUL 2016</td>\n<td style=\"width: 106px;\"> APR 2017</td>\n<td style=\"width: 106px;\"> JAN 2018</td>\n<td style=\"width: 106px;\">\n<p>Indefinite</p>\n</td>\n</tr>\n</tbody>\n<caption> </caption>\n</table>\n<p> </p>\n<h3 id=\"01H8C944ZZ2VWQ5YT3TJ9KYAJ1\"><strong>Primary Support:</strong></h3>\n<p>Duration: Current version until the GA of next major release*.</p>\n<p>Patch Policy: Next Minor Patch Release, or Justified Hot Fix. Provides comprehensive maintenance and bug fixes.</p>\n<h3 id=\"01H8C944ZZTQEM0JHSCXNCQ2ZY\"><strong>Extended Support:</strong></h3>\n<p>Duration: Up to 9 months from GA Release of Next Major Release.</p>\n<p>Upgrade Plan: Puts you in control of your Striim Software upgrade strategy by providing<br>additional maintenance and upgrades for 9 months. The patch will on latest patchset.</p>\n<p> </p>\n<h3 id=\"01H8C944ZZBJ1Y37PF75YD58M7\"><strong>Sustaining Support:</strong></h3>\n<p>Duration: infinite, as long as support contract is in good standing.<br>This support maximizes your investment protection by providing maintenance for as long as you use Striim Software. 
Features include access to Striim online support tools, upgrade rights, pre-existing fixes and assistance from technical support experts.</p>\n<p> </p>\n<p>* Major releases:<br>Every 9-15 months<br>Change in First or Second Release Version Digit (e.g., 3.8 and 3.9) </p>\n<p> </p>\n<h3 id=\"01H8C944ZZV37N7C890CK6PAM5\"><strong>Previously Released Versions and Dates</strong></h3>\n<table style=\"width: 100%; height: 1056px;\">\n<tbody>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\"><strong>Versions</strong></td>\n<td style=\"height: 22px; width: 23.1429%;\"><strong>Release Date</strong></td>\n<td style=\"width: 37.7143%; height: 22px;\"><strong>Previous Fixes Included</strong></td>\n</tr>\n<tr>\n<td style=\"width: 39.1429%;\">4.2.0.3</td>\n<td style=\"width: 23.1429%;\">2023-08-26</td>\n<td style=\"width: 37.7143%;\">4.2.0.1</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.2.0.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-07-25</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.2.0 + 4.1.2.1C</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.2.0</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-06-16</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2.1B</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-04-21</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2.1 + Docker</td>\n</tr>\n<tr>\n<td style=\"width: 39.1429%;\">4.1.2.1F</td>\n<td style=\"width: 23.1429%;\">2023-08-21</td>\n<td style=\"width: 37.7143%;\">4.1.2.1E</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2.1E</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-07-25</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2.1D</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2.1D</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-07-20</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2.1C</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2.1C</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-06-22</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2.1B</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2.1B</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-05-12</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2.1A</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2.1A</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-04-21</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2.1</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-03-03</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.2</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2023-01-15</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.0.3</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.0.3</td>\n<td style=\"height: 22px; width: 23.1429%;\">2022-11-17</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.0.2</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.0.2</td>\n<td style=\"height: 22px; 
width: 23.1429%;\">2022-10-27</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.1.0.1</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.0.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2022-07-13</td>\n<td style=\"width: 37.7143%; height: 22px;\">\n<p>4.1.0</p>\n</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.1.0</td>\n<td style=\"height: 22px; width: 23.1429%;\">2022-05-19</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.0.5</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.0.5</td>\n<td style=\"height: 22px; width: 23.1429%;\">2022-01-27</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.0.4.3</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.0.4.3</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-12-08</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.0.4.2</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.0.4.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-11-29</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.0.4.1</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.0.4.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-11-04</td>\n<td style=\"width: 37.7143%; height: 22px;\">4.0.3</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">4.0.3</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-10-13</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.8A</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-12-29</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.8</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-12-14</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.7</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-11-24</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.6</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-07-06</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.5</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-05-24</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.4</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-03-31</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.3</td>\n<td style=\"height: 22px; width: 23.1429%;\">2021-02-22</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-12-18</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-11-24</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 
22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.3</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-09-18</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.2.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-09-29</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-09-11</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.1.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-09-04</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.10.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-07-07</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.8</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-09-23</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.7</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-09-04</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.6</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-06-02</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.5</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-05-12</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.4</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-04-25</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.3</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-04-03</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-03-14</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-02-05</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.8</td>\n<td style=\"height: 22px; width: 23.1429%;\">2020-01-10</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.7.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2019-12-19</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.7.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2019-10-28</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.6.3</td>\n<td style=\"height: 22px; width: 23.1429%;\">2019-08-12</td>\n<td style=\"width: 37.7143%; height: 22px;\"> 
</td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.6.2</td>\n<td style=\"height: 22px; width: 23.1429%;\">2019-07-24</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n<tr style=\"height: 22px;\">\n<td style=\"height: 22px; width: 39.1429%;\">3.9.6.1</td>\n<td style=\"height: 22px; width: 23.1429%;\">2019-06-02</td>\n<td style=\"width: 37.7143%; height: 22px;\"> </td>\n</tr>\n</tbody>\n</table>\n</div>\n</div>\n</div>\n</div>"} {"page_content": "<p>Following shows an example of the effect of locale on Striim</p>\n<p>oracle@raj-centos /u02/app/Striim_3963$locale<br>LANG=it_IT.UTF-8<br>LC_CTYPE=\"it_IT.UTF-8\"<br>LC_NUMERIC=it_IT.UTF-8<br>LC_TIME=\"it_IT.UTF-8\"<br>LC_COLLATE=it_IT.UTF-8<br>LC_MONETARY=\"it_IT.UTF-8\"<br>LC_MESSAGES=\"it_IT.UTF-8\"<br>LC_PAPER=\"it_IT.UTF-8\"<br>LC_NAME=\"it_IT.UTF-8\"<br>LC_ADDRESS=\"it_IT.UTF-8\"<br>LC_TELEPHONE=\"it_IT.UTF-8\"<br>LC_MEASUREMENT=\"it_IT.UTF-8\"<br>LC_IDENTIFICATION=\"it_IT.UTF-8\"<br>LC_ALL=</p>\n<p class=\"p1\"><span class=\"s1\">SQL&gt; update striim.customer_order_item set price=1.88;</span></p>\n<p class=\"p1\">Snippet of WAEvent from OracleReader</p>\n<p class=\"p1\"><span class=\"s1\">sysout: WAEvent{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>data: [\"3\",null,null,null,<strong>\"1,99\"]</strong></span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>metadata: {\"...\"}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>userdata: null</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>before: [\"3\",null,null,null,<strong>\"1,88\"</strong>]</span></p>\n<p class=\"p1\"><span class=\"s1\">This may cause failure downstream with java.lang.NumberFormatException due to the format.</span></p>\n<p class=\"p1\"><span class=\"s1\">This is because of the decimal separator changing to , instead of .</span></p>\n<p class=\"p1\"><span class=\"s1\">oracle@raj-centos /u02/app/Striim_3963$<strong>locale -k LC_NUMERIC</strong><br><strong>decimal_point=\",\"</strong><br>thousands_sep=\"\"<br>grouping=-1;-1<br>numeric-decimal-point-wc=44<br>numeric-thousands-sep-wc=0<br>numeric-codeset=\"UTF-8\"</span></p>\n<p class=\"p1\">As a workaround following can be done at session level before starting the striim process</p>\n<p class=\"p1\">oracle@<span class=\"s1\">raj-centos</span> /u02/app/Striim_3963<strong>$export LC_ALL=en_US.UTF-8</strong><br>oracle@<span class=\"s1\">raj-centos</span> /u02/app/Striim_3963<strong>$locale -k LC_NUMERIC</strong><br><strong>decimal_point=\".\"</strong></p>"} {"page_content": "<h2><strong>Problem:</strong></h2>\n<p><em>W (admin) &gt; LOAD OPEN processor \"/opt/striim/modules/My_test.scm\";<br>Processing - LOAD OPEN processor \"/opt/striim/modules/My_test.scm\"<br>-&gt; FAILURE<br>java.util.concurrent.ExecutionException: com.hazelcast.core.HazelcastException: java.io.FileNotFoundException: /opt/striim/modules/My_test.scm (No such file or directory):<br>line:1 LOAD OPEN processor \"/opt/striim/modules/My_test.scm\";</em></p>\n<p> </p>\n<h2><strong>Solution:</strong></h2>\n<p>This error is due to the specified scm file does not exist at other node(s) in the cluster.</p>\n<p>Solution is to copy the scm file to other nodes also, in the same directory.</p>"} {"page_content": "<h2><strong>Problem:</strong></h2>\n<p>My Stream server is on a 8 core server, and two applications and each has one KafkaWriters 
(KW) that has ParallelThreads=4. After starting the app, the cpu usage jumped from 10% to 100%.</p>\n<p> </p>\n<h2><strong>Cause and Troubleshooting:</strong></h2>\n<p>This could be caused by disruptor WaitStrategy setting. By default it is Yielding, which may use more CPU.</p>\n<p>To confirm this is the cause, on Linux, you may do following:</p>\n<p>1. get Striim server process id</p>\n<p>2. get jstack</p>\n<p>jstack &lt;pid&gt; &gt; my.jstack</p>\n<p>3. get thread id</p>\n<p>example:</p>\n<pre>striim@mytest.com:/prd/cdc/cdcp&gt; top -n 1 -H -p 10001<br>\ntop - 17:38:17 up 294 days, 1:14, 2 users, load average: 8.90, 8.84, 9.03<br>\nThreads: 403 total, 8 running, 395 sleeping, 0 stopped, 0 zombie<br>\n%Cpu(s): 28.0 us, 72.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st<br>\nKiB Mem : 13186572+total, 28511532 free, 7944660 used, 95409536 buff/cache<br>\nKiB Swap: 6291452 total, 6271572 free, 19880 used. 11885377+avail Mem<br>\n <br>\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<br>\n10667 cdcp 20 0 55.699g 5.557g 75392 R 99.9 4.4 594:04.97 java<br>\n10610 cdcp 20 0 55.699g 5.557g 75392 R 99.9 4.4 594:11.57 java<br>\n14109 cdcp 20 0 55.699g 5.557g 75392 R 99.9 4.4 544:07.82 java<br>\n10548 cdcp 20 0 55.699g 5.557g 75392 R 93.8 4.4 594:56.21 java<br>\n10638 cdcp 20 0 55.699g 5.557g 75392 R 93.8 4.4 594:17.18 java<br>\n14080 cdcp 20 0 55.699g 5.557g 75392 R 93.8 4.4 543:27.34 java<br>\n14136 cdcp 20 0 55.699g 5.557g 75392 R 93.8 4.4 543:20.03 java<br>\n14163 cdcp 20 0 55.699g 5.557g 75392 R 93.8 4.4 542:53.86 java<br>\n10129 cdcp 20 0 55.699g 5.557g 75392 S 6.2 4.4 0:11.33 java<br>\n10184 cdcp 20 0 55.699g 5.557g 75392 S 6.2 4.4 0:19.83 java<br>\n</pre>\n<p>Above high CPU thread ids are in hex as following:<br>29AB|2972|371D|2934|298E|3700|3738|3753</p>\n<p>4. Search those hex values from jstack file :</p>\n<pre>$ egrep -i \"29AB|2972|371D|2934|298E|3700|3738|3753\" my.jstack<br>\nadmin.SRC1_CDC_Kafka_writer-1568101809574\" #1302 prio=5 os_prio=0 tid=0x00007f8fa87fe800 nid=0x3753 runnable [0x00007f8e89913000]<br>\nadmin.SRC1_CDC_Kafka_writer-1568101809364\" #1267 prio=5 os_prio=0 tid=0x00007f8fa8117800 nid=0x3738 runnable [0x00007f8e8e93e000]<br>\nadmin.SRC1_CDC_Kafka_writer-1568101808939\" #1232 prio=5 os_prio=0 tid=0x00007f8fa86ad800 nid=0x371d runnable [0x00007f8e8c19b000]<br>\nadmin.SRC1_CDC_Kafka_writer-1568101808643\" #1197 prio=5 os_prio=0 tid=0x00007f8fa88f2000 nid=0x3700 runnable [0x00007f8e87ef9000]<br>\nadmin.SRC2_CDC_Kafka_writer-1568098722639\" #315 prio=5 os_prio=0 tid=0x00007f8f344b0800 nid=0x29ab runnable [0x00007f8e93698000]<br>\nadmin.SRC2_CDC_Kafka_writer-1568098718693\" #280 prio=5 os_prio=0 tid=0x00007f8f3408d800 nid=0x298e runnable [0x00007f8e951b3000]<br>\nadmin.SRC2_CDC_Kafka_writer-1568098715100\" #245 prio=5 os_prio=0 tid=0x00007f8f34123800 nid=0x2972 runnable [0x00007f8e96cce000]<br>\nadmin.SRC2_CDC_Kafka_writer-1568098670566\" #209 prio=5 os_prio=0 tid=0x00007f8f3410a000 nid=0x2934 runnable [0x00007f8eebccc000]<br>\n</pre>\n<p>Above shows the cpu problem is due to KWs. Each app has one KW with 4 parallel threads, so total is 8 threads. 
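<p>As a side note, the thread IDs printed by top are decimal, while the nid values in the jstack output are hexadecimal, which is why the values above were converted before searching. A quick way to do the conversion for one of the PIDs from the top output above (plain bash, nothing Striim-specific):</p>\n<pre># 10667 in the top output corresponds to nid=0x29ab in the jstack output<br>$ printf '%x\n' 10667<br>29ab</pre>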
The server has 8 cores, which is why 100% are used.</p>\n<h2><strong><span class=\"wysiwyg-font-size-large\">Solution:</span></strong></h2>\n<p>The disruptor WaitStrategy may be changed to sleeping from default Yielding.</p>\n<p>The setting is at ./bin/server.sh file.</p>\n<p>(1) in version 3.8 and earlier:</p>\n<pre class=\"code-java\">-Dcom.webaction.config.waitStrategy.admin.SRC1_CDC_Kafka_writer=\"sleep\" \\<br>-Dcom.webaction.config.waitStrategy.admin.SRC2_CDC_Kafka_writer=\"sleep\" \\</pre>\n<p>(2) in version 3.9 and later, it is a parameter (parameter name is case sensitive) in startUp.properties file.</p>\n<pre><span>WaitStrategy=Admin.KW1:sleep,Admin.KW_2:sleep</span></pre>\n<p>For cluster, the changes are required on all the nodes. Striim server needs to be restarted for the change to be effective.</p>\n<p> </p>\n<h4>Note: Please specify the KafkaWriter name and not the application name in the WaitStrategy</h4>\n<p> </p>\n<h3>References:</h3>\n<p>https://support.striim.com/hc/en-us/articles/4407528999703-How-to-find-the-Striim-Server-PID-Process-ID</p>"} {"page_content": "<p><strong>Issue:</strong></p>\n<p>User has upgraded striim version to 3.9.6.3. Existing application crashed with below error</p>\n<pre><span class=\"wysiwyg-font-size-small\"><strong>com.webaction.common.exc.SystemException: Error in processing event for DatabaseWriter {Invalid conversion requested}</strong></span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.proc.DatabaseWriter_1_0.processEvent(DatabaseWriter_1_0.java:210)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.proc.RetriableWriter.handleEvent(RetriableWriter.java:253)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.proc.RetriableWriter.receive(RetriableWriter.java:93)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.runtime.components.Target.receive(Target.java:220)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.runtime.DistributedRcvr.doReceive(DistributedRcvr.java:250)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.runtime.DistributedRcvr.onMessage(DistributedRcvr.java:115)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.jmqmessaging.InprocAsyncSender.processMessage(InprocAsyncSender.java:52)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.jmqmessaging.AsyncSender$AsyncSenderThread.run(AsyncSender.java:134)</span><br><span class=\"wysiwyg-font-size-small\">C<strong>aused by: java.sql.SQLException: Invalid conversion requested</strong></span><br><span class=\"wysiwyg-font-size-small\"><strong> at oracle.jdbc.driver.OraclePreparedStatement.setObjectCritical(OraclePreparedStatement.java:9373)</strong></span><br><span class=\"wysiwyg-font-size-small\"><strong> at oracle.jdbc.driver.OraclePreparedStatement.setObjectInternal(OraclePreparedStatement.java:8954)</strong></span><br><span class=\"wysiwyg-font-size-small\"><strong> at oracle.jdbc.driver.OraclePreparedStatement.setObject(OraclePreparedStatement.java:9548)</strong></span><br><span class=\"wysiwyg-font-size-small\"> at oracle.jdbc.driver.OraclePreparedStatementWrapper.setObject(OraclePreparedStatementWrapper.java:249)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.TypeHandler.DateTimeToTimestampHandler.bind(TypeHandler.java:548)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.OperationHandler.UpdateHandler.bindAfter(UpdateHandler.java:95)</span><br><span class=\"wysiwyg-font-size-small\"> at 
com.webaction.OperationHandler.UpdateHandler.bind(UpdateHandler.java:66)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.Policy.DefaultExecutionPolicy.execute(DefaultExecutionPolicy.java:44)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.databasewriter.EventProcessor.execute(EventProcessor.java:814)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.Policy.Policy.execute(Policy.java:104)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.Policy.CommitPolicy.execute(CommitPolicy.java:72)</span><br><span class=\"wysiwyg-font-size-small\"> at com.webaction.Policy.Policy.execute(Policy.java:102)</span></pre>\n<p><strong>Cause:</strong></p>\n<p>This issue is caused due to incompatible ojdbc version. It is recommended to use ojdbc8 version for Striim version 3.9.6.3 version.</p>\n<p><br><strong>Solution:</strong></p>\n<p>1. Download ojdbc8.jar and place it under &lt;striim install location&gt;/lib and remove old ojdbc&lt;version&gt;.jar</p>\n<p>https://www.oracle.com/technetwork/database/features/jdbc/jdbc-ucp-122-3110062.html</p>\n<p>2. Restart the striim server</p>"} {"page_content": "<h4>Please refer to <a href=\"https://www.striim.com/docs/platform/en/release-notes.html\">https://www.striim.com/docs/platform/en/release-notes.html</a><span class=\"s1\"> for a complete list of fixes and any known issues. A copy of the same for the latest release is available <a href=\"https://support.striim.com/hc/en-us/articles/360047415273-Latest-Striim-Version-Release-Notes\" target=\"_self\">here</a></span>\n</h4>\n<p>Following has a list of version numbers (release date) and fixes part of that release. </p>\n<p> </p>\n<h2 id=\"UUID-b2bb3f38-054b-c3f9-4c2f-343e85f3f7c6_bridgehead-idm13331040467392\" class=\"bridgehead\">Customer-reported issues fixed in release 4.2.0 ( June 16th, 2023):</h2>\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-11526: Oracle Reader hangs after network outage</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-19035: unable to drop the JMSReader application</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-22094: PostgreSQL Reader &gt; DAtabase Writer with Oracle fails on timestamp with time zone</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-25489: Oracle Reader not checking supplemental logging</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-26693: Incremental Batch Reader with PostgreSQL error with TIMESTAMPTZ</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-27618: Mongo CosmosDB Writer is slow for initial load</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-28054: LEE does not show correct values when the app contains an open processor</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-28500: GGTrail Reader not capturing ROWID in before images</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-28534: alert manager SMTP reset issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-28785: can't view Apps page</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-29264: REPORT LEE fails with com.webaction.wactionstore.Utility.reportExecutionTime error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-29970: web UI message log missing messages</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30476: high memory usage when GG Trail Reader processes large LOB data</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31352: after making some changes in Flow Designer and dropping the app, the web UI hangs</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31423: error in UpgradeMetadataReposOracleTo4101.sql</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31579: notification 
issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31993: MySQL Reader fails on a DDL change to a table not specified in the Tables property</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32268: PostgreSQL Reader issue with non-lowercase schemas</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32275: MongoDB Reader timed out after 10000 ms while waiting to connect</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32632: MS SQL Reader &gt; Database Writer with SQL Server missing events</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33086: MongoDB Reader can't read from MongoDB version 5.0.13</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33166: Databricks Writer \"This request is not authorized to perform this operation using this permission\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33426: Databricks Writer \"table not found\" error when table exists</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33510: MongoDB CDC sending Insert operation as Update operation</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33543: exported TQL has stream and router DDL out of order</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34346: MongoDB Reader with SSL &gt; S3Writer crash</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34365: MariaDBReader does not halt when when binlog file is not present</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34399: Role tab of User page in web UI is blank</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34551: GG Trail Reader uses old an old TDR record to create type and app crashes with ColumnType mismatch</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34623: Azure Synapse Writer \"Incorrect syntax near 'PERCENT'\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34725: PostgreSQL Reader &gt; Database Writer with PostgreSQL JSON operator error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34768: unable to run setupOjet due to missing ojdbc-21.1.jar</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34874: cannot enable CDDL Capture with Start Time/ Start Position</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34926: UI slow</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34966: Database Reader converting NULLs to 0 when selecting from int (unsigned) columns</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35054: Oracle Reader SQLIntegrityConstraintViolationException: Column 'FILENAME' cannot accept a NULL value.</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35096: Alert Manager: email address with special characters not accepted</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35164: Apps page keeps reloading after dropping app</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35195: ADLS Gen2 Writer error \"Component Type: TARGET. 
Cause: null\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35196: Issues when receiving alert mail during app crash</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35349: BigQueryWriter in streaming mode \"Could not parse '2023-03-01 12:41:16.216614+00' as a timestamp\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35379: Database Reader with SQL Server \"Error occured while creating type for table, Problem creating type\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35405: OJet firstSCN issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35428: MySQL Reader ignores DDL if there is a space before the schema name in the Tables string</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35429: can't deploy app with router component after upgrade</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35481: MS SQL Reader &gt; Database Writer with Oracle NO_OP_UPDATE exception</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35548: HTTP Reader is binding to non-SSL port</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35581: exceptions not showing up in exception store</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35654: property variable created in the web UI doesn't work in app</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35790: BEFORE() function issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35881: Snowflake Writer \"Timestamp '2023-01-01 ��:��:��.000000000 ' is not recognized\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35974: Alert Manager page is blank</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-35994: All vaults lost after restarting Striim Platform or Striim Cloud</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-36135: app goes into quiesce state every time it is restarted</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-36157: Database Reader with MySql or MariaDB \"Error occured while creating type for table {xxx.xxx}\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-36158: OJet issue when table has both primary and unique keys</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-36166: MariaDB \"Out of range value for column 'asn' : value 4220006002 is not in class java.lang.Integer range\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-36307: \"invalid bytecode org.objectweb.asm.tree.analysis.AnalyzerException\" when starting Striim</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-36308: upgrade fails with \"metadataDB field is not set to one of the options derby, oracle, or postgres\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-36352: MariaDB Reader &gt; DatabaseWriter with MySQL \"java.lang.Integer cannot be cast to java.lang.Short\"</p>\n</li>\n</ul>\n<h2 id=\"UUID-b2bb3f38-054b-c3f9-4c2f-343e85f3f7c6_bridgehead-idm13331040467392\" class=\"bridgehead\">Customer-reported issues fixed in release 4.1.2.2 ( April 21st, 2023):</h2>\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-36151: Docker image supports additional Kubernetes arguments</p>\n</li>\n</ul>\n<h2 id=\"UUID-b2bb3f38-054b-c3f9-4c2f-343e85f3f7c6_bridgehead-idm13331040467392\" class=\"bridgehead\">Customer-reported issues fixed in release 4.1.2.1 ( March 3rd, 2023):</h2>\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-24253: Database Reader &gt; Database Writer issues with PostgreSQL partitioned table</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-28098: Database Writer SQL Server issue converting nvarchar to numeric</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34632: GGTrailReader \"RMIWebSocket.handleMessageException\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34691: CREATE VAULT logging issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34706: issues with 
multi-server cluster after 4.1.0.1 &gt; 4.1.0.4 upgrade</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34722: application remains in Stopping state for a long time</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34742: Salesforce Reader recovery checkpoint Issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34801: Mongo CosmosDB Reader &gt; MongoDB Writer application issue after 4.1.0 &gt; 4.1.2 upgrade</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34840: MSSQL Reader \"Table Metadata does not match metadata in ResultSet\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34897: unable to save multiple email addresses for custom alert</p>\n</li>\n</ul>\n<h2 id=\"UUID-b2bb3f38-054b-c3f9-4c2f-343e85f3f7c6_bridgehead-idm13331040467392\" class=\"bridgehead\">Customer-reported issues fixed in release 4.1.2 (Jan 25th, 2023):</h2>\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-23078: Oracle Reader ORA-00310 error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-24646: GG Trail Parser checkpoint issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-27396: log contains many unneeded INFO level messages from RMIWebSocket</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-27491: Oracle Reader sending all columns when Compression=True</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-29118: JMX Reader \"javax.management.AttributeNotFoundException\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-29662: MON failure in 4.0.5.1B</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30421: MS SQL Reader Azure Active Directory authentication failure</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30423: DROP failure with JMS Reader</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30605: log4j issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30627: Embed Dashboard button is visible to non-admin user</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30921: Oracle Reader issue reading CLOB</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30940: monitoring stops updating</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31101: S3 Reader error \"Unexpected character ('a' (code 97))\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31299: OJet KeyColumns issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31306: MySQL Reader error \"Binlog Client closed abruptlyBinLog\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31397: System$Notification.NotificationSourceApp issue with multiple servers</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31398: OJet: can't use property variable</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31424: Oracle Reader: LogMiner stops working after DataGuard switchover to physical standby and back</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31525: Oracle Reader error \"Component Name: xxx. Component Type: SOURCE. 
Cause: null\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31540: OJet issues with ConnectionURL and DownstreamConnectionURL</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31541: OJet issue with database name</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31609: MySQL Reader and MariaDB Xpand Reader checkpointing issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31630: security issue in 4.0.5.1A</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31640: Database Reader Sybase &gt; BigQuery Writer issue with BIT type</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31641: OJet \"Downstream capture: \"missing multi-version data dictionary\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31651: Oracle Reader not using UNIQUE INDEX as key</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31711: OJet \"Failed to reposition by SCN\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31878: OJet \"Could not find column ... in cached metadata of table\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32052: OJet issue with BLOB and RAW types</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32083: Databricks Writer \"HiveSQLException:Invalid SessionHandle\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32255: GG Trail Reader \"ColumnTypeMismatchException\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32328: OJet java.lang.NullPointerException in server log after undeploy</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32415: OJet ORA-01013 error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32485: REST API output for DESCRIBE is incomplete</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32501: Mongo CosmosDB Writer is slow</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32503: \"duplicate key value violates unique constraint 'billing_cycle_uk_idx'\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32568: Database Reader with SQL Server &gt; BigQuery RWriter \"\"Invalid datetime string\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32570: Mongo CosmosDB Reader &gt; Mongo CosmosDB Writer changed ObjectID to String</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32582: Oracle Reader &gt; Databricks Writer is missing target acknowledged position</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32595: PostgreSQL Reader wildcard issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32628: Tables property value changed after export-import upgrade from 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32822: Web UI slow in 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32877: MSSQL Reader issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32906: MSSQL Reader issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32915: Database Reader &gt; Database Writer app has NullPointerException error after upgrade from 4.0.5.1.B</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33043: Oracle Reader &gt; Snowflake Writer \"JDBC driver encountered communication error. 
Message: HTTP status=403” error after auto-resume</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33089: Incremental Batch Reader \"For input string: \"'5475856979'\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33158: Snowflake Writer java.lang.IndexOutOfBoundsException error after upgrading to 4.1.0.2</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33169: EXPORT does not work with REST API</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33175: \"Failed to get monitoring data with invalid Token Exception\" error with 4.0.5.1B</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33286: Oracle Reader &gt; Databricks Writer for Azure Databricks \"cannot resolve '_c0' given input columns\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33336: Oracle Reader &gt; Databricks Writer for Azure Databricks has duplicate records in target</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33409: OJet does not halt when required archived logs are missing</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33492: CPU Utilization 100% with 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33534: SMTP configuration issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33559: CPU Utilization 100% with 4.1.0.1C</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33560: PostgreSQL Reader issue with wal2json 2.4</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-33656: Azure Synapse Writer table \"does not exist\" error with wildcard when table name includes underscore</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34073: Databricks Writer \"Integration failed for table\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-34314: SalesForce Reader doesn't capture data when using certain valid Start Time values</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-27520: Snowflake Writer can not be used when Striim is running in Microsoft Windows.</p>\n</li>\n</ul>\n<h2 id=\"UUID-b2bb3f38-054b-c3f9-4c2f-343e85f3f7c6_bridgehead-idm13331040467392\" class=\"bridgehead\">Customer-reported issues fixed in release 4.1.0.3 (Nov 17th, 2022):</h2>\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-32384: Oracle Reader &gt; BigQuery incorrect BLOB values</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32627: Upgrade to Apache Commons Text 1.10.0 to fix security issue</p>\n</li>\n</ul>\n<h2 id=\"UUID-b2bb3f38-054b-c3f9-4c2f-343e85f3f7c6_bridgehead-idm13331040467392\" 
class=\"bridgehead\">Customer-reported issues fixed in release 4.1.0.2 (Oct 27th, 2022):</h2>\n<div class=\"itemizedlist\">\n<ul class=\"itemizedlist\">\n<li class=\"listitem\">\n<p>DEV-29906: Azure SQL Server - MS SQL Reader &gt; Database Writer: \"Cannot find either column \"SYS\" or the user-defined function or aggregate SYS.fn_cdc_get_min_lsn\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30605: log4j upgrade issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30628: MS Jet &gt; Azure Synapse Writer: \"Updating a distribution key column in a MERGE statement is not supported\" error</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-30921: Oracle Reader failed: Failed to process DML Record Encountered error : nullwhile processing query</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31030: OJet to_date function: java.time.LocalDateTime cannot be converted to DateTime</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31106: Database Writer with SQL Server: Failed to publish Notification for \"Connection successful\" for UUID null due to the following exception. null</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31299: OJet: KEYCOLUMNS requires supplemental logging on all the columns.</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31306: MySQL Reader: halts with \"Problem processing event\" error.</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31398: OJet: Propertyvariable doesn't work properly</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31496: OJet: App connect to downstream database failed with ORA-00904: \"VALID\": invalid identifier</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31541: OJet: Downstream does not work if database name contain “.” with domain name</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31583: GGTrailReader:read trail file in wrong order when trail file pattern contains number</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31588: Oracle Reader: StartSCN is not cleared in OracleTxnCacheLayer for rollbacks causing memory issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31641: OJet downstream capture: missing multi-version data dictionary</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31651: Oracle Reader: will not use unique index as a key</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31711: OJet: crashed with \"Failed to 'reposition by SCN\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-31878: OJet: failed with error \"Message: Could not find column ... in cached metadata of table\"</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32009: Oracle Reader: crashed With NPE due to unsupported operation</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32031: GGTrailReader: double-byte characters are broken</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32038: Oracle Reader: StartSCN is not cleared in OracleTxnCacheLayer for rollbacks causing memory issue</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32052: OJet: does not return binary value as HexString</p>\n</li>\n<li class=\"listitem\">\n<p>DEV-32082: DBReader (Oracle) -&gt; AzureSQLDWHWriter: failed with \"type mismatch or invalid character for the specified codepage\"</p>\n</li>\n</ul>\n</div>\n<h2>Customer-reported issues fixed in release 4.1.0.1 (July 13th, 2022):</h2>\n<ul>\n<li>DEV-24163: Kafka Reader \"Offset ... 
is out of range\" errors: if you encounter this error,<span> </span><a href=\"https://www.striim.com/contact/\">contact Striim support</a><span> </span>for assistance in configuring the<span> </span><code>IgnorableExceptions</code><span> </span>property (new in 4.1.0.1) to avoid it</li>\n<li>DEV-25700: Database Reader / Teradata: \"Failure in connecting to Database\" error</li>\n<li>DEV-27444: Elasticsearch uses too much disk space and application crashes</li>\n<li>DEV-27520: Snowflake Writer on Windows: JDBC \"Unexpected local file path from GS\"</li>\n<li>DEV-28713: Database Reader / Oracle &gt; BigQuery Writer: \"error creating metadata objects from json string\"</li>\n<li>DEV-29095: Flow Designer: horizontal scroll bar missing after \"Arrange all components</li>\n<li>DEV-29606: MongoDB Reader with MongoPartialRecordPolicy returns date as timestamp</li>\n<li>DEV-29615: GG Trail Reader timezone issue</li>\n<li>DEV-29853: recovery issue when using ROUTER component</li>\n<li>DEV-29870: some .jar files no longer loading after upgrade to 4.0.6</li>\n<li>DEV-29883: BigQuery Writer: \"Exceeded rate limits: too many api requests per user per method for this user_method\"</li>\n<li>DEV-29891: PostgreSQL lReader: PK_UPDATE included in metadata when there is no primary key update</li>\n<li>DEV-29930: NullPointerException at getAppMetaObjectBelongsTo when deploying app</li>\n<li>DEV-29932: MongoDB Reader with MongoPartialRecordPolicy: no MongoClient instance after restart</li>\n<li>DEV-29933: MongoPartialRecordPolicy fails due to not retrying</li>\n<li>DEV-29989: UI navigation is slow</li>\n<li>DEV-30035: MongoDB Reader with MongoPartialRecordPolicy returns number as object</li>\n<li>DEV-30037: special character support: properly escaped column name containing space not read correctly</li>\n<li>DEV-30070: Kafka Reader 2.1 cannot specify Start Offset and Start Timestamp for individual partitions. 
\n<h2>Customer-reported issues fixed in release 4.1.0/4.0.6 (May 19th, 2022):</h2>\n<ul>\n<li>DEV-14600: GGTrailParser: \"Unsupported record, skipping it\" error</li>\n<li>DEV-15431: OracleReader &gt; KuduWriter: race condition error</li>\n<li>DEV-16218: GGTrailParser issue</li>\n<li>DEV-17194: MySQL Reader issue when database timezone is different from Striim timezone</li>\n<li>DEV-19271: Snowflake Writer in Windows: invalid UTF8 detected in string</li>\n<li>DEV-21122: Database Reader / Oracle issue reading views</li>\n<li>DEV-21503: BigQuery Writer: JobId sometimes missing from server log</li>\n<li>DEV-21785: EmailAdapter issue with getting subject line from field values</li>\n<li>DEV-22224: MS SQL Reader / Azure SQL DB: issue accessing replica that is not in the PRIMARY or SECONDARY role</li>\n<li>DEV-22253: DSV Formatter useQuotes issue</li>\n<li>DEV-22737: Flow Designer issue with CQ that includes user-defined function</li>\n<li>DEV-22830: JMS Reader issue after dropping and recreating</li>\n<li>DEV-23193: Avro Formatter and Parquet Formatter: special character $ support issue</li>\n<li>DEV-23477: Mongo DB Reader codec error</li>\n<li>DEV-23645: Kafka Reader logs unnecessary error messages</li>\n<li>DEV-23718: GG File Reader: Kryo serialisation issue</li>\n<li>DEV-23772: DatabaseWriter / MySQL Connection Retry Policy issue</li>\n<li>DEV-23815: GG File Reader unexpected quiesce</li>\n<li>DEV-24204: Dashboard query issue after deleting WActionStore</li>\n<li>DEV-24862: Oracle Reader with 19c MissingFileException error</li>\n<li>DEV-25315: MySQL Reader ArrayIndexOutOfBoundsExceptionCause error</li>\n<li>DEV-25414: Flow Designer error after editing CQ</li>\n<li>DEV-25799: GG Trail Reader issue with extracting ROWID from metadata</li>\n<li>DEV-26026: email alerts about application crash / quiesce are delayed</li>\n<li>DEV-26498: errors running Striim as a service on Windows</li>\n<li>DEV-26567: GG Trail Reader missed one delete record</li>\n<li>DEV-26750: Striim installed using RPM: StriimDiagUtility issue</li>\n<li>DEV-26999: Striim installed using RPM: StriimDiagUtility issue</li>\n<li>DEV-27012: Exception thrown by open processor not handled as expected</li>\n<li>DEV-27074: SQL Server &gt; Filewriter memory issue</li>\n<li>DEV-27099: console issue</li>\n<li>DEV-27106: putUserData function should not require so much memory</li>\n<li>DEV-27133: custom Java function not available after using LOAD command</li>\n<li>DEV-27144: MySQL Reader schema evolution issue</li>\n<li>DEV-27160: GG Trail Reader schema evolution includes tables that are not specified in Tables property</li>\n<li>DEV-27169: GG Trail Reader does not quiesce on DDL when CDDLAction is set to quiesce</li>\n<li>DEV-27171: GG Trail 
Reader schema evolution issue</li>\n<li>DEV-27241: Snowflake Writer: issues due to three-part table names not being specified</li>\n<li>DEV-27291: false low disk space message</li>\n<li>DEV-27388: Oracle Reader &gt; MSJet: memory issue on 30-million-row insert</li>\n<li>DEV-27389: Oracle Reader connection retry isue</li>\n<li>DEV-27420: MSSQLReader error due to invalid startposition format</li>\n<li>DEV-27518: Oracle Reader: OperationName UNSUPPORTED errors</li>\n<li>DEV-27657: Snowflake Writer: issue with NUMBER(38,0) when number has 38 digits</li>\n<li>DEV-27730: FileReader: file lineage issue when FileReader is running on Forwarding Agent</li>\n<li>DEV-27885: GG Trail Parser / Reader: issue with very large transaction</li>\n<li>DEV-28068: CPU usage higher after upgrading from 3.10.3 to 4.0.4</li>\n<li>DEV-28240: MongoDBReader: error due to unexpected error in oplog</li>\n<li>DEV-28247: MySQL Reader: incorrect time coversion</li>\n<li>DEV-28264: Oracle Reader: recovery checkpoint issue after upgradingfrom 3.10.3.6 to 3.10.3.8A</li>\n<li>DEV-28265: Azure Synapse Writer: must specify three-part table name when using wildcard with ColumnMap</li>\n<li>DEV-28272: LEE report issue</li>\n<li>DEV-28310: Azure Synapse Writer: issue with CLOB values when size &gt; 8000</li>\n<li>DEV-28333: Oracle Reader: recovery checkpoint issue</li>\n<li>DEV-28379: OracleReader: issue with CLOB value update</li>\n<li>DEV-28431: Snowflake Writer: identity column not incrementing</li>\n<li>DEV-28452: MongoDB Reader: issue due to timeout by MongoDB</li>\n<li>DEV-28460: MSSQL Reader: issue due to missing SQL Server role, need better error message</li>\n<li>DEV-28467: SpannerBatchReader failed with DEADLINE exception</li>\n<li>DEV-28493: JMS Reader: typo in doc</li>\n<li>DEV-28509: MongoDB Reader: recovery issue</li>\n<li>DEV-28516: high memory usage issue</li>\n<li>DEV-28527: MySQL Reader: issue with special characters in table names</li>\n<li>DEV-28535: MySQL Reader: SAVEPOINT issue</li>\n<li>DEV-28541: Oracle Reader: database sessions created by Striim are not cleared as expected</li>\n<li>DEV-28593: Spanner Writer: \"transaction contains too many mutations\" error</li>\n<li>DEV-28708: GG Trail Reader missing trail sequence number error</li>\n<li>DEV-28712: GG Trail Parser: issue when operation spans multiple records</li>\n<li>DEV-28766: File Reader: header row read again after recovery</li>\n<li>DEV-28777: Database Writer: vendor exception code for unique index violation appears in server log but not in UI</li>\n<li>DEV-28778: Salesforce Reader reads objects older than StartTimestamp</li>\n<li>DEV-28802: error in documentation for in-place upgrade</li>\n<li>DEV-28921: GG Trail Reader: issue when operation spans multiple records</li>\n<li>DEV-29174: Kafka Reader with JSON Parser: value with 20 digits exceeds upper bound of long data type</li>\n<li>DEV-29291: false \"Alert application is not running\" error</li>\n<li>DEV-29366: MariaDB Reader: misleading error message when binlog configuration is wrong</li>\n<li>DEV-29476: Database Writer / SQL Server: issue with checkpoint table</li>\n<li>DEV-29671: GG Trail Reader: ALTER TABLE not being handled by CDDL Process</li>\n<li>DEV-29713: Spanner Batch Reader: issue with Augment Query Clause</li>\n</ul>\n<h2>Customer-reported issues fixed in release 4.0.5 (Jan 27th, 2022):</h2>\n<ul>\n<li>DEV-17194: MySQL Reader crashes if Start Timstamp is specified and database timezone is different from Striim timezone</li>\n<li>DEV-24147: OracleReader: 'No log files found' when some 
RAC nodes are down</li>\n<li>DEV-25412: GG Trail Parser &gt; Kafka: crash after upgrade from 3.10.3.3 to 3.10.3.5</li>\n<li>DEV-26139: Apps fail with org.zeromq.ZMQException: Errno 156384819 : errno 156384819</li>\n<li>DEV-26910: MongoDBWriter disappears from Flow Designer after making changes</li>\n<li>DEV-27051: Oracle &gt; SQL Server: crash with \"Arithmetic overflow error converting nvarchar to data type numeric\" error</li>\n<li>DEV-27316: metadata repository error after upgrading from 3.10.3.6 to 4.0.4.1</li>\n<li>DEV-27386: out of memory crash</li>\n<li>DEV-27477: Oracle Reader: performance issue when inserting many LOBs</li>\n<li>DEV-27712: slow initial load with Mongo CosmosDB Writer</li>\n<li>DEV-27729: PostgreSQL Reader issues when operationType=null</li>\n<li>DEV-27904: Oracle Reader: recovery checkpoint issue</li>\n<li>DEV-27734: Initial Load Application (From Cosmos to Mongo) Crashed Due To \"Out Of Memory\"</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 4.0.4.3 (Dec 8th, 2021):</h2>\n<ul>\n<li>DEV-26220: BigQuery Writer connection retry issue</li>\n<li>DEV-27102: custom Java function load issue</li>\n<li>DEV-27153: custom Java function exception handling issue</li>\n<li>DEV-27272: Azure Synapse Writer: connection failure<br>- see notes for Connection Retry Policy in<span> </span><em>Striim 4.0.4 Documentation &gt; Adapters Guide &gt; Writers &gt; Azure Synapse Writer</em>\n</li>\n<li>DEV-27342: MS SQL Reader &gt; Azure Synapse Reader \"type mismatch or invalid character for the specified codepage\"</li>\n<li>DEV-27522: application crashes after upgrade to 4.0</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 4.0.4.2 (Nov 29th, 2021):</h2>\n<ul>\n<li>DEV-25594: MS SQL Reader &gt; Database Writer Sybase wrong mapping for money types</li>\n<li>DEV-27333: MS SQL Reader &gt; Database Writer Sybase crash when updating identity column</li>\n<li>DEV-27347: MS SQL Reader &gt; Database Writer Sybase crash on insert to table with an identity column that is not the primary key</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 4.0.4.1 (Nov 4th, 2021):</h2>\n<ul>\n<li>DEV-26406: MongoDB Writer crashes when target is Cosmos DB using the Azure Cosmos DB API for MongoDB. 
This is supported by the new MongoDB Cosmos DB Writer.</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 4.0.3 (Oct 13th, 2021):</h2>\n<ul>\n<li>DEV-21629: MS SQL Reader fails to start when tablename contains space and '$', as well as TransactionSupport=true</li>\n<li>DEV-21774: Oracle &gt; Snowflake initial load: RAW value causes crash</li>\n<li>DEV-22010: Cannot drop admin.LDAP1 property set</li>\n<li>DEV-22534: MS SQL Reader: should exclude system table and views when wild card is used for TABLES</li>\n<li>DEV-22643: MySQL Reader &gt; Cloud SQL for MySQL: mapping failed for Bit data type</li>\n<li>DEV-22964: MSSQL Reader &gt; BigQuery Writer: mapping failed for Bit to Boolean</li>\n<li>DEV-23375: GG Trail Parser: if there is no new data to process, app hangs at RECOVERING_SOURCES state</li>\n<li>DEV-24146: Cannot drop component</li>\n<li>DEV-24462: App crashes with low disk space error when 34% free disk space</li>\n<li>DEV-24489: GG Trail Reader &amp; Avro formatter: data type mismatch</li>\n<li>DEV-24497: cash not loaded</li>\n<li>DEV-24563: MON: \"Could not initialize class com.webaction.runtime.monitor.MonitorModel\" error</li>\n<li>DEV-24652: Database Reader with MariaDB: false \"You have an error in your SQL syntax\" error</li>\n<li>DEV-24733: Application takes too long to deploy/start/stop/undeploy/quiesce</li>\n<li>DEV-24999: File Reader &gt; Kudu Writer: fails with \"{[Ljava.lang.Object;} to TargetType {STRING(12)} is not supported for target\" error</li>\n<li>DEV-25033: \"Couldnot monitor table level metric since TableName donot exist in events metadata\" in server log</li>\n<li>DEV-25238: MySQL Reader: fails with false \"Unable to connect to the binlog file\" error</li>\n<li>DEV-25240: SQL Server &gt; Azure Blob Writer: crash with \"Unknown datum type java.lang.Short\" error</li>\n<li>DEV-25289: Kafka: cleartext SQSL password in logs</li>\n<li>DEV-25336: Azure Blob Writer with Parquet Formatter: fails with \"Not in union [\"null\",\"long\"]\" error</li>\n<li>DEV-25380: Oracle Reader &gt; PostgreSQL: fails with \":ERROR: \"syntax error at end of input\" error</li>\n<li>DEV-25451: KafkaReader: connection fails when using propertyvariable</li>\n<li>DEV-25452: REST API POST /tungsten: 'describe user' is not allowed for non-admin user</li>\n<li>DEV-25490: Oracle Reader: supplemental logging was not enabled, not caught by adapter</li>\n<li>DEV-25518: GG Trail Reader &gt; Spanner: data missing in target after agent failover and switch back to original agent</li>\n<li>DEV-25526: MSS SQL Reader: does not preserve case of column names in type created</li>\n<li>DEV-25541: GG Trail Reader: View File LineAge missing from UI</li>\n<li>DEV-25694: Oracle Reader &gt; Azure Synapse Writer: failed with false \"Table not found\" errors</li>\n<li>DEV-25715: GG Trail Reader &gt; BigQuery Writer: application in Stopping state for over 24 hours</li>\n<li>DEV-25775: End-to End Lag: incorrect unit label in web UI</li>\n<li>DEV-25796: Database Reader &gt; Azure Blob Writer: excessive memory use, server shut down</li>\n<li>DEV-25816: Oracle Reader: gap in file sequence</li>\n<li>DEV-25824: End-to End Lag: not handling discarded events correctly</li>\n<li>DEV-25868: MySQL Reader: binlog connection issue</li>\n<li>DEV-25869: MariaDB &gt; MySQL: \"ClassCastException: java.lang.Integer cannot be cast to java.lang.Short\" error</li>\n<li>DEV-25956: Oracle Reader: corrupt and trimmed CLOB data</li>\n<li>DEV-25957: Oracle Reader: crash with \"Unable to enrich the following partial WAEvent\" 
error</li>\n<li>DEV-26017: LDAP configuration error cannot be resolved</li>\n<li>DEV-26151: MySQL Reader: starts from BinlogFileName although checkpoint position is higher</li>\n<li>DEV-26332: MS SQL Reader: bad numeric conversion</li>\n<li>DEV-26569: Oracle Reader: fails with \"File seems to be missing for starting SCN\" error</li>\n<li>DEV-26572: Database Reader MySQL &gt; Database Writer MySQL: app in Starting status for up to 24 hours</li>\n<li>DEV-26582: MS SQL Reader &gt; Azure Synapse: data type conversion error</li>\n<li>DEV-26779: Oracle Reader: crash with \"Unable to enrich the following partial WAEvent\" error</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 3.10.3.8A (Dec 29th, 2021):</h2>\n<ul>\n<li>DEV-27768: CDB: App crash due to maximum open cursors exceeded in ALM</li>\n<li>DEV-27693: log4j security vulnerability: CVE-2021-45046</li>\n<li>DEV-27477: When LOB data is inserted, performance is low</li>\n<li>DEV-27385: OracleReader Restart position does not move</li>\n<li>DEV-27330: OracleReader recovers from wrong commitscn</li>\n<li>DEV-27323: Oracle CDC Logs All Columns For Delete with Compression=True in CDB Env</li>\n<li>DEV-21823: init operations happening twice in Database Reader (backport from DEV-20202)</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 3.10.3.8 (Dec 14th, 2021):</h2>\n<ul>\n<li>DEV-26727: MS SQL Reader \"Error converting data type nvarchar to numeric\"</li>\n<li>DEV-27163: update Salesforce API version</li>\n<li>DEV-27197: update BigQuery client</li>\n<li>DEV-27272: Azure Synapse Writer: connection failure<br>- see notes for Connection Retry Policy in<span> </span><em>Striim 4.0.4 Documentation &gt; Adapters Guide &gt; Writers &gt; Azure Synapse Writer</em>\n</li>\n<li>DEV-27342: MS SQL Reader &gt; Azure Synapse Reader \"type mismatch or invalid character for the specified codepage\"</li>\n<li>DEV-27648: update log4j for security fix</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 3.10.3.7 (Nov 24th, 2021):</h2>\n<ul>\n<li>DEV-23375: GG Trail Parser: if there is no new data to process, app hangs at RECOVERING_SOURCES state</li>\n<li>DEV-25518: FileReader + GG Trail Parser &gt; Spanner Writer: data loss after Forwarding Agent failover and switch back</li>\n<li>DEV-25594: MS SQL Reader &gt; Database Writer Sybase wrong mapping for money types</li>\n<li>DEV-25694: Oracle Reader &gt; Azure Synapse Writer: failed with false \"Table not found\" error</li>\n<li>DEV-25715: GG Trail Reader &gt; BigQuery Writer: application in Stopping state for over 24 hours</li>\n<li>DEV-25796: Database Reader &gt; Azure Blob Writer: excessive memory use, server shut down</li>\n<li>DEV-27333: MS SQL Reader &gt; Database Writer Sybase crash when updating identity column</li>\n<li>DEV-27347: MS SQL Reader &gt; Database Writer Sybase crash on insert to table with an identity column that is not the primary key</li>\n</ul>\n<h2>Bug fixes part of release fixed in release 3.10.3.6 (July 6th, 2021):</h2>\n<ul>\n<li>DEV-24950: TQL import fails on Linux</li>\n<li>DEV-24981: GG Trail Reader: dropping source fails with java.lang.NullPointerException</li>\n<li>DEV-25332: GG Trail Reader &gt; BigQuery Writer: app hangs on stop</li>\n<li>DEV-25425: Flush Target's in-memory checkpoint to MDR on stop</li>\n<li>DEV-25292 - TCP Reader: memory leak issue</li>\n<li>DEV-22948 - external cache hangs/crashes</li>\n<li>DEV-25007 - Unable to start the Agent on Windows, version 3.10.3.4A</li>\n<li>DEV-25070 - Oracle Reader: 45K CLOB data split into two 
records</li>\n<li>DEV-25455 - Oracle Reader &gt; Kafka Writer: Oracle connection not dropped when application is stopped</li>\n<li>DEV-25198 - PostgreSQL Reader \"source table does not exist\" error does not specify table name</li>\n<li>DEV-21618 - Kafka Reader does not pass message key and header</li>\n<li>DEV-23747 - Kafka Reader: metadata does not contain message timestamp</li>\n<li>DEV-25111 - Augmenting SSL config parameters in connection URL for postgres MDR</li>\n<li>DEV-25138 - external cache does not reconnect or crash after database connection issues</li>\n<li>DEV-25168 - crash with error \"Problem obtain cache data\"</li>\n<li>DEV-25324 - error starting server when repository database is hosted on PostgreSQL and application contains a Router component</li>\n<li>DEV-24226 - Database Reader with Oracle &gt; KafkaWriter: in async mode with Quiesce on IL Completion enabled, hangs when quiescing</li>\n</ul>\n<h2>Bug fixes part of release 3.10.3.5 (May 24th, 2021):</h2>\n<ul>\n<li>DEV-21544: HP NonStop Enscribe Reader: logs flooded while stopping app post CPU failure</li>\n<li>DEV-22452: Oracle Reader &gt; PostgreSQL Databasea Writer: issue with ChangeOperationToInsert</li>\n<li>DEV-22948: external cache with incorrect URL does not crash promptly</li>\n<li>DEV-23316: error message is too large for exception store</li>\n<li>DEV-24144: dashboard is very slow</li>\n<li>DEV-24145: heap issue</li>\n<li>DEV-24433: Oracle Reader &gt; Kudu Writer: frequent crashes</li>\n<li>DEV-24607: issue when joining Oracle Reader and Database Reader output</li>\n<li>DEV-24657: Forwarding Agent: deploy fails with \"FLOW already deployed on this server\"</li>\n<li>DEV-24680: GG Trail Reader sequence number issue</li>\n<li>DEV-24701: support alert when app is quiesced</li>\n<li>DEV-24713: MySQL Reader hangs on Starting Sources when Start Timestamp is set</li>\n<li>DEV-24719: GG Trail Reader recovery checkpoint issues</li>\n<li>DEV-24776: heap issue</li>\n<li>DEV-24796: Spanner Writer: label sessions</li>\n<li>DEV-24802: GG Trail Reader missing some functionality of GG Trail Parser</li>\n<li>DEV-24815: monitoring UI disabled when ClusterQuorumSize is set</li>\n<li>DEV-24817: Snowflake Writer: crash with SnowflakeSQLException error</li>\n<li>DEV-24864: MS SQL Reader &gt; Snowflake Writer: no data flowing, no errors in log</li>\n<li>DEV-24899: GG Trail Reader error</li>\n<li>DEV-24923: Oracle Reader &gt; BigQuery Writer: \"Syntax error: Illegal escape sequence\"</li>\n<li>DEV-24928: frequent timeouts when using external caches</li>\n<li>DEV-24933: GG Trail Reader &gt; Snowflake Writer: java.lang.IndexOutOfBoundsException</li>\n<li>DEV-24934: BigQuery Writer: add batch and table details to error messages</li>\n<li>DEV-24938: GG Trail Reader &gt; BigQuery Writer: error when large operations are split between trail files</li>\n<li>DEV-24958: Salesforce Reader: in 3.10.3.4, will not accept Start Timestamp value</li>\n<li>DEV-25008: Oracle Reader: in 11g, failure with \"column name not found\" when CLOB contains single quote</li>\n<li>DEV-25040: 3.10.3.3: undeploy failure</li>\n<li>DEV-25121: GG Trail Reader &gt; BigQuery Writer: crash with \"WaitingNodes is not 0\" error</li>\n<li>DEV-25143: heap issue</li>\n<li>DEV-25144: BigQuery Writer: primary key update fails when data contains single quote</li>\n</ul>\n<h2>Bug fixes part of release 3.10.3.4 (Mar 31st, 2021):</h2>\n<ul>\n<li>DEV-21913: MSSQL Reader to Database Writer SQL Server: java.lang.Long cannot be cast to org.joda.time.DateTime</li>\n<li>DEV-22275: 
OracleReader: Bidirectional setting does not work with CDB</li>\n<li>DEV-22548: Schema conversion utility issue with string_agg</li>\n<li>DEV-23143: BigQuery Writer with timestamp with embedded T and (+0000) timezone specifier</li>\n<li>DEV-23530: Salesforce Reader: for delete + insert + update, only delete and update are captured</li>\n<li>DEV-23532: Salesforce Reader in incremental mode missing some events</li>\n<li>DEV-23817: MSSQL Reader fails to crash after connection failure</li>\n<li>DEV-24164: MaskGeneric function does not work if the string contains multibyte character</li>\n<li>DEV-24268: MySQL Reader with StartTimestamp specified selects the wrong binlog file</li>\n<li>DEV-24315: Persisted stream doesn't work when Kafka uses SASL (Kerberos) authentication with SSL encryption</li>\n<li>DEV-24380: Salesforce Reader crashes when typeUUID is null</li>\n<li>DEV-24391: Monitoring app crashes when metadata repository is hosted on PostgreSQL</li>\n<li>DEV-24408: Salesforce Reader \"Could not send data to the platform: null\" error\"</li>\n<li>DEV-24493: BigQuery Writer \"unexpected time partitioning MONTH\" error; to fix this issue, google-cloud-bigquery client for Java API version has been upgraded to 1.127.0.</li>\n</ul>\n<h2>Bug fixes part of release 3.10.3.3 (Feb 22nd, 2021):</h2>\n<ul>\n<li>DEV-22650: Snowflake Writer mistakenly deployed on agent</li>\n<li>DEV-22904: MySQL Reader Start Timestamp issue</li>\n<li>DEV-23537: Web UI is slow</li>\n<li>DEV-23582: HP NonStop Enscribe Reader issue with two-byte characters</li>\n<li>DEV-23603: Oracle JDBC SSL connection issue</li>\n<li>DEV-23632: HP NonStop readers do not release listening port</li>\n<li>DEV-23718: GG Trail Parser datetime issue</li>\n<li>DEV-23733: STATUS command output is incomplete</li>\n<li>DEV-23759: Application moved from default group still appears in default group in UI</li>\n<li>DEV-23792: Kafka Writer fails when source is deployed on agent or another server and encryption is enabled</li>\n<li>DEV-23881: Oracle Reader checkpointing issue</li>\n<li>DEV-23882: Oracle Reader issue with long-running queries</li>\n<li>DEV-23897: Salesforce &gt; BigQuery issue with deleting custom Salesforce objects</li>\n<li>DEV-24044: Oracle Reader hangs instead of stopping</li>\n<li>DEV-24111: Updated Kudu Java client to version 1.13.0</li>\n<li>DEV-24117: MySQL Reader hangs</li>\n<li>DEV-24118: MySQL Reader hangs</li>\n<li>DEV-24126: System health REST API return is incomplete</li>\n<li>DEV-24213: Updated google-cloud-pubsub client API to version 1.108.2</li>\n</ul>\n<h2>Bug fixes part of release 3.10.3.2 (Dec 18th, 2020):</h2>\n<ul>\n<li>DEV-16873: Database Reader (MySQL) issue reading integer columns</li>\n<li>DEV-22651: Issue with hyphen in SQL Server database name</li>\n<li>DEV-23119: GG Trail Reader crash on DDL event</li>\n<li>DEV-23221: Stale output in<span> </span><code>MON &lt;source&gt;</code><span> </span>output</li>\n<li>DEV-23271: GG Trail Parser: DDL issue</li>\n<li>DEV-23286: GG Trail Parser: DDL issue</li>\n<li>DEV-23297: REST API token is invalid after associated user logs out from Striim UI</li>\n<li>DEV-23501: Oracle Reader file sequence gap</li>\n<li>DEV-23513: GG Trail Parser &gt; Database Writer error when primary key has timestamp data type</li>\n<li>DEV-23515: Oracle Reader crash after upgrade to 3.10.2.1</li>\n<li>DEV-23520: Kafka Reader stops capturing data</li>\n<li>DEV-23561: PostgreSQL Reader &gt; Database Wrier (PostgreSQL) ClassCastException</li>\n<li>DEV-23596: Database Reader (Oracle) ORA-01000 error 
during initial load</li>\n</ul>\n<h2>Bug fixes part of release 3.10.3.1 (Nov 24th, 2020):</h2>\n<ul>\n<li>DEV-21018: Database Reader row count warnings in log</li>\n<li>DEV-21818: MongoDB Reader JSONParseException error</li>\n<li>DEV-22636: Oracle Reader &gt; Database Writer PostgreSQL handling of single quote in data values</li>\n<li>DEV-22639: can't start Striim as a service in Windows</li>\n<li>DEV-22657: passwordEncryptor.bat null pointer exception</li>\n<li>DEV-22769: Oracle Reader can't find log file</li>\n<li>DEV-22890: Oracle Reader issue with Active Data Guard</li>\n<li>DEV-22941: app crashed with \"not enough servers\" when there were enough servers</li>\n<li>DEV-22945: GGTrail Parser issue</li>\n<li>DEV-22957: errors quiescing app with Kafka-persisted stream</li>\n<li>DEV-22971: app crashed with 90% memory usage though usage was not that high</li>\n<li>DEV-22975: Kafka Writer fails with network exception</li>\n<li>DEV-23089: Oracle Reader file sequence gap message</li>\n</ul>\n<h2>Bug fixes part of release 3.10.3 (Sep 18th, 2020):</h2>\n<ul>\n<li>DEV-19680: Google Cloud Platofrm: can not enter Product key for BYOL solution</li>\n<li>DEV-20672: HP NonStop readers: list of active transactions grows without limit when CDC process reads from a single auxiliary audit trail</li>\n<li>DEV-21398: Dashboard GRANT permission issue</li>\n<li>DEV-21471: Configure MEM_MAX in startUp.properties in Windows</li>\n<li>DEV-21818: MongoDB Reader error com.fasterxml.jackson.core.JsonParseException: Non-standard token 'NaN'</li>\n<li>DEV-22107: Property variable issue</li>\n<li>DEV-22508: Alerts not working</li>\n<li>DEV-22636: Single quote in Oracle CLOB becomes converted to double quotes PostgreSQL text</li>\n<li>DEV-22639: Unable to start Striim as a service in Windows</li>\n<li>DEV-22650: Snowflake Writer deployed on default deployment group deploys on agent</li>\n<li>DEV-22657: passwordencryptor.bat throws null pointer exception</li>\n<li>DEV-22769: Oracle Reader fails with missing log file error when the file exists</li>\n<li>DEV-22886: Importing TQL in web UI loses multi-byte character</li>\n<li>DEV-22935: DROP NAMESPACE ... 
CASCADE does not drop property variables</li>\n<li>DEV-22937: GG Trail Parser on agent: checkpoint is wrong after restart</li>\n<li>DEV-22938: GG Trail Parser on agent: file lineage is not avaialble</li>\n<li>DEV-22939: Can not save Kafka Writer created in UI with default batch size and mode</li>\n<li>DEV-22945: File Reader + GG Trail Parser not quiescing properly</li>\n<li>DEV-22950: Kafka Writer sometimes sticks on retry</li>\n<li>DEV-22952: MonitoringSourceStream error</li>\n<li>DEV-22957: Quiesce fails with error \"Could not get a lock on Kafka topic\"</li>\n<li>DEV-22971: When LEE is enabled, app crashes with 90% memory usage</li>\n<li>DEV-23066: Exported TQL includes namespace declarations</li>\n<li>DEV-23102: File Reader + GG Trail Parser &gt; File Writer sometimes missing data</li>\n<li>DEV-23156: \"Free disk space\" reported incorrectly as GB when it should be percent</li>\n<li>DEV-23172: GCS Writer: unexpected comma at the beginning of file</li>\n<li>DEV-23187: Kafka Writer hangs on stop with ActionNotFoundWarning</li>\n<li>DEV-23243: Data from Kafka stream is not received by target Database Writer</li>\n<li>DEV-23263: Spanner Writer: NullPointerException due to race condition</li>\n</ul>\n<h2>Bug fixes part of release 3.10.2.1 (Sep 29th, 2020):</h2>\n<ul>\n<li>DEV-21018: Database Reader row count warnings in log</li>\n<li>DEV-21818: MongoDB Reader JSONParseException error</li>\n<li>DEV-22636: Oracle Reader &gt; Database Writer PostgreSQL handling of single quote in data values</li>\n<li>DEV-22639: can't start Striim as a service in Windows</li>\n<li>DEV-22657: passwordEncryptor.bat null pointer exception</li>\n<li>DEV-22769: Oracle Reader can't find log file</li>\n<li>DEV-22890: Oracle Reader issue with Active Data Guard</li>\n<li>DEV-22941: app crashed with \"not enough servers\" when there were enough servers</li>\n<li>DEV-22945: GGTrail Parser issue</li>\n<li>DEV-22957: errors quiescing app with Kafka-persisted stream</li>\n<li>DEV-22971: app crashed with 90% memory usage though usage was not that high</li>\n<li>DEV-22975: Kafka Writer fails with network exception</li>\n<li>DEV-23089: Oracle Reader file sequence gap message</li>\n</ul>\n<h2>Bug fixes part of release 3.10.2 (Sep 11th, 2020):</h2>\n<ul>\n<li>DEV-22591: OPCUA Reader fails to validate client connection</li>\n</ul>\n<h2>Bug fixes part of release 3.10.1.1 (Sep 4th, 2020):</h2>\n<ul>\n<li>DEV-22048: Import of exported application fails</li>\n<li>DEV-22212: Unsupported adapters appear in Flow Designer</li>\n<li>DEV-22228: GGTrailParser crash</li>\n<li>DEV-22236: GGTrailParser does not send transaction boundaries properly</li>\n<li>DEV-22246: GGTrailParser \"Preview on run\" errors</li>\n<li>DEV-22249: OracleReader error with 19c</li>\n<li>DEV-22278: BigQueryWriter error with single quote in string</li>\n<li>DEV-22284: Azure Synaps Writer failed when writing more than 10000 Characters to VARCHAR(MAX)</li>\n<li>DEV-22295: High CPU and load average after upgrading to 3.10.1</li>\n<li>DEV-22306: OracleReader hangs</li>\n<li>DEV-22319: External cache issue</li>\n<li>DEV-22331: OracleReader issues with RAC source</li>\n<li>DEV-22334: BigQueryWriter error with LF or CR in string</li>\n<li>DEV-22353: OracleReader does not crash when log is missing</li>\n<li>DEV-22362: Can't change log level</li>\n<li>DEV-22375: OracleReader: some deletes missing in BigQueryWriter target</li>\n<li>DEV-22384: GGTrailParser fails on DDL operation</li>\n<li>DEV-22407: BigQueryWriter: striim.server.log flooded with multiple error 
messages</li>\n<li>DEV-22410: OracleReader does not crash when log is missing</li>\n<li>DEV-22425: Open processor will not load</li>\n<li>DEV-22545: BigQueryWriter: Striim server crashes</li>\n<li>DEV-22626: BigQueryWriter: checkpoint issue</li>\n<li>DEV-22687: Striim server out of memory errors</li>\n</ul>\n<h2>Bug fixes part of release 3.10.1 (July 7th, 2020):</h2>\n<ul>\n<li>DEV-10820: Oracle Reader startup delay when many tables</li>\n<li>DEV-12023: can't get WActionStore data with REST API</li>\n<li>DEV-16867: file lineage not available for source deployed on Forwarding Agent</li>\n<li>DEV-20557: system health REST API missing some information</li>\n<li>DEV-20929: monitoring: CPU usage value does not match graph</li>\n<li>DEV-21049: MSSQL Reader does not stop and app crashes</li>\n<li>DEV-21064: undocumented: when OracleReader is running, PDB must not be closed</li>\n<li>DEV-21173: BiqQuery Reader crash</li>\n<li>DEV-21209: derbyTool.sh issue</li>\n<li>DEV-21268: memory leak</li>\n<li>DEV-21255: Oracle Reader stops capturing events</li>\n<li>DEV-21326: can't configure debug logs</li>\n<li>DEV-21391: Oracle Reader crashes with ORA-01289 error</li>\n<li>DEV-21413: MetadataRepository.patchPasswordSalts errors at startup</li>\n<li>DEV-21415: Forwarding Agent connection to Google Cloud Platform fails after upgrading to 3.9.8</li>\n<li>DEV-21447: Oracle Reader to BigQuery Writer fails in MERGE mode with</li>\n<li>DEV-21456: some apps are invalid after export and import using tools.sh</li>\n<li>DEV-21482: KafkaReader with XMLparserV2 stops reading data</li>\n<li>DEV-21553: Forwarding Agent does not restart after Windows host is restarted</li>\n<li>DEV-21622: PostgreSQL Reader to BigQuery Writer stops sending data</li>\n<li>DEV-21628: BigQuery Writer fails with false \"table not found\" error</li>\n<li>DEV-21766: same as DEV-21268</li>\n<li>DEV-21787: Kafka Writer issue with SASL_SSL</li>\n<li>DEV-21804: memory leak</li>\n<li>DEV-21877: Oracle Reader to BigQuery Writer crashes after upgrading to 3.9.8</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8.8 (Sep 23rd, 2020):</h2>\n<ul>\n<li>DEV-22875: HP NonStop Enscribe Reader issue with variable-length records</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8.7 (Sep 4th, 2020):</h2>\n<ul>\n<li>DEV-22425: open processor will not load</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8.6 (June 2nd, 2020):</h2>\n<ul>\n<li>DEV-21268: memory leak with unsupported recovery topology</li>\n<li>DEV-21532: application crashes with org.zeromq.ZMQException error</li>\n</ul>\n<p>To resolve ZMQException error, edit startUp.properties, increase the value of ZMQMaxSockets (new in 3.9.8.6) from its default of 1024, and restart Striim.</p>\n<h2>Bug fixes part of release 3.9.8.5 (May 12th, 2020):</h2>\n<ul>\n<li>DEV-20585: console can't connect when HTTPS port is not 9081</li>\n<li>DEV-21258: PostgreSQL Reader issue after restart</li>\n<li>DEV-21415: Forwarding Agent can't connect when HTTPS port is not 9081</li>\n<li>DEV-21429: OracleReader crashes when source database is in EC2 and connection uses SSL</li>\n<li>DEV-21450: PostgreSQL Reader hangs on connection retry</li>\n<li>DEV-21463: PostgreSQL Reader does not recognize date with BC</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8.4 (April 25th, 2020):</h2>\n<ul>\n<li>DEV-14638: KafkaWriter writes null message ID</li>\n<li>DEV-19904: fixed in 3.9.8.3 by DEV-20974/li&gt;</li>\n<li>DEV-20442: error when Database Writer reads from persisted stream</li>\n<li>DEV-20622: Kudu Writer data type issue</li>\n<li>DEV-20629: 
sksConfig.sh issue in Linux</li>\n<li>DEV-20664: error saving CQ in Web UI</li>\n<li>DEV-20739: Hazelcast issue</li>\n<li>DEV-20775: issue with property variables</li>\n<li>DEV-20874: console does not connect if HTTPS is disabled</li>\n<li>DEV-20883: crash when using password from property variable</li>\n<li>DEV-20909: cannot export TQL on Windows</li>\n<li>DEV-20910: issue with property variables after upgrade</li>\n<li>DEV-20923: Open Processor issue</li>\n<li>DEV-20968: SSL connection issue with Amazon RDS Oracle</li>\n<li>DEV-20970: checkpoint issue with BigQuery Writer</li>\n<li>DEV-21028: compression issue with Oracle Reader</li>\n<li>DEV-21050: installer identifies 32-bit Java as compatible</li>\n<li>DEV-21072: Kafka Writer crashes with \"Problem while fetching latest position\" error</li>\n<li>DEV-21077: data validation API RMIWebSocket.handleMessageException error</li>\n<li>DEV-21135: HTTP Writer issue</li>\n<li>DEV-21161: to_string() function failed with java.time.ZonedDateTime</li>\n<li>DEV-21172: import of CQ with analytic function fails</li>\n<li>DEV-21189: binlog error with MySQL Reader</li>\n<li>DEV-21214: duplicate events from MSSQL Reader in target after in-place upgrade</li>\n<li>DEV-21227: MSSQL Reader crashes when maxRetries=0</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8.3 (April 3rd, 2020):</h2>\n<ul>\n<li>DEV-16751: deploy fails with invalid password message</li>\n<li>DEV-17238: SQL Server test connection fails</li>\n<li>DEV-18610: after upgrading to 3.9.6, Oracle Reader fails with BufferOverflowException</li>\n<li>DEV-18792: when source and targets are defined in separate apps, target application is invalid after stop</li>\n<li>DEV-19476: SQL Server JDBC error cannot update identity column</li>\n<li>DEV-20273: app is not_enough_servers status after restarting server nodes while leaving agent running</li>\n<li>DEV-20309: exported TQL includes namespace</li>\n<li>DEV-20485: BigQuery Writer issue with primary key updates</li>\n<li>DEV-20547: can't start second server node</li>\n<li>DEV-20592: exported TQL includes namespace</li>\n<li>DEV-20597: user with &lt;namespace&gt;.dev role can not deploy or undeploy application</li>\n<li>DEV-20618: external cache crashes after server restart</li>\n<li>DEV-20658: Spanner Writer \"was already created in this transaction\" error</li>\n<li>DEV-20659: unable to drop WActionStore using web UI</li>\n<li>DEV-20660: Oracle Reader to BigQuery Writer \"file deletion failed\" error</li>\n<li>DEV-20679: PostgreSQL Reader fails with ArrayIndexOutOfBoundsException for TOAST columns</li>\n<li>DEV-20684: ClassNotFoundException when running aksConfig.bat</li>\n<li>DEV-20688: GGTrailParser to BigQuery Writer checkpoint issue after restart</li>\n<li>DEV-20724: Email Adapter ccEmailList issue</li>\n<li>DEV-20750: can't start Striim after upgrading to 3.9.8.1</li>\n<li>DEV-20754: in-place upgrade from 3.9.7.1 to 3.9.8.1 fails</li>\n<li>DEV-20840: high CPU usage with SnowflakeWriter</li>\n<li>DEV-20843: Oracle Reader issue with TransactionBufferSpilloverSize</li>\n<li>DEV-20866: BigQuery Writer failed silently</li>\n<li>DEV-20893: PostgreSQL Reader latency / lag issue</li>\n<li>DEV-20907: Alert Manager alerts stop working</li>\n<li>DEV-20924: Google PubSub Writer retry issue</li>\n<li>DEV-20927: BigQuery Writer read timeout issue</li>\n<li>DEV-20939: recovery checkpoint not updated for ignored events</li>\n<li>DEV-20944: invalid apps after upgrading from 3.9.3 to 3.9.8.1 or 3.9.8.2</li>\n<li>DEV-20945: quiesce issues with BigQuery 
Writer</li>\n<li>DEV-20948: BigQuery Writer crashes when Optimized Merge is enabled</li>\n<li>DEV-20952: Oracle Reader to BigQuery Writer crashes with \"too many open files\" error</li>\n<li>DEV-20956: Database Writer crash with \"clearBuffer: Event not in the Buffer\" error</li>\n<li>DEV-20968: connection failure to Amazon RDS for Oracle when using SSL</li>\n<li>DEV-20971: some GGTrailParser WARN-level messages should be DEBUG-level</li>\n<li>DEV-20974: Database Writer MON output shows multiple Target Commit Position values</li>\n<li>DEV-21009: GGTrailParser restart after crash fails with \"crossed recovery point\" error</li>\n<li>DEV-21020: MSSQL Reader connection retry issues</li>\n<li>DEV-21040: when using ColumnMap and wildcard, SQL Server target tables have NULL values for columns that do not exist in the source</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8.2 (March 14th, 2020):</h2>\n<ul>\n<li>DEV-19754: BigQueryWriter null value error</li>\n<li>DEV-20399: BigQueryWriter does not crash if specified target table does not exist</li>\n<li>DEV-20536: BigQueryWriter reaches daily batch limit</li>\n<li>DEV-20596: BigQueryWriter issue with property variables</li>\n<li>DEV-20598: MSSQLReader duplicate events after restart</li>\n<li>DEV-20615: MSSQLReader duplicate events after restart</li>\n<li>DEV-20635: PostgreSQLReader null pointer exception with PostgreSQL 11 and later</li>\n<li>DEV-20680: BigQueryWriter crashes when application is stopped</li>\n<li>DEV-20688: Open processor checkpoint issue</li>\n<li>DEV-20773: java.io.FileNotFoundException when application is stopped</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8.1:</h2>\n<ul>\n<li>DEV-16732: property variable issue</li>\n<li>DEV-19085: MariaDB Reader DDL event issue</li>\n<li>DEV-19428: Open Processor does not load on restart</li>\n<li>DEV-19880, DEV-19889: Alert Manager sending one email per server in cluster</li>\n<li>DEV-20264: exported TQL can not be imported</li>\n<li>DEV-20332: PostgreSQL Reader fails with null pointer exception</li>\n<li>DEV-20395: error when using web UI Monitor page</li>\n<li>DEV-20436: apps will not run after upgrading to 3.9.8</li>\n<li>DEV-20440: deployment fails after upgrading to 3.9.8</li>\n<li>DEV-20441, DEV-20495: Snowflake Writer syntax error in Flow Designer after upgrading to 3.9.8</li>\n<li>DEV-20447: password error with PartialRecordPolicy Open Processor</li>\n<li>DEV-20452: Flow Designer copy/paste issue after upgrading to 3.9.8</li>\n<li>DEV-20456: Kafka Reader can not read messages in Avro format</li>\n<li>DEV-20457: Forwarding Agent does not connect when multiple IP addresses specified for striim.node.servernode.address</li>\n<li>DEV-20482: BigQueryWriter fails with null pointer exception</li>\n<li>DEV-20489: KuduWriter date conversion issue</li>\n<li>DEV-20516: app export fails</li>\n</ul>\n<h2>Bug fixes part of release 3.9.8:</h2>\n<ul>\n<li>DEV-17104: issue with special characters in column name</li>\n<li>DEV-18431: memory not released</li>\n<li>DEV-18492: crash with unknown exception</li>\n<li>DEV-18680: incorrect status information system heath REST API</li>\n<li>DEV-18681: application alert not working</li>\n<li>DEV-18785: deployment issue</li>\n<li>DEV-18925: SMTP authentication issue</li>\n<li>DEV-18935: directory indexing issue</li>\n<li>DEV-18936: security issue</li>\n<li>DEV-18964: invalid date format with OracleReader</li>\n<li>DEV-19113: exported TQL missing type fields</li>\n<li>DEV-19140: metadata repository issue with Oracle host</li>\n<li>DEV-19205: recovery 
hangs</li>\n<li>DEV-19212: failure sending HELO command to SMTP server)</li>\n<li>DEV-19222: app does not fail when MySQL is down</li>\n<li>DEV-19288: CPU alert issue</li>\n<li>DEV-19395: ADLSGen2Writer rollover issue</li>\n<li>DEV-19422: alert has wrong application status</li>\n<li>DEV-19477: Open Cursor limit has reached in the target database</li>\n<li>DEV-19482: DatabaseWriter issue</li>\n<li>DEV-19540: java.lang.NumberFormatException: Zero length BigInteger</li>\n<li>DEV-19564: SpannerWriter issue</li>\n<li>DEV-19593: GGTrailParser documentation isue</li>\n<li>DEV-19622: DBReader (Oracle source) to BigQuery - crashed</li>\n<li>DEV-19686: Docker image has expired license</li>\n<li>DEV-19701: SpannerWriter does not crash when target table does not exist</li>\n<li>DEV-19707: app is not checkpointing</li>\n<li>DEV-19751: server heap issues</li>\n<li>DEV-19775: treating MONTH as Ccse-sensitive causing \"DateTimeParseException\"</li>\n<li>DEV-19864: app not checkpointing when GoldenGate trail has no DML</li>\n<li>DEV-19892: event rate from REST Health API is wrong</li>\n<li>DEV-20011: \"com.striim.specialColumnTypes.SpecialColumnTypeHandler.getReplacedUpdateSQLRedo (SpecialColumnTypeHandler.java)\" errors</li>\n<li>DEV-20074: 3.9.7.1 Forwarding Agent missing MSSQLReader JAR</li>\n<li><span>DEV-18004: MySQLReader DDL CREATE and DROP operations not emitted</span></li>\n<li>\n<span>DEV-19281: Authentication failure with Java event publishing API when httpEnabled=False</span><span></span><span></span><span></span>\n</li>\n</ul>\n<h2>Bug fixes part of release 3.9.7.2:</h2>\n<ul>\n<li>DEV-18680: incorrect status information system heath REST API</li>\n<li>DEV-19535: SpannerWriter issue</li>\n<li>DEV-19564: SpannerWriter issue</li>\n<li>DEV-19701: SpannerWriter does not crash when target table does not exist</li>\n<li>DEV-19775: treating MONTH as case-sensitive causing DateTimeParseException</li>\n<li>DEV-19864: GGTrailParser checkpointing issue</li>\n<li>DEV-19892: event rate from REST Health API is wrong</li>\n<li>DEV-20059: GGTrailParser issue</li>\n<li>DEV-20236: SpannerWriter issue</li>\n<li>DEV-20240: GGTrailParser issue</li>\n</ul>\n<h2>Bug fixes part of release 3.9.7.1:</h2>\n<ul>\n<li>DEV-13961: monitoring UI does not show warning for node running Derby</li>\n<li>DEV-17187: MongoDBReader timeout in incremental mode</li>\n<li>DEV-17388: DEADLINE_EXCEEDED error with GooglePubSubWriter</li>\n<li>DEV-18110: PROPERTYVARIABLE not usable in KafkaConfig</li>\n<li>DEV-18392: issue with OracleReader checkpoint table</li>\n<li>DEV-18532: Subject line is wrong in EmailAdapter alert</li>\n<li>DEV-18573: unexpected results from AzureBlobWriter + AvroFormatter</li>\n<li>DEV-18727: BigQueryWriter crashes when source column name is reserved keyword</li>\n<li>DEV-18821: AzureBlobWriter + DSVFormatter crashes in Windows</li>\n<li>DEV-18828: cache will not deploy after app is modified in Flow Designer</li>\n<li>DEV-18924: Hazelcast security issue</li>\n<li>DEV-18927: provide script to change Derby password</li>\n<li>DEV-18932: configuration security issue</li>\n<li>DEV-18941: console and web UI are extremely slow</li>\n<li>DEV-18990: DatabaseWriter crashes when column name is reserved keyword</li>\n<li>DEV-19013: console login fails when HTTP is disabled</li>\n<li>DEV-3172: Objects with the same name do not appear in UI</li>\n<li>DEV-5987: OracleReader DateTime values do not include milliseconds</li>\n</ul>\n<h2>Bug fixes part of release 3.9.6.3:</h2>\n<ul>\n<li>DEV-18742: security issue with Source 
Preview</li>\n<li>DEV-18841: security issue with Hazelcast</li>\n<li>DEV-18842: security issue with internal API</li>\n<li>DEV-18843: security issue with Forwarding Agent</li>\n</ul>\n<h2>Bug fixes part of release 3.9.6.2:</h2>\n<ul>\n<li>DEV-17517: system health REST API returns incorrect app status</li>\n<li>DEV-17916: deployment fails in Azure</li>\n<li>DEV-18205: SMTP alert setup fails</li>\n<li>DEV-18284:<span> </span><code>tools.sh -A export</code><span> </span>fails</li>\n<li>DEV-18398: backpressure indicator missing from Flow Designer</li>\n<li>DEV-18439: CPU usage excessive when OracleReader query includes LIKE or OR</li>\n<li>DEV-18485: ExceptionInInitializerError on Forwarding Agent startup</li>\n<li>DEV-18572: S3Writer fails with Access Denied</li>\n</ul>\n<h2>Bug fixes part of release 3.9.6.1:</h2>\n<ul>\n<li>DEV-16785: bin\\derbyTools.bat missing</li>\n<li>DEV-17124: ORA-02290 error with Oracle as metadata repository</li>\n<li>DEV-17504: \"Too big integer constant\" when filtering Oracle SCNs with CQ</li>\n<li>DEV-17622: tools.sh export issues</li>\n<li>DEV-17746: MSSQLReader &gt; S3Writer missing data</li>\n<li>DEV-17901: application crash did not send alert</li>\n<li>DEV-17997: time zone issue with OracleReader &gt; SQL Server</li>\n<li>DEV-18017: issues reading health object</li>\n<li>DEV-18039: issue with CosmosDBWriter Collections property</li>\n<li>DEV-18047: \"Problem processing event on channel 0\" when using OracleReader</li>\n<li>DEV-18142: ORA-01407 error with GoldenGateTrailParser</li>\n<li>DEV-18303: JMSReader skipping messages</li>\n</ul>"} {"page_content": "<p><span><strong>Scope of the document:</strong> Currently, logs for all Striim applications and platform-related information are written to the generic <strong>striim.server.log</strong>. The changes suggested here are optional; if you need application-specific logs, the following is one approach using native shell scripting.</span></p>\n<p><span>Assuming the name of the application running in the Striim UI is GE_GAC_CDC, create a file named grep_log.sh with the following contents: </span></p>\n<pre>#!/bin/bash<br><br># Application name to filter for in striim.server.log<br>string=\"GE_GAC_CDC\"<br><br># Follow the server log and append matching lines to an application-specific log file<br>tail -n 0 -F /Users/rajesh/app/Striim_396/logs/striim.server.log | \\<br>while read -r LINE<br>do<br>if echo \"$LINE\" | grep -q \"$string\"<br>then<br>echo \"$LINE\" &gt;&gt; app_GE_GAC_CDC.log<br>fi<br>done</pre>\n<p><span>The script creates the log file in the same directory where grep_log.sh is located.</span><br><br><span>Make the script executable:</span><br><br><span>$ chmod +x grep_log.sh</span><br><br><span>Update the script with the location of your striim.server.log file, which is usually in &lt;striim home&gt;/logs for non-RPM installations.</span><br><br><span>Then run it as follows:</span></p>\n<pre>$ nohup ./grep_log.sh 0&lt;&amp;- &amp;&gt;/dev/null &amp;</pre>\n<p><span>The above script runs continuously until it is killed. 
It can terminated by using kill</span><br><br><span>$ ps -ef | grep grep_log.sh</span><br>$ kill -9 &lt;pid from above ps command&gt;</p>\n<p><span>This is given for reference and please feel free to modify it </span></p>"} {"page_content": "<p>When required audit trails do not exist, NonStopreader will fail with error like following:</p>\n<p> </p>\n<pre>2019-07-04 21:38:43,579 @S192_168_1_81 @admin.eh -WARN pool-14-thread-3 com.webaction.proc.CDCReader.updateCDCProcessStatus\n (CDCReader.java:411) CDCProcess sqmxcdcp::W716-MERGE Status - STOPPED<br>\n 2019-07-04 21:38:43,582 @S192_168_1_81 @admin.eh -FATAL pool-14-thread-3 com.webaction.proc.CDCParser_1_0.hasNext\n (CDCParser_1_0.java:155) CDCProcess sqmxcdcp::W716-MERGE stopped with the error<br>\n Error Message:<br>\n ERROR in sqmxcdcp.W716 at (TMF_AuditReader.cpp 1394)<br>\n STRM-NSK-1062 ARREAD error -902\n\n 2019-07-04 21:38:43,622 @S192_168_1_81 @admin.eh -ERROR BaseServer_WorkingThread-3\n com.webaction.proc.BaseProcess.receive (BaseProcess.java:385) com.webaction.proc.HPNonStopSQLMXReader_1_0[null]\n Problem processing event on channel 0: null<br>\n java.lang.Exception<br>\n at com.webaction.proc.TCPReader_1_0.receiveImpl(TCPReader_1_0.java:117)<br>\n at com.webaction.proc.BaseProcess.receive(BaseProcess.java:360)<br>\n at com.webaction.runtime.components.Source.run(Source.java:152)<br>\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)<br>\n at java.util.concurrent.FutureTask.run(FutureTask.java:266)<br>\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)<br>\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)<br>\n at java.lang.Thread.run(Thread.java:748)<br>\n 2019-07-04 21:38:43,626 @S192_168_1_81 @admin.eh -WARN BaseServer_WorkingThread-3\n com.webaction.runtime.components.FlowComponent.notifyAppMgr (FlowComponent.java:295)\n received exception from component :smx1, of exception type : java.lang.Exception<br>\n java.lang.Exception<br>\n at com.webaction.proc.TCPReader_1_0.receiveImpl(TCPReader_1_0.java:117)<br>\n at com.webaction.proc.BaseProcess.receive(BaseProcess.java:360)<br>\n at com.webaction.runtime.components.Source.run(Source.java:152)<br>\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)<br>\n at java.util.concurrent.FutureTask.run(FutureTask.java:266)<br>\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)<br>\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)<br>\n at java.lang.Thread.run(Thread.java:748)<br>\n 2019-07-04 21:38:43,630 @S192_168_1_81 @admin.eh -WARN BaseServer_WorkingThread-3\n com.webaction.appmanager.NodeManager.recvExceptionEvent (NodeManager.java:179)\n Exception event is received from component SOURCE - ExceptionEvent : {<br>\n \"componentName\" : \"smx1\" , \"componentType\" : \"SOURCE\" , \"exception\" : \"java.lang.Exception\"\n , \"message\" : null , \"relatedEvents\" : [ ] , \"action\" : \"CRASH\" , \"exceptionType\"\n : \"UnknownException\" , \"epochNumber\" : -1<br>\n }<br>\n 2019-07-04 21:38:43,814 @S192_168_1_81 @admin.eh -WARN pool-14-thread-4 com.webaction.proc.CDCReader.updateCDCProcessStatus\n (CDCReader.java:411) CDCProcess sqmxcdcp::W716-MERGE Status - STOPPED<br>\n 2019-07-04 21:38:43,816 @S192_168_1_81 @admin.eh -FATAL pool-14-thread-4 com.webaction.proc.CDCParser_1_0.hasNext\n (CDCParser_1_0.java:155) CDCProcess sqmxcdcp::W716-MERGE stopped with the error<br>\n Error Message:<br>\n ERROR in 
sqmxcdcp.W716 at (ListenerServImpl.cpp 197)<br>\n Record Initialization is failed.\n\n 2019-07-04 21:38:44,049 @S192_168_1_81 @admin.eh -WARN pool-14-thread-5 com.webaction.proc.CDCReader.updateCDCProcessStatus\n (CDCReader.java:411) CDCProcess sqmxcdcp::W716-MERGE Status - STOPPED<br>\n 2019-07-04 21:38:44,049 @S192_168_1_81 @admin.eh -FATAL pool-14-thread-5 com.webaction.proc.CDCParser_1_0.hasNext\n (CDCParser_1_0.java:155) CDCProcess sqmxcdcp::W716-MERGE stopped with the error<br>\n Error Message:<br>\n ERROR in sqmxcdcp.W716 at (ListenerServImpl.cpp 454)<br>\n Failure in CDCProcess while retrieving data records\n</pre>\n<p><br> To specify the startLSN value:<br> 1. identify the available audit trails<br> example:</p>\n<p> </p>\n<p>FILEINFO *.ZTMFAT.*</p>\n<p> </p>\n<pre> $AUDITE.ZTMFAT<br>\n CODE EOF LAST MODIFIED OWNER RWEP PExt SExt<br>\n AA027717 134+ 5242880000 29JUN2019 21:41 255,255 GGGG 160000 160000<br>\n AA027729 134+ 5242859520 30JUN2019 4:05 255,255 GGGG 160000 160000<br>\n AA027741 134+ 5242880000 30JUN2019 14:53 255,255 GGGG 160000 160000<br>\n AA027753 134+ 5242880000 01JUL2019 1:29 255,255 GGGG 160000 160000<br>\n AA027765 134+ 5242880000 01JUL2019 6:31 255,255 GGGG 160000 160000<br>\n AA027777 134+ 5242875904 01JUL2019 18:28 255,255 GGGG 160000 160000<br>\n AA027789 134+ 5242826752 02JUL2019 2:32 255,255 GGGG 160000 160000<br>\n AA027801 134+ 5242880000 02JUL2019 8:31 255,255 GGGG 160000 160000<br>\n $AUDITD.ZTMFAT<br>\n CODE EOF LAST MODIFIED OWNER RWEP PExt SExt<br>\n AA027716 134+ 5242834944 29JUN2019 20:31 255,255 GGGG 160000 160000<br>\n AA027728 134+ 5242781696 30JUN2019 3:27 255,255 GGGG 160000 160000<br>\n AA027740 134+ 5242880000 30JUN2019 14:02 255,255 GGGG 160000 160000<br>\n AA027752 134+ 5242691584 01JUL2019 1:09 255,255 GGGG 160000 160000<br>\n ...<br>\n ...\n</pre>\n<p> </p>\n<p>2. specify startLSN in NosStopReader.<br> if the app is in recovery mode, the app needs to be recreated:<br> - export to tql<br> - drop the app<br> - modify startLSN in tql file<br> - import the tql file</p>\n<p>example:<br> we want to start from 2019-06-29 21:41 with trail file AA027717.</p>\n<pre> StartLSN: 'MERGE-1:27717:0:0:0:0:0;',\n </pre>\n<p> </p>"} {"page_content": "<p>When target processing rate is lower than source rate, the input stream will be backpressured. This will prevent holding too many events in stream. By default, the threshold for backpressure is 10K in the number of events.</p>\n<p>This setting can be adjusted in ./bin/server.sh file.</p>\n<p>Following example sets the threshold to 100K. </p>\n<pre> -Dcom.webaction.optimalBackPressureThreshold=100000 \\</pre>\n<p>For cluster, the change should be on all the nodes. Striim server restart is required for the change to be effective.</p>\n<p>Please note:</p>\n<p>1. setting high threshold may use more resource, especially memory.</p>\n<p>2. optimalBackPressureThreshold is not the exact number, but the <span>2^n number of events which is the smallest buffer that can hold the specified number.</span>. e.g., for default 10K, 2^13 = 8K that is smaller than 10K, and 2^14=16K. Here 16K is the exact number that may be hold in the buffer.</p>"} {"page_content": "<p>This is an example of PostgreS EDB Setup on Linux (Centos 7) for Striim PostgreSQLReader.</p>\n<p>1. Download Postgres AdavancedServer 10* from https://www.enterprisedb.com/software-downloads-postgres<br>2. started XQUartz; ssh -Y <a href=\"mailto:fzhang@192.168.56.3\">fzhang@192.168.56.3</a> (from MacOS)<br>3. login as root (sudo su)<br>4. 
start UI <br>./edb-as10-server-10.5.12-1-linux-x64.run<br>5. superuser password = edb<br>6. Do make changes in /home/webaction/PosgreS/as9.6/data/postgresql.conf (change listen_addresses = '*')<br>7. open pg_hba.conf (search for host and change 127.0.0.1/32 part as 0.0.0.0/0 )<br>8. stop service: systemctl stop edb-as-10<br>9. start service:systemctl start edb-as-10<br>10. All done.<br>11. Login to Postgres db prompt<br>bin$ ./psql -d edb -U enterprisedb</p>\n<p>create database fan_db;<br>create user fan;<br>GRANT CONNECT ON DATABASE fan_db to fan;<br>GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO fan;<br>GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO fan;<br>edb=# \\password fan<br>Enter new password: <br>Enter it again:</p>\n<p><br>12. to check available databases <br>$ ./psql -l</p>\n<p>List of databases</p>\n<p>Name | Owner | Encoding | Collate | Ctype | ICU | Access privileges</p>\n<p>-----------+--------------+----------+-------------+-------------+-----+-------------------------------</p>\n<p>edb | enterprisedb | UTF8 | en_US.UTF-8 | en_US.UTF-8 | |</p>\n<p> </p>\n<p>Striim Setup<br>13. for 9.6 postgres, copy the file from &lt;striim_home&gt;/native/wal2json to &lt;PG&gt;/lib/ directory.<br> for other version, make the file by yourself: https://github.com/eulerto/wal2json<br>(1)$ git clone https://github.com/eulerto/wal2json.git<br>(2) make file <br>$ cd wal2json<br>$ export PATH=/opt/edb/as10/bin:$PATH<br>$ USE_PGXS=1 make<br>$ USE_PGXS=1 make install<br>example<br>------<br>[root@centos-vm wal2json]# USE_PGXS=1 make<br>gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -DLINUX_OOM_ADJ=0 -O2 -DMAP_HUGETLB=0x40000 -fPIC -I. -I./ -I/opt/edb/as10/include/server -I/opt/edb/as10/include/internal -I/opt/local/Current/include -D_GNU_SOURCE -I/opt/local/20160428/649c6f94-f2c0-4703-b065-99d58ae4acc6/include/libxml2 -I/opt/local/20160428/649c6f94-f2c0-4703-b065-99d58ae4acc6/include -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include -I/mnt/hgfs/edb-postgres.auto/server/source/libmm-edb.linux-x64/inst/include -c -o wal2json.o wal2json.c<br>gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -DLINUX_OOM_ADJ=0 -O2 -DMAP_HUGETLB=0x40000 -fPIC -L/opt/edb/as10/lib -L/opt/local/20160428/649c6f94-f2c0-4703-b065-99d58ae4acc6/lib -L/opt/local/20160428/649c6f94-f2c0-4703-b065-99d58ae4acc6/lib -L/opt/local/Current/lib -L/mnt/hgfs/edb-postgres.auto/server/source/libmm-edb.linux-x64/inst/lib -Wl,--as-needed -Wl,-rpath,'/opt/edb/as10/lib',--enable-new-dtags -shared -o wal2json.so wal2json.o<br>[root@centos-vm wal2json]# USE_PGXS=1 make install<br>/bin/mkdir -p '/opt/edb/as10/lib'<br>/usr/bin/install -c -m 755 wal2json.so '/opt/edb/as10/lib/'<br>-------<br>(3) cd &lt;PG&gt;/bin<br>[root@centos-vm bin]# pg_recvlogical -d fan_db --slot striim_slot --create-slot -P wal2json -U enterprisedb<br>Password:</p>\n<p>(4) create login user<br>fan_db=# CREATE ROLE striim WITH LOGIN PASSWORD 'striim' replication;<br>fan_db=# GRANT SELECT ON ALL TABLES IN SCHEMA public to striim;</p>\n<p>14. restart PG<br>15. Copy edb-jdbc17.jar (for deb) &amp; postgresql-42.0.0.jre7.jar (for postgres) to target/dependency<br>16. 
For EDB → Create a tql with connection string =&gt; ConnectionURL:'jdbc:postgresql://192.168.56.3:5444/fan_db'</p>\n<p> </p>"} {"page_content": "<p>1. For version 3.9.6 and 3.9.7, the <strong>Global.AlertingApp</strong> app may be recreated as following through Striim console:</p>\n<pre>W (admin) &gt; use Global;<br>W (Global) &gt; stop application AlertingApp;<br>W (Global) &gt; undeploy application AlertingApp;<br>W (Global) &gt; drop application AlertingApp cascade;</pre>\n<p>Then SMTP settings may be configured again.</p>\n<p>2. For versions &gt;=3.9.8 and &lt;=3.10.x, the <strong>System$Alerts.AlertingApp</strong> app may be recreated as following through Striim console:</p>\n<pre>W (admin) &gt; stop application System$Alerts.AlertingApp;</pre>\n<p>There is no need to drop the app.</p>\n<p>Next, please g<span>o to Alert Manager, and check out the page. It should say <strong>\"Not Processing\"</strong>. Once clicking it, it will allow you to <strong>REPAIR SYSTEM ALERTS </strong></span></p>\n<p><span>Note: Custom Alerts (app crash, memory etc) are still preserved although SMTP settings are modified.</span></p>\n<p> </p>\n<p><span><img src=\"https://support.striim.com/hc/article_attachments/1500020214081\" alt=\"Screen_Shot_2021-06-02_at_12.46.10_PM.jpg\" width=\"554\" height=\"208\"></span></p>\n<p> </p>\n<p><span>3. For versions 4.0 and later the <strong>System$Alerts.AlertingApp</strong> app may be modified as following </span></p>\n<p><span><strong>(a)</strong> to auto-repair without modifying SMTP settings</span></p>\n<pre>W (admin) &gt; stop application System$Alerts.AlertingApp;<br>W (admin) &gt; <span>undeploy</span> application System$Alerts.AlertingApp;</pre>\n<p> </p>\n<p><span><img src=\"https://support.striim.com/hc/article_attachments/4478198047511\" alt=\"mceclip0.png\" width=\"556\" height=\"205\"></span></p>\n<p> </p>\n<p><span>Click on AUTOCORRECT</span></p>\n<p><span><strong>(b)</strong> to modify the SMTP settings</span></p>\n<pre>W (admin) &gt; stop application System$Alerts.AlertingApp;<br>W (admin) &gt; <span>undeploy</span> application System$Alerts.AlertingApp;<br>W (admin) &gt; use System$Alerts;<br>W (System$Alerts) &gt; drop propertyvariable System$Alerts.SMTP_URL cascade;<br>W (System$Alerts) &gt; drop propertyvariable System$Alerts.SMTP_PORT cascade;<br>W (System$Alerts) &gt; drop propertyvariable System$Alerts.SMTP_AUTH cascade;<br>W (System$Alerts) &gt; drop propertyvariable System$Alerts.SMTP_START_TLS cascade;<br>W (System$Alerts) &gt; drop propertyvariable System$Alerts.SMTP_USER cascade;<br>W (System$Alerts) &gt; drop propertyvariable System$Alerts.SMTP_PASSWORD cascade;<br>W (System$Alerts) &gt; drop propertyvariable System$Alerts.SMTP_FROM cascade;<br><br></pre>\n<p><span>Click on <strong>AUTOCORRECT</strong> and following screen will show up</span><span></span></p>\n<p><span><img src=\"https://support.striim.com/hc/article_attachments/4478231609239\" alt=\"mceclip1.png\" width=\"548\" height=\"202\"></span></p>\n<p> </p>\n<p>Click on <strong>CONFIGURE SMTP</strong> and enter the config details</p>\n<p>Note: To add a new alerts or create one that was dropped accidently. </p>\n<p><a href=\"https://support.striim.com/hc/en-us/articles/8491908136855-How-To-Create-Recreate-The-alerts-in-Alert-Manager-GUI\">https://support.striim.com/hc/en-us/articles/8491908136855-How-To-Create-Recreate-The-alerts-in-Alert-Manager-GUI</a> </p>\n<p><span>4. 
For versions 4.2.0 and later, the setting can be modified from UI: </span></p>\n<p><span>\"Alert Manager\" -&gt; \"Edit Email setup\"</span></p>"} {"page_content": "<p><span>This may be achieved by replacing ./webui/app/images/striim-logo-icon.png with an alternate png file.</span></p>\n<p><span>For certain browsers, the cache may need to be cleared before seeing the change. For example, Chrome new Incognito Window may be launched to confirm the change.</span></p>"} {"page_content": "<p>Here are the steps for setting up MSSQLReader through SSL:</p>\n<p>1. download and copy sql server jdbc driver<br> for SSL connection, please use driver of version 7.2 (older one like 4.2 may not work)<br> (1) copy sqljdbc_auth.dll to $JAVA_HOME/bin and $JAVA_HOME/lib directory.<br> (2) copy jar file (e.g., mssql-jdbc-7.2.1.jre8.jar) to &lt;Striim_Home&gt;/lib/ directory<br> (3) restart Striim server<br> <br> 2. obtain certificate from DBA</p>\n<p>3. install certificate on client (the server where Striim is installed)<br> reference: <a href=\"https://docs.microsoft.com/en-us/sql/connect/jdbc/configuring-the-client-for-ssl-encryption?view=sql-server-2017\" target=\"_self\">https://docs.microsoft.com/en-us/sql/connect/jdbc/configuring-the-client-for-ssl-encryption?view=sql-server-2017</a></p>\n<p>e.g.,</p>\n<pre> keytool -import -v -trustcacerts -alias orastmdl002 -file SQLServerCert.cer -keystore\n truststore.ks\n </pre>\n<p>(assuming password is set to 'changeit').</p>\n<p>4. URL for SSL connection<br> reference: <a href=\"https://docs.microsoft.com/en-us/sql/connect/jdbc/connecting-with-ssl-encryption?view=sql-server-2017\" target=\"_self\">https://docs.microsoft.com/en-us/sql/connect/jdbc/connecting-with-ssl-encryption?view=sql-server-2017</a></p>\n<p>(1) trustServerCertificate=true<br> example:</p>\n<pre> jdbc:sqlserver://192.168.0.10:1433;databaseName=StriimTestDB;encrypt=true;trustServerCertificate=true\n </pre>\n<p>(2) trustServerCertificate=false</p>\n<pre> jdbc:sqlserver://192.168.0.10:1433;databaseName=StriimTestDB;encrypt=true;trustServerCertificate=false;hostNameInCertificate=hostName.caas;trustStore=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/lib/security/cacerts;trustStorePassword=changeit\n </pre>\n<p><br> Troubleshooting:</p>\n<p>If there is any connection issue to SQL Server with Striim app, please use attached jar file to test if the specified URL works. if not, please work with your DBA or vendor about the connection issue. 
if it works, you may open a support ticket to Striim support.</p>\n<p>(1) downlaod the attached sqlserver_sample.jar file to the server where Striim is installed.<br> (2) SQL Server jdbc drivers<br> - if not done yet, copy sqljdbc_auth.dll to JAVA_HOME/bin and JAVA_HOME/lib directory.<br> - copy jdbc jar (e.g., mssql-jdbc-7.2.1.jre8.jar) to $JAVA_HOME/lib/ext/ directory (e.g., /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/lib/ext/ )<br> (3) test with the syntax:<br> java - jar sqlserver_sample.jar connectionURL username password tableName<br> (escape is needed before semi colon in URL)</p>\n<p>example 1: no SSL</p>\n<pre> # java -jar sqlserver_sample.jar jdbc:sqlserver://192.168.0.10:1433\\;databaseName=StriimTestDB\n striim_user striim_pwd dbo.StriimTestTable<br>\n Database connection successfull<br>\n ------Number of columns for table dbo.StriimTestTable : 2 ----------\n </pre>\n<p>example 2: SSL + trustServerCertificate=true</p>\n<pre> # java -jar sqlserver_sample.jar jdbc:sqlserver://192.168.0.10:1433\\;databaseName=StriimTestDB\\;integratedSecurity=false\\;encrypt=true\\;trustServerCertificate=true\n striim_user striim_pwd dbo.StriimTestTable<br>\n Database connection successfull<br>\n ------Number of columns for table dbo.StriimTestTable : 2 ----------\n </pre>\n<p>example 3: SSL + trustServerCertificate=false</p>\n<pre> java -jar sqlserver_sample.jar jdbc:sqlserver://192.168.0.10:1433\\;databaseName=StriimTestDB\\;encrypt=true\\;trustServerCertificate=false\\;hostNameInCertificate=hostName.caas\\;trustStore=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/lib/security/cacerts\\;trustStorePassword=changeit\n striim_user striim_pwd dbo.StriimTestTable<br>\n Database connection successfull<br>\n ------Number of columns for table dbo.StriimTestTable : 2 ----------\n </pre>"} {"page_content": "<h2>Question:</h2>\n<p dir=\"auto\">While reading data from CDC reader Or DatabaseReader, we have to refer to the position of column in a table via index of data[] to get the data in a column. Sometimes, when the table column definition is changed, the TQL has to be changed to obtain the correct column data. This is causing confusion during development and debugging.</p>\n<p dir=\"auto\">Do we have any way to refer to the column names in the data[]?</p>\n<h2>Answer:</h2>\n<p>Yes, you may use GETDATA with column name, instead of index.<br>For example,</p>\n<p>create table Test (A number primary key, B varchar2(100));</p>\n<p>Following sample query in CQ shows both retrieving data values with index (first two) and with column name (last two):<br><br>SELECT <br>data[0],<br>data[1],<br>GETDATA(d,\"A\"),<br>GETDATA(d,\"B\")<br>FROM dbr1_stream d;</p>\n<p>Please note that, using GETDATA may add a little bit overhead. 
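<p>One simple way to gauge that overhead is to run the CQ in two variants (one using data[index], one using GETDATA) and compare their throughput from the console with the mon command. The component names below are only placeholders for your own CQ names:</p>
<pre>W (admin) &gt; mon cq_with_getdata;<br>W (admin) &gt; mon cq_with_index;</pre>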
If the load is high, please test its impact before implementing it to production environment.</p>"} {"page_content": "<p><span class=\"wysiwyg-underline\"><strong>Problem:</strong></span></p>\n<p>I am doing initial load from Oracle to postgres, and it fails with following error:<br>ERROR: invalid byte sequence for encoding \"UTF8\": 0x00</p>\n<p>Source oracle :</p>\n<pre>create table s3 (a number primary key, b varchar2(40));<br>\ninsert into s3 values (1, 'ac123'||unistr('\\0000\\0000'));<br>\ninsert into s3 values (2, 'good_value');<br>\ncommit;<br>\n</pre>\n<p>Target postgres:</p>\n<pre>create table s3 (a bigint primary key, b varchar(40));\n</pre>\n<p><span class=\"wysiwyg-underline\"><strong>Cause:</strong></span><br>This is the limitation of PostgreSQL, which cannot handle 0x00.<br>The error can be reproduced at psql level:</p>\n<pre>mydb=# insert into s3 values (3,U&amp;'\\0000');<br>ERROR: invalid byte sequence for encoding \"UTF8\": 0x00\n</pre>\n<p><span class=\"wysiwyg-underline\"><strong>Solution:</strong></span><br>One way is to clean up the source database.<br>If that is not possible, at Striim level, a CQ may be added to remove 0x00.</p>\n<p><strong>Solution 1:</strong></p>\n<p>Source data:</p>\n<pre>SQL&gt; select a,b, dump(b,1016) from s3;<br>\n\n\t A B<br>\n---------- ----------------------------------------<br>\nDUMP(B,1016)<br>\n--------------------------------------------------------------------------------<br>\n\t 1 ac123<br>\nTyp=1 Len=7 CharacterSet=WE8MSWIN1252: 61,63,31,32,33,0,0<br>\n\n\t 2 good_value<br>\nTyp=1 Len=10 CharacterSet=WE8MSWIN1252: 67,6f,6f,64,5f,76,61,6c,75,65<br>\n</pre>\n<p><br>Target data:</p>\n<pre>mydb=# select a,b,convert_to(b,'SQL_ASCII') from s3;<br>\n a | b | convert_to <br>\n---+------------+------------------------<br>\n 1 | ac123 | \\x6163313233<br>\n 2 | good_value | \\x676f6f645f76616c7565<br>\n<br>(2 rows)\n</pre>\n<p>TQL:</p>\n<pre>CREATE APPLICATION remove_0x00;<br>\n\nCREATE STREAM ora_1_cq_stream OF global.waevent;<br><br>\n\nCREATE TARGET postgres1 USING DatabaseWriter ( <br>\n DatabaseProviderType: 'Default',<br>\n CheckPointTable: 'CHKPOINT',<br>\n PreserveSourceTransactionBoundary: 'false',<br>\n Username: 'fan',<br>\n BatchPolicy: 'EventCount:1000,Interval:3',<br>\n CommitPolicy: 'EventCount:1000,Interval:3',<br>\n ConnectionURL: 'jdbc:postgresql://localhost:5432/mydb?stringtype=unspecified',<br>\n Tables: 'FZHANG.S3,s3',<br>\n Password: 'tqmKV8CpzHI=',<br>\n Password_encrypted: true<br><br>\n ) \nINPUT FROM ora_1_cq_stream;<br><br>\n\nCREATE SOURCE ora_src1 USING DatabaseReader (<br> \n Username: 'fzhang',<br>\n DatabaseProviderType: 'default',<br>\n ConnectionURL: 'jdbc:oracle:thin:@192.168.56.3:1521:orcl',<br>\n Tables: 'FZHANG.S3',<br>\n FetchSize: 100,<br>\n Password: 'UJQX3ATWnXw=',<br>\n Password_encrypted: true<br>\n ) <br>\nOUTPUT TO ora_src1_stream ;<br><br>\n\nCREATE CQ ora_1_cq <br>\nINSERT INTO ora_1_cq_stream<br>\n<span class=\"wysiwyg-color-red\">SELECT replacedata(o,'B',to_string(GETDATA(o,\"B\")).replaceall(\"\\\\\\\\u0000\",\"\") ) </span><br><span class=\"wysiwyg-color-red\">\nFROM ora_src1_stream o;</span><br><br>\n\nEND APPLICATION remove_0x00;<br>\n</pre>\n<p> </p>\n<p><strong>Solution 2: Using Striim Function - replaceStringRegex</strong></p>\n<p>If there are too many tables/columns have this type of values, Striim function may be considered.</p>\n<p>e.g.,</p>\n<p>CREATE CQ ora_1_cq <br>INSERT INTO ora_1_cq_stream<br><span class=\"wysiwyg-color-red\">SELECT 
replaceStringRegex(o,'\\u0000','NULL')</span><br><span class=\"wysiwyg-color-red\"> FROM ora_src1_stream o;</span><span class=\"wysiwyg-color-red\"></span></p>\n<p>Note: prior to version 4.1.2, this applies to data only, but not BEFORE image. Please use this option for version 4.1.2 or later.</p>\n<p> </p>\n<p><strong>Solution 3: Using Open Processor</strong></p>\n<p>If there are too many tables/columns have this type of values, Striim open processor may be used to remove all of them.</p>\n<p>Following is an example:</p>\n<p>(1). download attached scm file (TextCleaner4.scm for version 4.0.x and up, or TextCleaner.scm for prior versions), and copy it to &lt;striim_home&gt;/module/ directory.</p>\n<p>(2) restart striim server.<br><br>(3). modifying following tql file:<br>- db login URLs/username/password<br>- table names</p>\n<pre>CREATE APPLICATION db2db; <br><br>\n\nCREATE OR REPLACE SOURCE db_src USING DatabaseReader (<br>\n FetchSize: 1,<br>\n Username: 'qatest',<br>\n ConnectionURL: 'jdbc:oracle:thin:@//localhost:1521/XE',<br>\n Tables: 'QATEST.S3',<br>\n Password: 'qatest',<br>\n Password_encrypted: false<br>\n )<br>\nOUTPUT TO stream1 ;<br><br>\n\nCREATE STREAM cleanStream OF Global.WAEvent;<br><br>\n\nCREATE OPEN PROCESSOR cleanseOP <br>\n USING TextCleaner ( replaceTo: 'NULL', replaceFrom: '\\\\u0000' )<br>\nINSERT INTO cleanStream<br>\nFROM stream1;<br><br>\n\nCREATE OR REPLACE TARGET db_tar USING DatabaseWriter (<br>\n Username: 'waction',<br>\n Password_encrypted: 'false',<br>\n ConnectionURL: 'jdbc:postgresql://localhost:5432/webaction?stringtype=unspecified',<br>\n Tables: 'QATEST.%,public.%',<br>\n Password: 'waction'<br>\n ) INPUT FROM cleanStream;<br>\nEND APPLICATION db2db;</pre>\n<p> </p>\n<p>To modify multiple unicode in open processor : </p>\n<pre><span>replaceFrom: '[\\\\u0000\\\\u000A]',</span><br><span>replaceTo: 'NULL' )</span></pre>\n<p> </p>\n<p>The example is for DatabaseReader, it also applies to CDC.</p>\n<p> </p>\n<p>Here is the download link for TextCleaner4.scm : </p>\n<p><a class=\"c-link\" tabindex=\"-1\" href=\"https://striim-field-distro.s3.us-west-2.amazonaws.com/TextCleaner4.scm\" target=\"_blank\" rel=\"noopener noreferrer\" data-stringify-link=\"https://striim-field-distro.s3.us-west-2.amazonaws.com/TextCleaner4.scm\" data-sk=\"tooltip_parent\" data-remove-tab-index=\"true\">https://striim-field-distro.s3.us-west-2.amazonaws.com/TextCleaner4.scm</a></p>\n<p> </p>"} {"page_content": "<p>When using Striim OracleReader, certain database level privileges are required. 
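<p>Before going through the table below, it can be useful to list what the Striim login user has already been granted. A quick check from a DBA session might look like the following (STRIIM_USER is just an example user name):</p>
<pre>SQL&gt; select privilege from dba_sys_privs where grantee = 'STRIIM_USER';<br>SQL&gt; select granted_role, default_role from dba_role_privs where grantee = 'STRIIM_USER';<br>SQL&gt; select owner, table_name, privilege from dba_tab_privs where grantee = 'STRIIM_USER';</pre>
<p>The output can then be compared against the requirements listed in the table and the troubleshooting sections that follow.</p>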
</p>\n<table style=\"height: 269px;\" width=\"653\">\n<tbody>\n<tr>\n<td style=\"width: 159px;\"><strong>Privilege</strong></td>\n<td style=\"width: 159px;\">\n<p><strong>OracleReader </strong></p>\n<p><strong>(logminer Mode)</strong></p>\n</td>\n<td style=\"width: 161px;\"><strong>DatabaseReader</strong></td>\n<td style=\"width: 161px;\"><strong>DatabaseWriter</strong></td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">CREATE SESSION</td>\n<td style=\"width: 159px;\"> x</td>\n<td style=\"width: 161px;\"> x</td>\n<td style=\"width: 161px;\"> x</td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">SELECT ANY TRANSACTION</td>\n<td style=\"width: 159px;\"> x</td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">SELECT ANY DICTIONARY</td>\n<td style=\"width: 159px;\"> x</td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">\n<p>EXECUTE_CATALOG_ROLE (required for logminer: see <a href=\"https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-logminer-utility.html#GUID-ED46E42D-B412-4820-9753-EBE15F49BA21\">https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-logminer-utility.html#GUID-ED46E42D-B412-4820-9753-EBE15F49BA21</a>)</p>\n<p>(when database vault is in use, the privilege is replaced by:</p>\n<p>EXECUTE ON SYS.DBMS_LOGMNR<br>EXECUTE ON SYS.DBMS_LOGMNR_D<br>EXECUTE ON SYS.DBMS_LOGMNR_LOGREP_DICT<br>EXECUTE ON SYS.DBMS_LOGMNR_SESSION</p>\n<p>)</p>\n</td>\n<td style=\"width: 159px;\"> x</td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">SELECET ON SYSTEM.LOGMNR_COL$</td>\n<td style=\"width: 159px;\"> x</td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">SELECET ON SYSTEM.LOGMNR_OBJ$</td>\n<td style=\"width: 159px;\"> x</td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">SELECET ON SYSTEM.LOGMNR_USER$</td>\n<td style=\"width: 159px;\"> x</td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">SELECET ON SYSTEM.LOGMNR_UID$</td>\n<td style=\"width: 159px;\"> x </td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">LOGMINING</td>\n<td style=\"width: 159px;\"> x (12c)</td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">DDL privileges on target objects (when using DDL) </td>\n<td style=\"width: 159px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> x</td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">FLASHBACK ANY TABLE or FLASHBACK ON schema.table </td>\n<td style=\"width: 159px;\"> x (when fetching data through OP) </td>\n<td style=\"width: 161px;\"> x (when using AS OF SCN)</td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">SELECT on source tables </td>\n<td style=\"width: 159px;\"> </td>\n<td style=\"width: 161px;\"> x</td>\n<td style=\"width: 161px;\"> </td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">INSERT, UPDATE, DELETE on target tables</td>\n<td style=\"width: 159px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n<td style=\"width: 161px;\"> x</td>\n</tr>\n<tr>\n<td style=\"width: 159px;\">CREATE TABLE</td>\n<td style=\"width: 159px;\"> </td>\n<td style=\"width: 161px;\"> </td>\n<td 
style=\"width: 161px;\"> x (if target CHECKPOINT table is required)</td>\n</tr>\n</tbody>\n</table>\n<p>When a privilege is missing, it may show error. This article lists the potential errors and how to troubleshoot the problem.</p>\n<p><strong>1.Missing \"create session\"</strong></p>\n<pre>connecting to database. URL {jdbc:oracle:thin:@192.168.56.3:1521:orcl} User {c} Error Msg: {ORA-01045: user C lacks CREATE SESSION privilege; logon denied }</pre>\n<p><strong>2. Have #1 privilege, but missing one or more of following privileges</strong><br>select any transaction<br>select any dictionary <br>execute_catalog_role</p>\n<pre>Missing Privileges to run Logminer. Run following commands : 'GRANT SELECT ANY TRANSACTION TO USERID' 'GRANT SELECT ANY DICTIONARY TO USERID' 'GRANT CREATE SESSION TO USERID' 'GRANT EXECUTE_CATALOG_ROLE TO USERID'</pre>\n<p>When source Oracle DB has database vault turned on, the error will be: </p>\n<pre><em>2034 : Start Failed: SQL Query Execution Error ; ;ErrorCode : 6550;SQLCode : 65000;SQL Message : ORA-06550: line 5, column 13: PLS-00201: identifier 'DBMS_LOGMNR' must be declared ORA-06550: line 2, column 1: PL/SQL: Statement ignore</em></pre>\n<p> </p>\n<p><strong>3. Have #1 and #2 privileges, but missing one or more of following privileges:</strong><br>SELECT ON SYSTEM.LOGMNR_COL$ <br>SELECT ON SYSTEM.LOGMNR_OBJ$ <br>SELECT ON SYSTEM.LOGMNR_USER$ <br>SELECT ON SYSTEM.LOGMNR_UID$ <br><br>(1) missing all:</p>\n<pre> Missing Privileges to run Logminer. Run following commands : 'GRANT SELECT ANY TRANSACTION TO USERID' 'GRANT SELECT ANY DICTIONARY TO USERID' 'GRANT CREATE SESSION TO USERID' 'GRANT EXECUTE_CATALOG_ROLE TO USERID' SELECT permission for {SYSTEM.LOGMNR_COL$,SYSTEM.LOGMNR_OBJ$,SYSTEM.LOGMNR_USER$,SYSTEM.LOGMNR_UID$} is missing, please execute the following commad to grant the same GRANT SELECT ON SYSTEM.LOGMNR_COL$ TO c; GRANT SELECT ON SYSTEM.LOGMNR_OBJ$ TO c; GRANT SELECT ON SYSTEM.LOGMNR_USER$ TO c; GRANT SELECT ON SYSTEM.LOGMNR_UID$ TO c;</pre>\n<p><br>(2) missing last 3</p>\n<pre> Missing Privileges to run Logminer. Run following commands : 'GRANT SELECT ANY TRANSACTION TO USERID' 'GRANT SELECT ANY DICTIONARY TO USERID' 'GRANT CREATE SESSION TO USERID' 'GRANT EXECUTE_CATALOG_ROLE TO USERID' SELECT permission for {SYSTEM.LOGMNR_OBJ$,SYSTEM.LOGMNR_USER$,SYSTEM.LOGMNR_UID$} is missing, please execute the following commad to grant the same GRANT SELECT ON SYSTEM.LOGMNR_OBJ$ TO c; GRANT SELECT ON SYSTEM.LOGMNR_USER$ TO c; GRANT SELECT ON SYSTEM.LOGMNR_UID$ TO c;</pre>\n<p><br>(3) missing last 2</p>\n<pre>Missing Privileges to run Logminer. Run following commands : 'GRANT SELECT ANY TRANSACTION TO USERID' 'GRANT SELECT ANY DICTIONARY TO USERID' 'GRANT CREATE SESSION TO USERID' 'GRANT EXECUTE_CATALOG_ROLE TO USERID' SELECT permission for {SYSTEM.LOGMNR_USER$,SYSTEM.LOGMNR_UID$} is missing, please execute the following commad to grant the same GRANT SELECT ON SYSTEM.LOGMNR_USER$ TO c; GRANT SELECT ON SYSTEM.LOGMNR_UID$ TO c;</pre>\n<p><br>(4) missing last one</p>\n<pre> Missing Privileges to run Logminer. Run following commands : 'GRANT SELECT ANY TRANSACTION TO USERID' 'GRANT SELECT ANY DICTIONARY TO USERID' 'GRANT CREATE SESSION TO USERID' 'GRANT EXECUTE_CATALOG_ROLE TO USERID' SELECT permission for {SYSTEM.LOGMNR_UID$} is missing, please execute the following commad to grant the same GRANT SELECT ON SYSTEM.LOGMNR_UID$ TO c;</pre>\n<p><strong>4. 
Missing LOGMINING (for 12c only)</strong></p>\n<pre> Start Failed: SQL Query Execution Error ;ErrorCode : 1031;SQLCode : 42000;SQL Message : ORA-01031: insufficient privileges ORA-06512: at \"SYS.DBMS_LOGMNR\", line 58 ORA-06512: at line 2 </pre>\n<p><strong>5. Hitting privilege error, although privileges are granted explicitly</strong><br><span class=\"wysiwyg-underline\">Problem</span>: I granted requested privileges (event added DBA role) explicitly, but app start still crashed with privilege error. It is one of above errors, depending on if I grant the privileges through a role or directly.<br><span class=\"wysiwyg-underline\">Cause</span>: When the role is not enabled, it might cause this problem.<br><span class=\"wysiwyg-underline\">Reproduce the issue</span>: SQL&gt; alter user striim_user default role none;<br><span class=\"wysiwyg-underline\">How to troubleshot the problem</span>:<br>SQL&gt; select GRANTEE,GRANTED_ROLE,DEFAULT_ROLE from dba_role_privs where grantee='STRIIM_USER';</p>\n<p>GRANTEE GRANTED_ROLE DEF<br>------------------------------ ------------------------------ ---<br>STRIIM_USER STRIIM_PRIVS <span class=\"wysiwyg-color-red\">NO</span></p>\n<p>DEFAULT_ROLE shows 'NO', while it should be 'YES'.</p>\n<p><br>if the missing role is only for execute_catalog_role, you can login as the user, and check:<br>SQL&gt; select count(*) from session_roles where ROLE='EXECUTE_CATALOG_ROLE';</p>\n<p>COUNT(*)<br>----------<br><span class=\"wysiwyg-color-red\">0</span> (it should return 1)</p>\n<p><span class=\"wysiwyg-underline\">Solutions</span>:<br>(1) Drop and recreate the user and grant the privileges. Then check the role(s) again.</p>\n<p>SQL&gt; select GRANTEE,GRANTED_ROLE,DEFAULT_ROLE from dba_role_privs where grantee='STRIIM_USER';</p>\n<p>GRANTEE GRANTED_ROLE DEF<br>------------------------------ ------------------------------ ---<br>STRIIM_USER STRIIM_PRIVS <span class=\"wysiwyg-color-red\">YES</span></p>\n<p><br>Attached are scripts granting the privileges to an existing user.</p>\n<p>Or</p>\n<p>(2) SQL&gt; </p>\n<p class=\"p2\"><span class=\"s1\">SQL&gt; set role STRIIM_PRIVS;</span></p>\n<p class=\"p2\"><span class=\"s1\">Role set.</span></p>\n<p class=\"p2\"><span class=\"s1\">SQL&gt; select count(*) from session_roles where ROLE='EXECUTE_CATALOG_ROLE';</span></p>\n<p class=\"p2\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>COUNT(*)</span></p>\n<p class=\"p2\"><span class=\"s1\">----------</span></p>\n<p class=\"p2\">1</p>\n<p class=\"p1\"> </p>\n<p><strong>6. 
OracleReader CDC does not capture data, or errors out, against a CDB</strong></p>\n<p><strong><span class=\"wysiwyg-underline\">Problem</span>: </strong>For pre-3.10 releases: OracleReader CDC would not capture any data.</p>\n<p>For 3.10 and above: OracleReader fails with an error like the following upon encountering the first DML:</p>\n<p>java.lang.RuntimeException: Problem creating type: admin.src_cdb_PDB1_RAJ_TEST_Type</p>\n<p><span class=\"wysiwyg-underline\"><strong>Cause &amp; Solution: </strong></span>This happens when the following has not been granted to the common user (c##striim in this case) that OracleReader uses to log in to the CDB:</p>\n<pre class=\"p1\">alter user c##striim set container_data = (cdb$root, &lt;PDB name&gt;) container=current;</pre>\n<p><br><strong>NOTE: </strong>If there are still privilege issues, please open a ticket with Striim support and include the following files:</p>\n<p>- the spooled file from the grant script</p>\n<p>- the spooled file from the attached <span class=\"s1\">oracle_privs_troubleshoot.sql</span></p>\n<p> </p>"} {"page_content": "<p class=\"p1\"><strong><span class=\"wysiwyg-underline\">Prerequisites:</span></strong></p>\n<ul>\n<li class=\"p2\">Install Kerberos on all nodes of the cluster and on any machine that accesses the Kerberos cluster. Steps for installing in Cloudera: <a href=\"https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_sg_intro_kerb.html\">https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_sg_intro_kerb.html</a>\n</li>\n<li class=\"p2\">Make sure that the Kudu cluster has Kerberos authentication enabled. If not, enable Kerberos authentication by following these steps: <a href=\"https://www.cloudera.com/documentation/enterprise/latest/topics/kudu_security.html\">https://www.cloudera.com/documentation/enterprise/latest/topics/kudu_security.html</a></li>\n</ul>\n<p> </p>\n<p class=\"p1\"><span class=\"wysiwyg-underline\"><strong>Configuring Striim KuduWriter:</strong></span></p>\n<p class=\"p2\">Striim KuduWriter acts as a client to the Kudu cluster. It is recommended to run the Striim node on a host that already runs a Kudu worker node configured as part of the Kerberos cluster, so that it can use the default principal and authentication credentials of that worker node. If that is not possible, the Striim host must be configured for Kerberos in the same way as the Kudu cluster. An example krb5.conf file is as follows:</p>\n<table style=\"width: 350px;\">\n<tbody>\n<tr>\n<td style=\"width: 347px;\">\n<p class=\"wysiwyg-text-align-left wysiwyg-indent3\">[libdefaults]<br>default_realm = HADOOPSECURITY.LOCAL<br>dns_lookup_kdc = false<br>dns_lookup_realm = false<br>ticket_lifetime = 86400<br>renew_lifetime = 604800<br>forwardable = true<br>default_tgs_enctypes = aes256-cts<br>default_tkt_enctypes = aes256-cts<br>permitted_enctypes = aes256-cts<br>udp_preference_limit = 1<br>kdc_timeout = 3000</p>\n<p class=\"wysiwyg-text-align-left wysiwyg-indent3\">[realms]<br>HADOOPSECURITY.LOCAL = {<br>kdc = 172.31.26.88<br>admin_server = 172.31.26.88<br>}<br>[domain_realm]</p>\n</td>\n</tr>\n</tbody>\n</table>\n<p class=\"p1\"> </p>\n<p class=\"p1\">KuduWriter needs two authentication credentials: a <strong>principal</strong> and a <strong>keytab</strong> file for that principal, which have to be created for the node where Kudu's master is running. 
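<p>If a suitable principal and keytab already exist on that host, they can be verified before configuring KuduWriter; the keytab path, principal, and realm below are only placeholders and should be replaced with your own values:</p>
<pre>$ klist -e -k /etc/security/kudu.keytab<br>$ kinit -kt /etc/security/kudu.keytab kudu/ip-172-31-31-54.us-west-2.compute.internal@HADOOPSECURITY.LOCAL<br>$ klist</pre>
<p>If kinit succeeds and klist shows a valid ticket for the principal, the same principal and keytab path can be reused in the KuduWriter authentication policy; if not, the credentials still need to be created.</p>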
Follow the steps given below to create them.</p>\n<ol>\n<li class=\"p1\"><strong>Creating Principal</strong></li>\n</ol>\n<p class=\"p1 wysiwyg-indent2\">A principal has to be created for the node where the Striim server is running. It has to be created in the default realm mentioned in the krb5.conf file.</p>\n<table style=\"height: 109px; width: 295px;\">\n<tbody>\n<tr style=\"height: 93px;\">\n<td class=\"wysiwyg-indent2\" style=\"width: 273px; height: 93px;\">$ kadmin<br>kadmin: addprinc [principal name]/[hostname].[realm]</td>\n</tr>\n</tbody>\n</table>\n<p class=\"wysiwyg-text-align-left wysiwyg-indent2\">2.<strong> Creating Keytab<br></strong></p>\n<p class=\"wysiwyg-text-align-left wysiwyg-indent2\">A keytab has to be created for the principal created above.</p>\n<table>\n<tbody>\n<tr>\n<td class=\"wysiwyg-indent2\">\n<p>$ kadmin<br>kadmin: ktadd -k /etc/security/[keytab name].keytab [principal name]/[hostname].[realm]</p>\n</td>\n</tr>\n</tbody>\n</table>\n<p><br><span class=\"wysiwyg-color-blue\"><em>Note: check whether the keytab has been created properly for the principal using klist -e -k [keytab name].keytab</em></span></p>\n<p><span class=\"wysiwyg-color-black\">These values should be given in the Authentication Policy of KuduWriter. A sample TQL is given below:</span></p>\n<p>CREATE OR REPLACE TARGET WriteintoKudu1 USING KuduWriter (<br>batchpolicy: 'EventCount:1,Interval:0',<br>pkupdatehandlingmode: 'DELETEANDINSERT',<br>tables: 'impala::default.KUDU_ALLDATATYPES_Merchant',<br>authenticationpolicy: 'kerberos,Principal:kudu/ip-172-31-31-54.us-west-2.compute.internal@HADOOPSECURITY.LOCAL,<br>KeyTabPath:/run/cloudera-scm-agent/process/117-kudu-KUDU_MASTER/kudu.keytab',<br>UpdateAsUpsert: 'false',<br>adapterName: 'KuduWriter',<br>kuduclientconfig: 'master.addresses-&gt;ip-172-31-31-54.us-west-2.compute.internal:7051;socketreadtimeout-&gt;3000;operationtimeout-&gt;240',<br>CheckPointTable: 'CHKPOINT'<br>)<br>INPUT FROM TypedAccessLogStream;</p>\n<p> </p>\n<p><span class=\"wysiwyg-underline\"><strong>Troubleshooting:</strong></span></p>\n<p><br>Some of the exceptions which may arise while using Kerberos authentication:</p>\n<ol>\n<li>caused by <strong>org.apache.kudu.client.NonRecoverableException: server requires authentication, but client does not have Kerberos credentials (tgt). Authentication tokens were not used because no token is available</strong>. This occurs when authentication credentials are not provided for a node that has Kerberos authentication enabled, or when the credentials do not point to the specified master address. <span class=\"wysiwyg-underline\">Solution:</span> If the principal and keytab are not available, create them. If they are available, check whether they point to the correct host.</li>\n<li>caused by <strong>org.apache.kudu.client.NonRecoverableException: Couldn't resolve this master's address</strong>. This occurs when the master node is not reachable at the given master address and authentication policy. <span class=\"wysiwyg-underline\">Solution:</span> Create the credentials for the right node and try again.</li>\n</ol>"} {"page_content": "<p>For an app with KafkaWriter (KW) in recovery mode, the Kafka checkpoint information is stored in the Striim MDR (by default a Derby database) as well as inside each Kafka message. 
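<p>The checkpoint written into the Kafka messages can only be read back if the data topic still contains what Striim wrote, so when diagnosing checkpoint problems it can help to first look at the topic's earliest and latest offsets with the standard Kafka tooling. The broker address and topic name below are placeholders, and on newer Kafka versions the option is --bootstrap-server rather than --broker-list:</p>
<pre>$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test1 --time -2<br>$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test1 --time -1</pre>
<p>If the latest offset shows the topic is empty even though the app has written data before, that matches the first error described below.</p>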
If these two checkpoints do not match, it will cause an error.</p>\n<p>Several errors related to KW checkpoints are listed below.</p>\n<p><span class=\"wysiwyg-font-size-large\">1. Topic is empty while a checkpoint exists in the MDR</span></p>\n<pre>Start failed! Exception(s) leading to CRASH State: { \"componentName\" : \"admin.ora_mask_kafka\" , \"componentType\" : \"TARGET\" , \"exception\" : \"com.webaction.runtime.components.StriimComponentException\" , \"message\" : \"Could not find the data with Kafka offset - 0 in topic (test1-0). Last offset of data topic is -1. Data in the topic seems to be lost.\" , \"relatedEvents\" : null }</pre>\n<p>This can be reproduced in the following way:<br>- start the app in recovery mode (e.g., Oracle CDC -&gt; Kafka)<br>- process at least 1 event from source to target<br>- stop the app<br>- drop the Kafka topic (make sure it is allowed by Kafka), and recreate the topic<br>- start the app, and it will hit the above error.</p>\n<p>The cause is that after recreating the topic, the checkpoint in the topic is missing while it still exists in the MDR. Therefore, the solutions are:<br>(1) recreate the app<br>- export the app<br>- drop the app<br>- re-import the app<br>please note that the checkpoint for the source will also be lost this way.<br>(2) manually remove the checkpoint from the MDR, then restart the Striim server.<br>If this approach is preferred, please contact Striim Support by opening a ticket.</p>\n<p><span class=\"wysiwyg-font-size-large\">2. When the topic is changed by a producer other than Striim</span></p>\n<pre>Start failed! Exception(s) leading to CRASH State: { \"componentName\" : \"admin.ora_mask_kafka\" , \"componentType\" : \"TARGET\" , \"exception\" : \"com.webaction.runtime.components.StriimComponentException\" , \"message\" : \"Problem while fetching latest position of (test1,0). This usually happens if the partition log file has been corrupted or the existing data in the partition was generated through a different application.java.lang.RuntimeException: com.webaction.common.exc.InvalidDataException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'hello': was expecting ('true', 'false' or 'null')\\n at [Source: ; line: 1, column: 11]\" , \"relatedEvents\" : null }</pre>\n<p>This can be reproduced in the following way:<br>- start the app in recovery mode<br>- process at least 1 event from source to target<br>- use a command-line producer to enter 1-2 messages<br>- enter an event in the source, which will go to the topic fine<br>- stop the app<br>- use the command-line producer to enter 1-2 messages again<br>- start the app, and it will hit the above error.</p>\n<p>As changes from other producers are not supported, the app may need to be recreated, and the topic can also be recreated or a new topic specified.</p>"} {"page_content": "<h3><strong>Question:</strong></h3>\n<p>I have an OracleReader with recovery enabled. When processing a batch job with a large transaction, it crashed with the error: heap usage threshold of 90.0. However, on restart it worked. I understand that pending transactions are kept in memory, but why did the restart work? </p>\n<h3><strong>Answer:</strong></h3>\n<p>First, please consider increasing the <span class=\"s1\">MEM_MAX setting in the ./conf/startUp.properties file (default 4GB), if there is enough resource.</span></p>\n<p>In addition, this specific issue is related to GC (garbage collection), which has a default threshold of 55%. 
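<p>To observe heap usage and GC activity while a large transaction is being processed, the standard JDK tools can be pointed at the Striim server process (the pid and sampling interval below are placeholders):</p>
<pre>$ jstat -gcutil &lt;striim_server_pid&gt; 5000</pre>
<p>The O column shows old-generation occupancy as a percentage and gives an idea of how much heap is being held between garbage collections.</p>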
When Striim Window/Cache are not in active usage, you may consider following change in ./bin/server.sh</p>\n<p>from:</p>\n<pre>EVICTTHRESHOLD=55\n</pre>\n<p>to:</p>\n<pre>EVICTTHRESHOLD=25\n</pre>\n<p>Then restart the Striim server.<br> The change will trigger the GC with lower threshold, to make more memory available for large transactions.</p>"} {"page_content": "<p>For Kafka with Kerberos Authentication use the following sasl.properties contents</p>\n<p>security.protocol = SASL_PLAINTEXT <br>sasl.mechanism = GSSAPI</p>\n<p>Here is an example:</p>\n<pre>CREATE OR REPLACE PROPERTYSET psKafka ( <br>securityconfig:'security.protocol:SASL_PLAINTEXT,sasl.kerberos.service.name:kafka,sasl.mechanism:GSSAPI',<br>zk.address:'server1:2181,server2:2181,server3:2181', <br>bootstrap.brokers:'server1:9092,server2:9092,server3:9092,server4:9092' <br>);</pre>"} {"page_content": "<h3><strong>Question:</strong></h3>\n<p>When sending Oracle transactions to a non-DB target (such as Kafka), certain column values (like CLOB) may not be wanted. How can I exclude the those column values?</p>\n<h3><strong>Answer:</strong></h3>\n<p>A non-key column may be excluded at OracleReader with TABLE setting like:<br><span class=\"wysiwyg-color-blue\">Table: 'SCOTT.EMP(-SALARY)'</span></p>\n<p>Following example shows how to exclude 2 CLOB column values from a table.</p>\n<p>1. create table <br><span class=\"wysiwyg-color-blue\">create table s1 (a number, b varchar2(10), c clob, d number, e clob);</span></p>\n<p>2. OracleReader Table parameter:<br><span class=\"wysiwyg-color-blue\">Tables: 'FZHANG.S1(-C,-E)',</span></p>\n<p>Full tql is attached to this article.</p>\n<p>3. test results.<br>(1) <span class=\"wysiwyg-color-blue\">insert into s1 values (1,1,1,1,1);</span><br><span class=\"wysiwyg-color-blue\"> commit;</span><br>it captures two events: first insert and second update for CLOB that is set to null now.</p>\n<p><textarea cols=\"80\" rows=\"30\">ora1_out: WAEvent{\n data: [\"1\",\"1\",null,\"1\",null]\n metadata: {\"RbaSqn\":\"94\",\"AuditSessionId\":\"1330164\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"2327777\",\"SQLRedoLength\":\"93\",\"BytesProcessed\":\"561\",\"ParentTxnID\":\"5.3.1660\",\"SessionInfo\":\"UNKNOWN\",\"RecordSetID\":\" 0x00005e.00001236.0154 \",\"COMMITSCN\":2327787,\"SEQUENCE\":\"1\",\"Rollback\":\"0\",\"STARTSCN\":\"2327777\",\"SegmentName\":\"S1\",\"OperationName\":\"INSERT\",\"TimeStamp\":1533532845000,\"TxnUserID\":\"FZHANG\",\"RbaBlk\":\"4662\",\"SegmentType\":\"TABLE\",\"TableName\":\"FZHANG.S1\",\"TxnID\":\"5.3.1660\",\"Serial\":\"439\",\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1533532850000,\"OperationType\":\"DML\",\"ROWID\":\"AAAAAAAAAAAAAAAAAA\",\"TransactionName\":\"\",\"SCN\":\"232777700000264586481163308360000\",\"Session\":\"10\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"Cw==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: {\"uuidstring\":\"01e89b32-56fa-b0a1-9706-227eec4af698\"}\n};\nora1_out: WAEvent{\n data: [\"1\",\"1\",null,\"1\",null]\n metadata: {\"RbaSqn\":\"94\",\"AuditSessionId\":\"1330164\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"2327778\",\"SQLRedoLength\":\"89\",\"BytesProcessed\":\"602\",\"ParentTxnID\":\"5.3.1660\",\"SessionInfo\":\"UNKNOWN\",\"RecordSetID\":\" 0x00005e.0000123a.0124 
\",\"COMMITSCN\":2327787,\"SEQUENCE\":\"2\",\"Rollback\":\"0\",\"STARTSCN\":\"2327777\",\"SegmentName\":\"S1\",\"OperationName\":\"UPDATE\",\"TimeStamp\":1533532845000,\"TxnUserID\":\"FZHANG\",\"RbaBlk\":\"4666\",\"SegmentType\":\"TABLE\",\"TableName\":\"FZHANG.S1\",\"TxnID\":\"5.3.1660\",\"Serial\":\"439\",\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1533532850000,\"OperationType\":\"DML\",\"ROWID\":\"AAAWNGAAEAAAANFAAA\",\"TransactionName\":\"\",\"SCN\":\"232777800000264586481165929320000\",\"Session\":\"10\"}\n userdata: null\n before: [\"1\",\"1\",null,\"1\",null]\n dataPresenceBitMap: \"Cw==\"\n beforePresenceBitMap: \"Cw==\"\n typeUUID: {\"uuidstring\":\"01e89b32-56fa-b0a1-9706-227eec4af698\"}\n};\n</textarea></p>\n<p>(2) <span class=\"wysiwyg-color-blue\">update s1 set b=2,c=2,d=2,e=3;</span><br><span class=\"wysiwyg-color-blue\"> commit;</span><br>it captures 2 updates: first one changed non-clob columns, and second one changed clob values (as theor values are excluded, it shows NULL).</p>\n<p><textarea cols=\"80\" rows=\"30\">ora1_out: WAEvent{\n data: [\"1\",\"2\",null,\"2\",null]\n metadata: {\"RbaSqn\":\"94\",\"AuditSessionId\":\"1330164\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"2327848\",\"SQLRedoLength\":\"89\",\"BytesProcessed\":\"602\",\"ParentTxnID\":\"1.5.1259\",\"SessionInfo\":\"UNKNOWN\",\"RecordSetID\":\" 0x00005e.0000124c.0010 \",\"COMMITSCN\":2327850,\"SEQUENCE\":\"1\",\"Rollback\":\"0\",\"STARTSCN\":\"2327848\",\"SegmentName\":\"S1\",\"OperationName\":\"UPDATE\",\"TimeStamp\":1533533024000,\"TxnUserID\":\"FZHANG\",\"RbaBlk\":\"4684\",\"SegmentType\":\"TABLE\",\"TableName\":\"FZHANG.S1\",\"TxnID\":\"1.5.1259\",\"Serial\":\"439\",\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1533533025000,\"OperationType\":\"DML\",\"ROWID\":\"AAAAAAAAAAAAAAAAAA\",\"TransactionName\":\"\",\"SCN\":\"232784800000264586481177723040000\",\"Session\":\"10\"}\n userdata: null\n before: [\"1\",\"1\",null,\"1\",null]\n dataPresenceBitMap: \"Cw==\"\n beforePresenceBitMap: \"Cw==\"\n typeUUID: {\"uuidstring\":\"01e89b32-56fa-b0a1-9706-227eec4af698\"}\n};\nora1_out: WAEvent{\n data: [\"1\",\"2\",null,\"2\",null]\n metadata: {\"RbaSqn\":\"94\",\"AuditSessionId\":\"1330164\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"2327848\",\"SQLRedoLength\":\"89\",\"BytesProcessed\":\"602\",\"ParentTxnID\":\"1.5.1259\",\"SessionInfo\":\"UNKNOWN\",\"RecordSetID\":\" 0x00005e.0000124b.0010 \",\"COMMITSCN\":2327850,\"SEQUENCE\":\"2\",\"Rollback\":\"0\",\"STARTSCN\":\"2327848\",\"SegmentName\":\"S1\",\"OperationName\":\"UPDATE\",\"TimeStamp\":1533533024000,\"TxnUserID\":\"FZHANG\",\"RbaBlk\":\"4683\",\"SegmentType\":\"TABLE\",\"TableName\":\"FZHANG.S1\",\"TxnID\":\"1.5.1259\",\"Serial\":\"439\",\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1533533025000,\"OperationType\":\"DML\",\"ROWID\":\"AAAWNGAAEAAAANFAAA\",\"TransactionName\":\"\",\"SCN\":\"232784800000264586481177067680000\",\"Session\":\"10\"}\n userdata: null\n before: [\"1\",\"2\",null,\"2\",null]\n dataPresenceBitMap: \"Cw==\"\n beforePresenceBitMap: \"Cw==\"\n typeUUID: {\"uuidstring\":\"01e89b32-56fa-b0a1-9706-227eec4af698\"}\n};\n</textarea></p>\n<p>(3) <span class=\"wysiwyg-color-blue\">delete from s1;</span><br><span class=\"wysiwyg-color-blue\"> commit;</span><br>delete event does not include CLOB columns per se, so it is same as not using column exclusion here.</p>\n<p><textarea cols=\"80\" rows=\"10\">ora1_out: WAEvent{\n data: [\"1\",\"2\",null,\"2\",null]\n metadata: 
{\"RbaSqn\":\"94\",\"AuditSessionId\":\"1330164\",\"TableSpace\":\"USERS\",\"CURRENTSCN\":\"2327909\",\"SQLRedoLength\":\"69\",\"BytesProcessed\":\"581\",\"ParentTxnID\":\"4.3.1277\",\"SessionInfo\":\"UNKNOWN\",\"RecordSetID\":\" 0x00005e.00001267.0010 \",\"COMMITSCN\":2327910,\"SEQUENCE\":\"1\",\"Rollback\":\"0\",\"STARTSCN\":\"2327909\",\"SegmentName\":\"S1\",\"OperationName\":\"DELETE\",\"TimeStamp\":1533533186000,\"TxnUserID\":\"FZHANG\",\"RbaBlk\":\"4711\",\"SegmentType\":\"TABLE\",\"TableName\":\"FZHANG.S1\",\"TxnID\":\"4.3.1277\",\"Serial\":\"439\",\"ThreadID\":\"1\",\"COMMIT_TIMESTAMP\":1533533186000,\"OperationType\":\"DML\",\"ROWID\":\"AAAWNGAAEAAAANFAAA\",\"TransactionName\":\"\",\"SCN\":\"232790900000264586481195417760000\",\"Session\":\"10\"}\n userdata: null\n before: null\n dataPresenceBitMap: \"Cw==\"\n beforePresenceBitMap: \"AA==\"\n typeUUID: {\"uuidstring\":\"01e89b32-56fa-b0a1-9706-227eec4af698\"}\n};\n</textarea></p>\n<p> </p>"} {"page_content": "<p>Kafka topic message has limit of 1M by default.<br>With default setting, when writing a large message (e.g., from Oracle CLOB column), it will hitting following error.</p>\n<p>e.g., even size = 1.6M</p>\n<p><strong>Caused by: com.webaction.target.kafka.RecordBatchTooLargeException: Size of the incoming event 1600105 is greater than the batch.size 999900. Please increase the batch.siz</strong><br><strong>e and max.message.bytes (topic configration) respectively.</strong></p>\n<p>To handle this, Kafka parameters need to be changed at tow places:</p>\n<p>1. at Striim Kafkawriter (as a producer), append following to \"Kafka Config\":<br>batch.size=2000000;max.request.size=2000000;</p>\n<p><br>2. at Kafka server (broker), add or modify server.properties file:<br>message.max.bytes=2000000</p>\n<p>With #1 but without #2, following error will be encountered.</p>\n<p><strong>Problem while flushing data to kafka topic \"test\", partition id \"0\".java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.</strong></p>"} {"page_content": "<p>By default, Striim server will use the timezone from host server.<br>Sometimes because source data time zone and Striim server time zone differ, it makes the reading from console and logs difficult. It will be helpful to setup Striim timezone same as source one.</p>\n<p> </p>\n<p>Striim timezone may be changed at following levels: </p>\n<p>1. changing the host time zone</p>\n<p>2. set Striim server time zone<br>add a line in ./bin/server.sh as a JVM property<br>e.g.,<br>-Duser.timezone=\"America/New_York\" \\</p>\n<p>For windows server ./bin/server.bat as a JVM property</p>\n<p>-Duser.timezone=\"America/New_York\" ^</p>\n<p>then restart the Striim server. The timestamps in the logs will be from the new time zone.</p>\n<p><strong>Note: If Striim agent is in use, add the above mentioned JVM property in agent.sh for unix operating system and in agent.bat for windows operating system. It is recommended to have same timezone setting as server. This will ensure that date values are not added with offset when source and target databases are in different timezone.</strong></p>\n<p>3. 
show a different time zone in the console at the session level</p>\n<p>Example: the source Oracle DB is at EDT, and the Striim server is at PDT.<br>'mon &lt;source&gt;' may show the following mismatch, although there is no lag:</p>\n<p>│ Oracle Reader Last Timestamp │ S192_168_59_1:2018-08-03T14:17:45.000-07:00 │<br>│ Timestamp │ 2018/08/03-11:17:53</p>\n<p>This may be changed by setting the environment variable TZ:<br>export TZ=America/New_York<br>Then log in to the console from the same session. 'mon &lt;source&gt;' now shows the matching timestamp.</p>\n<p>│ Oracle Reader Last Timestamp │ S192_168_59_1:2018-08-03T14:17:45.000-07:00 │<br>│ Timestamp │ 2018/08/03-14:19:43</p>\n<p>4. If the Striim agent is in use, please make a similar change in the agent.sh|agent.bat file. It is recommended to have the same timezone setting as the server.</p>\n<p>5. In general, setting -Duser.timezone=\"UTC\" \\ will prevent the timezone adjustment.</p>"} {"page_content": "<p><strong><span class=\"wysiwyg-font-size-large\">Question:</span></strong></p>\n<p>I have lots of applications, and I found that the Elasticsearch (ES) data directory grows fast. At one point it filled up my disk space (10GB limit). I do not have any WactionStore object. Is there a way to limit its growth? </p>\n<p> </p>\n<p><strong><span class=\"wysiwyg-font-size-large\">Answer:</span></strong></p>\n<p>The ES directory is used mainly to store WactionStore data and the monitoring index. The latter stores status information that may grow gradually. ES has a time to live (TTL) of 24 hours by default. To change this TTL setting:</p>\n<p>1. In version 4.10.1 and later, the default monitoring data gathering interval is increased from 5 sec to 15 sec. The interval is configured by the following setting in startUp.properties:</p>\n<p>e.g.,</p>\n<p class=\"p1\"><span class=\"s1\">MonitorDataCollectorInterval=20000</span></p>\n<p class=\"p1\"><span class=\"s1\">(the unit is milliseconds).</span></p>\n<p>2. For version 3.9 and later, the parameter may be set in the startUp.properties file.</p>\n<p>MonitorPersistenceRetention=4</p>\n<p>Here 4 is in hours. </p>\n<p>3. For older versions, the following parameter may be added to the server.sh file.</p>\n<p>e.g.,</p>\n<pre> -Dcom.webaction.config.monitor-db-max=14400000 \\<br><br>\n</pre>\n<p>The value is in milliseconds, and the above example is for 4 hours (4*3600*1000).</p>\n<p>4. The Striim server needs to be restarted for the change to be effective. The ./elasticsearch/data/ directory can be removed before starting the server.</p>\n<p> </p>\n<p>Please note that the setting keeps the data for the specified time; it does not mean that data older than the specified time will be purged immediately.</p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\">Problem:</span></p>\n<p>SQL Server DatabaseWriter hits an error about the ddl column type in the checkpoint table.</p>\n<div class=\"code panel\">\n<div class=\"codeContent panelContent\">\n<pre class=\"code-java\">2017-10-30 20:24:47,003 @S16_202_12_75 -ERROR qtp1854551986-106 com.webaction.web.RMIWebSocket.handleMessageException (RMIWebSocket.java:251) Problem executing call: com.webaction.exception.Warning: java.util.concurrent.ExecutionException: com.webaction.common.exc.AdapterException: Initialization exception in Target Adapter Tgt_Sql. Cause: Error in initialising DatabaseWriter {2748 : Incorrect checkpoint table structure Checkpoint table {tpcc.dbo.CHKPOINT} does not contain proper column type <span class=\"code-keyword\">for</span> {ddl}. 
\nPlease specify the proper checkpoint table value using <span class=\"code-quote\">'CheckPointTable'</span> property or Please use the below SQL <span class=\"code-keyword\">for</span> creating a <span class=\"code-keyword\">new</span> checkpoint table\nCREATE TABLE CHKPOINT (id VARCHAR(100) PRIMARY KEY, sourceposition VARBINARY(MAX), pendingddl BIT, ddl VARCHAR(MAX));} : \n</pre>\n</div>\n</div>\n<p>the checkpoint table DDL does exist.</p>\n<p><span class=\"wysiwyg-font-size-large\">Solution:</span></p>\n<p>This is bug (DEV-12493), and fix is available in Striim version 3.8 and later.</p>\n<p> </p>"} {"page_content": "<p>If you encounter an issue in Striim, collect the below list of information and update in the Zendesk support ticket.</p>\n<p>Login as the owner of striim process or switch user before gathering following details</p>\n<pre>$ su - striim</pre>\n<p> 1. Logs (./logs/)<br> striim.server.log<br> striim.command.log<br> striim-node.log<br> <br> 2. OS Level information<br> (1) For hanging/slow or related issue<br> jstack &lt;pid&gt; &gt;&gt; jstack.log<br> df -k<br> du -sk &lt;striim_home&gt;/* |sort -n<br> top<br> ps -eLo pid,lwp,nlwp,ruser,pcpu,stime,etime,vsz,wchan|grep &lt;pid&gt;</p>\n<p> (2) For memory leak <br> jmap -dump:format=b,file=striim_dump.bin &lt;pid&gt;</p>\n<p> 3. Striim<br> (1) TQL file</p>\n<p> (2) Console Output:<br> mon;<br> mon &lt;app&gt;;<br> mon &lt;src&gt;;<br> mon &lt;related_component_in_the_middle&gt;;<br> mon &lt;target&gt;;<br> describe &lt;app&gt;;<br> describe &lt;src&gt;;<br><br> (3) Any error message from app<br> Command and its output when hitting error<br> Striim Server state - Hot threads, application throughput, last event processed by the app.</p>\n<p> 4. Logs from other party software, if related<br> Example:<br> (1) kafka<br> (2) kudu<br> (3) oracle alert<br> (4) oracle AWR report<br> (5) other related logs</p>\n<p> </p>"} {"page_content": "<p>Follow the below steps to calculate Oracle Logminer's Read Rate against your current database,</p>\n<p><strong><span class=\"wysiwyg-color-black\">I. Dry run test:</span></strong></p>\n<p> This will test the read throughput (without capturing any data) on archived logs.</p>\n<ol>\n<li>Go to your SQLPlus</li>\n<li>Issue - SELECT NAME, FIRST_TIME, FIRST_CHANGE#, NEXT_CHANGE# FROM V$ARCHIVED_LOG;</li>\n<li>Above query will list the archive logs. 
Please choose 5 archive log files (older by two days), which are next to each other</li>\n<li>Take the first log file and execute the following command:<br> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -<br> LOGFILENAME =&gt; '&lt;first_log_file&gt;', -<br> OPTIONS =&gt; DBMS_LOGMNR.NEW);</li>\n<li>For the remaining 4 archive log files, add them one by one using the following command:<br> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -<br> LOGFILENAME =&gt; '&lt;next_log_file&gt;', -<br> OPTIONS =&gt; DBMS_LOGMNR.ADDFILE);</li>\n<li>Please use the FIRST_CHANGE# value (from first log file) as STARTSCN and NEXT_CHANGE# (last log file) values as ENDSCN (from step 2) in the following command:<br> EXECUTE DBMS_LOGMNR.START_LOGMNR<br> (STARTSCN =&gt; &lt;first_change#&gt;, ENDSCN =&gt; &lt;next_change#&gt;,<br> OPTIONS =&gt; DBMS_LOGMNR.SKIP_CORRUPTION+ DBMS_LOGMNR.NO_SQL_DELIMITER+DBMS_LOGMNR.NO_ROWID_IN_STMT+ DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);</li>\n<li>Enable the time in your SQLPlus and note down</li>\n<li>Execute the following query<br>set arraysize 1000<br>set timing on<br>SELECT * FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'TABLENOTEXIT';</li>\n</ol>\n<p>Above query will force the LogMiner to read all the redos from these 5 archive log files. At the end, it will print 'no rows found'. Note down that time. Now calculate the query execution time duration.</p>\n<p>Let say 2GB is your each log file size and the query(step 8) took 10 mins to run then the Logminer Read Rate is,</p>\n<p> (5*2)*(60/10) = 60GB/Hr.</p>\n<p> 9. At the end of testing, stop logminer from above session:</p>\n<p> EXECUTE DBMS_LOGMNR.END_LOGMNR;</p>\n<p> </p>\n<p><strong>II. Test on tables:</strong></p>\n<p> This test will get the throughput of capturing from one a more tables from archived logs.</p>\n<p>1-6: same as above dry run test.</p>\n<p>7. Create a file called logminer_select.sql with all the following text</p>\n<pre>col SCN format 9999999999999999<br>col START_SCN format 9999999999999999<br>col COMMIT_SCN format 9999999999999999<br>set linesize 120<br>set pagesize 50000<br>set arraysize 1000<br>set timing on<br>set termout off<br><br>alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';<br><br>SELECT thread#, scn, start_scn, commit_scn, timestamp, commit_timestamp, (xidusn || '.' || xidslt || '.' || xidsqn ) as xid, operation_code, status, SEG_TYPE_NAME ,info,seg_owner, table_name, SSN, username, sql_redo ,row_id, csf,ROLLBACK, TABLE_SPACE, SESSION_INFO, RS_ID, RBASQN, RBABLK, SEQUENCE#, TX_NAME, SEG_NAME, SEG_TYPE_NAME, (PXIDUSN || '.' || PXIDSLT || '.' || PXIDSQN ) as pid, AUDIT_SESSIONID, SESSION#, SERIAL# FROM v$logmnr_contents WHERE (((SEG_OWNER like 'SOE' and TABLE_NAME like 'LOGON') ) OR (operation IN ('START', 'COMMIT','ROLLBACK'))) AND OPERATION_CODE != 5;</pre>\n<p>In this example, owner=SOE and table_name=LOGON, which may be modified for your table(s).</p>\n<p>8. Run Logminer script</p>\n<p>from same sqlplus session of steps 4-6:</p>\n<p>sqlplus&gt;spool logminer_result.txt</p>\n<p>sqlplus&gt;@logminer_select.sql</p>\n<p>sqlplus&gt;spool off</p>\n<p>9. 
Calculate the rate in similar way as Dry-run test.</p>\n<p> </p>\n<p>======================================================================</p>\n<p>Above obtained rate may be compared with Oracle Redo generating rate, which may be queries from following SQLs:</p>\n<pre>SELECT TO_CHAR(FIRST_TIME, 'MM-DD') MM_DD,<br>\nTO_CHAR(FIRST_TIME, 'HH24') HR_24,\n<br>trunc(SUM(BLOCKS*BLOCK_SIZE/1024/1024/1024)) LOG_GB\n<br>FROM V$ARCHIVED_LOG\n<br>WHERE STANDBY_DEST ='NO'\n<br>GROUP BY TO_CHAR(FIRST_TIME, 'MM-DD'), TO_CHAR(FIRST_TIME, 'HH24')\n<br>ORDER BY 1, 2;</pre>\n<pre>select trunc(COMPLETION_TIME,'DD') Day, \n<br>round(sum(BLOCKS*BLOCK_SIZE)/1024/1024/1024) GB,\n<br>count(*) Archives_Generated \n<br>from v$archived_log \n<br>WHERE STANDBY_DEST ='NO'\n<br>group by trunc(COMPLETION_TIME,'DD')\n<br>order by 1;</pre>\n<p> </p>"} {"page_content": "<p>KafkaConfig - Can contain 0.. n kafkaproducer and KafkaConsumer properties.</p>\n<p><span class=\"wysiwyg-underline\">Producer configuration properties:</span> (refer to respective kafka documentation version \"3.2 Producer Configs\" section<br>Eg : KafkaConfig : “property=value;property=value”</p>\n<p>“retry.backoff.ms” is the wait time between Kafka producer tries to fetch the topic metadata in case of broker failures. The default value is 100 ms, which is very less and often leads to send failure since the Leader election might take some. So the user is recommended to increase the wait time to 10000 ms and above.<br>“key.deserializer” - this property is always set to “org.apache.kafka.common.serialization.ByteArrayDeserializer\" and cannot be overridden.<br>\"Value.deserializer\" - this property is always set to “org.apache.kafka.common.serialization.ByteArrayDeserializer\" and cannot be overridden.<br>“partitioner.class” - The user can specify the user defined class for partitioning logic (by implementing com.webaction.kafka.PartitionerIntf interface)<br>“Acks” - supported in both sync (accepted values 1 and all only ) and async (0,1, all is accepted) mode.<br>“Retries” - In Sync mode (KafkaWriter will internally try to retry and will not be set for KafkaProducer API), async mode will be set to the KafkaProducer API<br>“Batch.size” - In sync mode (KafkaWriter will try to batch records into a single kafka message not more than batch size). In async mode it will set to the KafkaProducer API.<br>“Linger.ms” - In sync mode (the maximum wait for batch to expire), in async mode its set to the KafkaProducer API.</p>\n<p><span class=\"wysiwyg-color-blue\"><em>NOTE : Other than the above properties all other properties will work as per its specified in KafkaProducer Config doc. 
<p> </p>"} {"page_content": "<p>KafkaConfig - Can contain 0..n KafkaProducer and KafkaConsumer properties.</p>\n<p><span class=\"wysiwyg-underline\">Producer configuration properties:</span> (refer to the \"3.2 Producer Configs\" section of the documentation for your Kafka version)<br>E.g.: KafkaConfig : “property=value;property=value”, such as “retry.backoff.ms=10000;acks=1”</p>\n<p>“retry.backoff.ms” is the wait time before the Kafka producer retries fetching the topic metadata in case of broker failures. The default value is 100 ms, which is very low and often leads to send failures, since leader election might take some time. It is therefore recommended to increase the wait time to 10000 ms or more.<br>“key.deserializer” - this property is always set to \"org.apache.kafka.common.serialization.ByteArrayDeserializer\" and cannot be overridden.<br>\"Value.deserializer\" - this property is always set to \"org.apache.kafka.common.serialization.ByteArrayDeserializer\" and cannot be overridden.<br>“partitioner.class” - The user can specify a user-defined class for the partitioning logic (by implementing the com.webaction.kafka.PartitionerIntf interface)<br>“Acks” - supported in both sync (accepted values: 1 and all only) and async (0, 1, and all accepted) modes.<br>“Retries” - In sync mode, KafkaWriter retries internally and the value is not passed to the KafkaProducer API; in async mode it is passed to the KafkaProducer API.<br>“Batch.size” - In sync mode, KafkaWriter batches records into a single Kafka message no larger than the batch size; in async mode it is passed to the KafkaProducer API.<br>“Linger.ms” - In sync mode, the maximum wait before a batch expires; in async mode it is passed to the KafkaProducer API.</p>\n<p><span class=\"wysiwyg-color-blue\"><em>NOTE : Other than the above, all other properties work as specified in the KafkaProducer Config documentation. Internally, KafkaWriter invokes a KafkaConsumer for various purposes, and the WARNING from the consumer API caused by passing KafkaWriter’s config properties can be ignored.</em></span></p>\n<p><span class=\"wysiwyg-underline\">Consumer configuration properties:</span><br>The KafkaConfig property accepts all configuration specified under the KafkaConsumer Config section in (<a href=\"https://kafka.apache.org/0102/documentation.html#newconsumerconfigs\">https://kafka.apache.org/0102/documentation.html#newconsumerconfigs</a>), and it will be applied to the internally used KafkaConsumer.</p>\n<p>Following are some default KafkaConfig values for KafkaReader:</p>\n<p>max.partition.fetch.bytes=10485760<br>fetch.min.bytes=1048576<br>fetch.max.wait.ms=1000<br>receive.buffer.bytes=2000000<br>poll.timeout.ms=10000<br>request.timeout.ms=10001<br>session.timeout.ms=10000</p>\n<p> </p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\">Problem: </span></p>\n<p>I am using the table mapping SOURCE.S5,s5</p>\n<p>KuduWriter error: 2747 : Target table does not exist s5.</p>\n<p><span class=\"wysiwyg-font-size-large\">Cause:</span></p>\n<p>Kudu table names are case sensitive. In the above error, the table was actually created as S5.</p>\n<p>The mapping should be: SOURCE.S5,S5</p>\n<p> </p>\n<p> </p>"} {"page_content": "<p>By default, FileWriter creates multiple output files by rolling over when the default rollover parameters (EventCount:10000, Interval:30s) are reached. But sometimes a customer wants to create a single output file using FileWriter.</p>\n<p>In such cases, <span class=\"wysiwyg-color-red\">DefaultRollingPolicy</span> can be used to prevent FileWriter from creating multiple output files.</p>\n<p> </p>\n<p>Sample:</p>\n<p>CREATE OR REPLACE TARGET TargetFile USING FileWriter ( <br> filename: 'out.log',<br> rolloveronddl: 'false',<br> adapterName: 'FileWriter',<br> directory: '/Users/achup/Documents/Striim/Samples/PosApp/appData',<br> <span class=\"wysiwyg-color-red\">rolloverpolicy: 'DefaultRollingPolicy'</span><br> )</p>\n<p> </p>\n<p>This way, it will create a single output file, 'out.log'.</p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\">Question:</span></p>\n<p>The documentation for the health REST API shows pretty-printed JSON output. 
How can I get this format without using external formatter?</p>\n<p> </p>\n<p><span class=\"wysiwyg-font-size-large\">Answer:</span></p>\n<p>By default, it will get output like following:</p>\n<p class=\"p1\"><span class=\"s1\">$ <span class=\"wysiwyg-color-blue\">curl -s <span class=\"Apple-converted-space\"> </span>-X GET <span class=\"Apple-converted-space\"> </span>-H \"content-type: application/json\" <span class=\"Apple-converted-space\"> </span>-H \"Authorization: STRIIM-TOKEN 01e8311c-77e1-4f21-9ceb-ba5e89274885\" http://localhost:9080/api/v2/applications</span></span></p>\n<p class=\"p1\"><span class=\"s1\">[{\"namespace\":\"admin\",\"name\":\"sfdc1\",\"status\":\"DEPLOYED\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"href\":\"/api/v2/applications/admin.sfdc1\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.sfdc1/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.sfdc1/sprint\"}]},{\"namespace\":\"admin\",\"name\":\"ora1\",\"status\":\"DEPLOYED\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"href\":\"/api/v2/applications/admin.ora1\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.ora1/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.ora1/sprint\"}]},{\"namespace\":\"admin\",\"name\":\"file1\",\"status\":\"CREATED\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"href\":\"/api/v2/applications/admin.file1\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.file1/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.file1/sprint\"}]},{\"namespace\":\"admin\",\"name\":\"mssql_148\",\"status\":\"CREATED\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"href\":\"/api/v2/applications/admin.mssql_148\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.mssql_148/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.mssql_148/sprint\"}]},{\"namespace\":\"admin\",\"name\":\"ora_batch\",\"status\":\"CREATED\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"href\":\"/api/v2/applications/admin.ora_batch\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.ora_batch/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.ora_batch/sprint\"}]},{\"namespace\":\"admin\",\"name\":\"gp\",\"status\":\"RUNNING\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"href\":\"/api/v2/applications/admin.gp\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.gp/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.gp/sprint\"}]},{\"namespace\":\"admin\",\"name\":\"ora12c\",\"status\":\"DEPLOYED\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"href\":\"/api/v2/applications/admin.ora12c\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.ora12c/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.ora12c/sprint\"}]},{\"namespace\":\"admin\",\"name\":\"file_hack\",\"status\":\"CREATED\",\"links\":[{\"rel\":\"self\",\"allow\":[\"GET\",\"DELETE\"],\"hr
ef\":\"/api/v2/applications/admin.file_hack\"},{\"rel\":\"deployment\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.file_hack/deployment\"},{\"rel\":\"sprint\",\"allow\":[\"DELETE\",\"POST\"],\"href\":\"/api/v2/applications/admin.file_hack/sprint\"}]}]</span></p>\n<p class=\"p1\"> </p>\n<p class=\"p1\"><span class=\"s1\">To get pretty-print, following syntax may be used:</span></p>\n<p class=\"p1\"><span class=\"s1\">$ <span class=\"wysiwyg-color-blue\">curl -s <span class=\"Apple-converted-space\"> </span>-X GET <span class=\"Apple-converted-space\"> </span>-H \"content-type: application/json\" <span class=\"Apple-converted-space\"> </span>-H \"Authorization: STRIIM-TOKEN 01e8311c-77e1-4f21-9ceb-ba5e89274885\" http://localhost:9080/api/v2/applications | python -c \"import sys,json; parsed=json.load(sys.stdin); print json.dumps(parsed, indent=4, sort_keys=True)\"</span></span></p>\n<p class=\"p1\"><span class=\"s1\">[</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.sfdc1\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.sfdc1/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.sfdc1/sprint\", 
</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": \"sfdc1\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"status\": \"DEPLOYED\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora1\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora1/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora1/sprint\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> 
</span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": \"ora1\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"status\": \"DEPLOYED\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.file1\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.file1/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.file1/sprint\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": 
\"file1\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"status\": \"CREATED\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.mssql_148\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.mssql_148/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.mssql_148/sprint\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": \"mssql_148\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span 
class=\"Apple-converted-space\"> </span>\"status\": \"CREATED\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora_batch\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora_batch/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora_batch/sprint\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": \"ora_batch\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"status\": \"CREATED\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span 
class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.gp\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.gp/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.gp/sprint\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": \"gp\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"status\": \"RUNNING\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span 
class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora12c\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora12c/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.ora12c/sprint\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": \"ora12c\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"status\": \"DEPLOYED\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"links\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span 
class=\"Apple-converted-space\"> </span>\"GET\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.file_hack\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"self\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.file_hack/deployment\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"deployment\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}, </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>{</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"allow\": [</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"DELETE\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"POST\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"href\": \"/api/v2/applications/admin.file_hack/sprint\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"rel\": \"sprint\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>], </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"name\": \"file_hack\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"namespace\": \"admin\", </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"status\": \"CREATED\"</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>}</span></p>\n<p class=\"p1\"><span class=\"s1\">]</span></p>"} {"page_content": "<p>Yes.</p>\n<p>org.joda.time.DateTime is imported automatically and may be specified as DateTime in Striim.</p>\n<p>Thus, the buikld-in methods for joda are also available for Striim DateTime.</p>\n<p>Following Java Doc may bel referenced:<br><a href=\"http://www.joda.org/joda-time/apidocs/org/joda/time/DateTime.html\">http://www.joda.org/joda-time/apidocs/org/joda/time/DateTime.html</a></p>\n<p>Example:</p>\n<p class=\"p1\"><span class=\"s1\">W (admin) &gt; select count(*), dnow(), dnow().getyear() yr, <span class=\"wysiwyg-color-red\">dnow().getdayOfWeek()</span> as 
DayOfWeekInteger from admin.WA_db;</span></p>\n<p class=\"p1\"><span class=\"s1\">Processing - select count(*), dnow(), dnow().getyear() yr, dnow().getdayOfWeek() as DayOfWeekInteger from admin.WA_db</span></p>\n<p class=\"p1\"><span class=\"s1\">[</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>count(*) = 52</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>dnow() = 2018-02-14T13:57:56.938-08:00</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>yr = 2018</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span><span class=\"wysiwyg-color-red\">DayOfWeekInteger = 3</span></span></p>\n<p class=\"p1\"><span class=\"s1\">]</span></p>\n<p class=\"p2\"> </p>\n<p class=\"p1\"><span class=\"s1\">-&gt; SUCCESS </span></p>\n<p class=\"p1\"><span class=\"s1\">Elapsed time: 989 ms</span></p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\"><strong>Problem:</strong></span></p>\n<p>I have moved /var/striim directory and created soft link. after that, Striim could not be started.</p>\n<p>from striim.server.log:</p>\n<pre>Error Code: -20001<br>Call: INSERT INTO WACTIONOBJECT (objectid, CODEPENDENTOBJECTS, CreationTime, DESCRIPTION, METAINFOSTATUS, METAOBJECTCLASS, NAME, NAMESPACEID, NSNAME, owner, REVERSEINDEXOBJECTDEPENDENCIES, SOURCETEXT, type, URI, version) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)<br> bind =&gt; [15 parameters bound]<br>Query: InsertObjectQuery(DSVFormatter PROPERTYTEMPLATE 11 01e8103f-8c32-be81-8ecf-005056bb6c17 type:formatter version:0.0.0 class:com.webaction.proc.DSVFormatter props:{nullvalue=java.lang.String default NULL optional, standard=com.webaction.source.lib.enums.Standard default none optional, charset=java.lang.String default optional, usequotes=java.lang.Boolean default false optional, rowdelimiter=java.lang.String default <br> optional, members=java.lang.String default optional, quotecharacter=java.lang.String default \" optional, columndelimiter=java.lang.String default , optional} inputType:com.webaction.anno.NotSet outputType:com.webaction.anno.NotSet)<br> at org.eclipse.persistence.internal.jpa.EntityManagerImpl.flush(EntityManagerImpl.java:868)<br> at com.webaction.metaRepository.MetaDataDBOps.store(MetaDataDBOps.java:234)<br> at com.webaction.metaRepository.MetadataRepository.putMetaObject(MetadataRepository.java:288)<br> at com.webaction.runtime.BaseServer.putObject(BaseServer.java:296)<br> at com.webaction.runtime.Server.loadClasses(Server.java:1520)<br> at com.webaction.runtime.Server.loadPropertyTemplates(Server.java:1565)<br> at com.webaction.runtime.Server.setSecMgrAndOther(Server.java:594)<br> at com.webaction.runtime.Server.main(Server.java:3100)<br>Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException<br>Internal Exception: java.sql.SQLException: An SQL data change is not permitted for a read-only connection, user or database.</pre>\n<p> </p>\n<p>from OS:<br>[root@qsd01d-0116s:wactionrepos]# ls -al<br>total 32<br>drwxrwx--- 5 striim striim 155 Feb 12 17:17 .<br>drwxrwx--- 4 striim striim 85 Jan 22 14:02 ..<br>-rw-r--r-- 1 striim striim 529 Jan 19 15:23 BACKUP.HISTORY<br>-rw------- 1 striim striim 4 Feb 12 14:35 dbex.lck<br>-rw------- 1 <span class=\"wysiwyg-color-red\">root root</span> 38 Feb 12 14:35 <span class=\"wysiwyg-color-red\">db.lck</span> <br>drwxrwx--- 2 striim 
striim 193 Feb 12 14:35 log<br>-rwxrwx--- 1 striim striim 608 Jan 9 00:59 README_DO_NOT_TOUCH_FILES.txt<br>drwxrwx--- 2 striim striim 8192 Jan 22 13:51 seg0<br>-rwxrwx--- 1 striim striim 964 Jan 9 00:59 service.properties<br>drwx------ 2 striim striim 6 Feb 12 17:17 tmp</p>\n<p> </p>\n<p><span class=\"wysiwyg-font-size-large\"><strong>Cause and Solution:</strong></span><br>The files should be owned by striim in this setup. Somehow, the file db.lck is owned by root, which prevents the striim process (running as the striim user) from changing the metadata at startup.</p>\n<p>Solution:<br>1. sudo stop striim-dbms<br>2. sudo chown striim:striim db.lck<br>3. sudo start striim-dbms<br>4. sudo start striim-node</p>"} {"page_content": "<p>For the community forum, go here:</p>\n<p><a href=\"https://support.striim.com/hc/en-us/community/topics\">Community</a></p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\"><strong>Problem:</strong></span></p>\n<p>I tried to start OracleReader with a StartSCN, but it failed with this error:</p>\n<p>Start failed! java.util.concurrent.ExecutionException: com.webaction.source.oraclecommon.OracleException: 2034 : Start Failed: SQL Query Execution Error ;ErrorCode : 1292;SQLCode : 99999;SQL Message : ORA-01292: no log file has been specified for the current LogMiner session ORA-06512: at \"SYS.DBMS_LOGMNR\", line 58 ORA-06512: at line 2</p>\n<p><br><strong><span class=\"wysiwyg-font-size-large\">Cause and Solution:</span></strong></p>\n<p>The error means that when LogMiner tried to add a log file based on the specified SCN, it could not find the file.</p>\n<p>This may be checked with the following query:<br>select thread#, sequence#, name, deleted, FIRST_CHANGE#, NEXT_CHANGE# from v$archived_log<br>where &amp;scn between FIRST_CHANGE# and NEXT_CHANGE#;</p>\n<p>e.g.,<br>SQL&gt; select thread#, sequence#, name, deleted, FIRST_CHANGE#, NEXT_CHANGE# from v$archived_log <br>where &amp;scn between FIRST_CHANGE# and NEXT_CHANGE# and standby_dest = 'NO' ; </p>\n<p>Enter value for scn: 150300000<br>old 2: where &amp;scn between FIRST_CHANGE# and NEXT_CHANGE#<br>new 2: where 150300000 between FIRST_CHANGE# and NEXT_CHANGE#</p>\n<p>THREAD# SEQUENCE#<br>---------- ----------<br>NAME<br>--------------------------------------------------------------------------------<br>DEL FIRST_CHANGE# NEXT_CHANGE#<br>--- ------------- ------------<br> 1 8810<br>/opt/ora_archivelog/1_8810_898068303.dbf<br>NO 150295416 150334128</p>\n<p>(The above example shows that the file exists. If the log is missing, the 'name' may be empty and/or 'DELETED'='YES'.)</p>\n<p><br>Also make sure all subsequent archived logs are available.<br>If not, the StartSCN needs to be adjusted accordingly.</p>
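<p>To verify that point, a sketch like the following lists every archived log that covers SCNs at or after the intended StartSCN, so gaps or deleted files become visible. The \"/ as sysdba\" connection and the SCN value are placeholders to adjust; this is only a convenience script, not Striim-provided tooling:</p>\n<pre>#!/bin/bash<br># Sketch: list all archived logs from a given StartSCN onward and show their status.<br>START_SCN=150300000   # intended OracleReader StartSCN (placeholder)<br>sqlplus -s / as sysdba &lt;&lt;EOF<br>set linesize 200 pagesize 100<br>col name format a60<br>SELECT thread#, sequence#, name, deleted, first_change#, next_change#<br>FROM   v\$archived_log<br>WHERE  next_change# &gt; ${START_SCN}<br>AND    standby_dest = 'NO'<br>ORDER  BY thread#, sequence#;<br>EOF</pre>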
"} {"page_content": "<p><strong><span class=\"wysiwyg-font-size-large\">Problem:</span></strong></p>\n<p>When doing an initial load from an Oracle database, I exported tables as of a specific SCN (2147483648).<br>Next, in CDC, a CQ is used to filter out the transactions before that SCN.</p>\n<p>select * <br>from source_stream<br>WHERE TO_LONG(META(x,'COMMITSCN')) &gt; 2147483648;</p>\n<p>At 'Save', it hits this error:</p>\n<p>Too big integer constant: line:5 where to_long(data[0]) &gt; 2147483648; ^^^^^^^^^^</p>\n<p><br><span class=\"wysiwyg-font-size-large\"><strong>Solution:</strong></span></p>\n<p>This is because Java's Integer.MAX_VALUE is 2147483647.</p>\n<p><br>The solution is to use the long datatype; change<br>from: 2147483648<br>to: 2147483648<span class=\"wysiwyg-color-black\">L (add \"L\" at the end of the number)</span></p>\n<p>select * <br>from source_stream<br>WHERE TO_LONG(META(x,'COMMITSCN')) &gt; 2147483648L;</p>"} {"page_content": "<p><strong><span class=\"wysiwyg-font-size-medium\">Problem:</span></strong></p>\n<pre>CREATE OR REPLACE TARGET my_target USING DatabaseWriter ( <br> DatabaseProviderType: 'Default',<br> CheckPointTable: 'CHKPOINT',<br> PreserveSourceTransactionBoundary: 'false',<br> Username: 'striim',<br> Password_encrypted: 'false',<br> BatchPolicy: 'EventCount:1000,Interval:60',<br> CommitPolicy: 'EventCount:1000,Interval:60',<br> ConnectionURL: 'jdbc:oracle:thin:@192.168.0.100:1521:orcl',<br> Tables: 'SCOTT.S1, SCOTT.T1',<br> adapterName: 'DatabaseWriter',<br> Password: 'test'<br> ) <br>INPUT FROM ora_stream;</pre>\n<p> </p>\n<p>For the above DatabaseWriter, it hits this error at start:</p>\n<p>Start failed! java.util.concurrent.ExecutionException: com.webaction.common.exc.AdapterException: Initialization exception in Target Adapter nyu_target. Cause: Error in initialising DatabaseWriter {2749 : Incorrect table map specified in Tables property Incorrect table map specified {SCOTT.S1, SCOTT.T1}}</p>\n<p>Both source and target tables exist and have the same structure.</p>\n<p><span class=\"wysiwyg-font-size-large\">Solution:</span></p>\n<p>The space between the source and target table names should be removed; change</p>\n<p>from: 'SCOTT.S1, SCOTT.T1',</p>\n<p>to: 'SCOTT.S1,SCOTT.T1',</p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\"><strong>Question:</strong></span><br>I installed version 3.8.1 and set up an app with recovery and a Kafka-persisted stream (PS).<br>In the PS data topic, I see some extra rows like \"Scom.webaction.proc.events.commands.CheckpointCommandEven.... \" (see below for example). What are they, and 
do they affect function?</p>\n<pre><span class=\"wysiwyg-color-blue110\">Scom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?݈??X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?????X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\tց???X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?Պ??X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\tʦ???X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?????X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?Ɍ??X\n</span><span class=\"wysiwyg-color-blue110\"><span class=\"wysiwyg-color-black\">VD???????ѧY??3_?a\n???admin.ora_s2#SOURCE#S10_1_10_7?admin.ora_s2_stream#STREAM#S10_1_10_7?a\n???admin.ora_s2_stream#STREAM#S10_1_10_7?K?? ?*??}??3_?a\n??ۀ???w?'q????3_??134783566562456?java.util.HashMa?RbaSq?874?AuditSessionI?130175?TableSpac?USER?CURRENTSC?14621774?SQLRedoLengt?6?BytesProcesse?58?ParentTxnI?5.12.11438?SessionInf?UNKNOW?RecordSetI? 0x002227.0000b8fc.01b0?COMMITSC?java.math.BigIntege?EQUENC??1Rollbac??0STARTSC?14621774?SegmentNam?S?OperationNam?INSER?TimeStam?\n a?2?TxnUserI?FZHAN?RbaBl?4735?SegmentTyp?TABL?TableNam?FZHANG.S?TxnI?5.12.11438?Seria?4061?ThreadI??1COMMIT_TIMESTAM?\n a?6?OperationTyp?DM?ROWI?AAAxGTAAEAACk8PAA?TransactionNam??SC?1462177430002460935724484788656000?Sessio?14?@?? ????}??3_?\nVD???????ѧY??3_?a\n???admin.ora_s2#SOURCE#S10_1_10_7?admin.ora_s2_stream#STREAM#S10_1_10_7?a\n???admin.ora_s2_stream#STREAM#S10_1_10_7?K??`???}??3_?a\n??????w?'q????3_??134783566562456?java.util.HashMa?RbaSq?874?AuditSessionI?130175?TableSpac?USER?CURRENTSC?14621774?SQLRedoLengt?6?BytesProcesse?58?ParentTxnI?5.12.11438?SessionInf?UNKNOW?RecordSetI? 0x002227.0000b8fc.01b0?COMMITSC?java.math.BigIntege?EQUENC??1Rollbac??0STARTSC?14621774?SegmentNam?S?OperationNam?INSER?TimeStam?\n a?2?TxnUserI?FZHAN?RbaBl?4735?SegmentTyp?TABL?TableNam?FZHANG.S?TxnI?5.12.11438?Seria?4061?ThreadI??1COMMIT_TIMESTAM?\n a?6?OperationTyp?DM?ROWI?AAAxGTAAEAACk8PAA?TransactionNam??SC?1462177430002460935724484788656000?Sessio?14?@??`??A?}??3_?\n</span>Scom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\tƝ???X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?????X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\tܪ???X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?????X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?????X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?????X\nScom.webaction.proc.events.commands.CheckpointCommandEven?@??????ѧY??3_?\t?????X\n</span></pre>\n<p><strong><span class=\"wysiwyg-font-size-large\">Answer:</span></strong><br>This is related to a new feature in 3.8.1, and it is expected. It servers as heartbeat event when recovery is enabled, and will allow us to better handle the checkpoints for the whole app. The feature will improve the recovery function for Striim application.</p>"} {"page_content": "<p><img id=\"screenshot\" src=\"https://support.striim.com/hc/article_attachments/115026492227/Screenshot_2017-11-17_11.27.00.png\" alt=\"Screenshot_2017-11-17_11.27.00.png\" width=\"815\" height=\"509\"></p>\n<h2>Overview</h2>\n<p id=\"overview\">The Financial Transaction Monitoring application sources real-time transaction information for ATM and POS cash withdrawals. 
It monitors the decline rate of transactions and identifies situations where the decline rate drops by more than 10% in any 5 minute period. The application also slices the transactions by a number of dimensions, including transaction type, card type, location and financial network end points. The transaction flow and dimensional data is shown on a dashboard and alerts are issued where the decline rate drops too quickly.</p>\n<h2>Download</h2>\n<p><a id=\"download\" href=\"https://drive.google.com/file/d/1OzvuAzqAzvW1gX2FSb6BsEPmovl14bTz/view?usp=sharing\" target=\"_blank\" rel=\"noopener\">FTM App Bundle (zip file 68MB)</a></p>\n<h2>Installation</h2>\n<p>To install, follow these instructions:</p>\n<ol>\n<li>Download the app bundle <a href=\"https://drive.google.com/file/d/1OzvuAzqAzvW1gX2FSb6BsEPmovl14bTz/view?usp=sharing\" target=\"_blank\" rel=\"noopener\">FTM.zip</a>\n</li>\n<li>Unzip in your Striim installation folder to create a subdirectory FTM</li>\n<li>In the Striim UI import the FTM.tql data flow application in the namespace FTM</li>\n<li>Deploy the FTM application</li>\n<li>Import the FTM.json file as a Dashboard in the namespace FTM</li>\n<li>Start the FTM application and view the Dashboard</li>\n</ol>\n<h2>Resources</h2>\n<p>Watch this video for an overview of the application in action.</p>\n<p><iframe src=\"//www.youtube-nocookie.com/embed/02ZoliL_mE4\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"\"></iframe></p>\n<p> </p>"} {"page_content": "<p>Here's the workflow for localizing your Help Center in other languages:</p>\n\n<ol>\n<li>Get your content translated in the other languages.</li>\n<li>Configure the Help Center to support all your languages.</li>\n<li>Add the translated content to the Help Center.</li>\n</ol>\n\n\n<p>For instructions, see <a href=\"https://support.zendesk.com/hc/en-us/articles/203664336#topic_inn_3qy_43\">Localizing the Help Center</a>.</p>"} {"page_content": "<p>If you suspect that there is memory leak or even high memory usage, please first check the config for MEM_MAX and EVICTTHRESHOLD in startUp.properties file</p>\n<p>For example</p>\n<pre class=\"p1\"><span class=\"s1\">MEM_MAX=4096m</span><br><span class=\"s1\">EVICTTHRESHOLD=65</span></pre>\n<p class=\"p1\"><span class=\"s1\">Above settings mean Striim JVM process could use up to 4,096 M memory and request JVM to start garbage collection once memory usage is above 55% of 4,096 M, which is about 2,253 M. If memory usage is still below the EVICTTHRESHOLD setting, you will most likely see the memory usage continue to climb till that point.</span></p>\n<p>Login as the owner of striim process or switch user before executing following commands</p>\n<pre>$ su - striim</pre>\n<p class=\"p1\"><span class=\"s1\">You could explicitly kick off garbage collection process by doing the following command under shell</span></p>\n<pre><span class=\"s1\">shell&gt; jcmd &lt;striim server pid&gt; GC.run</span></pre>\n<p class=\"p1\"><span class=\"s1\">The pid is the process ID of Striim server process. Please make sure you are running this command under the same userid that started the Striim server process.</span></p>\n<p class=\"p1\"><span class=\"s1\">Once the command is ran, if memory usage is still increasing steadily without any significant change in the window/cache/wactionstore size, as well as the event rates, please collect the following diagnostic details </span></p>\n<p class=\"p1\"><span class=\"s1\">1. 
Heap dump of the Striim server process by running the following command</span></p>\n<pre class=\"p1\"><span class=\"s1\">shell&gt; <span>jmap -dump:live,file=heapdump_$(date +\"%m-%d-%Y\"-%T-%N).hprof &lt;pid&gt;</span></span></pre>\n<p class=\"p1\"><span class=\"s1\">Here pid is the process id of the Striim server process. Please run this under the same userid that runs Striim server. Depending on the allocated memory size for Striim, the dump file could be pretty big, so please make sure you have enough disk space to hold the file. </span></p>\n<p class=\"p1\"><span class=\"s1\">After the dump is finished, please zip and tar the dump file and upload it the support ticket, or a google drive if the size is bigger than 40M.</span></p>\n<p class=\"p1\"><span class=\"s1\">2. </span><span class=\"s1\"> Stack of the Striim server process</span></p>\n<pre class=\"p1\"><span class=\"s1\">shell&gt; jstack &lt;pid&gt; &gt;&gt; jstack_$(date +\"%m-%d-%Y\"-%T-%N).log</span></pre>\n<p class=\"p1\"><span class=\"s1\"><span>3. Profiling data of Striim server process using JFR (Java flight recorder)</span></span></p>\n<pre class=\"p1\"><span class=\"s1\"><span>shell&gt; jcmd &lt;pid&gt; VM.unlock_commercial_features</span></span><br><span class=\"s1\"><span>shell&gt; jcmd &lt;pid&gt; JFR.start duration=300s filename=striim_$(date +%F).jfr</span></span></pre>\n<p class=\"p1\"><span class=\"s1\"><span>Here pid is the process id of the Striim server process. Please run this under the same userid that runs Striim server. The output file gets the profiling data after the set duration (5 minutes in this example)</span></span></p>\n<p class=\"p1\"><span class=\"s1\"><span>4. As a general practice users can monitor the memory usage using scripts like below</span></span></p>\n<pre class=\"p1\"><span class=\"s1\"><span><strong data-stringify-type=\"bold\">$</strong> cat striim_memcheck.sh <br>#!/bin/bash<br><span class=\"c-mrkdwn__br\" data-stringify-type=\"paragraph-break\"></span>while [[ true ]]; do<br>SPID=`ps ax | grep -i \"com.webaction.runtime.Server\" | grep java | grep -v grep | awk '{print $1}'`<span class=\"c-mrkdwn__br\" data-stringify-type=\"paragraph-break\"></span> <br>if [[ -z \"$SPID\" ]]; then<br> echo \"Striim Server is NOT RUNNING!\"<br> echo \"Checking again in 5 minutes\"<br> sleep 300<br> else<br> HEAP_MEMORY=$( ( jstat -gc $SPID 2&gt;/dev/null || echo \"0 0 0 0 0 0 0 0 0\" ) | tail -n 1 | awk '{split($0,a,\" \"); sum=a[3]+a[4]+a[6]+a[8]; print sum/1024}' )<br> HEAP_MEMORY=${HEAP_MEMORY%.*}<br> echo \"$(date) Heap Memory of Striim PID $SPID is (MB): $HEAP_MEMORY\" &gt;&gt; striim_mem_$(date +%F).txt<br> sleep 300<br> fi<br>done</span></span></pre>\n<p class=\"p1\">To execute the script do the following</p>\n<pre class=\"p1\">shell&gt; chmod +x <span class=\"s1\"><span>striim_memcheck.sh </span></span><br>shell&gt; nohup ./<span class=\"s1\"><span>striim_memcheck.sh &amp;</span></span></pre>\n<p class=\"p1\">The script creates one file everyday and outputs the memory usage every 5 minutes. </p>\n<p class=\"p1\">To stop/kill the script</p>\n<pre class=\"p1\">shell&gt; ps -ef | grep <span class=\"s1\"><span>striim_memcheck | grep -v grep</span></span><br>shell&gt; kill -9 &lt;pid of above&gt;</pre>\n<p class=\"p1\">Make sure to start the script as the same user who starts Striim server.</p>"} {"page_content": "<p>Question: I am capturing DMLs from a source Oracle partition table, in OnlineCatalog mode. 
When a new partition is added to this table, does OracleReader captures the DMLs on newly added partition automatically, without restarting the Striim app?</p>\n<p>Answer: Yes, the DMLs on newly added partition will be captured automatically, even in OnlineCatalog mode.</p>\n<p><br>Following is a demo example (Striim tql file is attached as <span class=\"s1\">admin.ora_partition_test.tql)</span></p>\n<p>1. create source and target tables</p>\n<p>create table s22 (x int primary key, y int)<br> partition by range (x)<br>(partition part_1 values less than (10),<br> partition part_2 values less than (20)<br>);</p>\n<p>create table t22 (x int primary key, y int)<br> partition by range (x)<br>(partition part_1 values less than (10),<br> partition part_2 values less than (20)<br>);</p>\n<p><br>2. inserts to existing partition</p>\n<p>SQL&gt; insert into s22 values (1,1);<br>insert into s22 values (11,11);<br>commit;<br>1 row created.</p>\n<p><br>SQL&gt; select * from t22; -------- checking target table</p>\n<p>X Y<br>---------- ----------<br> 1 1<br> 11 11</p>\n<p>SQL&gt; insert into s22 values (21,21); ----- value '21' is for a partition not existing yet.<br>insert into s22 values (21,21)<br> *<br>ERROR at line 1:<br>ORA-14400: inserted partition key does not map to any partition</p>\n<p><br>3. add new partition to both source and target, while Striim app keeps on running</p>\n<p>alter table s22 add partition part_3 values less than (30);<br>alter table t22 add partition part_3 values less than (30);</p>\n<p>insert into s22 values (21,21);<br>commit;</p>\n<p>SQL&gt; select * from t22;</p>\n<p>X Y<br>---------- ----------<br> 1 1<br> 11 11<br> 21 21 --------- the row in newly added partition is replicated to target</p>"} {"page_content": "<p>While installing rpm packages to CentOS 7.x or Redhat Linux 7.x, you will see below errors</p>\n<pre class=\"code-java\">[ferroadmin@Flotvmlcdc001 striim]$ sudo rpm -ivh striim-node-3.6.6-Linux.rpm\nPreparing... ################################# [100%]\n file /lib from install of striim-node-3.6.7-1.noarch conflicts with file from <span class=\"code-keyword\">package</span> filesystem-3.2-20.el7.x86_64\n file /opt from install of striim-node-3.6.7-1.noarch conflicts with file from <span class=\"code-keyword\">package</span> filesystem-3.2-20.el7.x86_64\n file /<span class=\"code-keyword\">var</span> from install of striim-node-3.6.7-1.noarch conflicts with file from <span class=\"code-keyword\">package</span> filesystem-3.2-20.el7.x86_64\n file /<span class=\"code-keyword\">var</span>/log from install of striim-node-3.6.7-1.noarch conflicts with file from <span class=\"code-keyword\">package</span> filesystem-3.2-20.el7.x86_64\n file /lib/systemd from install of striim-node-3.6.7-1.noarch conflicts with file from <span class=\"code-keyword\">package</span> systemd-219-19.el7_2.13.x86_64</pre>\n<p> </p>\n<p>You can suppress these errors by using rpm option --replacefiles</p>\n<p>rpm -ivh --replacefiles striim-dbms-3.6.7-Linux.rpm <br>rpm -ivh --replacefiles striim-node-3.6.7-Linux.rpm</p>\n<p>Also our current rpm installation doesn't support systemd yet, so for CentOS 7.x and Redhat 7.x, if you want striim start automatically upon system reboot, you have to manually configure the systemd to add striim services. See detail steps below.</p>\n<p>1. Make sure striim package is installed correctly by using rpm command</p>\n<p>2. 
We need manually create the systemd files and enable those service upon system startup, this will address the issue tracked in this ticket</p>\n<div class=\"code panel\">\n<div class=\"codeContent panelContent\">\n<pre class=\"code-java\">a. For a single node cluster, or the primary node in a multi-node cluster, create two files uder /etc/systemd/system\nls -l striim*\n-rw-r--r--. 1 root root 173 Jan 17 19:45 striim-dbms.service\n-rw-r--r--. 1 root root 154 Jan 17 19:45 striim-node.service\n[root@WHMCentOS7 system]# cat striim-dbms.service\n[Unit]<br>Description=WebAction DBMS<br>After=syslog.target network.target<br><br>[Service]<br>ExecStart=/opt/Striim-3.6.7/sbin/striim-dbms start<br>ExecStop=/opt/Striim-3.6.7/sbin/striim-dbms stop<br><br>[Install]<br>RequiredBy=striim-node.service<br>WantedBy=multi-user.target<br>\n\n[root@WHMCentOS7 system]# cat striim-node.service\n[Unit]<br>Description=WebAction Cluster Node<br>After=network.target<br>After=syslog.target<br>After=striim-dbms.service<br>Requires=striim-dbms.service<br><br>[Service]<br>ExecStart=/opt/Striim-3.6.7/sbin/striim-node start<br><br>[Install]<br>WantedBy=multi-user.target<br><br>b. For other nodes in a multi-node cluster, create only one files uder /etc/systemd/system <br>ls -l striim* <br><br>-rw-r--r--. 1 root root 154 Jan 17 19:45 striim-node.service <br><br>[root@WHMCentOS7 system]# cat striim-node.service [Unit]<br>Description=WebAction Cluster Node<br>After=network.target<br>After=syslog.target<br><br>[Service]<br>ExecStart=/opt/Striim-3.6.7/sbin/striim-node start<br><br>[Install]<br>WantedBy=multi-user.target<br>\n</pre>\n</div>\n</div>\n<p>3. Run the following commands after both files are created<br>systemctl enable striim-dbms (**)<br>systemctl enable striim-node</p>\n<p>(**) This is only needed on the primary node in a multi-node clustering installation. The same applies to the steps below, the striim-dbms service is only needed on the primary node.</p>\n<p>4. Create symbolic link under /etc/systemd/system/multi-user.target.wants<br>shell&gt;cd /etc/systemd/system/multi-user.target.wants<br>shell&gt;ln -s ../striim-node.service ./striim-node.service<br>shell&gt;ln -s ../striim-dbms.service ./striim-dbms.service<br>So you will get these two links<br>lrwxrwxrwx. 1 root root 22 Jan 17 19:47 striim-dbms.service -&gt; ../striim-dbms.service<br>lrwxrwxrwx. 1 root root 22 Jan 17 19:46 striim-node.service -&gt; ../striim-node.service</p>\n<p>5. Edit /opt/Striim-3.6.7/conf/striim.conf, fill out correct values in the configuration file, such as license, cluster name, company name, cluster password, admin password and etc.</p>\n<p>6. Test out the following scripts to make sure they run correctly<br>/opt/Striim-3.6.7/sbin/striim-dbms start<br>/opt/Striim-3.6.7/sbin/striim-dbms stop<br>/opt/Striim-3.6.7/sbin/striim-node start</p>\n<p>7. Kill all the processes generated by the testing in step 6. Test the following commands<br>shell&gt;systemctl start striim-node<br>This should start both striim-node and striim-dbms service</p>\n<p>If all test in step 6 and 7 are successful, you are all set. The striim-node and striim-dbms services will be started when the system reboot in the future.</p>\n<p> </p>\n<p> </p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\">A sample script to resume crashed application.</span></p>\n<p>When Striim applications crash due to certain unexpected issues, they may be restarted by resuming the application from GUI or console manually. 
"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\">A sample script to resume crashed applications.</span></p>\n<p>When Striim applications crash due to unexpected issues, they may be restarted by resuming the application manually from the GUI or console. There is no auto-restart feature for crashed applications in current Striim versions (3.7.4 and 3.8). However, a script may be used to achieve this, together with a utility such as a cron job.</p>\n<p>Attached is a sample Perl script for Linux/Unix/Mac that resumes the crashed applications.</p>\n<p>Example:</p>\n<p>OS&gt; ./<span class=\"s1\">resume_crashed.pl</span></p>\n<p><span class=\"s1\">OS&gt; cat logs/restart.log</span></p>\n<p><textarea cols=\"80\" rows=\"22\">OS&gt; ./resume_crashed.pl\nOS&gt; cat logs/restart.log\n========================================================\nMon Aug 21 11:15:52 2017\nWelcome to the Striim Tungsten Command Line - Version 3.7.4 (1e444838f1)\nConnecting to cluster striim_374_mac.....connected.\nProcessing - use admin\n-&gt; SUCCESS \nElapsed time: 164 ms\nProcessing - resume application admin.ora_blob\n-&gt; SUCCESS \nProcessing - resume application admin.ora2ora\n-&gt; SUCCESS \nProcessing - resume application admin.ora_ddl\n-&gt; SUCCESS \nProcessing - exit\n-&gt; SUCCESS \nElapsed time: 1030 ms\nType 'HELP' to get some assistance, or enter commands below\n\nQuit.\n\n</textarea></p>\n<p><strong>Please note</strong> that this script is for demonstration purposes only. It will need to be modified and tested thoroughly for each environment.</p>
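\n<p>For reference, here is a minimal shell sketch of what such a cron-driven wrapper could look like. It is not the attached Perl script: the console path (bin/console.sh), the assumption that the console accepts commands on standard input, and the hard-coded application names are all assumptions that must be adapted and tested in your environment.</p>\n<pre class=\"code-java\">#!/bin/sh\n# resume_crashed.sh -- illustrative sketch only, not the attached resume_crashed.pl\n# Assumes the Striim console is at $STRIIM_HOME/bin/console.sh and reads commands\n# from standard input; adjust the path, cluster/login details, and app names.\nSTRIIM_HOME=/opt/Striim-3.6.7        # assumed install location\nLOG=logs/restart.log\n\n{\n  date\n  \"$STRIIM_HOME/bin/console.sh\" &lt;&lt;'EOF'\nuse admin;\nresume application admin.ora_blob;\nresume application admin.ora2ora;\nresume application admin.ora_ddl;\nexit;\nEOF\n} &gt;&gt; \"$LOG\" 2&gt;&amp;1\n\n# Example crontab entry to run it every five minutes:\n# */5 * * * * /path/to/resume_crashed.sh</pre>\n<p>As with the attached script, treat this only as a starting point and adjust the list of applications to your environment.</p>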
"} {"page_content": "<p>When data in Kafka is saved in a format other than Avro, such as JSON, a KafkaReader may extract field values with a multi-layer get function:<br> e.g., data.get('data').get('col_name1').<br>The same approach may not work when the data in Kafka is in Avro format.</p>\n<p>This article shows an example of how to get a field value from Avro-formatted data.</p>\n<p>Setup (see the attached TQL file for details):<br>1. OracleReader -&gt; KafkaWriter,AvroFormatter<br>2. KafkaReader,AvroParser -&gt; CQ (to list the fields) -&gt; Sysout (to show the output)</p>\n<p>The query in the CQ is:<br><span class=\"wysiwyg-color-blue\">SELECT </span><br><span class=\"wysiwyg-color-blue\">VALUE(data.get(\"metadata\"), \"OperationName\").toString() as opName,</span><br><span class=\"wysiwyg-color-blue\">VALUE(data.get(\"metadata\"), \"TableName\").toString() as tableName,</span><br><span class=\"wysiwyg-color-blue\">VALUE(data.get(\"data\"), \"A\").toString() as A,</span><br><span class=\"wysiwyg-color-blue\">NVL(VALUE(data.get(\"data\"), \"B\"), \"0\").toString() as B,</span><br><span class=\"wysiwyg-color-blue\">NVL(VALUE(data.get(\"data\"), \"D\"), \"0\").toString() as D</span><br><span class=\"wysiwyg-color-blue\">FROM readFromKafkaStream</span><br><span class=\"wysiwyg-color-blue\">;</span></p>\n<p><br>Test result:</p>\n<p>SQL&gt; desc s1<br> Name Null? Type<br> ----------------------------------------- -------- ----------------------------<br> A NOT NULL VARCHAR2(100)<br> D VARCHAR2(100)<br> B CLOB</p>\n<p>SQL&gt; insert into s1 values (1,2,3);<br>1 row created.<br>SQL&gt; commit;<br>Commit complete.</p>\n<p>Sysout output:</p>\n<p><span class=\"wysiwyg-color-blue\">MY_SYSOUT: parseDataOutputStream_Type_1_0{</span><br> <span class=\"wysiwyg-color-blue\">opName: \"INSERT\"</span><br><span class=\"wysiwyg-color-blue\"> tableName: \"FZHANG.S1\"</span><br><span class=\"wysiwyg-color-blue\"> A: \"1\"</span><br><span class=\"wysiwyg-color-blue\"> B: \"3\"</span><br><span class=\"wysiwyg-color-blue\"> D: \"2\"</span><br><span class=\"wysiwyg-color-blue\">};</span></p>"}
{"page_content": "<div class=\"accept-terms\">\n<div style=\"text-align: center;\">\n<h2 id=\"01H8C9Q5T6NGK385T1PGK2ZG53\">\n<span>You must accept the Striim License Agreement </span><span>to download the software.</span>\n</h2>\n<h3 id=\"01H8C9Q5T6MDJSFN74MF9VM727\"><a title=\"https://support.striim.com/hc/en-us/articles/360038194454-Striim-Version-Support-Policy\" href=\"https://support.striim.com/hc/en-us/articles/360038194454-Striim-Version-Support-Policy\"> Version Support Policy</a></h3>\n<textarea style=\"width: 500px; height: 300px;\"> \nIMPORTANT: Please read this End User License Agreement (“Agreement”) before clicking the “accept” button, installing, configuring and/or using the Software (as defined below) that accompanies or is provided in connection with this Agreement. By clicking the “Accept” button, installing, configuring and/or using the Software, you and the entity that you represent (“Customer”) agree to be bound by this Agreement with Striim, Inc. (“Striim”). You represent and warrant that you have the authority to bind such entity to these terms. If Customer does not unconditionally agree to all of the terms of this Agreement, use of the Software is strictly prohibited.\n\nTO THE EXTENT CUSTOMER HAS SEPARATELY ENTERED INTO AN END USER LICENSE AGREEMENT WITH STRIIM COVERING THE SAME SOFTWARE, THE TERMS AND CONDITIONS OF SUCH END USER LICENSE AGREEMENT SHALL SUPERSEDE THIS AGREEMENT IN ITS ENTIRETY.\n\nThis Agreement includes and incorporates by reference the following documents:\n\nStandard Terms and Conditions\nExhibit A – Support and Maintenance Addendum\nOrder Forms (as defined below)\n\nThe Agreement includes the documents listed above and states the entire agreement between the parties regarding its subject matter and supersedes all prior and contemporaneous agreements, terms sheets, letters of intent, understandings, and communications, whether written or oral. All amounts paid by Customer under this Agreement shall be non-refundable and non-recoupable, unless otherwise provided herein. Any pre-printed terms in any Order Forms, quotes, or other similar written purchase authorization that add to, or conflict with or contradict, any provisions in the Agreement will have no legal effect. 
The provisions of this Agreement may be amended or waived only by a written document signed by both parties.\n\nSTANDARD TERMS AND CONDITIONS\n1.\tDEFINITIONS\n1.1 “CPU” means a single central processing unit of a Customer System, with one or more Cores. \n1.2 “Core” means each of the independent processor components within a single CPU.\n1.3 “Customer” means that person or entity listed on the Order Form.\n1.4 “Customer System” means one or more computer system(s) that is: (a) owned or leased by Customer or its Subsidiary; and (b) within the possession and control of Customer or its Subsidiary.\n1.5 “Documentation” means the standard end-user technical documentation, specifications, materials and other information Striim supplies in electronic format with the Software or makes available electronically. Advertising and marketing materials are not Documentation.\n1.6 “Effective Date” has the same meaning as used in the Order Form.\n1.7 “Error” means a reproducible failure of the Software to perform in substantial conformity with its Documentation.\n1.8 “Intellectual Property Rights” means copyrights, trademarks, service marks, trade secrets, patents, patent applications, moral rights, contractual rights of non-disclosure or any other intellectual property or proprietary rights, however arising, throughout the world.\n1.9 “Order Form” means the order form executed by Customer substantially in the form set forth on Exhibit B.\n1.10 “Product Use Environment” means the environment, including without limitation the number of Cores or Sources and Targets identified in an Order Form. \n1.11 “Product Use Environment Upgrade” means the addition of any additional Cores or Sources and Targets.\n1.12 “Release” means any Update or Upgrade if and when such Update or Upgrade is made available to Customer by Striim pursuant to Exhibit A. In the event of a dispute as to whether a particular Release is an Upgrade or an Update, Striim’s published designation will be dispositive.\n1.13 “Software” means the software that Striim provides to Customer or its Subsidiary (in object code format only) as identified on the Order Form, and any Releases thereto if and when such Releases are made available by Striim. \n1.14 “Sources and Targets” means the source and target systems of the data being analyzed.\n1.15 “Subsidiary” means with respect to Customer, any person or entity that (a) is controlled by Customer, where “control” means ownership of fifty percent (50%) or more of the outstanding voting securities (but only as long as such person or entity meets these requirements) and (b) has a primary place of business in the United States.\n1.16 “Update” means, if and when available, any Error corrections, fixes, workarounds or other maintenance releases to the Software provided by Striim to Customer.\n1.17 “Upgrade” means, if and when available, new releases or versions of the Software, that materially improve the functionality of, or add material functional capabilities to the Software. “Upgrade” does not include the release of a new product for which there is a separate charge. 
If a question arises as to whether a release is an Upgrade or a new product, Striim’s determination will prevail.\n1.18 “Use” means to cause a Customer System to execute any machine-executable portion of the Software in accordance with the Documentation or to make use of any Documentation, Releases, or related materials in connection with the execution of any machine-executable portion of the Software.\n1.19 “User” means an employee of Customer or its Subsidiary or independent contractor to Customer or its Subsidiary that is working for Customer or its Subsidiary and has been authorized by Customer or its Subsidiary to Use the Software. \n2.\tGRANT AND SCOPE OF LICENSE\n2.1 Software License. Subject to the terms and conditions of this Agreement, during the term specified on the Order Form, Striim hereby grants Customer and its Subsidiaries a non-exclusive, non-transferable (except as provided under Section 12.6), non-sublicensable license for Users to install (if Customer elects to self-install the Software), execute and Use the Software supplied to Customer hereunder, solely within the Product Use Environment on a Customer System and use the Documentation, solely for Customer’s or its Subsidiaries’ own internal business purposes. Customer shall be solely responsible for all acts or omissions of its Subsidiaries and any breach of this Agreement by a Subsidiary of Customer shall be deemed a breach by Customer.\n2.2 License Restrictions. Customer shall not: (a) Use the Software except as expressly permitted under Section 2.1; (b) separate the component programs of the Software for use on different computers; (c) adapt, alter, publicly display, publicly perform, translate, create derivative works of, or otherwise modify the Software; (d) sublicense, lease, rent, loan, or distribute the Software to any third party; (e) transfer the Software to any third party (except as provided under Section 12.6); (f) reverse engineer, decompile, disassemble or otherwise attempt to derive the source code for the Software, except as permitted by applicable law; (g) remove, alter or obscure any proprietary notices on the Software or Documentation; or (h) allow third parties to access or use the Software, including any use in any application service provider environment, service bureau, or time-sharing arrangements. No portion of the Software may be duplicated by Customer, except as otherwise expressly authorized in writing by Striim. Customer may, however, make a reasonable number of copies of the machine-readable portion of the Software solely for back-up purposes, provided that such back-up copy is used only to restore the Software on a Customer System, and not for any other use or purpose. Customer will reproduce on each such copy all notices of patent, copyright, trademark or trade secret, or other notices placed on such Software by Striim or its suppliers. \n2.3 License Keys. Customer acknowledges that the Software may require license keys or other codes (“Keys”) in order for Customer to install and/or Use the Software. Such Keys may also control continued access to, and Use of, the Software, and may prevent the Use of the Software on any systems except a Customer System. Customer will not disclose the Keys or information about the Keys to any third party. Customer shall not Use any Software except pursuant to specific Keys issued by Striim that authorizes such Use.\n3.\tPROPRIETARY RIGHTS. 
Customer acknowledges and agrees that the Software, including its sequence, structure, organization, source code and Documentation contains valuable Intellectual Property Rights of Striim and its suppliers. The Software and Documentation are licensed and not sold to Customer, and no title or ownership to such Software, Documentation, or the Intellectual Property Rights embodied therein passes as a result of this Agreement or any act pursuant to this Agreement. The Software, Documentation, and all Intellectual Property Rights therein are the exclusive property of Striim and its suppliers, and all rights in and to the Software and Documentation not expressly granted to Customer in this Agreement are reserved. Striim owns all rights, title, and interest to the Software and Documentation. Nothing in this Agreement will be deemed to grant, by implication, estoppel or otherwise, a license under any existing or future patents of Striim, except to the extent necessary for Customer to Use the Software and Documentation as expressly permitted under this Agreement.\n4.\tCONFIDENTIALITY\n4.1 Confidential Information. Each party (the “Disclosing Party”) may during the term of this Agreement disclose to the other party (the “Receiving Party”) non-public information regarding the Disclosing Party’s business, including technical, marketing, financial, employee, planning, and other confidential or proprietary information, that (1) if in tangible form, is clearly marked at the time of disclosure as being confidential, or (2) if disclosed orally or visually, is designated at the time of disclosure as confidential, or (3) is reasonably understood to be confidential or proprietary information, whether or not marked. (“Confidential Information”). Without limiting the generality of the foregoing, the Software and the Documentation constitute Striim’s Confidential Information and Customer Data constitutes Customer's Confidential Information.\n4.2 Protection of Confidential Information. The Receiving Party will not use any Confidential Information of the Disclosing Party for any purpose not permitted by this Agreement, and will disclose the Confidential Information of the Disclosing Party only to employees or contractors of the Receiving Party who have a need to know such Confidential Information for purposes of this Agreement and are under a duty of confidentiality no less restrictive than the Receiving Party’s duty hereunder. The Receiving Party will protect the Disclosing Party’s Confidential Information from unauthorized use, access, or disclosure in the same manner as the Receiving Party protects its own confidential or proprietary information of a similar nature and with no less than reasonable care.\n4.3 Exceptions. The Receiving Party’s obligations under Section 4.2 with respect to Confidential Information of the Disclosing Party will terminate to the extent such information: (a) was already known to the Receiving Party at the time of disclosure by the Disclosing Party; (b) is disclosed to the Receiving Party by a third party who had the right to make such disclosure without any confidentiality restrictions; (c) is, or through no fault of the Receiving Party has become, generally available to the public; or (d) is independently developed by the Receiving Party without access to, or use of, the Disclosing Party’s Confidential Information. 
In addition, the Receiving Party will be allowed to disclose Confidential Information of the Disclosing Party to the extent that such disclosure is (i) approved in writing by the Disclosing Party, (ii) necessary for the Receiving Party to enforce its rights under this Agreement in connection with a legal proceeding; or (iii) required by law or by the order or a court of similar judicial or administrative body, provided that the Receiving Party notifies the Disclosing Party of such required disclosure promptly and in writing and cooperates with the Disclosing Party, at the Disclosing Party’s reasonable request and expense, in any lawful action to contest or limit the scope of such required disclosure.\n4.4 Return of Confidential Information. The Receiving Party will either return to the Disclosing Party or destroy all Confidential Information of the Disclosing Party in the Receiving Party’s possession or control and permanently erase all electronic copies of such Confidential Information promptly upon the written request of the Disclosing Party or the termination of this Agreement, whichever comes first. Upon request, the Receiving Party will certify in writing that it has fully complied with its obligations under this Section 4.4.\n4.5 Confidentiality of Agreement. Neither party will disclose the terms of this Agreement to anyone other than its attorneys, accountants, and other professional advisors under a duty of confidentiality except (a) as required by law, or (b) pursuant to a mutually agreeable press release, or (c) in connection with a proposed merger, financing, or sale of such party’s business.\n5.\tADDITIONAL ORDERS; DELIVERY; INSTALLATION\n5.1 Additional Orders. Subject to the terms and conditions of this Agreement, Customer or a Subsidiary of Customer may place orders with Striim for renewals to Software licenses, additional licenses to the Software and/or support and maintenance or training services, including but not limited to Product Use Environment Upgrades (collectively “Additional Products and Services”) by contacting Striim and executing another Order Form with Striim for the Additional Products and Services.\n5.2 Delivery and Installation. Striim will install the Software on a Customer System unless Customer elects to self-install, in which case Striim will deliver the Software and its related Documentation electronically to Customer and Customer will be solely responsible for installing the Software on its Customer System (“Delivery”). Customer will receive all Updates and Upgrades from Striim under this Agreement by electronic delivery. Customer shall promptly provide to Striim all information that is necessary to enable Striim to transmit electronically all such items to Customer. Customer acknowledges that certain internet connections and hardware capabilities are necessary to complete electronic deliveries, and agrees that Customer personnel will receive electronic deliveries by retrieving the Software placed by Striim on a specific Striim controlled server. Customer acknowledges that the electronic deliveries may be slow and time-consuming depending upon network traffic and reliability. In furtherance of the purpose of the electronic deliveries, Striim will not deliver to Customer, and Customer will not accept from Striim, any Software or Documentation deliverable under this Agreement in any tangible medium including, but not limited to, CD-ROM, tape or paper. 
Customer will be deemed to have unconditionally and irrevocably accepted the Software and related Documentation upon Delivery. \n6.\tSUPPORT; TRAINING.\n6.1 Support and Maintenance. Support and maintenance services provided by Striim (if any) for the Software will be subject to the timely and full payment of all support fees as set forth in an Order Form and will be subject to the terms and conditions of Exhibit A (Support and Maintenance Addendum) to this Agreement. Other than as expressly provided in Exhibit A, this Agreement does not obligate Striim to provide any support or maintenance services. For the avoidance of doubt, Striim has the right to suspend any and all support and maintenance services if Customer has not made timely and full payment of all support and maintenance fees as set forth in an Order Form.\n6.2 Training. Striim shall have no obligation to provide training of Customer personnel regarding Use of the Software unless Customer purchases training services from Striim, as specified in the relevant Order Form, which training services will be provided, based on Striim’s then-current training services policy. Customer must purchase training services from Striim if Customer elects to self-install the Software.\n7.\tTERM AND TERMINATION\n7.1 Term. The term of this Agreement will begin on the Effective Date and continue in force until this Agreement is terminated in accordance with Section 7.2. The term of the Software license shall be as set forth on the Order Form. \n7.2 Termination of Agreement. Each party may terminate this Agreement for material breach by the other party which remains uncured thirty (30) days after delivery of written notice of such breach to the breaching party. Notwithstanding the foregoing, Striim may immediately terminate this Agreement and all licenses granted hereunder if Customer breaches Section 2 hereof. The foregoing rights of termination are in addition to any other rights and remedies provided in this Agreement or by law. \n7.3 Effect of Termination. Upon termination of this Agreement (or termination of any license granted hereunder), all rights of Customer to Use the Software (or under the relevant license) will cease and: (a) all license rights granted under this Agreement will immediately terminate and Customer shall promptly stop all Use of the Software; (b) Striim’s obligation to provide support for the Software will terminate; (c) Customer shall erase all copies of the Software from Customer’s computers, and destroy all copies of the Software and Documentation on tangible media in Customer’s possession or control or return such copies to Striim; and (d) upon request by Striim, Customer shall certify in writing to Striim that that it has returned or destroyed such Software and Documentation. \n7.4 Survival. Sections 1, 3, 4, 7.3, 7.4, 8, 9, 10 (only for claims arising based on Use of the Software prior to termination of the applicable license), 11, and 12 will survive the termination of this Agreement.\n8.\tFEES. Customer shall pay Striim the fees as set forth on the applicable Order Form. Striim shall send invoices to Customer based on the invoice schedules set forth on the applicable Order Form. All payments shall be made in U.S. dollars. Unless otherwise specified in the applicable Order Form, Customer will pay all fees payable to Striim within thirty (30) days following the receipt by Customer of an invoice from Striim. 
Late payments will accrue interest at the rate of one and one-half percent (1.5%) per month, or if lower, the maximum rate permitted under applicable law. Striim reserves the right to increase fees each calendar year with thirty (30) days prior written notice to Customer. Additional payment terms may be set forth in the applicable Order Form. All fees are exclusive of any sales, use, excise, import, export or value-added tax, levy, duty or similar governmental charge which may be assessed based on any payment due hereunder, including any related penalties and interest (“Taxes”). Customer is solely responsible for all Taxes resulting from transactions under this Agreement, except Taxes based on Striim’s net income. Customer will indemnify and hold Striim harmless from (a) the Customer’s failure to pay (or reimburse Striim for the payment of) all such Taxes; and (b) the imposition of and failure to pay (or reimburse Striim for the payment of) all governmental permit fees, license fees, customs fees and similar fees levied upon delivery of the Software or Documentation which Striim may incur in respect of this Agreement or any other fees required to be made by Customer under this Agreement, together with any penalties, interest, and collection or withholding costs associated therewith.\n9.\tLIMITED WARRANTY\n9.1 Software Warranty. Striim warrants to, and for the sole benefit of, Customer that, subject to Section 9.2, any Software, as delivered by Striim and properly installed and operated within the Product Use Environment and used as permitted under this Agreement and in accordance with the Documentation, will perform substantially in accordance with the Documentation for ninety (90) days from the date of Delivery. Customer’s exclusive remedy and Striim’s sole liability for breach of this warranty is for Striim, at its own expense, to replace the Software with a version of the Software that corrects those Errors that Customer reports to Striim during such warranty period. Any Error correction provided will not extend the original warranty period. \n9.2 Exclusions. Striim will have no obligation under this Agreement to correct, and Striim makes no warranty with respect to, Errors related to: (a) improper installation of the applicable Software; (b) changes that Customer has made to the applicable Software; (c) Use of the applicable Software in a manner inconsistent with the Documentation and this Agreement; (d) combination of the applicable Software with third party hardware or software not conforming to the operating environment specified in the Documentation; or (e) malfunction, modification, or relocation of Customer’s servers.\n9.3 Disclaimer. EXCEPT AS PROVIDED IN SECTION 9.1, STRIIM HEREBY DISCLAIMS ALL WARRANTIES WHETHER EXPRESS, IMPLIED OR STATUTORY WITH RESPECT TO THE SOFTWARE, DOCUMENTATION, INSTALLATION SERVICES, SUPPORT SERVICES, TRAINING SERVICES AND ANY OTHER PRODUCTS OR SERVICES PROVIDED TO CUSTOMER UNDER THIS AGREEMENT, INCLUDING WITHOUT LIMITATION ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NON-INFRINGEMENT, AND ANY WARRANTY AGAINST INTERFERENCE WITH CUSTOMER’S ENJOYMENT OF THE SOFTWARE, DOCUMENTATION, INSTALLATION SERVICES, SUPPORT SERVICES, AND ANY OTHER PRODUCTS OR SERVICES PROVIDED TO CUSTOMER UNDER THIS AGREEMENT. \n10.\tPROPRIETARY RIGHTS INDEMNITY\n10.1 Striim’s Obligation. 
Subject to the terms and conditions of Section 10, Striim will defend at its own expense any suit or action brought against Customer by a third party to the extent that the suit or action is based upon a claim that the Software infringes such third party’s United States copyrights or misappropriates such third party’s trade secrets recognized as such under the Uniform Trade Secrets Act or such other similar laws, and Striim will pay those costs and damages finally awarded against Customer in any such action or those costs and damages agreed to in a monetary settlement of such claim, in each case that are specifically attributable to such claim. However, such defense and payments are subject to the conditions that: (a) Striim will be notified promptly in writing by Customer of any such claim; (b) Striim will have sole control of the defense and all negotiations for any settlement or compromise of such claim; and (c) Customer will cooperate and, at Striim’s request and expense, assist in such defense. THIS SECTION 10.1 STATES STRIIM’S ENTIRE LIABILITY AND CUSTOMER’S SOLE AND EXCLUSIVE REMEDY FOR ANY INTELLECTUAL PROPERTY RIGHT INFRINGEMENT AND/OR MISAPPROPRIATION.\n10.2 Alternative. If Customer’s or its Subsidiaries’ Use of Software is prevented by injunction or court order because of infringement, or should any Software be likely to become the subject of any claim in Striim’s opinion, Customer will permit Striim, at the sole discretion of Striim and no expense to Customer, to: (i) procure for Customer and its Subsidiaries the right to continue using such Software in accordance with this Agreement; or (ii) replace or modify such Software so that it becomes non-infringing while providing substantially similar features. Where (i) and (ii) above are not commercially feasible for Striim, the applicable licenses will immediately terminate and Striim will refund pro rated fees for the remainder of the term to End User. \n10.3 Exclusions. Striim will have no liability to Customer or any of its Subsidiaries for any claim of infringement or misappropriation to the extent based upon: (a) Use of the Software not in accordance with this Agreement or the Documentation; (b) the combination of the applicable Software with third party hardware or software not conforming to the operating environment specified in Documentation; (c) Use of any Release of the Software other than the most current Release made available to Customer; or (d) any modification of the Software by any person other than Striim. Customer will indemnify Striim against all liability, damages and costs (including reasonable attorneys’ fees) resulting from any such claims.\n10.4 Required Updates. In the event the Software become subject to a claim or in Striim’s opinion is likely to be subject to a claim, upon notice from Striim to Customer that required updates are available, Customer agrees to download and install such updates to the Software onto Customer Systems within five (5) business days (the “Required Update Period”). At the end of any Required Update Period, Customer’s and its Subsidiaries’ right and license to Use all prior versions of the Software shall automatically terminate and Striim shall have no liability for any Use of the prior versions of the Software occurring after the Required Update Period.\n11.\tLIMITATION OF LIABILITY. 
IN NO EVENT WILL STRIIM BE LIABLE TO CUSTOMER OR ANY OTHER PARTY FOR ANY SPECIAL, PUNITIVE, INDIRECT, INCIDENTAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATED TO THIS AGREEMENT UNDER ANY LEGAL THEORY, INCLUDING, BUT NOT LIMITED TO, LOSS OF DATA, LOSS OF THE USE OR PERFORMANCE OF ANY PRODUCTS, LOSS OF REVENUES, LOSS OF PROFITS, OR BUSINESS INTERRUPTION, EVEN IF STRIIM KNOWS OF OR SHOULD HAVE KNOWN OF THE POSSIBILITY OF SUCH DAMAGES. IN NO EVENT WILL STRIIM’S TOTAL CUMULATIVE LIABILITY ARISING OUT OF OR RELATED TO THIS AGREEMENT EXCEED THE TOTAL AMOUNT OF FEES RECEIVED BY STRIIM FROM CUSTOMER UNDER THIS AGREEMENT DURING THE TWELVE (12) MONTHS IMMEDIATELY PRECEDING SUCH CLAIM. THIS SECTION 11 WILL APPLY EVEN IF AN EXCLUSIVE REMEDY OF CUSTOMER UNDER THIS AGREEMENT HAS FAILED OF ITS ESSENTIAL PURPOSE.\n12.\tGENERAL\n12.1 Audit Rights. During the term of this Agreement and for two (2) years thereafter, Striim or its representatives, may upon at least ten (10) days’ written notice, inspect and audit records, Customer Systems, and premises of Customer during normal business hours to verify Customer’s compliance with this Agreement. \n12.2 Notices. All notices, consents and approvals under this Agreement must be delivered in writing by courier, by facsimile or by certified or registered mail (postage prepaid and return receipt requested) to the other party at the address set forth above, and will be effective upon receipt or three (3) business days after being deposited in the mail as required above, whichever occurs sooner. Either party may change its address by giving notice of the new address to the other party.\n12.3 Relationship of Parties. The parties hereto are independent contractors. Nothing in this Agreement will be deemed to create an agency, employment, partnership, fiduciary or joint venture relationship between the parties. \n12.4 Publicity. Striim may use Customer’s name and a description of Customer’s Use of the Software for investor relations and marketing purposes.\n12.5 Compliance with Export Control Laws. The Software may contain encryption technology controlled under U.S. export law, the export of which may require an export license from the U.S. Commerce Department. Customer will comply with all applicable export control laws and regulations of the U.S. and other countries. Customer will defend, indemnify, and hold harmless Striim from and against all fines, penalties, liabilities, damages, costs and expenses (including reasonable attorneys’ fees) incurred by Striim as a result of Customer’s breach of this Section 12.5.\n12.6 Assignment. Customer may not assign or transfer, by operation of law, merger or otherwise, any of its rights or delegate any of its duties under this Agreement (including, without limitation, its licenses for the Software) to any third party without Striim’s prior written consent. Any attempted assignment or transfer in violation of the foregoing will be null and void. Striim may assign its rights or delegate its obligations under this Agreement. \n12.7 Governing Law and Venue. This Agreement will be governed by the laws of the State of California, excluding any conflict of law provisions that would require the application of the laws of any other jurisdiction. The United Nations Convention on Contracts for the International Sale of Goods shall not apply to this Agreement. Any action or proceeding arising from or relating to this Agreement must be brought exclusively in a federal or state court located in Santa Clara, California. 
Each party irrevocably consents to the personal jurisdiction and venue in, and agrees to service of process issued by, any such court. Notwithstanding the foregoing, either party may bring an action or suit seeking injunctive relief to protect its Intellectual Property Rights or Confidential Information in any court having jurisdiction.\n12.8 Force Majeure. Any delay in or failure of performance by either party under this Agreement, other than a failure to pay amounts when due, will not be considered a breach of this Agreement and will be excused to the extent caused by any occurrence beyond the reasonable control of such party.\n12.9 Remedies. Except as provided in Sections 9 and 10 of this Agreement, the parties’ rights and remedies under this Agreement are cumulative. Customer acknowledges that the Software contains valuable trade secrets and proprietary information of Striim, that any actual or threatened breach of Section 2 (Grant and Scope of License) or Section 4 (Confidentiality) will constitute immediate, irreparable harm to Striim for which monetary damages would be an inadequate remedy, and that injunctive relief is an appropriate remedy for such breach. If any legal action is brought to enforce this Agreement, the prevailing party will be entitled to receive its attorneys’ fees, court costs, and other collection expenses, in addition to any other relief it may receive.\n12.10 Waiver; Severability. Any waiver or failure to enforce any provision of this Agreement on one occasion will not be deemed a waiver of any other provision or of such provision on any other occasion. If any provision of this Agreement is adjudicated to be unenforceable, such provision will be changed and interpreted to accomplish the objectives of such provision to the greatest extent possible under applicable law and the remaining provisions will continue in full force and effect. \n12.11 Order of Precedence; Construction. The provisions of the standard terms and conditions will prevail regardless of any inconsistent or conflicting provisions on any Order Forms. The Section headings of this Agreement are for convenience and will not be used to interpret this Agreement. As used in this Agreement, the word “including” means “including but not limited to.” \n \nEXHIBIT A\n\n\nSUPPORT AND MAINTENANCE POLICY\n\nTHE TERMS AND CONDITIONS IN THIS ADDENDUM APPLY TO THE SUPPORT AND MAINTENANCE SERVICES PROVIDED BY STRIIM TO CUSTOMER (IF ANY). SUBJECT TO CUSTOMER’S PAYMENT OF THE APPLICABLE SUPPORT AND MAINTENANCE FEES, STRIIM WILL PROVIDE THE SUPPORT AND MAINTENANCE SERVICES DESCRIBED IN THIS ADDENDUM.\n1.\tDEFINITIONS. For purposes of this Addendum, the following terms have the following meanings. Capitalized terms not defined in this Addendum have the meanings described in the Agreement.\n1.1\t“Response Time” means the period of time between (a) Customer’s registration of an Error pursuant via Striim’s online ticketing system in accordance with Section 2.3 (Error Correction); and (b) the commencement of steps to address the Error in accordance with this Addendum by Striim.\n1.2\t “Support Services” means the support and maintenance services described in Section 2 (Support Services) to be performed by Striim pursuant to this Addendum. \n2.\tSUPPORT SERVICES \n2.1\tForm of Support. Striim will provide Support Services by means set forth in the following table, subject to the conditions regarding availability or response times with respect to each such form of access as set forth in the table. 
Support Services will consist of answering questions regarding the proper Use of, and providing troubleshooting assistance for, the Software. \nFORM OF SUPPORT\tAVAILABILITY\nTelephonic support +1 (650) 241-0680 or such other phone number as Striim may provide from time to time)\t8 am to 7 pm Pacific Time, Mon. – Fri. (excluding Striim Holidays)\nEmail Support (support@Striim.com or such other email address as Striim may provide from time to time)\t24 x 7 x 365\nWeb-based Support (http://www.Striim.com/ or such other URL as Striim may provide from time to time)\t24 x 7 x 365\n\n2.2\tSeverity Levels. If Customer identifies an Error and would like such Error corrected, Customer will promptly report such Error in writing to Striim, specifying (a) the nature of the Error; (b) the circumstances under which the Error was encountered, including the processes that were running at the time that the Error occurred; (c) technical information for the equipment upon which the Software was running at the time of the Error; (d) the steps, if any, that Customer took immediately following the Error; and (e) the immediate impact of the Error upon Customer’s ability to operate the Software. Upon receipt of any such Error report, Striim will evaluate the Error and classify it into one of the following Severity Levels based upon the following severity classification criteria:\nSEVERITY LEVEL\tSEVERITY CLASSIFICATION CRITERIA\nSeverity 1\tError renders continued Use of the Software commercially infeasible\nSeverity 2\tError prevents a critical function of the Software from operating in substantial accordance with the Documentation.\nSeverity 3\tError prevents a major non-critical function of the Software from operating in substantial accordance with the Documentation.\nSeverity 4\tError adversely affects a minor function of the Software or consists of a cosmetic nonconformity, error in Documentation, or other problem of similar magnitude.\n\n2.3\tError Correction. Striim will use commercially reasonable efforts to provide a correction or workaround to all reproducible Errors that are reported in accordance with Section 2.2 (Severity Levels) above. Such corrections or workarounds may take the form of Updates, procedural solutions, correction of Documentation errors, or other such remedial measures as Striim may determine to be appropriate. Striim will also endeavor to affect the following Response Times for each of the following categories of Errors. \nSEVERITY LEVEL\tRESPONSE TIME\nSeverity 1\tOne (1) Hour during M-F; two (2) hours on weekends\nSeverity 2\tTwo (2) Hours M-F; four (4) hours on weekends\nSeverity 3\tFour (4) business days\nSeverity 4\tSeven (7) business days\n\n3.\tMAINTENANCE\n3.1\tUpdates. Customer will be entitled to obtain and Use all Updates and Upgrades that are generally released during the term of this Addendum provided that Customer has paid the applicable support and maintenance fees. Striim may make such Updates and Upgrades available to Customer through electronic download. The provision of any Update or Upgrade to Customer will not operate to extend the original warranty period on the Software.\n3.2\tIntellectual Property. Upon release of an Update or Upgrade to Customer, such Update or Upgrade will be deemed to be “Software” within the meaning of the Agreement, and subject to payment by Customer of the applicable support and maintenance fees, Customer will acquire license rights to Use such Update or Upgrade in accordance with the terms and conditions of the Agreement. 
There are no express or implied licenses in this Addendum, and all rights are reserved to Striim.\n4.\tCUSTOMER RESPONSIBILITIES AND EXCLUSIONS\n4.1\tCustomer Responsibilities. As a condition to Striim’s obligations under this Addendum, Customer will provide the following:\n(a)\tGeneral Cooperation. Customer will cooperate with Striim to the extent that such cooperation would facilitate Striim’s provision of Support Services hereunder. Without limiting the foregoing, at Striim’s request, Customer will (i) provide Striim with reasonable access to appropriate personnel, records, network resources, maintenance logs, physical facilities, and equipment; (ii) refrain from undertaking any operation that would directly or indirectly block or slow down any maintenance service operation; and (iii) comply with Striim’s instructions regarding the Use and operation of the Software.\n(b)\tData Backup. Customer agrees and acknowledges that Striim’s obligations under this Addendum are limited to the Software, and that Striim is not responsible for the operation and general maintenance of Customer’s computing environment. Striim will not be responsible for any losses or liabilities arising in connection with any failure of data backup processes. \n(c)\tSpecific Customer Assistance Requests. Customer may request that, in providing support services hereunder, Striim directly access Customer’s production systems, either by logging in using Customer’s access credentials and/or through a remote (e.g., WebEx) session initiated by Customer. Striim is not responsible for any effect on, loss of, or damage to, Customer’s technology systems or data from Striim’s attempt to address trouble tickets from within Customer’s production environment, nor is Striim agreeing to any Customer-prescribed security requirements as a condition of such access. Customer also may request that, in providing support services hereunder, Striim receive Customer data from one or more specific transactions for the purpose of attempting to re-create errors. Customer will provide only such data that Customer may legally provide to Striim, in compliance with Customer’s contractual obligations to third parties. Striim does not promise any level of protection with respect to such data other than as required under the applicable confidentiality provisions in effect between Striim and Customer, even if such data in Customer’s possession is subject to additional legal requirements, and does not warrant that such data will not be lost or compromised. With respect to either of the foregoing scenarios, Striim will require that such request be documented in the support ticketing system and confirmed by Customer in writing, and at its discretion may decline to (as the case may be) access the production system or receive Customer’s transaction data. The provisions of this paragraph supersede any conflicting provision in this Addendum or in the underlying agreement between the parties.\n4.2\tExclusions. 
Notwithstanding anything to the contrary in this Addendum, Striim will have no obligation to provide any Support Services to Customer to the extent that such Support Services arise from or relate to any of the following: (a) any modifications or alterations of the Software by any party other than Striim or Striim’s subcontractors; (b) any Use of the Software in a computing environment not meeting the system requirements set forth in the Documentation, including hardware and operating system requirements; (c) any issues arising from the failure of the Software to interoperate with any other software or systems, except to the extent that such interoperability is expressly mandated in the applicable Documentation; (d) any breakdowns, fluctuations, or interruptions in electric power or the telecommunications network; (e) any Error that is not reproducible by Striim; or (f) any violation of the terms and conditions of this Agreement, including any breach of the scope of a license grant. In addition, Customer agrees and acknowledges that any information relating to malfunctions, bugs, errors, or vulnerabilities in the Support Services constitutes Confidential Information of Striim, and Customer will refrain from using such information for any purpose other than obtaining Support Services from Striim, and will not disclose such information to any third party.\n5.\tTERM AND TERMINATION \n5.1\tTerm. As long as Customer timely pays, as applicable, the annual fees for a term license or the support and maintenance fees applicable for a perpetual license as set forth on the applicable Order Form, the term of this Addendum will commence upon the original date of Delivery of the applicable Software and continue during the term of the Agreement, unless earlier terminated in accordance with this section. \n5.2\tTermination. This Addendum will automatically terminate upon the termination of Customer’s license to the Software set forth in the Agreement. In addition, each party will have the right to terminate this Addendum immediately upon written notice if the other party materially breaches this Addendum and fails to cure such breach within thirty (30) days after written notice of breach by the non-breaching party. Sections 1 (Definitions), 5.2 (Termination), 5.3 (Lapsed Support), 6 (Warranty), and any payment obligations accrued by Customer prior to termination or expiration of this Addendum will survive such termination or expiration. \n5.3\tLapsed Support. For a period of twelve (12) months after any lapse of Support Services through the termination or expiration of this Addendum (other than Striim’s termination for Customer’s breach), Customer subsequently may elect to reinstate such Support Services for such Software upon the terms and conditions set forth in this Agreement; provided, however, that (a) such Support Services have not been discontinued by Striim; (b) the Agreement continues to be in effect; and (c) Customer pays to Striim an amount equal to all of the fees that would have been due to Striim had the Support Services been provided under this Agreement during the entire period of such lapse.\n6.\tWARRANTY. Striim warrants that the Support Services will be performed with at least the same degree of skill and competence normally practiced by consultants performing the same or similar services. 
Customer’s sole and exclusive remedy, and Striim’s entire liability, for any breach of the foregoing warranty shall be for Striim to reperform, in a conforming manner, any nonconforming Support Services that are reported to Striim by Customer in writing within thirty (30) days after the date of completion of such Services. \nEXCEPT AS EXPRESSLY SET FORTH IN THE PRECEDING PARAGRAPH, THE SUPPORT SERVICES AND ALL MATERIALS FURNISHED TO CUSTOMER UNDER THIS ADDENDUM ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND. WITHOUT LIMITING THE FOREGOING, EXCEPT AS SET FORTH IN THIS SECTION, STRIIM DISCLAIMS ANY AND ALL REPRESENTATIONS AND WARRANTIES, GUARANTEES, AND CONDITIONS, WHETHER EXPRESS, IMPLIED, OR STATUTORY, WITH RESPECT TO THE SUPPORT SERVICES AND ANY MATERIALS FURNISHED HEREUNDER, INCLUDING THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NONINFRINGEMENT, ACCURACY, AND QUIET ENJOYMENT.\n\n\n \n </textarea><br><button id=\"accept-terms\" style=\"border: 2px solid #00A7E5; background: #00A7E5; color: #fff;\">I accept the Striim License Agreement</button>\n</div>\n<div style=\"text-align: center;\"></div>\n<div style=\"text-align: center;\"></div>\n</div>\n<div class=\"download-links\" style=\"display: none;\">\n<p>Here are the latest version of Striim GA software</p>\n<p><em><span class=\"wysiwyg-font-size-large\">TGZ installation packages 4.2.0.3</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/Striim_4.2.0.3.tgz\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/Striim_4.2.0.3.tgz\">Striim TGZ Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/Striim_Agent_4.2.0.3.tgz\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/Striim_Agent_4.2.0.3.tgz\">Striim Agent Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">RPM Installation Packages 4.2.0.3</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-dbms-4.2.0.3-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-dbms-4.2.0.3-Linux.rpm\">Linux RPM DBMS Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-node-4.2.0.3-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-node-4.2.0.3-Linux.rpm\">Linux RPM Node Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-samples-4.2.0.3-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-samples-4.2.0.3-Linux.rpm\">Linux RPM Sample Applications Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-agent-4.2.0.3-Linux.rpm\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-agent-4.2.0.3-Linux.rpm\">Linux RPM Agent Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">DEB Installation Package 4.2.0.3</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-dbms-4.2.0.3-Linux.deb\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-dbms-4.2.0.3-Linux.deb\">Linux Debian DBMS Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-node-4.2.0.3-Linux.deb\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-node-4.2.0.3-Linux.deb\">Linux Debian Node Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-samples-4.2.0.3-Linux.deb\" 
href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-samples-4.2.0.3-Linux.deb\">Linux Debian Sample Applications Package</a></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-agent-4.2.0.3-Linux.deb\" href=\"https://striim-downloads.striim.com/Releases/4.2.0.3/striim-agent-4.2.0.3-Linux.deb\">Linux Debian Agent Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">NSK/NSX Package 4.1.2</span></em></p>\n<p><a title=\"https://striim-downloads.s3.us-west-1.amazonaws.com/Releases/4.1.2/E4012\" href=\"https://striim-downloads.s3.us-west-1.amazonaws.com/Releases/4.1.2/E4012\">NSK Package</a></p>\n<p><a title=\"https://striim-downloads.s3.us-west-1.amazonaws.com/Releases/4.1.2/X4012\" href=\"https://striim-downloads.s3.us-west-1.amazonaws.com/Releases/4.1.2/X4012\">NSX Package</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">EventPublish API 4.1.0.1</span></em></p>\n<p><a title=\"https://striim-downloads.striim.com/Releases/4.1.0.1/Striim_EventPublishAPI_4.1.0.1.zip\" href=\"https://striim-downloads.striim.com/Releases/4.1.0.1/Striim_EventPublishAPI_4.1.0.1.zip\">EventPublish API</a></p>\n<p><em><span class=\"wysiwyg-font-size-large\">Striim User Guide</span></em></p>\n<p>Please get the PDF version of the user guide through the web UI: click Help -&gt; Documentation (PDF).</p>\n<p> </p>\n<h2 id=\"01H8C9Q5T7N0E7GV5130313CM9\"><em>Previous Versions of Striim</em></h2>\n<p>To download previous versions of Striim, please open a ticket with Striim support.</p>\n</div>"} {"page_content": "<p>The following are the minimum requirements for a Striim cluster node. 
Additional memory and disk space will be required depending on the size and number of events stored in memory and persisted to disk by your applications.</p>\r\n<p>memory:</p>\r\n<p>for evaluation and development: 4 GB available for use by Striim (so the system should have at least 5GB, preferably 8GB or more)<br> for production: 32 GB or more depending on application requirements</p>\r\n<p>free disk space required:</p>\r\n<p>for evaluation and development: 5GB including application, sample applications, and their sample data for production: 100GB or more depending on application requirements<br> free disk space must never drop below 10% on any node</p>\r\n<p>supported operating systems</p>\r\n<p>64-bit CentOS 6.6<br> 64-bit Ubuntu 14.04<br> Mac OS X 10 (for evaluation and development purposes only)</p>\r\n<p>supported Java environments 64-bit OpenJDK 7</p>\r\n<p>64-bit Oracle SE Development Kit 7 (required to use HTTPReader or SNMPParser) firewall: the following ports must be open for communication among nodes in the cluster</p>\r\n<p>on the node running the Derby metadata repository, port 1527 for TCP<br> on the node running the web UI, port 9080 (http) and/or 9081 (https) for TCP (or see Changing the web UI ports)<br> on all nodes:</p>\r\n<p>port 5701 for TCP (Hazelcast)<br> port 9300 for TCP (Elasticsearch)<br> ports 49152-65535 for TCP (Java Message Queue)<br> port 54327 for multicast UDP on an IP address in the 239 range chosen based on the cluster name (to ensure that each cluster uses a different address). To find that address, install Striim on the first node and look in webaction-node.log (see Reading log files) for a message such as \"Using Multicast to discover the cluster members on group 239.189.210.200 port 54327.\" (If you do not wish to use multicast UDP, see Using TCP/IP instead of multicast UDP.)</p>\r\n<p>The web client has been tested on Chrome. Other web browsers may work, but if you encounter bugs, try Chrome.</p>\r\n</div>\r\n</div>\r\n</div>"} {"page_content": "<p>To run OracleReader to capture data from Oracle redo log, Striim requires the Database level supplemental logging to be enabled. Here is the syntax on how to check that in Oracle sqlplus.</p>\n<pre>COLUMN log_min HEADING 'Minimum|Supplemental|Logging?' FORMAT A12\nCOLUMN log_pk HEADING 'Primary Key|Supplemental|Logging?' FORMAT A12\nCOLUMN log_fk HEADING 'Foreign Key|Supplemental|Logging?' FORMAT A12\nCOLUMN log_ui HEADING 'Unique|Supplemental|Logging?' FORMAT A12\nCOLUMN log_all HEADING 'All Columns|Supplemental|Logging?' FORMAT A12\n\nSELECT SUPPLEMENTAL_LOG_DATA_MIN log_min, \n SUPPLEMENTAL_LOG_DATA_PK log_pk, \n SUPPLEMENTAL_LOG_DATA_FK log_fk,\n SUPPLEMENTAL_LOG_DATA_UI log_ui,\n SUPPLEMENTAL_LOG_DATA_ALL log_all\n FROM V$DATABASE; \n </pre>\n<p>'Minimum Supplemental Logging' flag should be YES. Then check if either 'Primary Key Supplemental Logging' or 'All Columns Supplemental Logging' flag is also YES. If either of them is YES, then you are all set. 
Otherwise, if neither of them is YES, you will need to check whether table-level supplemental logging is enabled:

COLUMN LOG_GROUP_NAME HEADING 'Log Group' FORMAT A20
COLUMN TABLE_NAME HEADING 'Table' FORMAT A15
COLUMN ALWAYS HEADING 'Conditional or|Unconditional' FORMAT A14
COLUMN LOG_GROUP_TYPE HEADING 'Type of Log Group' FORMAT A20

SELECT
    LOG_GROUP_NAME,
    OWNER,
    TABLE_NAME,
    DECODE(ALWAYS,
      'ALWAYS', 'Unconditional',
      'CONDITIONAL', 'Conditional') ALWAYS,
    LOG_GROUP_TYPE
  FROM DBA_LOG_GROUPS
 WHERE OWNER='<upper case owner name>' AND TABLE_NAME='<upper case table name>';

If 'Type of Log Group' shows either 'ALL COLUMN LOGGING' or 'PRIMARY KEY LOGGING', you are good. Otherwise, please follow the user guide to enable either database-level or table-level supplemental logging for the primary key columns.

1. What is the unit of refreshinterval?

The cache refreshinterval parameter is specified in microseconds, so to set the refresh interval to one minute you would specify:

refreshinterval 60000000

2. How do I make the refresh always happen at a specific time of day?

Starting with version 3.5.1, the refreshStartTime parameter is available. You can specify that time and combine it with refreshinterval to control the cache refresh. For example, the following tells the system to refresh the cache at 23:00:00 system time, once a day:

CREATE CACHE ZipCache USING DatabaseReader (
  ConnectionURL:'jdbc:mysql://10.1.1.1/datacenter', Username:'username',
  Password:'passwd', Query: "SELECT * FROM ZipData" )
PARSE USING DSVPARSER ( columndelimiter: '\t', header: 'true' )
QUERY (keytomap: 'Zip', refreshinterval: '86400000000',
  refreshStartTime:'23:00:00') OF ZipCache_Type;

Sometimes the source system will send a null value in a column you are processing. If the column is null, the TO_STRING function will throw a NullPointerException (NPE), which will either crash your application or cause you to lose data. To avoid feeding a conversion function a null pointer, use the NVL() function to substitute a specific value when the source field is NULL. For example:

CASE WHEN IS_PRESENT(x,data,0)==true
THEN TO_STRING(NVL(data[0], 'NULL_VAL'))
ELSE "NOT_PRESENT"
END as POLICE_SIRA_NO

This replaces NULL with the literal string 'NULL_VAL'.

Striim ships a Kafka 0.8.0 server with the product package, under the Kafka directory. It also ships Kafka client libraries for both 0.8.0 and 0.9.0, so when you use KafkaReader you can choose either version. A client library can connect to a Kafka server of the same or a higher version, but not the reverse. If you use Kafka Reader 0.9.0 to connect to a Kafka 0.8.0 server, you will get a confusing error message. 
For example, you will see something like this</p>\n<p> </p>\n<p class=\"p1\"><span class=\"s1\">2017-05-18 15:47:45,959 @ -ERROR BaseServer_WorkingThread-6 com.webaction.runtime.components.Source.run (Source.java:100) Exception thrown by Source Adapter SRCKFK1</span></p>\n<p class=\"p1\"><span class=\"s1\">com.webaction.common.exc.ConnectionException: Failure in Kafka Connection. Closing the KafkaReader. </span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at com.webaction.proc.KafkaV9Reader.receiveImpl(KafkaV9Reader.java:464)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at com.webaction.proc.BaseProcess.receive(BaseProcess.java:321)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at com.webaction.runtime.components.Source.run(Source.java:98)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at java.util.concurrent.FutureTask.run(FutureTask.java:266)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at java.lang.Thread.run(Thread.java:745)</span></p>\n<p class=\"p1\"><span class=\"s1\">Caused by: com.webaction.common.exc.ConnectionException: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'responses': Error reading field 'partition_responses': Error reading array of size 574235203, only 3539234 bytes available</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at com.webaction.proc.KafkaV9Reader.startConsumingData(KafkaV9Reader.java:431)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at com.webaction.proc.KafkaV9Reader.receiveImpl(KafkaV9Reader.java:459)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>... 
7 more</span></p>\n<p class=\"p1\"><span class=\"s1\">Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'responses': Error reading field 'partition_responses': Error reading array of size 574235203, only 3539234 bytes available</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:439)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:265)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:908)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>at com.webaction.proc.KafkaV9Reader.startConsumingData(KafkaV9Reader.java:379)</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>... 8 more</span></p>\n<p class=\"p1\"> </p>\n<p class=\"p1\"><span class=\"s1\">On the other hand, if you have your own Kafka server, say V0.10.0, then you can use either KafkaReader 0.9.0 or KafkaReader 0.8.0 to connect without any issue. </span></p>\n<p class=\"p1\"> </p>\n<p class=\"p1\"><span class=\"s1\">If you decide use Kafka 0.10.0 to provide the persist stream for Striim, that should also work.</span></p>\n<p class=\"p1\"> </p>"} {"page_content": "<p>This FAQ is a section in the General category of your Help Center knowledge base. We created this category and a few common sections to help you get started with your Help Center.</p>\n\n<p>The knowledge base in the Help Center consists of three main page types: category pages, section pages, and articles. Here's the structure:</p>\n\n<p><img src=\"////p6.zdassets.com/hc/assets/sample-articles/article0_image.png\" alt=\"image\"></p>\n\n<p>You can add your own content and modify or completely delete our content. See the <a href=\"https://support.zendesk.com/hc/en-us/articles/203664366\">Contributor guide to the Help Center</a> to learn how.</p>"} {"page_content": "<p class=\"p1\">TQL applications may import Java functions. 
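The original article showed the custom function and its TQL usage as screenshots that are no longer available here. As a stand-in, here is a minimal sketch of what such a helper class might look like; only the AggFunctions name comes from the article, while the package, method names, and logic are illustrative assumptions:

// AggFunctions.java -- hypothetical example; any public static method is usable the same way.
package com.example.functions;

public class AggFunctions {

    // Returns the larger of two Double values, treating null as "missing".
    public static Double maxOfTwo(Double a, Double b) {
        if (a == null) return b;
        if (b == null) return a;
        return Math.max(a, b);
    }

    // Null-safe string concatenation with a separator.
    public static String concatWith(String left, String right, String sep) {
        return (left == null ? "" : left) + sep + (right == null ? "" : right);
    }
}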
To use a function like this in TQL, compile it into AggFunctions.jar, add that file to .../WebAction/lib, and restart Striim. You can then call the function from your TQL queries, for example in a CQ's SELECT clause, just as you would a built-in function.

How to test an Oracle connection through ojdbc?

When Striim DatabaseReader or DatabaseWriter cannot connect to an Oracle database, the connection may be tested with a simple Java program run from the same server where Striim is installed.

1. Copy the ojdbc driver jar to the $JAVA_HOME/jre/lib/ext/ directory, e.g.:

cp $STRIIM_HOME/lib/ojdbc8.jar /Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre/lib/ext/

To check the ojdbc version:

$ java -jar ojdbc8.jar -getversion
Oracle 12.2.0.1.0 JDBC 4.2 compiled with javac 1.8.0_91 on Tue_Dec_13_06:08:31_PST_2016
#Default Connection Properties Resource
#Sat Jul 20 07:31:56 PDT 2019
***** JCE UNLIMITED STRENGTH IS INSTALLED ****

2. Download the attached Conn.java file.

3. Compile the Java file:

os> javac Conn.java
os> ls -l Conn*
-rw-r--r--  1 user1  staff  1548 Sep 11 09:10 Conn.class
-rw-r--r--  1 user1  staff   903 Sep 11 09:10 Conn.java

4. Test the connection.

syntax: java Conn "<connection_URL>" "username" "password"

For example:

os> java Conn "jdbc:oracle:thin:@10.1.186.102:1521:orcl" "striim" "striim"
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production
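The Conn.java attachment referenced in step 2 is not included here. Based on the usage and output shown above, it appears to be a small class that opens a JDBC connection and prints the database banner; a minimal sketch along those lines (an assumption, not the actual attachment) is:

// Conn.java -- hedged sketch of the connection test utility referenced above.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Conn {
    public static void main(String[] args) throws Exception {
        if (args.length < 3) {
            System.err.println("usage: java Conn \"<connection_URL>\" \"username\" \"password\"");
            System.exit(1);
        }
        // The Oracle JDBC driver is picked up from jre/lib/ext (step 1) or the classpath.
        try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT banner FROM v$version")) {
            // Print the database banner lines, as in the sample output above.
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

Any problem with the URL, credentials, listener, or privileges surfaces as a SQLException, as in the list of potential errors further below.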
5. Test the connection descriptor for an Oracle RAC database:

OS> java Conn "jdbc:oracle:thin:@(DESCRIPTION_LIST=(LOAD_BALANCE=off)(FAILOVER=on)(DESCRIPTION= (CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= 10.1.10.63)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=racdb)))(DESCRIPTION= (CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST= 10.1.10.111)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=racdb))))" "user1" "password1"

Stop the listener on the first description and test again to confirm that failover works.

6. Connect through Oracle Connection Manager (CM):

CM: (HOST=192.168.0.106)(PORT=1964)
DB Listener: (HOST=192.168.56.106)(PORT=1522)

$ java Conn "jdbc:oracle:thin:@(DESCRIPTION=(SOURCE_ROUTE=YES)(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.106)(PORT=1964))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.106)(PORT=1522))(CONNECT_DATA=(service_name=RACDB)))" user1 password1

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

(If CM is stopped, the connection will fail.)

List of potential errors (not exhaustive):

(1) Wrong Oracle SID (changed orcl to orcl1 here):
java Conn "jdbc:oracle:thin:@10.1.186.102:1521:orcl1" "striim" "striim"
Exception in thread "main" java.sql.SQLException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor

(2) Wrong username or password (changed the username from striim to st1riim here):
java Conn "jdbc:oracle:thin:@10.1.186.102:1521:orcl" "st1riim" "striim"
Exception in thread "main" java.sql.SQLException: ORA-01017: invalid username/password; logon denied

(3) Wrong Oracle listener port number (changed from 1521 to 1524 here):
java Conn "jdbc:oracle:thin:@10.1.186.102:1524:orcl" "striim" "striim"
Exception in thread "main" java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection

(4) Wrong IP address (changed from 10.1.186.102 to 10.1.186.109 here):
java Conn "jdbc:oracle:thin:@10.1.186.109:1521:orcl1" "striim" "striim"
Exception in thread "main" java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection

(5) Oracle listener is not started:
Failure in connection to database with url : jdbc:oracle:thin:@192.168.56.3:1521:orcl username : fzhang ErrorCode : 17002;SQLCode : 08006;SQL Message : IO Error: The Network Adapter could not establish the connection

(6) Not enough database privileges:
(A) SQL> create user a identified by a;
java Conn "jdbc:oracle:thin:@10.1.186.102:1521:orcl" "a" "a"
Exception in thread "main" java.sql.SQLException: ORA-01045: user A lacks CREATE SESSION privilege; logon denied
(B) SQL> grant connect, resource to a;
java Conn "jdbc:oracle:thin:@10.1.186.102:1521:orcl" "a" "a"
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production

Formats of Oracle JDBC connection strings supported in 
DatabaseReader/DatabaseWriter</strong></p>\n<p>1. Using SID name</p>\n<p>Example: jdbc:oracle:thin:@10.1.186.109:1521:orcl1</p>\n<p>2. Using Service Name</p>\n<p>Exapmle: </p>\n<p class=\"p1\"><span class=\"s1\">jdbc:oracle:thin:@192.168.55.81:1521/racdb.localdomain</span></p>\n<p>3. Using TNSNAME</p>\n<p>Example: </p>\n<p class=\"p1\"><span class=\"s1\">jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.55.81)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=racdb.localdomain)))</span></p>\n<p class=\"p1\"><strong><span class=\"s1\">Difference of oracle jdbc connection string between OracleReader and DatabaseReader/DatabaseWriter</span></strong></p>\n<p class=\"p1\"><span class=\"s1\">When you use OracelReader, you don't have to add the \"jdbc:oracle:thin:@\" prefix part. You only need to supple the rest of the connection string. When Striim App is ready to connect, it will add the prefix automatically. So in the OracleReader, the format of the connection string of above examples will be like </span></p>\n<p>1. Using SID name</p>\n<p>Example: 10.1.186.109:1521:orcl1</p>\n<p>2. Using Service Name</p>\n<p>Exapmle: </p>\n<p class=\"p1\"><span class=\"s1\">192.168.55.81:1521/racdb.localdomain</span></p>\n<p>3. Using TNSNAME</p>\n<p>Example: </p>\n<p class=\"p1\"><span class=\"s1\">(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.55.81)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=racdb.localdomain)))</span></p>\n<p class=\"p1\"> </p>"} {"page_content": "<p>Sometimes, different users need different timezone setting so they could view the time info in dashboard result using their own timezone setting. This could be achieved by using the ALTER USER command in console.</p>\n<p>console&gt;alter user &lt;user name&gt; set (timezone:'&lt;timezone value&gt;');</p>\n<p>Example</p>\n<p>console&gt;alter user admin set (timezone:'US/Pacific');</p>\n<p>The value of the timezone could be found in the tz database, https://en.wikipedia.org/wiki/List_of_tz_database_time_zones</p>"} {"page_content": "<p>Striim supports KafkaReader and KafkaWriter. When customer uses SASL Kerberos authentication and/or SSL encryption, there are additional steps/configuration files needed to make it work. The detail is discussed in the chapter below in Striim user guide.</p>\n<h4>Using Kafka SASL (Kerberos) authentication with SSL encryption</h4>\n<p>This info is also available online thru the user guide in our support portal.</p>\n<p>https://support.striim.com/hc/en-us/articles/115011723568-3-7-4-Configuring-Kafka</p>\n<p>However, there are several considerations when using this. Please make sure you check the following out to avoid any known issues.</p>\n<p>For SASL_SSL protocol with Kafka Brokers and Cloudera (may be relevant for Confluent, Hortonworks and other non-Apache distros)</p>\n<p>1) Striim and Kafka Utilities must run on Oracle or Open JDK JVMs. 
(IBM JVM is not supported by Kafka Client API for SASL)<br>This error may surface in the Striim Logs or from the Kafka Producer / Consumer Command Line Utilities:</p>\n<div class=\"code panel\">\n<div class=\"codeContent panelContent\">\n<pre class=\"code-java\">javax.security.auth.login.LoginException: unable to find LoginModule class: com.sun.security.auth.module.Krb5LoginModule\n</pre>\n</div>\n</div>\n<p>IBM JVM does not include the package \"com.sun.security.auth.module...\" in it's JVM runtime jars, but expects the package to be replaced with \"com.ibm.security.auth....\" and these do not implement the same compatible configuration fields of a Kafka Client JAAS configurations as documented in our configuration guide.</p>\n<p>2) These Kafka properties are not strictly necessary for SASL_SSL protocols:</p>\n<div class=\"code panel\">\n<div class=\"codeContent panelContent\">\n<pre class=\"code-java\">ssl.keystore.location=/etc/striim/kafkaconf/server.keystore.jks,\nssl.keystore.password=password,\nssl.key.password=password\n</pre>\n</div>\n</div>\n<p>3) KafkaConfig properties in TQL are sensitive to additional whitespace between ';' property delimiters (no '\\n' or SPACE prior to 3.7.5)</p>\n<p>4) 'retry.backoff.ms' and 'metadata.fetch.timeout.ms' producer configs are best removed from the KafkaConfigs.</p>\n<p>5) Validate Kafka console producer and consumer to rule out other port and environment factors with the Kafka configuration.</p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\">1. To Find AMI image</span><u></u><u></u></p>\r\n<p>Please make sure you login to your AWS console, change the region to “US West (N. California)” if that isn’t your default region. This could be done on the upper right corner of your console screen. Then choose Private Images. You should be able to see the AMI image we shared.</p>\r\n<p>A sample screen capture is shown below, please be aware the name of the image might be slightly different.<u></u><u></u></p>\r\n<p><img src=\"https://support.striim.com/hc/en-us/article_attachments/205186818/unnamed.png\" alt=\"\"></p>\r\n<p> </p>\r\n<p><span class=\"wysiwyg-font-size-large\">2. To deploy the image</span></p>\r\n<ol>\r\n<li>Select it</li>\r\n<li>Click Actions-&gt;Launch</li>\r\n<li>Select c3.xlarge instance and click \"Next:Configure Instance Details\"</li>\r\n<li>Select network into which instance is to be launched. Other details are optional, like selecting subnet, assigning public ip. (Good to have a subnet and launch into it)</li>\r\n<li>Click \"Next:Add Storage\"</li>\r\n<li>Enter size (suggested &gt;=32, can be increased depending on the application data)</li>\r\n<li>Click 'Next:Tag Instance'</li>\r\n<li>Enter the name of the instance in the 'Value' text box and click 'Next:Configure Security Group'</li>\r\n<li>Create a security group with following ports open to the network into which it is launched.<br>22 - ssh<br>9080 - web access</li>\r\n<li>Click 'Review and Launch'</li>\r\n<li>Click Launch</li>\r\n<li>Once the instance is launched, look for the ip-address of the instance and go to http://&lt;ip&gt;:9080 and login with admin/admin.</li>\r\n</ol>\r\n<p> </p>"} {"page_content": "<p>If user has a field that has hex value, they could use Java parseInt() function to do the conversion. Please make sure you add the following line in the beginning of the TQL.</p>\n<p class=\"p1\"><span class=\"s1\">import static java.lang.Integer.parseInt;</span></p>\n<p> </p>\n<p>Then you could use this function in TQL to do the conversion. 
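For reference, parseInt here is simply the standard java.lang.Integer.parseInt with a radix of 16; a quick standalone check of its behavior (the sample values are hypothetical, not from the article):

// HexDemo.java -- shows what Integer.parseInt(value, 16) returns for hex strings.
public class HexDemo {
    public static void main(String[] args) {
        System.out.println(Integer.parseInt("1A", 16));  // prints 26
        System.out.println(Integer.parseInt("ff", 16));  // prints 255
    }
}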
Here is an example</p>\n<p class=\"p1\"><span class=\"s1\">select parseInt(hexInStringField, 16) as intValue</span></p>\n<p class=\"p1\"> </p>\n<p class=\"p1\"> </p>"} {"page_content": "<p>When you use Oracle Database as your Striim metadata repository database, if you want to reset everything and start from scratch, here are two files under conf directory that will help you out.</p>\n<p>1. <span class=\"s1\">DropMetadataReposOracle.sql</span></p>\n<p><span class=\"s1\">This file will drop all Striim Metadata Repository tables.</span></p>\n<p><span class=\"s1\">2. </span><span class=\"s1\">DefineMetadataReposOracle.sql</span></p>\n<p><span class=\"s1\">This file will create all Striim Metadata Repository tables.</span></p>\n<p><span class=\"s1\">To execute those two files, please login to the Oracle Database by using the proper userid and password that creates your Metadata Repository tables.</span></p>\n<p><span class=\"s1\">sqlplus &lt;user&gt;/&lt;password&gt;</span></p>\n<p><span class=\"s1\">Then do the following commands in sqlplus prompt</span></p>\n<p><span class=\"s1\">sqlplus&gt;@DropMetadataReposOracle.sql</span></p>\n<p><span class=\"s1\">sqlplus&gt;@DefineMetadataReposOracle.sql</span></p>\n<p><span class=\"s1\">Now you will get a new metadata repository database that has nothing configured. All your previous Application/Type/Cache will be wiped out. </span></p>"} {"page_content": "<p>When using SQLServer JDBC driver, be default, it will bind all string type data as unicode. If you are using DatabaseWriter to send data into MSSQL Server Database and the tables are not created with unicode (nchar/nvarchar), especially for the primary key or unique index columns, you will suffer some performance penalty for delete and update operations. The reason is that the SQLServer query optimizer will first try to convert the data from the table to unicode, then compare those values in where clause. The direct consequence is that it will use clustering index scan instead of index seek. SQLServer DBA will observe high read I/O and slow performance when there are large update or delete operations happening on this tables.</p>\n<p>To disable the data conversion, so that the query could use Index seek, you will need add the following option to the JDBC Connection String</p>\n<p class=\"p1\"><span class=\"s1\"><strong>sendStringParametersAsUnicode=false</strong></span></p>\n<p>Here is an example,</p>\n<p class=\"p1\"><span class=\"s1\">jdbc:sqlserver://52.163.124.129:1433;databaseName=qatest;</span><span class=\"s2\"><strong>sendStringParametersAsUnicode=false</strong></span></p>\n<p>This will allow all the string data be bound and sent as regular single byte ascii characters, which will allow the query optimizer to skip the data conversion part in the where clause, which will result in Index seek, a faster way to locate the record in the table.</p>"} {"page_content": "<p>Currently the support site only allows up to 20 Megabytes attachment in the support ticket. If you have any files bigger than that size, we have AWS S3 upload site ready for the customers. Please follow the procedures below to request for the access to the S3 upload site.</p>\n<p>1. Create a support ticket requesting S3 upload account. Please make sure you supply the following information</p>\n<p>a. Your full name</p>\n<p>b. Your Email </p>\n<p>c. Your direct phone number</p>\n<p>d. Your title in the organization</p>\n<p>e. 
If this request is on behalf of your customer, please supply the detail customer contact (full name, email, direct phone number, title in the organization)</p>\n<p>2. Once approved, you will get the following information from the support engineer</p>\n<p>a. AWS Access Key ID</p>\n<p>b. AWS Secret Access Key</p>\n<p>c. S3 bucket name</p>\n<p>3. Download and install AWS CLI on the system where the file needs to be uploaded. Alternatively, you could use other tools such as CloudBerry and Cyberduck, etc.</p>\n<ul>\n<li>Follow link, http://docs.aws.amazon.com/cli/latest/userguide/installing.html to install the AWS CLI<br>After AWS CLI installed, please configure the aws<br>run 'aws configure'<br>Enter the 'Access Key ID' and 'Secret Access Key' you shared with them from step 1. Use us-west-1 as the region name, default output format.<br>For example<br>shell&gt;aws configure<br>AWS Access Key ID [****************E4AA]: <br>AWS Secret Access Key [****************KnIr]: <br>Default region name [us-west-1]: <br>Default output format [json]:</li>\n<li>Use aws s3 cp command to copy local file to your S3 bucket, please make sure you include the ticket number in the filename so we can associate with the correct support ticket.<br>For example<br>shell&gt;aws s3 cp Ticket3314.log.tgz s3://striim-mycompany/</li>\n</ul>\n<p class=\"wysiwyg-indent4\"><br>upload: ./Ticket3314.log.tgz to s3://striim-mycompany/Ticket3314.log.tgz</p>\n<ul>\n<li>Once the upload is done, please update your support ticket with the uploaded file name, so the support engineer will be able to check out the file.</li>\n</ul>\n<p>**With this account, you could upload and download the files from the designated S3 bucket. However, you don't have list privilege, so you can't use the console to login to list all the objects. Also your account will only have privilege to upload and download files to and from your company's bucket only**</p>"} {"page_content": "<p>Current WebAction free trial version available on the company website doesn't include the Windows distribution. It only supports Linux and Mac OS. For you to get a Windows distribution, please contact WebAction Sales Rep. Once the sales rep gets contacted, you will get an Email on where to download the Windows distribution via either sftp site or Google Drive. </p>\n<p>Once you get the file, here is the detail steps on how to install and start the server. This applies to the Jar file you downloaded. </p>\n<p> </p>\n<p>After you downloaded the file, say Striim_3.6.6.jar , please create a directory on your C drive (or whichever drive you prefer), for example, c:\\StriimEval</p>\n<p> </p>\n<p>Make sure you have 64-bit Java installed and is the default JAVA on your PC. To verify, please open a DOS prompt and type</p>\n<p> </p>\n<p>dos&gt;java -version.</p>\n<p> </p>\n<p>You should see something like this</p>\n<p> </p>\n<p>C:\\Users\\werner&gt;java -version</p>\n<p>java version \"1.7.0_79\"</p>\n<p>Java(TM) SE Runtime Environment (build 1.7.0_79-b15)</p>\n<p>Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)</p>\n<p> </p>\n<p>We prefer Java 1.7, also 64-bit version.</p>\n<p> </p>\n<p>If your Java is 32bit or lower than 1.7, please go to Oracle’s Java download site to download the correct version before you proceed further.</p>\n<p> </p>\n<p>Now, please double-click the Jar file, i.e. Striim_3.6.6.jar, you downloaded and it will start the installation process. 
Please make sure you specify the installation directory as the one you created previously, for example, c:\\StriimEval</p>\n<p>Follow the instruction on the screen till you reach the screen asking you to launch, i.e. the last step, please click \"Start Later\". </p>\n<p>Now please open a DOS window and cd to the installation directory</p>\n<p>Dos&gt;cd c:\\StriimEval\\bin</p>\n<p> </p>\n<p>Then run the server.bat file to configure and start the server. Please answer all the question accordingly. Please make sure you write down the admin password as you will need it later.</p>\n<p> </p>\n<p>Here is a screen copy of the configuration process</p>\n<p>-------------------------------------------------</p>\n<p>C:\\StriimTrial\\bin&gt;.\\server.bat</p>\n<p>Starting WebAction Server - Version 3.6.6 (9d2289de86)</p>\n<p>Cluster password not set, it is mandatory</p>\n<p>Enter Cluster Password for Cluster werner : *********</p>\n<p>Re-enter the Password : *********</p>\n<p>Company name not set, it is mandatory</p>\n<p>Enter Company Name : striim</p>\n<p>Product Key: D429D5DCB-6F6E73912-847D0C07 registered to Company:striim</p>\n<p>Enter License Key : (will generate trial license key if left empty).</p>\n<p>Using Multicast to discover the cluster members on group 239.115.213.208 port 54</p>\n<p>327</p>\n<ol>\n<li>192.168.55.116</li>\n<li>192.168.211.1</li>\n<li>192.168.227.1</li>\n<li>192.168.56.1</li>\n</ol>\n<p>Please enter an option number to choose the corresponding interface :</p>\n<p>1</p>\n<p>Using 192.168.55.116</p>\n<p>Starting Server on cluster : werner</p>\n<p>Taking Metadata Repository Details..</p>\n<p>Did not get Metadata Repository Details, please enter them below</p>\n<p>Enter Metadata Repository Location [Format = IPAddress:port] [default 192.168.55</p>\n<p>.116:1527 (Press Enter/Return to default)] :</p>\n<p>DB details : 192.168.55.116:1527 , wactionrepos , waction</p>\n<p>Current node started in cluster : werner, with Metadata Repository</p>\n<p>Registered to: striim</p>\n<p>ProductKey: D429D5DCB-6F6E73912-847D0C07</p>\n<p>License Key: 4D6EFE4FF-92283EE4B-D8DA61F64-C78C0D540-32F20CC49-0D214</p>\n<p>License expires in 29 days 7 hours 7 minutes 34 seconds</p>\n<p>Admin password not set.</p>\n<p>Please enter admin password: <span class=\"wysiwyg-color-red\">*****</span></p>\n<p>Re-enter the password : <span class=\"wysiwyg-color-red\">*****</span></p>\n<p>Servers in cluster:</p>\n<p> [this] S192_168_55_116 [be75cab1-c251-467d-9111-f29391d8a5be]started.</p>\n<p> </p>\n<p>Please go to <a href=\"http://192.168.55.116:9080\">http://192.168.55.116:9080</a> or <a href=\"https://192.168.55.116:9081\">https://192.168.55.116:9081</a> to administer, or use console</p>\n<p> </p>\n<p>------------------------------------------------</p>\n<p> </p>\n<p>The red colored part is the admin password you need remember. If you don't have a license key, just leave it blank and type enter. It will generate a evaluation license for you.</p>\n<p> </p>\n<p>Also if you have multiple network interface, please make sure you choose the right one to run the WebAction Server on.</p>\n<p> </p>\n<p>Once the configuration steps are done, please go to file explorer and unzip the following zip files for the Demo Data and put them in the same directory where the zip files are. 
Here is the list</p>\n<p> </p>\n<p>Samples\\MultiLogApp\\appData.zip</p>\n<p>Samples\\PosApp\\appData.zip</p>\n<p>Samples\\RetailApp\\appData.zip</p>\n<p>Samples\\SaasMonitorApp\\appData.zip</p>\n<p> </p>\n<p>Once all the data files are unzipped, you will get the appData directory on each of the directory above.</p>\n<p> </p>\n<p>Now goto <a href=\"http://192.168.55.116:9080\">http://192.168.55.116:9080</a>, it will prompt you login and password, the userid should be admin, the password is the one you have entered during the configuration, which was highlighted in above step.</p>\n<p> </p>\n<p>After you logged in, at home page, please Click the icon Says “Apps”, then you will see four Apps configured for you in that webpage. Those are the sample applications that you could explore. </p>"} {"page_content": "<p>Note: make this KM internal and it appeared from #5429. The issue in that ticket was not solved by this setting, and a new property was introduced in S3Writer.</p>\n<p>The Striim server runs inside Java VM. To configure it use the proxy setting, use the following java VM parameters</p>\n<p> </p>\n<p>-Dhttp.useProxy=true<br>-Dhttp.proxyHost=[IP addr]<br>-Dhttp.proxyPort=[proxy port]<br>-Dhttp.proxyUser=[basic proxy auth user]</p>\n<p>Detail of these parameters could be found here</p>\n<p class=\"p1\"><span class=\"s1\"><a href=\"http://docs.oracle.com/javase/7/docs/api/java/net/doc-files/net-properties.html\">http://docs.oracle.com/javase/7/docs/api/java/net/doc-files/net-properties.html</a></span></p>\n<p class=\"p1\"><span class=\"s1\">All the parameters are setup in server.sh file under &lt;Striim_Home&gt;/bin directory. Here are the detail steps</span></p>\n<ol>\n<li>Take a backup of bin/server.sh file</li>\n<li>Stop the Striim server</li>\n<li>Open bin/server.sh using an editor</li>\n<li>Locate line : ${JAVA} \\</li>\n<li>Add above 4 parameters , save the file</li>\n<li>Restart the server</li>\n</ol>\n<p class=\"p1\"> </p>\n<p class=\"p1\"> </p>"} {"page_content": "<p><span class=\"wysiwyg-font-size-large\"><strong>Problem:</strong></span></p>\n<p>I have granted following database privileges to the Oracle user in 12c.</p>\n<p>create role striim_privs;<br>grant create session,<br>execute_catalog_role,<br>select any transaction,<br>select any dictionary to striim_privs;<br>create user striim identified by ********;<br>grant striim_privs to striim;<br>alter user striim quota unlimited on users.</p>\n<p>at start, OracleReader hit:</p>\n<p>Start failed! 
com.webaction.exception.Warning: java.util.concurrent.ExecutionException: com.webaction.source.oraclecommon.OracleException: 2034 : Start Failed: SQL Query Execution Error ;ErrorCode : 1031;SQLCode : 42000;SQL Message : ORA-01031: insufficient privileges ORA-06512: at \"SYS.DBMS_LOGMNR\", line 58 ORA-06512: at line 2</p>\n<p><br><span class=\"wysiwyg-font-size-large\"><strong>Solution:</strong></span></p>\n<p class=\"p1\"><strong>If using Oracle 11g, or 12c, 18c, or 19c without CDB</strong></p>\n<p class=\"p1\">Enter the following commands:</p>\n<p class=\"p2\">create role striim_privs;</p>\n<p class=\"p2\">grant create session,</p>\n<p class=\"p2\">execute_catalog_role,</p>\n<p class=\"p2\">select any transaction,</p>\n<p class=\"p2\">select any dictionary</p>\n<p class=\"p2\">to striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_COL$ to striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_OBJ$ to striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_USER$ to striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_UID$ to striim_privs;</p>\n<p class=\"p2\">create user striim identified by ******** default tablespace users;</p>\n<p class=\"p2\">grant striim_privs to striim;</p>\n<p class=\"p2\">alter user striim quota unlimited on users;</p>\n<p class=\"p1\">For Oracle 12c or later, also enter the following command:</p>\n<p class=\"p2\">grant LOGMINING to striim_privs;</p>\n<p class=\"p1\">If using Database Vault, omit <span class=\"s1\">execute_catalog_role, </span>and also enter the following commands:</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR to striim_privs;</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR_D to striim_privs;</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to striim_privs;</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR_SESSION to striim_privs;</p>\n<p class=\"p1\"><strong>If using Oracle 12c, 18c, or 19c with PDB</strong></p>\n<p class=\"p1\">Enter the following commands. 
Replace <span class=\"s1\">&lt;PDB name&gt; </span>with the name of your PDB.</p>\n<p class=\"p2\">create role c##striim_privs;</p>\n<p class=\"p2\">grant create session,</p>\n<p class=\"p2\">execute_catalog_role,</p>\n<p class=\"p2\">select any transaction,</p>\n<p class=\"p2\">select any dictionary,</p>\n<p class=\"p2\">logmining</p>\n<p class=\"p2\">to c##striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_COL$ to c##striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_OBJ$ to c##striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_USER$ to c##striim_privs;</p>\n<p class=\"p2\">grant select on SYSTEM.LOGMNR_UID$ to c##striim_privs;</p>\n<p class=\"p2\">create user c##striim identified by ******* container=all;</p>\n<p class=\"p2\">grant c##striim_privs to c##striim container=all;</p>\n<p class=\"p2\">alter user c##striim set container_data = (cdb$root, &lt;PDB name&gt;) container=current;</p>\n<p class=\"p1\">If using Database Vault, omit <span class=\"s1\">execute_catalog_role, </span>and also enter the following commands:</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR to c##striim_privs;</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR_D to c##striim_privs;</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to c##striim_privs;</p>\n<p class=\"p2\">grant execute on SYS.DBMS_LOGMNR_SESSION to c##striim_privs;</p>"} {"page_content": "<p>When testing KafkaReader, it is convenient to use the console producer packaged with the Kafka distribution to generate the testing records into the testing topic. These records won't be picked up by the DSVParser if you don't set parameter <span class=\"s1\">blockascompleterecord to 'true'. Reason is that the console producer will automatically \"filter\" out control character, such as new line character, \\n. Without the newline character, DSVParser won't know where the end of record is, as the default setting is using \\n as rowdelimiter. </span></p>\n<p>However, if you program does append new line character to the end of each line when it writes to Kafka, this won't be an issue.</p>\n<p> </p>\n<p>Below is a typical Python program that will write \\n to each line when pushing to Kafka</p>\n<p>-------------------</p>\n<p class=\"p1\"><span class=\"s1\">from kafka import KafkaProducer</span></p>\n<p class=\"p2\"> </p>\n<p class=\"p1\"><span class=\"s1\">producer = KafkaProducer(bootstrap_servers=['localhost:9092'])</span></p>\n<p class=\"p1\"><span class=\"s1\">for _ in range(5):</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>producer.send('wtest1003', 'Hello World !\\n')</span></p>\n<p>---------------</p>\n<p>If you do the following, there won't be \\n written in the end of line as the console producer will just strip it off.</p>\n<p>echo 'Hello World !\\n' <span class=\"s1\">|./kafka-console-producer.sh --broker-list localhost:9092 --topic wtest1003</span></p>\n<p>To pick up records like this, you will need use the following setting in DSVParser</p>\n<p>blockascompleterecord:'true'</p>"} {"page_content": "<p>When you have special characters in the source as column delimiter, such as anything non-displayable character lower than ASCII code 32, you could use the 'ctrl v' in vi to add that in the tql file. The character will be kept as is. 
For example, if you want to use 0x01 as the columndelimiter, type Ctrl+V, Ctrl+A.

To check whether the correct code is preserved after you save, use the xxd command to dump the file and inspect the hex codes:

shell> xxd <filename>

While running an evaluation copy of the Striim software, the license will expire from time to time. To request a Striim license extension, send the following information:

1. The startUp.properties file found under the conf directory.
2. The current CPU/core count on each node of your system. On Linux, run the 'lscpu' command. On Windows, check Task Manager > Performance > CPU and note how many logical processors are shown.
3. The new expiration date you want.

If you are requesting a new license key for a system that has never run Striim before, you will not have a startUp.properties file. In that case, send the following instead:

a. Company name
b. Cluster name

Upon approval from the account manager, you will receive the following information.

If you requested a license extension and supplied the startUp.properties file and the other information, we will send you a new license code. When you receive it, update your startUp.properties file, replacing the current 'LicenseKey' with the new key. For a multi-node cluster, do this on each node. Then stop the whole cluster, make sure no Striim process is running, and restart the cluster; the new license will take effect.

If this is a new system and you requested a new license, you will receive both a Product Key and a License Key. Together with the Company Name and Cluster Name, you can start Striim with the new license as follows.

A. If you are using the tgz package, start server.sh with the following options:

-P product key
-L license key
-N company name
-c cluster name
-p cluster password
-a admin password

For example:

./bin/server.sh -P XXXXXXXXX-XXXXXXXXX-XXXXXXXX -L XXXXXXXXX-XXXXXXXXX-XXXXXXXXX-XXXXXXXXX-XXXXXXXXX-XXXXX -c WN105V374A -p clusterpass -N TestStriim -a adminpass

B. If you are using the RPM package, set the following fields in the striim.conf file under the conf directory:

WA_CLUSTER_NAME=""
WA_CLUSTER_PASSWORD=""
WA_ADMIN_PASSWORD=""
WA_PRODUCT_KEY=""
WA_LICENSE_KEY=""
WA_COMPANY_NAME=""

Then start Striim with the system service command.

Starting with V3.6.4, Striim uses Elasticsearch as the default storage for the WAction store as well as for some internal monitoring data. Depending on the system event load and transaction volume, the Striim server may need to open a large number of files. The default limit on open files on most Linux systems is 4096; the sketch below shows one way to see the limit the Striim JVM is actually running with. 
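This is a standalone check using the JDK's com.sun.management.UnixOperatingSystemMXBean (a JDK facility, not a Striim utility); compile and run it as the same OS user and environment that start the Striim server:

// FdLimitCheck.java -- prints the open/max file descriptor counts seen by this JVM (Linux/macOS).
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            System.out.println("Open file descriptors: " + unixOs.getOpenFileDescriptorCount());
            System.out.println("Max file descriptors:  " + unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("Not a Unix-like JVM; check the limit with OS tools instead.");
        }
    }
}

If the reported maximum stays at 4096 after you raise the limit, the new value is not reaching the process that launches Striim; the systemctl and initctl sections below cover those cases.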
Customer will see the following warning message in Striim server event log, or similar messages,</p>\n<p> </p>\n<p>2016-11-23 01:40:34,562 @S10_1_70_104 -WARN elasticsearch[S10_1_70_104][refresh][T#3] org.elasticsearch.index.engine.Engine.failEngine (Engine.java:483) [S10_1_70_104] [$internal.monitoring%1479846822435][2] failed engine [refresh failed] <br>java.io.FileNotFoundException: /opt/Striim-3.6.6A/data/Prod/nodes/0/indices/$internal.monitoring%1479846822435/2/index/_1s2_Lucene410_0.dvm (<strong><span class=\"wysiwyg-color-red90\">Too many open files</span></strong>) <br>at java.io.FileOutputStream.open0(Native Method) <br>at java.io.FileOutputStream.open(FileOutputStream.java:270) <br>at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:213) <br>at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:162) <br>at org.apache.lucene.store.FSDirectory$FSIndexOutput.&lt;init&gt;(FSDirectory.java:384) <br>at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:277) <br>at org.apache.lucene.store.FileSwitchDirectory.createOutput(FileSwitchDirectory.java:152) <br><br></p>\n<p>If you are running Striim version lower than V3.7.2 and use Kafka persist stream with recovery enabled for Application, please follow the steps below to check if you are running into a known issue.</p>\n<p>1. Find out your striim server PID. </p>\n<p>shell&gt;ps -ef|grep <span class=\"s1\">'webaction.runtime.Server'</span></p>\n<p><span class=\"s1\">Use that process ID do the following command to check the tcpip connection numbers</span></p>\n<p><span class=\"s1\">shell&gt;lsof -p serverpid | grep TCP | wc -l</span></p>\n<p><span class=\"s1\">Run this command once every minute, if you see the connection number keep increasing, then you run into bug DEV-10275. Please upgrade to V3.7.2 and higher that has the fix.</span></p>\n<p><span class=\"s1\">Otherwise, if it is purely legitimate file open handler limitation issue, please follow the steps below to increase the max open file number.</span> depends on how the Striim server was started.</p>\n<p><span class=\"wysiwyg-underline\"><span class=\"wysiwyg-color-black wysiwyg-font-size-large\">1. If the Striim server is started using a regular user</span></span></p>\n<p>Run the following command to set the ulimit value to a higher number</p>\n<p>shell&gt;ulimit -n 65536</p>\n<p>Stop the Striim server process and restart it</p>\n<p><span class=\"wysiwyg-underline\"><span class=\"wysiwyg-color-black wysiwyg-font-size-large\">2. 
If Striim server is started using the systemctl command</span></span></p>\n<p> </p>\n<p>Locate striim-node.service <span class=\"wysiwyg-color-red\">(the location of this file may differ depending on the OS type)</span>, </p>\n<p><span class=\"wysiwyg-color-red\"><font color=\"#2f3941\">The </font></span><span class=\"wysiwyg-color-black\">file location may be found with:</span></p>\n<p class=\"p1\"><span class=\"wysiwyg-color-black\"><span class=\"s1\">$</span><span class=\"s2\"> sudo systemctl status striim-node</span></span></p>\n<p class=\"p1\"><span class=\"s1\"><strong>●</strong></span><span class=\"s2\"> striim-node.service - Striim Cluster Node</span></p>\n<p class=\"p1\"><span class=\"s2\"><span class=\"Apple-converted-space\"> </span>Loaded: loaded (/usr/lib/systemd/system/striim-node.service; enabled; vendor preset: disabled)</span></p>\n<p class=\"p1\"><span class=\"s2\">......</span></p>\n<p class=\"p1\"><span class=\"s2\">)</span></p>\n<p class=\"p1\">Add the following line to your service config file</p>\n<p><span class=\"wysiwyg-color-red\">LimitNOFILE=65536</span></p>\n<p>An example of the this file</p>\n<p>---------------------------</p>\n<p>[Unit]<br>Description=WebAction Cluster Node<br>After=network.target<br>After=syslog.target<br>After=striim-dbms.service<br>Requires=striim-dbms.service</p>\n<p>[Service]</p>\n<p><span class=\"wysiwyg-color-red\">LimitNOFILE=65536</span><br>ExecStart=/opt/Striim-3.6.7/sbin/striim-node start</p>\n<p>[Install]<br>WantedBy=multi-user.target</p>\n<p>---------------------------</p>\n<p>After the change, issue the following command as root</p>\n<p>shell&gt;systemctl daemon-reload</p>\n<p>Then restart the striim-node service</p>\n<p>shell&gt;systemctl stop striim-node</p>\n<p>shell&gt;systemctl start striim-node</p>\n<p>To verify if the limit has been changed correctly, check /proc/&lt;pid&gt;/limits file content, pid is the process id of your striim-node service that could be obtained thru systemctl status striim-node</p>\n<p> </p>\n<p><span class=\"wysiwyg-underline wysiwyg-font-size-large\">3. If your striim server is started using the initctl command</span></p>\n<p> </p>\n<p>Modify /etc/init/striim-node.conf, adding the following line</p>\n<p>limit nofile 65536 65536</p>\n<p>The file will look like this</p>\n<p>---------------------------------<br>start on runlevel [2345]<br>stop on shutdown<br><span class=\"wysiwyg-color-red\">limit nofile 65536 65536</span><br>script<br> exec \"/opt/Striim-3.6.6/sbin/striim-node\" start<br>end script<br>------------------------------------</p>\n<p>Once you made the change, stop and start the striim-node service</p>\n<p>shell&gt;initctl stop striim-node</p>\n<p>shell&gt;initctl start striim-node</p>\n<p>To verify if the limit has been changed correctly, check /proc/&lt;pid&gt;/limits file content, pid is the process id of your striim-node service that could be obtained thru initctl status striim-node</p>\n<p><span class=\"wysiwyg-underline\"><span class=\"wysiwyg-font-size-large\">Other considerations</span></span></p>\n<p>Also, before you restart the Striim server, please make sure you remove all the files under \"data\" directory as some of the files might be inconsistent due to the previous error. Upon restart of the Striim server, all the files under data directory will be recreated. </p>\n<p>If you want to change the system wise file open limit numbers, you could modify fs.file-max property in /etc/sysctl.conf. 
</p>\n<pre class=\"screen\">shell&gt;echo \"fs.file-max=65536\" &gt;&gt; /etc/sysctl.conf</pre>\n<p>However, this is not recommended for a production system that are shared with other applications, as it will affect all the users on that system. </p>\n<p>To make change on the user level, edit /etc/security/limits.conf, add the following (assuming user is striim)</p>\n<p>striim soft nofile 65536<br>striim hard nofile 65536</p>\n<p> </p>\n<p><strong>PS:</strong></p>\n<p>Other symptoms that may be solved by above OS level changes:</p>\n<p>1. <span>Couldn't perform operation with exclusive file lock</span></p>\n<p> </p>"} {"page_content": "<p>See the following error message when trying to start SQLServer CDC</p>\n<p>Caused by: com.webaction.exception.Warning: java.util.concurrent.ExecutionException: com.webaction.source.mssql.MSSqlException: 2522 : Could not position at EOF, its equivalent LSN is NULL <br> at com.webaction.appmanager.AppManagerRequestClient.sendRequest(AppManagerRequestClient.java:180)<br> at com.webaction.appmanager.AppManagerRequestClient.sendRequest(AppManagerRequestClient.java:141)<br> at com.webaction.runtime.Context.changeApplicationState(Context.java:3627)<br> at com.webaction.runtime.compiler.Compiler.compileActionStmt(Compiler.java:2504)<br> ... 20 more</p>\n<p> </p>\n<p>This usually mean that the CDC job we need is not running. Usually it is caused by the SQLServer Agent isn't started. Please check it in the SQLServer Management Studio and make sure the SQLServer Agent is up and running.</p>\n<p> </p>\n<p> </p>"} {"page_content": "<p>Sometimes the source data will have multiple fields in a string that is separated by certain delimiter. One way is to parse them out when reading from the file by using DSVParser specifying the delimiter. Another way is to just read in that big string and parse it out later by using regex. We are focusing on the latter in this article.</p>\n<p>Example</p>\n<p>We have a source file, say TestData.txt, that has the following line</p>\n<p>DataTest Test_cola=2345&amp;Test_colb=23445&amp;Test_colc=weksdk&amp;Test_cold=weiopk23&amp;asdkwe</p>\n<p>There are two text strings in it, separated by a space, \"DataTest\" and \"Test_cola=....\"</p>\n<p>We are trying to parse out all the variable names and values in the second string. The \"Test_\" is more like a record begin flag, \"&amp;\" is more like a record end flag, so the info we want to finally parse out is something like this</p>\n<p>cola=2345</p>\n<p>colb=23445</p>\n<p>colc=weksdk</p>\n<p>cold=weiopk23</p>\n<p>asdkwe...garbage that could be discarded.</p>\n<p>We first need to get all data in from the source fileadapter</p>\n<p>Assuming this file resides in directory /Users/Werner/Temp, here is the FileReader definition. 
In the DSVParser, we specified space, ' ', as the columnedelimiter.</p>\n<p>CREATE OR REPLACE SOURCE SRCRegFile USING FileReader ( <br> blocksize: 64,<br> charset: 'UTF-8',<br> positionbyeof: false,<br> rolloverstyle: 'Default',<br> compressiontype: '',<br> adapterName: 'FileReader',<br> directory: '/Users/Werner/Temp',<br> skipbom: true,<br> wildcard: 'TestData.txt'<br> ) <br> PARSE USING DSVParser ( <br> handler: 'com.webaction.proc.DSVParser_1_0',<br> nocolumndelimiter: false,<br> trimwhitespace: false,<br> columndelimiter: ' ',<br> columndelimittill: '-1',<br> ignoremultiplerecordbegin: 'true',<br> ignorerowdelimiterinquote: false,<br> parserName: 'DSVParser',<br> separator: ':',<br> recordbegin: '',<br> recordend: '',<br> blockascompleterecord: false,<br> ignoreemptycolumn: false,<br> rowdelimiter: '\\n',<br> header: false,<br> headerlineno: 0,<br> quoteset: '\\\"',<br> trimquote: true<br> ) <br>OUTPUT TO StrmDataReg;</p>\n<p>After reading, all the data now is in StrmDataReg.</p>\n<p>Now, we define a type that has string array</p>\n<p>create type parsedtype (<br>content string[]);</p>\n<p>Then, we define a stream by using this type</p>\n<p>create stream strmSplit of parsedtype;</p>\n<p>We can parse the data by using Java Split() function to split all the data separated by \"Test_\" into this string array content[]. Since the data we are interested is in the 2nd field, which is data[1]</p>\n<p>CREATE OR REPLACE CQ CQParsedData0 <br>INSERT INTO strmSplit<br>select <br>data[1].toString().split(\"Test_\") as content<br>from StrmDataReg;<br>;</p>\n<p>After the data flows thru this CQ, we have the following in the content string array</p>\n<p>content[0], empty, since there is nothing in front of the first \"Test_\" </p>\n<p>content[1], cola=2345&amp;</p>\n<p>content[2], colb=23445&amp;</p>\n<p>content[3], colc=weksdk</p>\n<p>content[4], cold=weiopk23&amp;</p>\n<p>Now, we could use regex to further processing </p>\n<p>CREATE OR REPLACE CQ CQFurtherParse <br>INSERT INTO StrmFparsed<br>SELECT</p>\n<p>MATCH( s.content[1], '(.*?)(?=\\\\=)\\\\=(.*?)(?=\\\\&amp;)\\\\&amp;', 1) as colname1,</p>\n<p>MATCH( s.content[1], '(.*?)(?=\\\\=)\\\\=(.*?)(?=\\\\&amp;)\\\\&amp;', 2) as colval1,<br> <br>MATCH( s.content[2], '(.*?)(?=\\\\=)\\\\=(.*?)(?=\\\\&amp;)\\\\&amp;', 1) as colname2,</p>\n<p>MATCH( s.content[2], '(.*?)(?=\\\\=)\\\\=(.*?)(?=\\\\&amp;)\\\\&amp;', 2) as colval2,<br>...</p>\n<p>...</p>\n<p>FROM strmSplit s;</p>\n<p>This way, you will be able to parse out the data like this</p>\n<p class=\"p1\"><span class=\"s1\"> \"colname1\":\"cola\",</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"colval1\":\"2345\",</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"colname2\":\"colb\",</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"colval2\":\"23445\",</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"colname3\":\"colc\",</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"colval3\":\"weiopk23\",</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"colname4\":\"cold\",</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>\"colval4\":\"weiopk23\",</span></p>\n<p class=\"p1\"><span class=\"s1\">Now, if you have multiple occurrences of this pair of colname and colval and you are not sure if each record 
will have the same occurrences, you could rely on arlen() function to detect how many elements the content[] array has and depends on the value, you could put different values to the fields. </span></p>\n<p class=\"p1\"><span class=\"s1\">Please see attached tql file for the detail.</span></p>"} {"page_content": "<p>The Help Center is made up of two parts: a knowledge base and a separate community platform. The community consists of questions and answers organized by topic. Questions can include ideas, tips, or any other community item. To get started, see <a href=\"https://support.zendesk.com/hc/en-us/articles/203664406\">Managing community content</a>.</p>\n\n<p><strong>Note</strong>: Don't confuse topics with articles. In the community, topics are top-level containers for questions.</p>\n\n<p>A key ingredient to a successful community are the moderators. Recruit knowledgeable users who are eager to share their product knowledge, lead lost users to safety, and pick up the latest tricks of the trade from other product experts.</p>"} {"page_content": "<div class=\"page\" title=\"Page 8\">\r\n<div class=\"section\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<p> 1. DownloadWebAction_version.jar.<br> 2. Open a shell window, switch to the directory where you downloaded the file, and enter the following command:</p>\r\n<p>java -jar WebAction_version.jar<br> 3. When the installer appears, click Next.<br> 4. Change the installation path to the directory you created above. Click Next &gt; OK.</p>\r\n<div class=\"page\" title=\"Page 9\">\r\n<div class=\"layoutArea\">\r\n<div class=\"column\">\r\n<ol start=\"5\">\r\n<li>\r\n<p>If you do not need the sample applications and their sample data, uncheck Samples. Click Next.</p>\r\n</li>\r\n<li>\r\n<p>The web configuration tool will open in your default browser. Return to the installer and click Next &gt; Done.</p>\r\n</li>\r\n<li>\r\n<p>Return to the web client, read the license agreement, then click Accept and Continue.</p>\r\n</li>\r\n<li>\r\n<p>If you see \"Congratulations,\" click Continue. If you see messages that your computer can not run Striim, resolve</p>\r\n<p>the problems indicated, for example by installing the required version of Java or switching to Chrome. To</p>\r\n<p>restart the web configuration tool, run .../WebAction/bin/WebConfig.sh.</p>\r\n</li>\r\n<li>\r\n<p>Enter the following in the appropriate fields:</p>\r\n<p>a. your company name<br> b. the name for the Striim cluster (this value defaults to the current user name, but you may change it) c. the cluster password<br> d. the password for the admin user</p>\r\n</li>\r\n<li>\r\n<p>If the system has more than one network interface and the installer has chosen the wrong one, choose the correct one.</p>\r\n</li>\r\n<li>\r\n<p>Click Save &amp; Continue.</p>\r\n</li>\r\n<li>\r\n<p>If you have a license key, enter it. If not, leave the field blank to get a 30-day trial license. Click Continue.</p>\r\n</li>\r\n<li>\r\n<p>Click Launch.</p>\r\n</li>\r\n</ol>\r\n<p>When the \"Starting server\" message is replaced by the Log In prompt, continue with Viewing dashboards.</p>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>\r\n</div>"} {"page_content": "<p>When Striim server is start or restarted, it will check the original status of the existing Apps. It will try to restore the Apps to those status. So if the App was in running status before the server was shutdown, it will try to deploy the App and start it up. 
In some cases, user might not want certain App to be deployed due to other reasons. There are couple of ways in achieving this</p>\n<p>1) from server.sh</p>\n<p>-Dcom.webaction.config.doNotDeployApps=\"YourAppName\"</p>\n<p>Here, just replace YourAppName with your Striim App name without namespace. </p>\n<p>2) from <strong>conf/startUp.properties</strong></p>\n<pre class=\"p1\"><span class=\"s1\">DoNotDeployApps=admin.randomStream,admin.test</span></pre>\n<p class=\"p1\"><span class=\"s1\">Multiple applications can be mentioned comma separated</span></p>\n<p class=\"p1\"><strong><span class=\"s1\">Note: Using option 2) is preferred</span></strong></p>\n<p class=\"p1\"> </p>\n<p>It will skip the App in the Server startup time and the App will be in a status of 'not enough server' once the Striim server is up and running. </p>\n<p>You can manually deploy the App and start it later.</p>"} {"page_content": "<p dir=\"auto\">The following TQL example shows the following usage</p>\n<p dir=\"auto\">1. Use Kafka persist stream,StrmOraData, to hold data from the Oracle CDC adapter</p>\n<p dir=\"auto\">2. Use CQ, FILTERCQ , to filter the data based on the COMMITSCN value from CDC metadata</p>\n<p dir=\"auto\">3. Use DatabaseWriter, TGTDBWriter, to process the data from the filtered stream to replicate transaction into the target Database</p>\n<p dir=\"auto\">4. Use FileWriter, TGTFileOut andTGTFileOut1, to dump the stream data to disk file for debug or diagnostic purpose.</p>\n<p dir=\"auto\">Here is the TQL.</p>\n<p dir=\"auto\">====================================</p>\n<p dir=\"auto\">USE PSTEST;</p>\n<p dir=\"auto\">stop application TestOraPS; <br>undeploy application TestOraPS; <br>drop application TestOraPS cascade;</p>\n<p dir=\"auto\">CREATE OR REPLACE PROPERTYSET myKafka ( <br>zk.address:'10.77.22.198:2181', <br>bootstrap.brokers:'10.77.22.198:9092');</p>\n<p dir=\"auto\">CREATE APPLICATION TestOraPS RECOVERY 2 SECOND INTERVAL;</p>\n<p dir=\"auto\">CREATE STREAM StrmOraData of Global.WAEvent persist using myKafka; <br>--CREATE STREAM StrmOraData of Global.WAEvent ;</p>\n<p dir=\"auto\">create flow src_flow;</p>\n<p dir=\"auto\">CREATE SOURCE SRCOraTcust USING OracleReader ( <br>Compression: false, <br>DDLTracking: true, <br>StartTimestamp: 'null', <br>SupportPDB: false, <br>FetchSize: 1, <br>RedoLogfiles: 'null', <br>CommittedTransactions: true, <br>QueueSize: 2048, <br>OnlineCatalog: true, <br>SkipOpenTransactions: true, <br>FilterTransactionBoundaries: true, <br>Password_encrypted: true, <br>SendBeforeImage: true, <br>XstreamTimeOut: 600, <br>ConnectionURL: '10.1.186.105:1521:orcl', <br>Tables: 'WERNER.TCUST%', <br>adapterName: 'OracleReader', <br>Password: 'xxxxx', <br>connectionRetryPolicy: 'timeOut=30, retryInterval=30, maxRetries=3', <br>StartSCN: 'null', <br>ReaderType: 'LogMiner', <br>Username: 'werner', <br>OutboundServerProcessName: 'WebActionXStream' <br>) <br>OUTPUT TO StrmOraData ;</p>\n<p dir=\"auto\">end flow src_flow;</p>\n<p dir=\"auto\">CREATE OR REPLACE STREAM DataStream OF Global.WAEvent;</p>\n<p dir=\"auto\">CREATE OR REPLACE CQ FILTERCQ <br>insert into DataStream <br>select x <br>from StrmOraData x <br>where TO_LONG(META(x,'COMMITSCN')) &gt; 324553;</p>\n<p dir=\"auto\"><span class=\"s1\">--Can also do </span></p>\n<p dir=\"auto\"><span class=\"s1\">--where TO_STRING(META(x,'OperationName')) == 'INSERT';</span></p>\n<p dir=\"auto\">CREATE OR REPLACE TARGET TGTFileOut1 USING FileWriter ( <br>rolloverpolicy: 'DefaultRollingPolicy', <br>directory: 'test', <br>filename: 
'StrmOraData.txt' <br>) <br>FORMAT USING JSONFormatter ( jsonobjectdelimiter: '\\n', <br>jsonMemberDelimiter: '\\n', <br>EventsAsArrayOfJsonObjects: 'true' <br>) <br>INPUT FROM StrmOraData;</p>\n<p dir=\"auto\">CREATE OR REPLACE TARGET TGTFileOut USING FileWriter ( <br>rolloverpolicy: 'DefaultRollingPolicy', <br>directory: 'test', <br>filename: 'DataStream.txt' <br>) <br>FORMAT USING JSONFormatter ( jsonobjectdelimiter: '\\n', <br>jsonMemberDelimiter: '\\n', <br>EventsAsArrayOfJsonObjects: 'true' <br>) <br>INPUT FROM DataStream;</p>\n<p dir=\"auto\">CREATE OR REPLACE TARGET TGTDBWriter USING DatabaseWriter ( <br>PreserveSourceTransactionBoundary: false, <br>Username: 'werner', <br>BatchPolicy: 'EventCount:1,Interval:1', <br>ConnectionURL: 'jdbc:oracle:thin:@10.77.22.198:1521:orcl', <br>Tables: 'WERNER.%,WERNER.%', <br>adapterName: 'DatabaseWriter', <br>IgnorableExceptionCode: '1', <br>Password: 'xxxxxx', <br>Password_encrypted: true <br>) <br>INPUT FROM DataStream; <br>END APPLICATION TestOraPS;</p>"} {"page_content": "<p>From time to time, customer will want to split their stream for load balancing. However, they don't have a good \"partition key\" field to choose, or they are not quite sure the business logic for the data. Using method below, it will allow you evenly distribute data from one stream into four streams.</p>\n<p>Say we have Data comes from the following source, reading from CSV file.</p>\n<p>CREATE SOURCE TSrc1 USING FileReader ( <br> Wildcard: 'source1.csv',<br> Directory: '/Users/Werner/Lab/V3.7.3B/test/',<br> PositionByEof: false<br> ) <br> PARSE USING DSVParser ( <br> linenumber: '-1',<br> charset: 'UTF-8',<br> commentcharacter: '',<br> nocolumndelimiter: false,<br> trimwhitespace: false,<br> columndelimiter: ',',<br> columndelimittill: '-1',<br> ignoremultiplerecordbegin: true,<br> ignorerowdelimiterinquote: false,<br> separator: ':',<br> recordbegin: '',<br> recordend: '',<br> blockascompleterecord: false,<br> rowdelimiter: '\\n',<br> ignoreemptycolumn: false,<br> header: 'False',<br> headerlineno: '0',<br> trimquote: 'True',<br> quoteset: '\\\"'<br> ) <br>OUTPUT TO TSrc1_Stream ;</p>\n<p> </p>\n<p>We would like to split the data from TSrc1_Stream evenly into four Streams, STRMD0/STRMD1/STRMD2/STRMD3. To achieve this, we will first add a field cnt into the original Stream,</p>\n<p>Create Type NumStrmType (<br>cnt long,<br>MyEvent Global.WAEvent);</p>\n<p>Create Stream StrmCntData of NumStrmType;</p>\n<p>CREATE CQ CQAddNum <br>INSERT INTO StrmCntData<br>SELECT count(*), t FROM TSrc1_Stream t;<br>;</p>\n<p>This will allow CNT to be incremented by 1 for every event coming into Stream StrmCntData</p>\n<p>Then we use the following four CQs to split out the data by calculating cnt mod 4.</p>\n<p>CREATE CQ CQSTRM0<br>INSERT INTO STRMD0<br>SELECT MyEvent from StrmCntData<br>Where (cnt % 4) == 0;</p>\n<p>CREATE CQ CQSTRM1<br>INSERT INTO STRMD1<br>SELECT MyEvent from StrmCntData<br>Where (cnt % 4) == 1;</p>\n<p>CREATE CQ CQSTRM2<br>INSERT INTO STRMD2<br>SELECT MyEvent from StrmCntData<br>Where (cnt % 4) == 2;</p>\n<p>CREATE CQ CQSTRM3<br>INSERT INTO STRMD3<br>SELECT MyEvent from StrmCntData<br>Where (cnt % 4) == 3;</p>\n<p>Hence we split all data into four streams evenly. </p>\n<p>Attached is the demo file and the TQL that you could run to see the result in the target files.</p>\n<p> </p>\n<p> </p>"} {"page_content": "<p>During the demo installation, there are two passwords have been set thru the question/answer in the config screen, cluster password and admin password. 
If you can't remember what they were set to, here is a quick way to recover them. </p>\r\n<p>Rerun WebConfig.sh from the Striim_Home/bin directory. On the screen where it asks you for the passwords, if you click the small icon on the right side of the text box, it will flip the \"masked\" password from ****** to the plain-text value. Please refer to the attached screen copy.</p>\r\n<p>This is only available for the demo version. For the regular product version of Striim, you won't be able to recover the password. If you forget the password for the admin user, you have to reinstall the whole Striim environment from scratch. The cluster password is always stored encrypted in the startUp.properties file.</p>\r\n<p> </p>"} {"page_content": "<p>Striim can use your OpenLDAP or Microsoft Active Directory server to authenticate users. The user guide has detailed information on this; here is the link:</p>\n<p>https://support.striim.com/hc/en-us/articles/115011723328-3-7-4-Using-LDAP-authentication</p>\n<p>However, it only covers regular LDAP. Some users use SSL for their LDAP server. Striim supports this type of integration as well. Here are the steps that need to be followed to make it work.</p>\n<p>1. Verify the LDAP server works correctly over SSL by running ldapsearch on the server where Striim is installed.</p>\n<p class=\"wysiwyg-indent1\">Assume you have an admin/manager account on the LDAP server that can be used to look up other accounts; for example, the account's full DN is 'uid=StriimMGR,ou=Applications,o=striim.com' and its password is 'Striim123'. The user account you want to use for Striim is 'uid=StriimDEV,ou=Applications,o=striim.com'. The LDAP server is ldapserver.striim.com.</p>\n<p class=\"wysiwyg-indent1\">Here is an example of a typical ldapsearch command that uses ldaps to verify this user. </p>\n<p class=\"wysiwyg-indent1\">shell&gt; ldapsearch -x -H ldaps://ldapserver.striim.com -b o=striim.com -D 'uid=StriimMGR,ou=Applications,o=striim.com' -w \"Striim123\" uid=StriimDEV</p>\n<p class=\"wysiwyg-indent1\">If this command can pull all the information for the StriimDEV user, then you are good to use ldaps on this server. Otherwise, please work with your system admin to get this working first.</p>\n<p>2. Make sure the SSL certificate is imported into the Java keystore on the Striim server correctly.</p>\n<p class=\"wysiwyg-indent1\">First, get the public certificate from the LDAP server. </p>\n<p class=\"wysiwyg-indent1\">Here is an example of a typical openssl command to retrieve the certificate and put it into a file, assuming your LDAP server's SSL port is the default, 636. Please do this on the server where your Striim will be running.</p>\n<p class=\"wysiwyg-indent1\">shell&gt;openssl s_client -connect ldapserver.striim.com:636 &lt; /dev/null | sed -ne '/-BEGIN CERTIFICATE/,/END CERTIFICATE/p' &gt; public.crt</p>\n<p class=\"wysiwyg-indent1\">It will generate a certificate file, public.crt.</p>\n<p class=\"wysiwyg-indent1\">We will need to import this file into your Java keystore. Please check with your system admin for the JAVA_HOME location of the Java installation the Striim server runs on. Assuming it is defined in $JAVA_HOME, run the command below to import the public.crt file</p>\n<p class=\"wysiwyg-indent1\">shell&gt;sudo keytool -import -alias ldapserver.striim.com -keystore $JAVA_HOME/jre/lib/security/cacerts -file public.crt</p>\n<p class=\"wysiwyg-indent1\">It will ask for the keystore password. If you are using the default, it is 'changeit'. 
Otherwise, please check with your system admin</p>\n<p class=\"wysiwyg-indent1\">To verify the certificate is imported correctly, please issue the following command to check</p>\n<p class=\"wysiwyg-indent1\">shell&gt;$JAVA_HOME/jre/bin/keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts -alias ldapserver.striim.com</p>\n<p class=\"wysiwyg-indent1\">If all checks out correctly, you could proceed to the next step to configure the integration with Striim.</p>\n<p>3. Configure the Striim LDAP propertyset to use LDAPS URI</p>\n<p class=\"wysiwyg-indent1\">First, create a propertyset in Admin namespace</p>\n<p class=\"wysiwyg-indent1\">CREATE OR REPLACE PROPERTYSET admin.LDAP_STRIIM (<br>PROVIDER_URL: 'ldaps://ldapserver.striim.com',<br>SECURITY_AUTHENTICATION: 'simple',<br>SECURITY_PRINCIPAL: 'uid=StriimMGR,ou=Applications,o=striim.com',<br>SECURITY_CREDENTIALS: 'Striim123',<br>USER_BASE_DN:'o=striim.com',<br>USER_RDN:'uid', <br>User_userId:'uid' ); </p>\n<p class=\"wysiwyg-indent1\">Then, you could issue create user command in Striim console to authorize the LDAP user to use Striim. For example</p>\n<p class=\"wysiwyg-indent1\">console&gt;create user StriimDEV using ldap admin.ldap_striim;</p>\n<p class=\"wysiwyg-indent1\">Once the user is created, you should be able to use this LDAP user to login to Striim.</p>"} {"page_content": "<p>When you use RPM packages to install WebAction, you will notice that the WebAction is started as service upon system boot time. This is done by upstart process. Typically all the environmental variables setting is stripped off when processes started this way by the system. </p>\r\n<p>If you want to pass any environmental variables to the WebAction server process in this configuration, you will have to modify or add the setting into file &lt;WebAction_Install_Home&gt;/sbin/webaction-node, which is the script used by upstart to start WebAction node.</p>\r\n<p>Example 1</p>\r\n<p>You can put the following line to make sure the characterset is set to use UTF-8</p>\r\n<p>LANG=\"en_US.UTF-8\" </p>\r\n<p>export LANG</p>\r\n<p> </p>\r\n<p>Example 2</p>\r\n<p>Use the following to set the LD_LIBRARY_PATH to load correspondent libraries path, such as Oracle client library, so the WebAction application could access Oracle Database</p>\r\n<p>LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib</p>\r\n<p>export LD_LIBRARY_PATH</p>"} {"page_content": "<p>Preparation Work</p>\n<p class=\"wysiwyg-indent1\">1. Download tgz package from the download site</p>\n<p class=\"wysiwyg-indent1\">2. Install the tgz package to the designated Striim installation directory by unzip and untar the file</p>\n<p class=\"wysiwyg-indent1\">3. 
Manually start the striim server by running the server.bat under a DOS prompt under the desired user id.</p>\n<p class=\"wysiwyg-indent1\">For example, we are setting the striim cluster name to Node148V373, Admin user password to 'admin'</p>\n<p class=\"wysiwyg-indent1\">C:\\Striim\\Striim_3.7.4&gt;.\\bin\\server.bat -c Node148V373 -a admin<br>Starting Striim Server - Version 3.7.4 (1e444838f1)<br>Required property \"Cluster Password\" is undefined<br>Enter Cluster Password for Cluster Node148V373 : *********<br>Re-enter the Password : *********<br>Required property \"Company Name\" is undefined<br>Enter Company Name : StriimTest<br>Product Key 898C23F02-865EA0F74-1F8E6C6F registered to StriimTest<br>Enter License Key : (will generate trial license key if left empty).<br>Starting Server on cluster : Node148V373<br>Required property \"Metadata Repository Location\" is undefined<br>Enter Metadata Repository Location [Format = IPAddress:port] [default 10.1.110.1<br>48:1527 (Press Enter/Return to default)] :<br>DB details : 10.1.110.148:1527 , wactionrepos , waction<br>Current node started in cluster : Node148V373, with Metadata Repository<br>Registered to: StriimTest<br>ProductKey: 898C23F02-865EA0F74-1F8E6C6F<br>License Key: 5572AEC3A-25AFC83F5-326F58A2D-A916D9EE1-EA7B338A4-7C825<br>License expires in 15 days 8 hours 15 minutes 36 seconds<br>Servers in cluster:<br> [this] S10_1_110_148 [775e209c-1521-41ab-8259-23fcc24eaebe]</p>\n<p class=\"wysiwyg-indent1\">started.<br>Please go to http://10.1.110.148:9080 or https://10.1.110.148:9081 to administer<br>, or use console</p>\n<p class=\"wysiwyg-indent1\">4. Check the Striim server is running fine by login into the WebUI, in this case, http://10.1.110.148:9080. Terminate the server process once everything checks out clean. Then stop the server.bat, or just close/terminate the dos prompt window.</p>\n<p class=\"wysiwyg-indent1\">5. Create a batch file startDerby.bat with the following content and place it under the Striim installation directory</p>\n<p class=\"wysiwyg-indent1\">java -Dderby.stream.error.file=\".\\logs\\striim-dbms.log\" -jar .\\derby\\lib\\derbyrun.jar server start -h 0.0.0.0 -noSecurityManager</p>\n<p class=\"wysiwyg-indent1\"> </p>\n<p>Configure Striim Servces</p>\n<ol>\n<li class=\"wysiwyg-indent1\">Download NSSM from https://nssm.cc/download</li>\n<li class=\"wysiwyg-indent1\">Install it to your Windows server where Striim runs</li>\n<li class=\"wysiwyg-indent1\">Configure Striim DBMS Service\n<ul>\n<li class=\"wysiwyg-indent1\">Goto where NSSM installed, cd to directory win64</li>\n<li class=\"wysiwyg-indent1\">Type 'nssm install Striim374Dbms', It will start the GUI interface</li>\n<li class=\"wysiwyg-indent1\">Follow the windows below to configure the service properties. 
Assuming Striim is installed under C:\\Striim\\Striim_3.7.4 </li>\n<li class=\"wysiwyg-indent1\"><img src=\"https://support.striim.com/hc/article_attachments/115020629147/Screen_Shot_2017-08-28_at_9.42.06_AM.png\" alt=\"Screen_Shot_2017-08-28_at_9.42.06_AM.png\"></li>\n<li class=\"wysiwyg-indent1\"><img src=\"https://support.striim.com/hc/article_attachments/115020695088/Screen_Shot_2017-08-28_at_9.42.26_AM.png\" alt=\"Screen_Shot_2017-08-28_at_9.42.26_AM.png\"></li>\n<li class=\"wysiwyg-indent1\">This will allow the service running under local system account <img src=\"https://support.striim.com/hc/article_attachments/115020629667/Screen_Shot_2017-08-28_at_9.42.41_AM.png\" alt=\"Screen_Shot_2017-08-28_at_9.42.41_AM.png\">\n</li>\n<li class=\"wysiwyg-indent1\">\n<img src=\"https://support.striim.com/hc/article_attachments/115020695208/Screen_Shot_2017-08-28_at_9.43.02_AM.png\" alt=\"Screen_Shot_2017-08-28_at_9.43.02_AM.png\"><img src=\"https://support.striim.com/hc/article_attachments/115020629927/Screen_Shot_2017-08-28_at_9.43.21_AM.png\" alt=\"Screen_Shot_2017-08-28_at_9.43.21_AM.png\">\n</li>\n<li class=\"wysiwyg-indent1\">The last screen of I/O configuration gives the service ability to write the standard output and error output to file striim-DBMS-service.log under logs directory. Please make sure local system account SYSTEM user has proper privileges to the write and read from that directory</li>\n<li class=\"wysiwyg-indent1\">Leave other setting as default</li>\n</ul>\n</li>\n<li class=\"wysiwyg-indent1\">Configure Striim Node Service\n<ul>\n<li class=\"wysiwyg-indent1\">Goto where NSSM installed, cd to directory win64</li>\n<li class=\"wysiwyg-indent1\">Type 'nssm install Striim374Node', It will start the GUI interface</li>\n<li class=\"wysiwyg-indent1\">Follow the windows below to configure the service properties. Assuming Striim is installed under C:\\Striim\\Striim_3.7.4 </li>\n<li class=\"wysiwyg-indent1\">\n<img src=\"https://support.striim.com/hc/article_attachments/115020696528/Screen_Shot_2017-08-28_at_10.32.32_AM.png\" alt=\"Screen_Shot_2017-08-28_at_10.32.32_AM.png\"><img src=\"https://support.striim.com/hc/article_attachments/115020631007/Screen_Shot_2017-08-28_at_10.32.48_AM.png\" alt=\"Screen_Shot_2017-08-28_at_10.32.48_AM.png\">\n</li>\n<li class=\"wysiwyg-indent1\">This will allow service run under local SYSTEM account<img src=\"https://support.striim.com/hc/article_attachments/115020696548/Screen_Shot_2017-08-28_at_10.32.59_AM.png\" alt=\"Screen_Shot_2017-08-28_at_10.32.59_AM.png\">\n</li>\n<li class=\"wysiwyg-indent1\">This will make sure the Srtiim374Dbms service is started before the Striim374Node Service<img src=\"https://support.striim.com/hc/article_attachments/115020631027/Screen_Shot_2017-08-28_at_10.33.18_AM.png\" alt=\"Screen_Shot_2017-08-28_at_10.33.18_AM.png\"><img src=\"https://support.striim.com/hc/article_attachments/115020631047/Screen_Shot_2017-08-28_at_10.34.58_AM.png\" alt=\"Screen_Shot_2017-08-28_at_10.34.58_AM.png\"><img src=\"https://support.striim.com/hc/article_attachments/115020696568/Screen_Shot_2017-08-28_at_10.35.15_AM.png\" alt=\"Screen_Shot_2017-08-28_at_10.35.15_AM.png\">\n</li>\n<li class=\"wysiwyg-indent1\">The last screen of I/O configuration gives the service ability to write the standard output and error output to file striim-node-service.log under logs directory. 
Please make sure the local SYSTEM account has the privileges needed to write to and read from that directory</li>\n<li class=\"wysiwyg-indent1\">Leave the other settings at their defaults</li>\n</ul>\n</li>\n<li class=\"wysiwyg-indent1\">Now you will see the following two services configured under Services in the Control Panel\n<ul>\n<li class=\"wysiwyg-indent1\">Striim 3.7.4 Node and Striim 3.7.4 DBMS<img src=\"https://support.striim.com/hc/article_attachments/115020697308/Screen_Shot_2017-08-28_at_10.55.30_AM.png\" alt=\"Screen_Shot_2017-08-28_at_10.55.30_AM.png\">\n</li>\n<li class=\"wysiwyg-indent1\">If you click Striim 3.7.4 Node and start it, it will also start Striim 3.7.4 DBMS automatically.</li>\n</ul>\n</li>\n</ol>\n<p>Now you have finished configuring Striim as a service on the Windows server. By default, the service startup type is Manual. After you test it and confirm everything runs correctly, you can change it to Automatic, so that every time the system boots or reboots, the Striim service is started automatically. For any error or warning messages from the service, check the two I/O files configured under the logs directory.</p>\n<p>Finally, if you want to remove the Striim services, use the command below</p>\n<p>nssm remove '&lt;service-name&gt;' confirm</p>"} {"page_content": "<p>Background Info</p>\n<p>In a multi-node environment, if you want to take advantage of parallel processing by running an App simultaneously on more than one node, the Application/Flow needs to be partitioned. The Application/Flow should be deployed on multiple nodes so that each partitioned flow is processed on a different node. To achieve this, we need to follow the steps below.</p>\n<p>1. Create a deployment group that has at least 2 nodes</p>\n<p>2. Partition the stream in your App so it can be split across multiple nodes when the downstream component is deployed on multiple nodes.</p>\n<p>3. When you deploy the App, make sure you deploy the flow that does the parallel processing on ALL nodes in the deployment group you created in step 1. **</p>\n<p>**In the current version of Striim, the deployment option is either ONE or ALL. For example, if your deployment group has five nodes, you won't be able to deploy to three of the nodes. So please create your deployment group properly. 
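For intuition, the sketch below (plain Java, not Striim code; the class and variable names are our own) shows the general idea behind a stream declared with PARTITION BY feeding a flow deployed ON ALL nodes of a deployment group: each event is routed by its partition key, so events with the same key value consistently land on the same server.</p>\n<pre class=\"screen\">// Illustrative only, not Striim code: key-based routing of events to servers.\npublic class PartitionRoutingSketch {\n    // Route a partition key to one of nodeCount servers; Math.floorMod keeps\n    // the result non-negative even when hashCode() is negative.\n    static int routeTo(String partitionKey, int nodeCount) {\n        return Math.floorMod(partitionKey.hashCode(), nodeCount);\n    }\n\n    public static void main(String[] args) {\n        String[] keys = { \"1001\", \"1002\", \"1003\", \"1004\" };\n        int nodes = 2; // e.g. the two servers in the twonodes deployment group below\n        for (String key : keys) {\n            System.out.println(\"key \" + key + \" goes to server \" + routeTo(key, nodes));\n        }\n    }\n}</pre>\n<p>Events that share a key therefore go to the same node, while different keys spread the load across the group.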
</p>\n<p>Here is a quick example Application</p>\n<p>=============================================</p>\n<p>CREATE APPLICATION TestSplitTarget;</p>\n<p>--Creating Source flow, which will be deployed only on one node</p>\n<p>CREATE FLOW src_flow;</p>\n<p>CREATE SOURCE SRC11 USING FileReader ( <br>blocksize: 64,<br>positionbyeof: false,<br>rolloverstyle: 'Default',<br>includesubdirectories: false,<br>directory: 'test',<br>skipbom: true,<br>wildcard: 'test_src.txt'<br>) <br>PARSE USING DSVParser ( <br>linenumber: '-1',<br>charset: 'UTF-8',<br>commentcharacter: '',<br>nocolumndelimiter: false,<br>trimwhitespace: false,<br>columndelimiter: ',',<br>columndelimittill: '-1',<br>ignoremultiplerecordbegin: 'true',<br>ignorerowdelimiterinquote: false,<br>separator: ':',<br>blockascompleterecord: false,<br>rowdelimiter: '\\n',<br>ignoreemptycolumn: false,<br>header: false,<br>headerlineno: 0,<br>trimquote: true,<br>quoteset: '\\\"'<br>) <br>OUTPUT TO StrmSRC ;</p>\n<p>Create type cq_out_part0_type (<br>all_data Object[],<br>metadata java.util.HashMap,<br>unique_id String);</p>\n<p>CREATE OR REPLACE STREAM cq_out_part0 OF cq_out_part0_type partition by unique_id;</p>\n<p><br>CREATE OR REPLACE CQ cq_part0 <br>INSERT INTO cq_out_part0<br>SELECT data, metadata, data[0].toString() as unique_id<br>FROM StrmSRC;</p>\n<p>END FLOW src_flow;</p>\n<p>--End of Source flow</p>\n<p>--Create Target flow, which will be deployed to multiple nodes</p>\n<p>CREATE FLOW tgt_flow;<br>CREATE OR REPLACE TARGET TGT11 USING FileWriter ( <br>filename: 'Test_out.txt',<br>flushpolicy: 'eventcount:1,interval:1',<br>adapterName: 'FileWriter',<br>directory: 'test',<br>rolloverpolicy: 'DefaultRollingPolicy'<br>) <br>FORMAT USING DSVFormatter ( nullvalue: 'NULL',<br>usequotes: 'false',<br>rowdelimiter: '\\n',<br>quotecharacter: '\\\"',<br>columndelimiter: ','<br>) <br>INPUT FROM cq_out_part0;<br>END FLOW tgt_flow;</p>\n<p>--End of Target flow</p>\n<p><br>END APPLICATION TestSplitTarget;</p>\n<p>--Deploy the src_flow on only ONE node in deployment group node45</p>\n<p>--Deploy the tgt_flow on ALL nodes in deployment group twonodes</p>\n<p>--Deploy the rest of this Application(in this case, nothing left) in only ONE node in deployment group node43</p>\n<p>DEPLOY APPLICATION TestSplitTarget ON ONE IN node43 with src_flow ON ONE IN node45, tgt_flow ON ALL IN twonodes;</p>\n<p>================================================</p>\n<p> </p>\n<p>Here are the description of the deployment groups. 
You can obtain the information by running \"list dgs\" command in console.</p>\n<p class=\"p1\"><span class=\"s1\">DG 3 =&gt;<span class=\"Apple-converted-space\"> </span>node43 has actual servers [S10_1_10_43] and configured servers [S10_1_10_43] with mininum required servers 0</span></p>\n<p class=\"p1\"><span class=\"s1\">DG 14 =&gt;<span class=\"Apple-converted-space\"> </span>node45 has actual servers [S10_1_10_45] and configured servers [S10_1_10_45] with mininum required servers 0</span></p>\n<p class=\"p1\"><span class=\"s1\">DG 10 =&gt;<span class=\"Apple-converted-space\"> </span>twonodes has actual servers [S10_1_10_45, S10_1_10_44] and configured servers [S10_1_10_44, S10_1_10_45] with mininum required servers 0</span></p>\n<p class=\"p1\"> </p>\n<p class=\"p1\"><span class=\"s1\">Once the App is deployed, you will see the following output if you issue \"status &lt;application&gt;\" command.</span></p>\n<p class=\"p1\"><span class=\"s1\">W (wtest) &gt; status TestSplitTarget;</span></p>\n<p class=\"p1\"><span class=\"s1\">Processing - status TestSplitTarget</span></p>\n<p class=\"p1\"><span class=\"s1\">TestSplitTarget is DEPLOYED</span></p>\n<p class=\"p1\"><span class=\"s1\">Status per node....</span></p>\n<p class=\"p1\"><span class=\"s1\">[</span></p>\n<p class=\"p1\"><span class=\"s1\">ON S10_1_10_44 IN [default, twonodes]</span></p>\n<p class=\"p1\"><span class=\"s1\">[</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.TestSplitTarget APPLICATION),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.tgt_flow FLOW),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.TGT11 TARGET),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.cq_out_part0 STREAM),</span></p>\n<p class=\"p1\"><span class=\"s1\">]</span></p>\n<p class=\"p1\"><span class=\"s1\">, </span></p>\n<p class=\"p1\"><span class=\"s1\">ON S10_1_10_43 IN [node43, default]</span></p>\n<p class=\"p1\"><span class=\"s1\">[</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.TestSplitTarget APPLICATION),</span></p>\n<p class=\"p1\"><span class=\"s1\">]</span></p>\n<p class=\"p1\"><span class=\"s1\">, </span></p>\n<p class=\"p1\"><span class=\"s1\">ON S10_1_10_45 IN [default, node45, twonodes]</span></p>\n<p class=\"p1\"><span class=\"s1\">[</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.TestSplitTarget APPLICATION),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.src_flow FLOW),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.SRC11 SOURCE),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.cq_part0 CQ),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.cq_out_part0 STREAM),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.StrmSRC STREAM),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.tgt_flow FLOW),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.TGT11 TARGET),</span></p>\n<p class=\"p1\"><span class=\"s1\"><span class=\"Apple-converted-space\"> </span>(wtest.cq_out_part0 
STREAM),</span></p>\n<p class=\"p1\"><span class=\"s1\">]</span></p>\n<p class=\"p1\"><span class=\"s1\">]</span></p>\n<p class=\"p1\"><span class=\"s1\">-&gt; SUCCESS </span></p>\n<p> </p>"}