<!DOCTYPE html>
<html>
<head>
	<meta charset="utf-8">

	<title>SpeechRecognitionServiceFactory Class Reference</title>

	<link rel="stylesheet" href="../css/style.css">
	<meta name="viewport" content="initial-scale=1, maximum-scale=1.4">
	<meta name="generator" content="appledoc 2.2.1 (build 1333)">
</head>
<body class="appledoc">
	<header>
		<div class="container hide-in-xcode">
			
			<h1 id="library-title">
				<a href="../index.html">SpeechSDK-1_0-for-iOS </a>
			</h1>

			<p id="developer-home">
				<a href="../index.html">Microsoft</a>
			</p>
			
		</div>
	</header>

	<aside>
		<div class="container">
			<nav>
				<ul id="header-buttons" role="toolbar">
					<li><a href="../index.html">Index</a></li>
<li><a href="../hierarchy.html">Hierarchy</a></li>

					<li id="on-this-page" role="navigation">
						<label>
							On This Page

							<div class="chevron">
								<div class="chevy chevron-left"></div>
								<div class="chevy chevron-right"></div>
							</div>

							<select id="jump-to">
	<option value="top">Jump To&#8230;</option>
	
	<option value="overview">Overview</option>
	

	
	
	<option value="tasks">Tasks</option>
	
	

	
	

	
	<optgroup label="Class Methods">
		
		<option value="//api/name/createDataClient:withLanguage:withKey:withProtocol:">+ createDataClient:withLanguage:withKey:withProtocol:</option>
		
		<option value="//api/name/createDataClient:withLanguage:withKey:withProtocol:withUrl:">+ createDataClient:withLanguage:withKey:withProtocol:withUrl:</option>
		
		<option value="//api/name/createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:">+ createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:</option>
		
		<option value="//api/name/createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:">+ createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:</option>
		
		<option value="//api/name/createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:">+ createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:</option>
		
		<option value="//api/name/createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+ createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</option>
		
		<option value="//api/name/createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:">+ createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:</option>
		
		<option value="//api/name/createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+ createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</option>
		
		<option value="//api/name/createMicrophoneClient:withLanguage:withKey:withProtocol:">+ createMicrophoneClient:withLanguage:withKey:withProtocol:</option>
		
		<option value="//api/name/createMicrophoneClient:withLanguage:withKey:withProtocol:withUrl:">+ createMicrophoneClient:withLanguage:withKey:withProtocol:withUrl:</option>
		
		<option value="//api/name/createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:">+ createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:</option>
		
		<option value="//api/name/createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:">+ createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:</option>
		
		<option value="//api/name/createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:">+ createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:</option>
		
		<option value="//api/name/createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+ createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</option>
		
		<option value="//api/name/createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:">+ createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:</option>
		
		<option value="//api/name/createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+ createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</option>
		
		<option value="//api/name/createPrefs:withLanguage:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withUrl:">+ createPrefs:withLanguage:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withUrl:</option>
		
		<option value="//api/name/getAPIVersion">+ getAPIVersion</option>
		
	</optgroup>
	

	
	
</select>
						</label>
					</li>
				</ul>
			</nav>
		</div>
	</aside>

	<article>
		<div id="overview_contents" class="container">
			<div id="content">
				<main role="main">
					<h1 class="title">SpeechRecognitionServiceFactory Class Reference</h1>

					
					<div class="section section-specification"><table cellspacing="0"><tbody>
						<tr>
	<th>Inherits from</th>
	<td>NSObject</td>
</tr><tr>
	<th>Declared in</th>
	<td>SpeechRecognitionService.h<br />SpeechRecognitionServiceFactory.mm</td>
</tr>
						</tbody></table></div>
					

                    
					
					<div class="section section-overview">
						<a title="Overview" name="overview"></a>
						<h2 class="subtitle subtitle-overview">Overview</h2>
						<p>A factory for creating clients of the Azure Intelligent Services speech recognition service. This factory can create four types of clients:</p>

<p><a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a>: This client is optimal for applications that require speech recognition with previously acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers, and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit samples, at a sample rate of 16000 Hz. Returns speech recognition results.</p>

<p><a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a>: This client is optimal for applications that require speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers, and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit samples, at a sample rate of 16000 Hz. Returns speech recognition results and structured intent results. Intent results are returned in structured JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>

<p><a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a>: This client is optimal for applications that require speech recognition from microphone input.</p>

<p>When the microphone is turned on, audio data from the microphone is streamed to the speech recognition service. A built-in silence detector is applied to the microphone data before it is sent to the recognition service. Returns speech recognition results.</p>

<p><a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a>: This client is optimal for applications that require speech recognition <em>and</em> intent detection from microphone input.</p>

<p>When the microphone is turned on, audio data from the microphone is streamed to the speech recognition service. A built-in silence detector is applied to the microphone data before it is sent to the recognition service. Returns speech recognition and intent results. Intent results are returned in structured JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>
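
<p>As an illustrative sketch only: creating and starting a microphone client might look like the following. The variable <code>subscriptionKey</code> is a placeholder, and the assumption is that <code>self</code> conforms to the SDK&rsquo;s <code>SpeechRecognitionProtocol</code>; method and enum names are taken from the SDK headers.</p>

<pre><code>// Sketch: create a microphone client for en-us short-phrase recognition.
// Assumes self conforms to SpeechRecognitionProtocol and subscriptionKey
// (a placeholder) holds your service key.
MicrophoneRecognitionClient *micClient =
    [SpeechRecognitionServiceFactory createMicrophoneClient:SpeechRecognitionMode_ShortPhrase
                                               withLanguage:@"en-us"
                                                    withKey:subscriptionKey
                                               withProtocol:self];
// Turn the microphone on and begin streaming audio to the service.
[micClient startMicAndRecognition];
</code></pre>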
					</div>
					
					

					
					
					<div class="section section-tasks">
						<a title="Tasks" name="tasks"></a>
						

						
						

						<div class="task-list">
							<div class="section-method">
	<a name="//api/name/getAPIVersion" title="getAPIVersion"></a>
	<h3 class="method-title"><code><a href="#//api/name/getAPIVersion">+&nbsp;getAPIVersion</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Gets the API version.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (NSString *)getAPIVersion</code></div>

		    
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The version of the API you are currently using.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Gets the API version.</p>
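
<p>For example, a minimal usage sketch (logging at startup is the author&rsquo;s choice here, not a requirement):</p>

<pre><code>// Log the SDK API version, e.g. during application startup.
NSString *apiVersion = [SpeechRecognitionServiceFactory getAPIVersion];
NSLog(@"Speech API version: %@", apiVersion);
</code></pre>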
			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createPrefs:withLanguage:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withUrl:" title="createPrefs:withLanguage:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createPrefs:withLanguage:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withUrl:">+&nbsp;createPrefs:withLanguage:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Allocates and initializes a preferences object based on the specified recognition mode.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (AdmRecoOnlyPreferences *)createPrefs:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode.</p>

<p>In Short Phrase mode, the client receives one final result containing multiple N-best choices.</p>

<p>In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. As a best practice, the application should rotate its keys periodically. Between rotations, you disable the primary key, making the secondary key the default and giving you time to swap out the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key. Intended for use while the primary key is disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that Subscription ID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created with the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The preferences object</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Allocates and initializes a preferences object based on the specified recognition mode.</p>
			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClient:withLanguage:withKey:withProtocol:" title="createDataClient:withLanguage:withKey:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClient:withLanguage:withKey:withProtocol:">+&nbsp;createDataClient:withLanguage:withKey:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode.</p>

<p>In Short Phrase mode, the client receives one final result containing multiple N-best choices.</p>

<p>In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should renew your keys periodically to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other still works, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol that receives callbacks/events during speech recognition.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers, and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit samples, at a sample rate of 16000 Hz.</p>




<p>The recognition service returns only speech recognition results and does not perform intent detection.</p>
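
<p>A sketch of feeding previously acquired audio to the client follows; the file path, the 1024-byte buffer size, and the variable names are placeholders, and <code>sendAudio:length:</code> / <code>endAudio</code> are assumed from the client&rsquo;s declaration in the SDK headers:</p>

<pre><code>// Sketch: stream a 16 kHz, 16-bit mono PCM file to the service in buffers.
// audioFilePath and subscriptionKey are placeholders; self is assumed to
// conform to SpeechRecognitionProtocol.
DataRecognitionClient *dataClient =
    [SpeechRecognitionServiceFactory createDataClient:SpeechRecognitionMode_ShortPhrase
                                         withLanguage:@"en-us"
                                              withKey:subscriptionKey
                                         withProtocol:self];
NSInputStream *stream = [NSInputStream inputStreamWithFileAtPath:audioFilePath];
[stream open];
uint8_t buffer[1024];
NSInteger bytesRead;
while ((bytesRead = [stream read:buffer maxLength:sizeof(buffer)]) > 0) {
    // Forward each raw PCM buffer to the recognition service unmodified.
    [dataClient sendAudio:[NSData dataWithBytes:buffer length:bytesRead]
                   length:(int)bytesRead];
}
[stream close];
// Signal that no more audio will be sent.
[dataClient endAudio];
</code></pre>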

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:" title="createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:">+&nbsp;createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode.</p>

<p>In Short Phrase mode, the client receives one final result containing multiple N-best choices.</p>

<p>In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. As a best practice, the application should rotate its keys periodically. Between rotations, you disable the primary key, making the secondary key the default and giving you time to swap out the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key. Intended for use while the primary key is disabled.</p>

<p>You should renew your keys periodically to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other still works, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol that receives callbacks/events during speech recognition.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers, and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit samples, at a sample rate of 16000 Hz.</p>




<p>The recognition service returns only speech recognition results and does not perform intent detection.</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClient:withLanguage:withKey:withProtocol:withUrl:" title="createDataClient:withLanguage:withKey:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClient:withLanguage:withKey:withProtocol:withUrl:">+&nbsp;createDataClient:withLanguage:withKey:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode. In Short Phrase mode, the client receives one final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should renew your keys periodically to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other still works, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol that receives callbacks/events during speech recognition.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created with the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers, and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit samples, at a sample rate of 16000 Hz.</p>




<p>The recognition service returns only speech recognition results and does not perform intent detection.</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:" title="createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:">+&nbsp;createDataClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClient *)createDataClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode. In Short Phrase mode, the client receives one final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the service thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. As a best practice, the application should rotate its keys periodically. Between rotations, you disable the primary key, making the secondary key the default and giving you time to swap out the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key. Intended for use while the primary key is disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol that receives callbacks/events during speech recognition.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created with the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClient.html">DataRecognitionClient</a> with Acoustic Model Adaptation for speech recognition with acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers, and each buffer is sent to the speech recognition service. No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data. Audio data must be PCM, mono, 16-bit samples, at a sample rate of 16000 Hz.</p>




<p>The recognition service returns only speech recognition results and does not perform intent detection.</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:" title="createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:">+&nbsp;createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other continues to work, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that secret here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to perform the callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers and each buffer is sent to the speech recognition service.</p>

<p>No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.</p>

<p>Audio data must be PCM, mono, 16-bit samples, with a sample rate of 16000 Hz.</p>

<p>Returns speech recognition results and structured intent results. Intent results are returned in structured JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>
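<p>A hypothetical creation call might look as follows; the subscription key, LUIS application ID, and LUIS subscription ID are placeholders you obtain from your own subscription.</p>

<pre><code>// Sketch: create an intent-enabled data client (all credential values are placeholders).
DataRecognitionClientWithIntent *client =
    [SpeechRecognitionServiceFactory createDataClientWithIntent:@"en-us"
                                                        withKey:@"YOUR-SUBSCRIPTION-KEY"
                                                  withLUISAppID:@"YOUR-LUIS-APP-ID-GUID"
                                                 withLUISSecret:@"YOUR-LUIS-SUBSCRIPTION-ID"
                                                   withProtocol:self];
// The delegate (here, self) must adopt SpeechRecognitionProtocol; speech and
// intent results arrive through its callbacks, with intents as structured JSON.
</code></pre>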
			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:" title="createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:">+&nbsp;createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. It&rsquo;s a best practice for the application to rotate keys periodically. Between rotations, you disable the primary key, making the secondary key the default and giving you time to swap out the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key.  Intended to be used when the primary key has been disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that secret here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to perform the callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers and each buffer is sent to the speech recognition service.</p>

<p>No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.</p>

<p>Audio data must be PCM, mono, 16-bit samples, with a sample rate of 16000 Hz.</p>

<p>Returns speech recognition results and structured intent results. Intent results are returned in structured JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>
			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:" title="createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+&nbsp;createDataClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> with Acoustic Model Adaptation for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other continues to work, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that secret here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to perform the callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created using the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> with Acoustic Model Adaptation for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers and each buffer is sent to the speech recognition service.</p>

<p>No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.</p>

<p>Audio data must be PCM, mono, 16-bit samples, with a sample rate of 16000 Hz.</p>

<p>Returns speech recognition results and structured intent results. Intent results are returned in structured JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>
			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:" title="createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+&nbsp;createDataClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> with Acoustic Model Adaptation for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (DataRecognitionClientWithIntent *)createDataClientWithIntent:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. It&rsquo;s a best practice for the application to rotate keys periodically. Between rotations, you disable the primary key, making the secondary key the default and giving you time to swap out the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key.  Intended to be used when the primary key has been disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that secret here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to perform the callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created using the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/DataRecognitionClientWithIntent.html">DataRecognitionClientWithIntent</a> with Acoustic Model Adaptation for speech recognition <em>and</em> intent detection with previously acquired data, for example from a file or Bluetooth audio source.</p>

<p>Data is broken up into buffers and each buffer is sent to the speech recognition service.</p>

<p>No modification is done to the buffers; if silence detection is required, it must be performed in an external pre-processing pass over the data.</p>

<p>Audio data must be PCM, mono, 16-bit samples, with a sample rate of 16000 Hz.</p>

<p>Returns speech recognition results and structured intent results. Intent results are returned in structured JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>
			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClient:withLanguage:withKey:withProtocol:" title="createMicrophoneClient:withLanguage:withKey:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClient:withLanguage:withKey:withProtocol:">+&nbsp;createMicrophoneClient:withLanguage:withKey:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode.</p>

<p>In Short Phrase mode, the client receives one final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the server thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other continues to work, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to perform the callbacks/events during speech recognition.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the <code>startMicAndRecognition</code> method of this client. Once the microphone is turned on, data from the microphone is sent to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The recognition service returns only speech recognition results and does not perform intent detection. To terminate speech recognition and stop sending data to the service, call <code>endMicAndRecognition</code>.</p>
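<p>The start/stop flow described above can be sketched as follows. The factory and the <code>startMicAndRecognition</code>/<code>endMicAndRecognition</code> calls come from this reference; the enum value name is an assumption, the key is a placeholder, and the delegate (here, <code>self</code>) is assumed to adopt SpeechRecognitionProtocol.</p>

<pre><code>// Sketch: one microphone recognition session.
MicrophoneRecognitionClient *micClient =
    [SpeechRecognitionServiceFactory createMicrophoneClient:SpeechRecognitionMode_ShortPhrase  // assumed enum value
                                               withLanguage:@"en-us"
                                                    withKey:@"YOUR-SUBSCRIPTION-KEY"           // placeholder
                                               withProtocol:self];

[micClient startMicAndRecognition];   // turn on the microphone and begin streaming audio
// ... speak; recognition callbacks arrive on the delegate ...
[micClient endMicAndRecognition];     // turn off the microphone and stop sending data
</code></pre>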

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:" title="createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:">+&nbsp;createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode.</p>

<p>In Short Phrase mode, the client receives one final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the server thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. It&rsquo;s a best practice for the application to rotate keys periodically. Between rotations, you disable the primary key, making the secondary key the default and giving you time to swap out the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key.  Intended to be used when the primary key has been disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to perform the callbacks/events during speech recognition.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the <code>startMicAndRecognition</code> method of this client. Once the microphone is turned on, data from the microphone is sent to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The recognition service returns only speech recognition results and does not perform intent detection. To terminate speech recognition and stop sending data to the service, call <code>endMicAndRecognition</code>.</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClient:withLanguage:withKey:withProtocol:withUrl:" title="createMicrophoneClient:withLanguage:withKey:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClient:withLanguage:withKey:withProtocol:withUrl:">+&nbsp;createMicrophoneClient:withLanguage:withKey:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> with Acoustic Model Adaptation that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode.</p>

<p>In Short Phrase mode, the client receives one final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the server thinks sentence pauses are.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should periodically renew your key to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other continues to work, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to perform the callbacks/events during speech recognition.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created using the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> with Acoustic Model Adaptation that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the <code>startMicAndRecognition</code> method of this client. Once the microphone is turned on, data from the microphone is sent to the speech recognition service. A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The recognition service returns only speech recognition results and does not perform intent detection. To terminate speech recognition and stop sending data to the service, call <code>endMicAndRecognition</code>.</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:" title="createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:">+&nbsp;createMicrophoneClient:withLanguage:withPrimaryKey:withSecondaryKey:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> with Acoustic Model Adaptation that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClient *)createMicrophoneClient:(SpeechRecognitionMode)<em>speechRecognitionMode</em> withLanguage:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>speechRecognitionMode</code></th>
						<td><p>The speech recognition mode.</p> <p>In Short Phrase mode, the client receives one final result containing multiple N-best choices. In Long-form Dictation mode, the client receives multiple final results, based on where the server detects sentence pauses.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. As a best practice, the application should rotate keys periodically. Between rotations, disable the primary key so that the secondary key becomes the default, giving you time to replace the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key.  Intended to be used when the primary key has been disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to receive callbacks/events during speech recognition.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created with the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a></p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClient.html">MicrophoneRecognitionClient</a> with Acoustic Model Adaptation that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is

turned on, data from the microphone is sent to the speech recognition service.

A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service.

The recognition service returns only speech recognition results and does not perform intent detection.

To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.</p>
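
<p>A minimal usage sketch, not taken from this reference: the keys and endpoint URL are placeholders, <code>self</code> is assumed to adopt <code>SpeechRecognitionProtocol</code>, and the <code>SpeechRecognitionMode_ShortPhrase</code> constant name is assumed from the <code>SpeechRecognitionMode</code> enumeration:</p>

<pre><code>// Sketch: create a microphone client bound to an adapted acoustic model endpoint.
// Keys and URL are placeholders; "self" must adopt SpeechRecognitionProtocol.
MicrophoneRecognitionClient *client =
    [SpeechRecognitionServiceFactory createMicrophoneClient:SpeechRecognitionMode_ShortPhrase
                                               withLanguage:@"en-us"
                                             withPrimaryKey:@"YOUR_PRIMARY_KEY"
                                           withSecondaryKey:@"YOUR_SECONDARY_KEY"
                                               withProtocol:self
                                                    withUrl:@"YOUR_ADAPTED_MODEL_ENDPOINT"];

// Turn on the microphone and begin streaming audio to the service.
[client startMicAndRecognition];

// ... later, stop recognition and stop sending data.
[client endMicAndRecognition];</code></pre>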

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:" title="createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:">+&nbsp;createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a> that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should renew your key periodically to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other continues to work, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that value here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to receive callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a MicrophoneRecognitionClientWithIntent that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service.

A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results.

To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.</p>




<p>The service returns structured intent results in JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>
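
<p>A minimal usage sketch, not taken from this reference: the subscription key and LUIS identifiers are placeholders, and <code>self</code> is assumed to adopt <code>SpeechRecognitionProtocol</code>:</p>

<pre><code>// Sketch: create an intent-enabled microphone client.
MicrophoneRecognitionClientWithIntent *intentClient =
    [SpeechRecognitionServiceFactory createMicrophoneClientWithIntent:@"en-us"
                                                              withKey:@"YOUR_SUBSCRIPTION_KEY"
                                                        withLUISAppID:@"YOUR_LUIS_APP_ID"
                                                       withLUISSecret:@"YOUR_LUIS_SUBSCRIPTION_ID"
                                                         withProtocol:self];

// Start streaming microphone audio; speech results and JSON intent
// results arrive through the SpeechRecognitionProtocol callbacks.
[intentClient startMicAndRecognition];</code></pre>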

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:" title="createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:">+&nbsp;createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a> that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. As a best practice, the application should rotate keys periodically. Between rotations, disable the primary key so that the secondary key becomes the default, giving you time to replace the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key.  Intended to be used when the primary key has been disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that value here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to receive callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a MicrophoneRecognitionClientWithIntent that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service.

A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results.

To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.</p>




<p>The service returns structured intent results in JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:" title="createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+&nbsp;createMicrophoneClientWithIntent:withPrimaryKey:withSecondaryKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a> with Acoustic Model Adaptation that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)<em>language</em> withPrimaryKey:(NSString *)<em>primaryKey</em> withSecondaryKey:(NSString *)<em>secondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryKey</code></th>
						<td><p>The primary key. As a best practice, the application should rotate keys periodically. Between rotations, disable the primary key so that the secondary key becomes the default, giving you time to replace the primary.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>secondaryKey</code></th>
						<td><p>The secondary key.  Intended to be used when the primary key has been disabled.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that value here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to receive callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created with the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service.

A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results.

To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.</p>




<p>The service returns structured intent results in JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div><div class="section-method">
	<a name="//api/name/createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:" title="createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:"></a>
	<h3 class="method-title"><code><a href="#//api/name/createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:">+&nbsp;createMicrophoneClientWithIntent:withKey:withLUISAppID:withLUISSecret:withProtocol:withUrl:</a></code>
</h3>

	<div class="method-info">
		<div class="pointy-thing"></div>

		<div class="method-info-container">
			
			
			<div class="method-subsection brief-description">
				<p>Creates a <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a> with Acoustic Model Adaptation that uses the microphone as the input source.</p>
			</div>
			
		    

			<div class="method-subsection method-declaration"><code>+ (MicrophoneRecognitionClientWithIntent *)createMicrophoneClientWithIntent:(NSString *)<em>language</em> withKey:(NSString *)<em>primaryOrSecondaryKey</em> withLUISAppID:(NSString *)<em>luisAppID</em> withLUISSecret:(NSString *)<em>luisSubscriptionID</em> withProtocol:(id&lt;SpeechRecognitionProtocol&gt;)<em>delegate</em> withUrl:(NSString *)<em>url</em></code></div>

		    
			
			<div class="method-subsection arguments-section parameters">
				<h4 class="method-subtitle parameter-title">Parameters</h4>
				<table class="argument-def parameter-def">
				
					<tr>
						<th scope="row" class="argument-name"><code>language</code></th>
						<td><p>The language of the speech being recognized. The supported languages are:</p>

<ul>
<li><p>en-us: American English</p></li>
<li><p>en-gb: British English</p></li>
<li><p>de-de: German</p></li>
<li><p>es-es: Spanish</p></li>
<li><p>fr-fr: French</p></li>
<li><p>it-it: Italian</p></li>
<li><p>zh-cn: Mandarin Chinese</p></li>
</ul>
</td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>primaryOrSecondaryKey</code></th>
						<td><p>The primary or the secondary key.</p>

<p>You should renew your key periodically to prevent unauthorized use of your subscription. The recommended approach is to acquire two keys, a primary and a secondary, and to rotate usage between them. While one key is disabled, the other continues to work, allowing your application to remain active while the disabled key is replaced.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisAppID</code></th>
						<td><p>Once you have configured the LUIS service to create and publish an intent model (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given an Application ID GUID. Use that GUID here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>luisSubscriptionID</code></th>
						<td><p>Once you create a LUIS account (see <a href="https://LUIS.ai">https://LUIS.ai</a>), you will be given a Subscription ID. Use that value here.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>delegate</code></th>
						<td><p>The protocol used to receive callbacks/events during speech recognition and intent detection.</p></td>
					</tr>
				
					<tr>
						<th scope="row" class="argument-name"><code>url</code></th>
						<td><p>The endpoint with an Acoustic Model that you created with the Acoustic Model Specialization Service.</p></td>
					</tr>
				
				</table>
			</div>
			

			
			<div class="method-subsection return">
				<h4 class="method-subtitle parameter-title">Return Value</h4>
				<p>The created <a href="../Classes/MicrophoneRecognitionClientWithIntent.html">MicrophoneRecognitionClientWithIntent</a>.</p>
			</div>
			

			

			
			<div class="method-subsection discussion-section">
				<h4 class="method-subtitle">Discussion</h4>
				<p>Creates a MicrophoneRecognitionClientWithIntent with Acoustic Model Adaptation that uses the microphone as the input source.</p>

<p>To initiate speech recognition, call the startMicAndRecognition method of this client. Once the microphone is turned on, data from the microphone is sent to the service.

A built-in Silence Detector is applied to the microphone data before it is sent to the recognition service. The service returns speech recognition results and structured intent results.

To terminate speech recognition and stop sending data to the service, call endMicAndRecognition.</p>




<p>The service returns structured intent results in JSON form (see <a href="https://LUIS.ai">https://LUIS.ai</a>).</p>

			</div>
			

			

			

			
			<div class="method-subsection declared-in-section">
				<h4 class="method-subtitle">Declared In</h4>
				<p><code class="declared-in-ref">SpeechRecognitionServiceFactory.mm</code></p>
			</div>
			
			
		</div>
	</div>
</div>
						</div>
						
					</div>
					
					

                    
				</main>

				<footer>
					<div class="footer-copyright">
						
						<p class="copyright">Copyright &copy; 2016 Microsoft. All rights reserved. Updated: 2016-03-21</p>
						
						
						<p class="generator">Generated by <a href="http://appledoc.gentlebytes.com">appledoc 2.2.1 (build 1333)</a>.</p>
						
					</div>
				</footer>
			</div>
		</div>
	</article>

	<script src="../js/script.js"></script>
</body>
</html>