Schema (each record below is shown as a labeled metadata line followed by the claim_one text):
doc_id: string, 7-11 characters (USPTO granted patent number or pre-grant publication number)
appl_id: string, 8 characters (application serial number)
flag_patent: int64, 0 or 1 (in this sample, 1 marks granted patents and 0 marks pre-grant publications)
claim_one: string, 13-18.3k characters (text of claim 1)
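The rows below can be loaded programmatically. As a minimal sketch only (this sample's actual dataset name, hosting, and file format are not stated here, so the parquet filename and the use of pandas are assumptions), loading and sanity-checking rows with the above schema might look like:

    import pandas as pd

    # Hypothetical filename; the actual source file of this sample is not given.
    df = pd.read_parquet("patent_claims.parquet")

    # The four columns described in the schema above.
    assert list(df.columns) == ["doc_id", "appl_id", "flag_patent", "claim_one"]

    # doc_id: 7-11 char string; appl_id: 8-char string;
    # flag_patent: int64 in {0, 1}; claim_one: claim text up to ~18.3k chars.
    print(df.dtypes)
    print(df["flag_patent"].value_counts())
    print(df["claim_one"].str.len().describe())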

doc_id: 9542612 | appl_id: 14994638 | flag_patent: 1
1. A computer-implemented method comprising: receiving a plurality of different images of a first scene, wherein each image has a different exposure level; generating a high dynamic range image using the plurality of images; detecting one or more features in each of one or more regions of the high dynamic range image; determining for each region of the high dynamic range image whether the region is a candidate text region potentially containing text based on the detected one or more features; and generating text by performing optical character recognition on a plurality of the regions determined to contain text.

doc_id: 20130339027 | appl_id: 13524351 | flag_patent: 0
1. A computer-implemented method of recognizing verbal commands, comprising: capturing at least one depth image by a depth camera positioned in a vehicle, each of the depth images covering at least part of a user and comprising pixels representing distances from the depth camera to the at least part of the user; recognizing a pose or gesture of the user based on the captured depth image; generating the gesture information based on the recognized pose or gesture; determining one or more devices among a plurality of devices that are likely to be targeted by the user for an operation based on the gesture information; selecting a plurality of verbal commands associated with the one or more devices determined as being targeted; receiving an audio signal including utterance by the user at a time when the user is taking the pose or the gesture; and determining a device command for operating the one or more devices by performing speech recognition on the audio signal using the selected plurality of verbal commands.

doc_id: 20060235688 | appl_id: 11105076 | flag_patent: 0
1. A method of repeating a computer recognized string in a telematics unit in a vehicle, comprising: receiving a user utterance at the telematics unit from a user, the user utterance including a plurality of words and a plurality of user pauses between the words; parsing the user utterance into a plurality of phonemes; forming a data string in which each user pause is associated with a phoneme adjacent to the user pause; and playing back the data string.

doc_id: 9269273 | appl_id: 13865996 | flag_patent: 1
1. A computer-implemented method for building an analysis database associating each of a plurality of n-grams with corresponding respective cognitive motivation orientations, comprising: receiving a training corpus of training documents in electronic form; wherein the receiving of the training corpus of training documents comprises scanning at least one training document using OCR technology, and thereby transforming the at least one training document into electronic form; each training document comprising a plurality of meaningfully arranged words; each training document having at least one annotated word sequence therein; wherein within each training document, each particular annotated word sequence is annotated with a corresponding word-sequence-level annotation identifying at least one cognitive motivation orientation that is associated with that particular annotated word sequence; for each training document: for each annotated word sequence in that particular training document: extracting n-grams overlapping that particular annotated word sequence; and associating each extracted n-gram with the at least one cognitive motivation orientation associated with that particular annotated word sequence; generating a set of indicator candidate n-grams wherein: each indicator candidate n-gram represents all instances of a particular n-gram in the training corpus for which at least one instance of that particular n-gram was extracted from any annotated word sequence in any training document; each indicator candidate n-gram being associated with every cognitive motivation orientation that is associated with at least one instance of the particular n-gram represented by that particular indicator candidate n-gram; applying at least one relevance filter to each indicator candidate n-grams in the set of indicator candidate n-grams to obtain a set of indicator n-grams, wherein: the set of indicator n-grams is a subset of the set of indicator candidate n-grams, so that each indicator n-gram corresponds to only one indicator candidate n-gram and thereby each indicator n-gram represents all instances of a corresponding particular n-gram in the training corpus for which at least one instance of that particular n-gram was extracted from any annotated word sequence in any training document; each indicator n-gram is associated with only a single cognitive motivation orientation; and each indicator n-gram has, as its associated single cognitive motivation orientation, that single cognitive motivation orientation with which the instances of the particular n-gram represented by that particular indicator n-gram are most frequently associated.

doc_id: 8208959 | appl_id: 12950531 | flag_patent: 1
1. An anti-blinding welding helmet having a wireless communication function, comprising: a light detecting device to measure the luminance of light based on a signal input from a photo sensor, in order to protect the welder's eyes from light generated from a welding or cutting torch; a control switch to input a user command; a main control device to set a reference value required to control the detection sensitivity of the light detecting device in response to the user command input from the control switch and also, to set a light transmittance and operation delay time of an anti-blinding plate, the main control device determining generation of welding light if the luminance of light detected by the light detecting device is a preset reference value or more and thus operating a light transmission control device; a light transmission control device to operate the anti-blinding plate according to the light transmittance set by the main control device so as to allow the anti-blinding plate to maintain a constant light transmittance value, a first receiving/transmitting unit to receive and convert an incoming call signal of a cellular phone and a user voice signal into a wireless signal and transmit the wireless signal to a second receiving/transmitting unit and also, to convert a wireless signal transmitted from the second receiving/transmitting unit into a digital signal and transmit the digital signal to the cellular phone to enable implementation of a voice call; the second receiving/transmitting unit to convert a wireless signal transmitted from the first receiving/transmitting unit into a digital signal and transmit the digital signal to the main control device and also, to convert a digital signal transmitted from the main control device into a wireless signal and transmit the wireless signal to the first receiving/transmitting unit; a voice output device to output a digital voice signal transmitted from the main control device so as to output a voice command that is previously input by a user; and a voice input device to input and transmit a user voice command to the main control device, wherein the main control device analyzes the digital signal transmitted from the second receiving/transmitting unit, and if it is determined that the digital signal is the incoming call signal, turns on a lamp provided inside the anti-blinding welding helmet and simultaneously, outputs a ringtone through the voice output device and also, transmits the digital voice signal, transmitted from the second receiving/transmitting unit, to the voice output device, wherein the main control device transmits the digital voice signal, converted from the user voice signal input through the voice input device, to the second receiving/transmitting unit to enable implementation of a voice call, wherein the voice output device includes: a voice information database to store the digital voice signal transmitted from the main control device and a voice command signal input by a user; an amplifier to amplify the digital voice signal or the ringtone transmitted from the main control device or the voice command signal output from the voice information database; and a speaker to output voice amplified by the amplifier to the outside so as to allow the user to hear the voice, and wherein the voice input device includes: a microphone to receive user voice, the microphone disposed in a position corresponding to the mouth of a user of the welding helmet; a filter to filter the voice input through the microphone; and a digital voice input signal processor to convert an analogue voice signal, input through the filter, into a digital signal.

doc_id: 20080249877 | appl_id: 12099343 | flag_patent: 0
1. A method of publishing, comprising: providing a publication site that includes first time-sensitive data from a first data provider and second time-sensitive data pushed from a second data provider; and the site differentially updating the first and second data according to a fee schedule.

doc_id: 20160026618 | appl_id: 14877272 | flag_patent: 0
1. A method comprising: inserting, via a discriminative classification approach, boundary tags into speech utterance text, the boundary tags identifying boundaries selected from a group comprising phrase boundaries, sentence boundaries, and paragraph boundaries, wherein the discriminative classification approach utilizes syntactic features before and after each word being tagged, to yield boundary marked speech utterance text and unedited text; identifying a coordinating conjunction within the unedited text based on a conjunction tag; and identifying clauses in the speech utterance text based on the boundary marked speech utterance text and the coordinating conjunction.

doc_id: 20150221320 | appl_id: 14640912 | flag_patent: 0
1. A method comprising: in one or more computer processes functioning in at least one computer processor: processing an input speech utterance to produce a sequence of representative speech vectors; and performing a single time-synchronous speech recognition pass using a decoding search to determine a recognition output corresponding to the speech input, the decoding search including: i. for each speech vector before some first threshold number of speech vectors, estimating a feature transform based on a conventional feature normalization transform, ii. for each speech vector after the first threshold number of speech vectors, estimating the feature transform based on the preceding speech vectors in the utterance and partial decoding results of the decoding search, iii. adjusting a current speech vector based on the feature transform, and iv. using the adjusted current speech vector in a current frame of the decoding search; wherein for each speech vector after the first threshold number of speech vectors and before a second threshold number of speech vectors, the feature transform is interpolated between the transform based on the conventional feature normalization and the transform based on the preceding speech vectors in the utterance and the partial decoding results of the decoding search.

doc_id: 7778825 | appl_id: 11485690 | flag_patent: 1
1. A method for extracting voiced/unvoiced classification information using a harmonic component of a voice signal, the method comprising the steps of: converting, by a frequency domain conversion unit, an input voice signal into a voice signal of a frequency domain; calculating, by a harmonic-residual signal calculation unit, a harmonic signal and a residual signal other than the harmonic signal from the converted voice signal; calculating, by a Harmonic to Residual Ratio (HRR) calculation unit, HRR using a calculation result of the harmonic signal and residual signal; and classifying, by a voiced/unvoiced classification unit, voiced/unvoiced sounds by comparing the HRR with a threshold value, wherein calculating the HRR comprises obtaining a harmonic energy using the calculated harmonic signal and the residual signal, calculating a residual energy by subtracting the harmonic energy from an entire energy of the voice signal, and calculating a ratio of the calculated harmonic energy to the calculated residual energy.

doc_id: 20080133230 | appl_id: 11775450 | flag_patent: 0
1. A vehicle navigation system that generates a text message comprising: an input configured to receive a speech signal corresponding to the text message; a speech recognizer in communication with the input and configured to analyze the speech signal and recognize text of the speech signal; a speech database in communication with the speech recognizer that provides the speech recognizer with samples of recognized digital speech for recognizing the text of the speech signal; a message generator in communication with the speech recognizer that generates the text message based on the text of the speech signal recognized by the speech recognizer; and a transmitter in communication with the message generator and configured to transmit the text message over a network.

doc_id: 20140126714 | appl_id: 13669384 | flag_patent: 0
1. A method for connecting a website user to a contact center agent, the method comprising: monitoring, by a processor, user interaction associated with the website user; receiving, by the processor, a call request via the website; identifying, by the processor, an agent or an interactive voice response based on the monitored user interaction; and establishing, by the processor, a WebRTC communication channel supported by a media engine implemented in a web browser, the media engine controlling a microphone and a speaker of a computing device of the website user, the WebRTC communication channel being established between the website user and the identified agent or the interactive voice response.

doc_id: 8051061 | appl_id: 12033308 | flag_patent: 1
1. A method for query suggestion performed by a processor executing computer-executable instructions stored on a memory device, the method comprising: for an input query in source language, identifying a query in target language from a query log of a search engine, the query in target language and the input query in source language having a cross-lingual similarity, the identifying the query in target language from the query log comprising: providing a plurality of candidate queries in target language; evaluating the plurality of candidate queries in target language at least partly by deducing a monolingual similarity between the input query in source language and a translation of a respective candidate query from target language to source language; and ranking the plurality of candidate queries in target language using a cross-lingual query similarity score, the cross-lingual query similarity score being based on a plurality of features and a weight of each feature in calculating the cross-lingual query similarity score; and suggesting the query in target language as a cross-lingual query at least partly based on click-through information of documents selected by users for the query in target language.

doc_id: 20150039305 | appl_id: 14450366 | flag_patent: 0
1. A controller for a voice-controlled device, comprising: a setting module, configured to generate a threshold according to an environmental parameter associated with an environment that the voice-controlled device is disposed in; and a recognition module, configured to receive a speech, to perform speech recognition on the speech to generate a confidence score of speech recognition, and to compare the confidence score of speech recognition with the threshold to generate a control signal.

doc_id: 8751562 | appl_id: 12429794 | flag_patent: 1
1. A system configured to pre-render an audio representation of textual content for subsequent playback, the system comprising: a requesting device comprising: a memory configured to store a computer program; and a processor configured to execute the computer program, wherein the computer program comprises: a download unit configured to download first textual content of a content type from a remote source server across a computer network; a signature generating unit configured to locally generate a first signature from the downloaded first textual content, wherein the first signature identifies the first textual content; a signature comparing unit configured to locally compare the first signature with a second signature identifying a previously downloaded second textual content of the same content type to determine whether the second textual content differs from the first textual content; a text to speech conversion unit configured to convert the first textual content to speech only when the signature comparing unit determines that the second textual content differs from the first textual content; and wherein, when resources of the requesting device are limited, the requesting device is configured to transfer the speech to the remote source server and remove the speech from itself.

doc_id: 20140376752 | appl_id: 14289865 | flag_patent: 0
1. A ribbon microphone comprising: a ribbon microphone unit; an acoustic box for mounting a rear acoustic terminal of the ribbon microphone unit; a detective microphone mounted in the acoustic box, the detective microphone detecting sound waves identical to sound waves guided to the rear acoustic terminal of the ribbon microphone unit; a speaker comprising a diaphragm, the speaker being assembled in the acoustic box and varying the pressure in the acoustic box in response to the driven diaphragm; and a drive unit for driving the speaker so as to cancel a variation in pressure in the acoustic box in response to signals detected by the detective microphone, the variation being caused by sound waves guided to the rear acoustic terminal.

doc_id: 20100185440 | appl_id: 12691283 | flag_patent: 0
1. A transcoding method, comprising: receiving an input bit stream from a sending end; determining an attribute of discontinuous transmission (DTX) used by a receiving end and a frame type of the input bit stream; and transcoding the input bit stream in a corresponding processing manner according to a determination result.

doc_id: 20090219166 | appl_id: 12040306 | flag_patent: 0
1. A method of event notification comprising: receiving an indication of an occurrence of an event at a handheld communications device, the handheld communications device comprising a display device, the event having a notification definition associated therewith for providing a visual notification of the occurrence of the event, the notification definition comprising a content parameter specifying a scope of content of the visual notification, and an action parameter specifying action to be taken on the communications device after the visual notification is initiated; and providing the visual notification of the occurrence on the display device in accordance with the associated notification definition, the visual notification providing particulars of the event.

doc_id: 20120072859 | appl_id: 13301982 | flag_patent: 0
1. A method for facilitating accurate review of a document by manipulating a scanned image of the document and indicating to the reader portions of the document which have been already reviewed in a previous or master document.

doc_id: 9031897 | appl_id: 13429041 | flag_patent: 1
1. A method for use with a first classification model that classifies an input into one of a plurality of classes, wherein the first classification model was built using labeled training data, wherein the labeled training data comprises a plurality of items of labeled training data, wherein each of the plurality of items of labeled training data is labeled with one of the plurality of classes, the method comprising acts of: obtaining unlabeled input for the first classification model; building a similarity model that represents similarities between the unlabeled input and the labeled training data; and using a programmed processor and the similarity model to evaluate the labeled training data to identify a subset of the plurality of items of labeled training data that is more similar to the unlabeled input than a remainder of the labeled training data.

doc_id: 20080195610 | appl_id: 11672736 | flag_patent: 0
1. An adaptive query handling method comprising: receiving an initial query in a database driven application; parsing the initial query to identify a query expression key; matching the query expression key to an adaptive query expression; transforming the adaptive query expression to a final query expression through a replacement of annotations in the adaptive query expression with static expressions conforming to a query language for the final query expression; and, applying the final query expression to a database subsystem for the database driven application.

doc_id: 20030081115 | appl_id: 08598457 | flag_patent: 0
1. A spatial sound conference system comprising: a conference station comprising: right and left spatially disposed microphones connected to a communication channel for receiving right and left audio signals, wherein the differences between the right and left audio signals represent a head-related transfer function; and a remote station comprising: right and left spatially disposed loudspeakers connected to the communication channel.

doc_id: 20100144439 | appl_id: 12328776 | flag_patent: 0
1. A method for providing out-of-band voice communication with external services during gameplay on a game console, the method comprising: detecting at the game console a signal from a user indicating the user wishes to issue a voice command; upon detecting the signal, switching via a switching mechanism in the game console a connection of an audio input/output (“I/O”) device connected to the game console from an in-band game communication channel to an out-of-band communication channel established with a voice recognition engine; and forwarding the voice command from the audio I/O device to the voice recognition engine via the out-of-band communication channel.

doc_id: 20160328387 | appl_id: 14703018 | flag_patent: 0
1. An artificial intelligence system comprising: a storage device comprising a terminology database that stores (i) a plurality of terms utilized in a previous communication by a human user requesting a product and/or a service in a first spoken language, (ii) a plurality of responses in a second spoken language to the communication, and (iii) a plurality of outcomes that indicate accuracy of a correspondence between the plurality of responses in the second spoken language and the plurality of terms in the first spoken language, the second spoken language being distinct from the first spoken language; and a processor that (i) learns to generate responses associated with corresponding terms in a request based upon a statistical probability analysis of the plurality of outcomes from the terminology database, (ii) receives a request for a product and/or service in the first spoken language in a current communication, and (iii) generates a message having a response that is associated with a term present in the request based upon the statistical probability analysis.

doc_id: 20130320081 | appl_id: 13904951 | flag_patent: 0
1. A method for controlling a payment card comprising a magnetic stripe emulator, a first input region, and a second input region, the method comprising: establishing a wireless connection with the payment card; transmitting a first magnetic sequence command to the payment card over the wireless connection, the first magnetic sequence command associated with a first payment method assigned to the first input region; transmitting a second magnetic sequence command to the payment card over the wireless connection, the second magnetic sequence command associated with a second payment method assigned to the second input region; and on a digital display, displaying virtual representations of the first input region and the second input region on the payment card, a visual identifier of the first payment method displayed proximal the virtual representation of the first input region, and a visual identifier of the second payment method displayed proximal the virtual representation of the second input region.

doc_id: 7881534 | appl_id: 11455874 | flag_patent: 1
1. A method for using prior corrections of a user to improve recognition operations comprising: receiving a handwritten input from a user; performing a recognition operation to determine a top recognized word; analyzing a history of prior corrections by the user to calculate a ratio comprising a forward quantity of corrections from the top recognized word to a particular word over a backward quantity of corrections from the particular word to the top recognized word; and if the ratio meets or exceeds a desired minimum, then swapping the particular word for the top recognized word and displaying the particular word on a display device as a recognition result.

doc_id: 20150170044 | appl_id: 14105874 | flag_patent: 0
1. A pattern based audio searching method, comprising: labeling a plurality of source audio data based on patterns to obtain audio label sequences of the source audio data; obtaining, with a processing device, an audio label sequence of target audio data; determining matching degree between the target audio data and the source audio data according to a predetermined matching rule based on the audio label sequence of the target audio data and the audio label sequences of the source audio data; and outputting source audio data having matching degree higher than a predetermined matching threshold as a search result.

doc_id: 4853952 | appl_id: 07128254 | flag_patent: 1
1. Apparatus for storage and retrieval of voice signals, comprising: (a) a plurality of first input means for input of said voice signals; (b) a plurality of second input means for input of input control signals associated with said voice signals, said input control signals including addressee identification signals and message description signals; (c) storage means for storing said voice signals for later retrieval and output; (d) station means, identified by a particular one of said addressee identification signals, said station means further comprising: (d1) output means for output of said stored voice signals; (d2) display means for display of text messages; and (d3) generating means for generating voice signal retrieval signals; (e) control means for: (e1) responding to said input control signals to control said storage means to store said associated voice signals; (e2) outputting said text messages, said text messages corresponding to said stored voice signals, to said station means for display when said addressee identification signals associated with said stored voice signals identify said station means, said corresponding text messages including information in accordance with said message description signals; and (e3) responding to said voice signal retrieval signals from said station means to control said storage means to output said associated voice signals to said station means for output; and, (f) transmission means for: (f1) transmitting said voice signals from said first input means to said storage means, and said stored voice signals from said storage means to said station means; (f2) transmitting said input control signals from said second input means to said control means; (f3) transmitting said text messages from said control means to said station means; and (f4) transmitting said voice signal retrieval signals from said station means to said control means.

doc_id: 6098041 | appl_id: 08841608 | flag_patent: 1
1. A speech synthesis system comprising: a schedule managing server comprising: schedule data base for storing schedule information of a plurality of users, schedule retrieving means for retrieving from said schedule data base schedule information meeting predetermined condition, and schedule sending means for sending the retrieved schedule information, and a voice synthesizing server comprising: text receiving means for receiving schedule information from said schedule managing server, waveform generating means for generating voice waveforms corresponding to schedule information received by said text receiving means, and waveform sending means for sending said voice waveforms to either of said client or said schedule managing server; a client comprising: schedule/waveform receiving means for receiving said schedule information and said voice waveforms corresponding to said schedule information, and voice output means for vocally outputting said voice waveforms received by said schedule/waveform receiving means, wherein pronunciation symbols are generated based on the schedule information, and acoustic parameters are generated based on the pronunciation symbols.

doc_id: 4651289 | appl_id: 06460623 | flag_patent: 1
1. A pattern recognition apparatus for identifying the category of an input pattern from various categories of value patterns stored in the apparatus comprising: a vector generating means for converting an input pattern signal to an input vector representing the characteristics of the input pattern; a dictionary means for storing a plurality of reference vectors for each of said various categories, including a first memory means for storing a plurality of predetermined first reference vectors for each of said categories, said first reference vectors representative of features common to said categories, and a second memory means for storing a plurality of subsequently determined second reference vectors for each of said categories, said second reference vectors representative of features particular to said categories, said second reference vectors being mutually exclusive of said first reference vectors; a reference vector generating means for generating said second reference vectors from the input vector and for storing them in said second memory means, said reference vector generating means generating said second reference vectors by subtracting from the input vector a vector having components corresponding to the angles between the input vector and each of said first reference vectors; a similarity calculating means for calculating the similarities between said input vector and reference vectors stored in said dictionary means for each of said categories; and a comparing means for comparing the similarities calculated for each of said categories and for identifying the category of the input pattern.

doc_id: 9454963 | appl_id: 13799962 | flag_patent: 1
1. A text-to-speech method for use for simulating a plurality of different voice characteristics, said method comprising: inputting text; dividing said inputted text into a sequence of acoustic units; selecting voice characteristics for the inputted text; converting said sequence of acoustic units to a sequence of speech vectors using an acoustic model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to a speech vector; and outputting said sequence of speech vectors as audio with said selected voice characteristics, wherein a parameter of a predetermined type of each probability distribution in said selected voice characteristics is expressed as a weighted sum of parameters of the same type, and wherein the weighting used is voice characteristic dependent, such that converting said sequence of acoustic units to a sequence of speech vectors comprises retrieving the voice characteristic dependent weights for said selected voice characteristics, wherein the parameters are provided in clusters, and each cluster comprises at least one sub-cluster, wherein said voice characteristic dependent weights are retrieved for each cluster such that there is one weight per sub-cluster.

doc_id: 20150317383 | appl_id: 14267184 | flag_patent: 0
1. A method, in a data processing system comprising a processor and a memory, for performing an operation based on an identification of similar lines of questioning by input question sources, the method comprising: obtaining, by the data processing system, question information identifying extracted features of an input question and a first source of the input question; performing, by the data processing system, a clustering operation to cluster the input question with one or more other questions of a cluster based on a similarity of the extracted features of the input question to features of the one or more other questions; and performing, by the data processing system, an operation based on results of the clustering of the input question with the one or more other questions, wherein the operation facilitates at least one of a collaboration between the first source of the input question and a second source of another question in the cluster, a communication between the first source and the second source, or a reporting of the results of the clustering operation to either the first source, the second source, or a third party.

doc_id: 7475084 | appl_id: 11134725 | flag_patent: 1
1. A method for data query comprising: providing a data schema having a data schema query language associated therewith; providing an ontology model including classes and properties, the ontology model having an ontology query language associated therewith, wherein constructs of the data schema are mapped to corresponding classes, properties or compositions of properties of the ontology model; providing a query expressed in the ontology language; generating a query expressed in the data schema query language corresponding to the query expressed in the ontology query language; providing an additional data schema having an additional data schema query language associated therewith, wherein constructs of the additional data schema are also mapped to corresponding classes, properties or compositions of properties of the ontology model; and generating a query expressed in the additional data schema query language corresponding to the query expressed in the ontology query language.

doc_id: 8266164 | appl_id: 12329804 | flag_patent: 1
1. A computer-implemented method for bridging terminology differences between at least two subject areas, comprising executing the following steps on a computer: obtaining a first corpus associated with a first subject area and a second corpus associated with a second subject area; for each obtained corpus, performing the steps of: computing a glossary; computing an affinity matrix between pairs of terms in the glossary and assigning scores according to a similarity measure; and computing a transitive closure of the affinity matrix and assigning a score for a pair of terms in the transitive closure of the affinity matrix using a composite path probability; computing a set of bridge terms by intersecting the respective glossaries of the first corpus and the second corpus; and computing a synonym dictionary as a set of triples S(f, t, w) where f is a term in the glossary of the first corpus, t is a term in the glossary of the second corpus, and there exists a term b in the set of bridge terms such that a term triple (f, b, t) is in a join of the transitive closure of the affinity matrix of the first corpus, the set of bridge terms, and the transitive closure of the affinity matrix of the second corpus, and where w is the composite path probability of (f, b, t), wherein the obtaining and computing steps are performed by a computer processor.

doc_id: 7479011 | appl_id: 11468081 | flag_patent: 1
1. A system for administering an assessment of literacy and linguistic abilities in a student comprising: means for audibly projecting a word to a student, the word comprising a plurality of phonemes, each phoneme corresponding to a sound and to at least one letter; means for asking the student to provide an attempt to spell the word based upon the sounds in the word, the attempt comprising at least one letter; means for receiving the student attempt; and means for scoring the student attempt, wherein at least partial credit is given for a student attempt that is incorrect but that includes at least one acceptable letter-phoneme correspondence.

doc_id: 20170263142 | appl_id: 15063609 | flag_patent: 0
1. An anti-cheating system for online examination, comprising: a computer having a computer-monitor to display a first part of an exam question; an eye glasses having an eye-monitor to display a second part of said exam question, an eye-recognition device to continuously monitor and obtain the iris-pattern of a wearer of said eye glasses, and said eye glasses having means to connect to said computer; a computer program to compare the iris-pattern of said wearer with a pre-stored-iris-pattern of an examinee, to display said second part of said exam question on said eye monitor if said wearer is identified as the examinee by comparing iris patterns, otherwise do not show said second part, and to receive an answer for an exam question from said wearer, whereby the examination process is stopped as soon as the eye-recognition device does not receive the correct iris pattern from the wearer.

doc_id: 7689974 | appl_id: 12339470 | flag_patent: 1
1. A method of monitoring execution behavior of a program product, the method comprising: (a) providing for the program product a trace tool producing a trace tool output having human-readable trace strings written in a human language, the trace strings including human-readable data fields for recording diagnostic information related to executable instructions of the program product; (b) providing a database storing identifiers of the trace tool, the trace strings, the data fields, and components of the diagnostic information in a binary language and the human language, wherein the database is used to cross-reference the identifiers and the components of the diagnostic information in the binary and human languages for translating contents of the trace tool output from one language into another; (c) using the database, encoding the contents of the trace tool output by: (i) converting the human-readable trace strings and at least portions of the human-readable data fields of the trace tool output into the corresponding binary identifiers stored in the database; and (ii) adapting the trace tool for storing the diagnostic information in the human-readable data fields using the binary identifiers, to produce a binary trace-tool output; (d) monitoring execution of the instructions of the program product using the adapted trace tool; (e) producing an encoded trace report containing the binary trace-tool output having the diagnostic information in the form of trace strings and data fields replaced with their respective binary identifiers; and (f) decoding the encoded trace report into the human language wherein the corresponding human-readable trace strings of the encoded trace report are found in the database based on the binary identifiers and the binary identifiers are converted into the human-readable trace strings.

doc_id: 9703769 | appl_id: 14877272 | flag_patent: 1
1. A method comprising: inserting, via a discriminative classification approach, boundary tags into speech utterance text, the boundary tags identifying boundaries selected from a group comprising phrase boundaries, sentence boundaries, and paragraph boundaries, wherein the discriminative classification approach utilizes syntactic features before and after each word being tagged, to yield boundary marked speech utterance text and unedited text; identifying, via a processor, a coordinating conjunction within the unedited text based on a conjunction tag, wherein the conjunction tag comprises conjunction span information indicating how many words to the left of the conjunction tag a corresponding conjunction includes; and identifying clauses in the speech utterance text based on the boundary marked speech utterance text and the coordinating conjunction.

doc_id: 10114895 | appl_id: 14455776 | flag_patent: 1
1. A method for enhancing search queries related to streaming multimedia, the method comprising: at a server with one or more processors and memory storing programs configured for execution by the one or more processors: receiving a search query from a first user device entered by a user during a time window; accessing a repository of streaming multimedia related information to determine one or more streaming multimedia programs available to the first user device for watching during the time window; identifying a first set of categories associated with the received search query; for each streaming multimedia program of the one or more streaming multimedia programs, identifying a respective second set of categories associated with the respective streaming multimedia program; determining that a first streaming multimedia program of the one or more streaming multimedia programs is being displayed on a second user device in proximity to the first user device by comparing the first set of categories to each second set of categories; determining one or more additional search terms that are relevant to the determined first streaming multimedia program and the received search query; modifying the received search query by adding the one or more additional search terms to the received search query; identifying search results corresponding to the modified search query; and causing the first user device to display the search results.

doc_id: 5388164 | appl_id: 07932414 | flag_patent: 1
1. A method for determining an agglutination reaction from a particle pattern formed on an inclined bottom surface of a reaction vessel comprising the steps of: scanning photoelectrically the inclined bottom surface to derive an image signal which represents a two-dimensional image of the particle pattern; processing the image signal into area light intensities by separating the inclined bottom surface into a plurality of areas due to different contours of the inclined bottom surface including decomposing the image signal into a series of concentric rings, each ring representing a different contour area of the image and integrating light intensities in each area to derive the area light intensities; determining an average intensity of each ring; inputting the area light intensities into a neural network to produce output signals; and determining an agglutination reaction based on the output signals.

doc_id: 20040015365 | appl_id: 10619716 | flag_patent: 0
1. A method of speech recognition based interactive information retrieval for ascertaining and retrieving a target information of a user by determining a retrieval key entered by the user using a speech recognition processing, comprising the steps of: (a) storing retrieval key candidates that constitute a number of data that cannot be processed by the speech recognition processing in a prescribed processing time, as recognition target words in a speech recognition database, the recognition target words being divided into prioritized recognition target words that constitute a number of data that can be processed by the speech recognition processing in the prescribed processing time and that have relatively higher importance levels based on statistical information defined for the recognition target words, and non-prioritized recognition target words other than the prioritized recognition target words; (b) requesting the user by a speech dialogue with the user to enter a speech input indicating the retrieval key, and carrying out the speech recognition processing for the speech input with respect to the prioritized recognition target words to obtain a recognition result; (c) carrying out a confirmation process using a speech dialogue with the user according to the recognition result to determine the retrieval key, when the recognition result satisfies a prescribed condition for judging that the retrieval key can be determined only by a confirmation process with the user; (d) carrying out a related information query using a speech dialogue with the user to request the user to enter another speech input indicating a related information of the retrieval key, when the recognition result does not satisfy the prescribed condition; (e) carrying out the speech recognition processing for the another speech input to obtain another recognition result, and adjusting the recognition result according to the another recognition result to obtain adjusted recognition result; and (f) repeating the step (c) or the steps (d) and (e) using the adjusted recognition result in place of the recognition result, until the retrieval key is determined.

doc_id: 20150186360 | appl_id: 14544311 | flag_patent: 0
1. A language system for use with one or more user devices and comprising: an image library; one or more audio libraries for each language, each such audio language library being coupled to the image library when such language is chosen; and wherein the user can compose words, phrases and/or sentences by selecting one or more images from the image library, in response to which, the language system can deliver an audio representation of such words, phrases or sentences in a chosen language.

doc_id: 20100179964 | appl_id: 12655836 | flag_patent: 0
1. A computer-implemented user interface and system for a two-stage search wherein in the first stage, the target keyphrase the user intends to search for is constructed using one or more searches for keyword(s) or sub keyphrase(s) that comprise the target keyphrase using a first keyword index and in the second stage a search for the target keyphrase is performed among the search domain's documents using a second document index and optionally a search for the target keyphrase is performed among advertisements using a third advertisement index; the system comprising a user device system from which the user performs the search, a search server system which performs the search for keywords and sub keyphrases for the text entered by the user using the first keyword index and performs the search for documents using the target keyphrase in the second document index, and optionally an advertisement server system that performs the search for advertisements corresponding to the target keyphrase in a third advertisement index; wherein in the first stage the search server system returns keywords and sub keyphrases as results in an incremental character-based manner where matching keyword and sub keyphrase results are displayed to the user in response to every new character entered by the user while constructing the target keyphrase.

doc_id: 8065286 | appl_id: 11336928 | flag_patent: 1
1. A method of performing a search, comprising: receiving a query from a query source; selectively including a keyword among a plurality of keywords based on a predetermined frequency of the keyword in a corpus; registering, by a searcher, for the keyword; selecting the searcher based on a ranking of the searcher among searchers registered for the keyword when determining that the keyword is a highest ranking keyword of the query; selecting the searcher based on a generalist ranking of the searcher when determining that the query does not indicate a keyword registered by at least one searcher; conducting the search by the searcher using a resource ranked highest by the searcher for conducting searches; presenting search results to the searcher from a plurality of resources selected based on the searcher, the searcher reviewing the search results from the resource ranked highest and the plurality of resources and selecting a result considered optimal for the query; and supplying the result to the query source subsequent to said reviewing by the searcher.

doc_id: 8700658 | appl_id: 13596539 | flag_patent: 1
1. A system comprising: a database configured to store a plurality of meta models, each meta model being associated with a domain and including one or more two-dimensional data structures, each with a plurality of columns, each column being associated with a title and the title being associated with one or more definitions or aliases, the two-dimensional data structures being populated with data derived from and correlated to a data schema and organized within the two-dimensional data structures according to at least one of the title, definition, and alias; a processor configured to execute a set of instructions stored on a memory; the memory configured to store the set of instructions, which when executed by the processor cause the processor to: receive a query from a user via a user interface, the query containing one or more keywords and a domain; decode the query to determine a query type and compare the received domain and keywords to at least one of the titles, definitions, and aliases associated with the respective meta models and, when a match is found, save a record of the match; use the query type, domain, keywords, and records to obtain a related data stream for each matching meta model; generate a new meta model comprising a collection of attributes obtained from the matching meta models using the query type, domain, keywords, records, and related data stream for each matching meta model, wherein the new meta model is derived from attribute relationships in one or more of the matching meta models and corresponding fields in data schemas of data stores maintained by one or more remote systems, which data stores are configured to be accessed by the processor via corresponding data access interfaces, wherein relationships between the new meta model and the matching meta models are specified by relationship structures stored in the database; select one or more query results from the new meta model using the related data stream; provide the selected query results to the user interface; and the user interface configured to receive the query from the user and display query results to the user.

doc_id: 8780077 | appl_id: 13222483 | flag_patent: 1
1. A user interface presented by a computing device having a processor, a memory, and a handwriting input device, the user interface comprising: an input window associated with the handwriting input device and configured to one or more of receive a handwriting input and toggle a handwriting input panel for receipt of a handwriting input; a candidate window that includes a recognition display state and a prediction display state, the candidate window in the recognition display state presenting one or more recognition candidates and one or more combination candidates that include a first recognition candidate from the one or more recognition candidates followed by a prediction candidate, the prediction candidate being determined based on the first recognition candidate and one or more of a user's input history and a lexicon of phrases, and the candidate window in the prediction display state presenting one or more second prediction candidates associated with a selected one of the recognition candidates and combination candidates; and an edit field that displays a text string that includes one or more of the recognition candidates, the prediction candidates, and second prediction candidates.

doc_id: 20100145684 | appl_id: 12456012 | flag_patent: 0
1. A method of processing a narrowband speech signal comprising speech samples in a first range of frequencies, the method comprising: generating from the narrowband speech signal a highband speech signal in a second range of frequencies above the first range of frequencies; determining a pitch of the highband speech signal; using the pitch to generate a pitch-dependent tonality measure from samples of the highband speech signal; and filtering the speech samples using a gain factor derived from the tonality measure and selected to reduce the amplitude of harmonics in the highband speech signal.

doc_id: 20010029454 | appl_id: 09821671 | flag_patent: 0
1. A speech synthesizing method comprising: the division step of acquiring partial speech segments by dividing a speech segment in a predetermined unit with a phoneme boundary; the estimation step of estimating a power value of each partial speech segment obtained in the division step on the basis of a target power value; the changing step of changing the power value of each of the partial speech segments on the basis of the power value estimated in the estimation step; and the generating step of generating synthesized speech by using the partial speech segments changed in the changing step.

doc_id: 8583438 | appl_id: 11903020 | flag_patent: 1
1. At least one computer storage medium having computer-executable instructions that, when executed by a computer, cause the computer to perform a method comprising: building, based on text, a lattice comprising speech units, wherein each speech unit in the lattice is obtained from a database comprising a plurality of candidate speech units; finding, by the computer in the lattice, a sequence of speech units that conforms to the text; pruning, by the computer from the sequence of speech units, any of the speech units in the sequence that, based on likelihood ratios and a prosody model that was trained using actual speech, are detected to have unnatural prosody, where the prosody model exhibits a bias toward detecting unnatural prosody; iterating, by the computer, the finding and the pruning until completion that is based on a condition selected from a group of conditions comprising: 1) every speech unit in the sequence corresponding to natural prosody, and 2) iterating a maximum number of iterations.

doc_id: 10031523 | appl_id: 15393137 | flag_patent: 1
1. An autonomous vehicle, comprising: a vehicle interior for receiving one or more occupants; a plurality of sensors to collect vehicle-related information, occupant-related information, and exterior environmental and object information associated with the vehicle; an automatic vehicle location system to determine a current spatial location of the vehicle; a computer readable medium to store selected information; an arithmetic logic unit that performs mathematical operations; a data bus that, at the request of the arithmetic logic unit, sends data to or receives data from the computer readable medium; an address bus that, at the request of the arithmetic logic unit, sends an address to the computer readable medium; a read and write line that, at the request of the arithmetic logic unit, commands the computer readable medium whether to set or retrieve a location corresponding to the address; one or more registers to latch a value on the data bus or output by the arithmetic logic unit; and one or more buffers, wherein the arithmetic logic unit is coupled to the plurality of sensors, automatic vehicle location system, and computer readable medium, and: determines a current spatial location of the vehicle, receives current vehicle-related information, current occupant-related information, and exterior environmental and object information, generates, from the exterior environmental and object information, a three-dimensional map comprising exterior animate objects in spatial proximity to the vehicle, the exterior animate objects comprising a selected exterior animate object, models from the three-dimensional map a first predicted behavior of the selected exterior animate object, receives a different second predicted behavior of the selected exterior animate object generated by another vehicle, and based on the three-dimensional map and the first and second predicted behaviors of the selected exterior animate object, issues a command to a vehicle component to perform a vehicle driving operation, wherein the first and second predicted behaviors, each executed alone by the arithmetic logic unit, cause the arithmetic logic unit to produce different commands.

doc_id: 9064489 | appl_id: 13720900 | flag_patent: 1
1. A system comprising: one or more processors; a computer-readable memory; and a module comprising executable instructions stored in the computer-readable memory, the module, when executed by the one or more processors, configured to: obtain a voice recording and a corresponding sequence of speech units; select a first speech segment, wherein the first speech segment corresponds to a portion of the voice recording and wherein the first speech segment corresponds to a first speech unit; apply a first compression technique to the first speech segment to create a first compressed speech segment, wherein the first compression technique comprises one of time domain compression or perceptual compression; apply a second compression technique to the first compressed speech segment to create a second compressed speech segment, wherein the second compression technique comprises one of time domain compression or perceptual compression, and wherein the second compression technique is different from the first compression technique; distribute the second compressed speech segment to a client computing device for use in a text-to-speech system.

doc_id: 20030115063 | appl_id: 10291710 | flag_patent: 0
1. A voice control method for controlling a voice produced by a character appearing in a computer game, the method comprising: a conversion step for converting a voice that is externally input or provided in advance, based upon attribute information on the character; and an output step for outputting the converted voice as voice of the character.

doc_id: 20080065379 | appl_id: 11983494 | flag_patent: 0
1. A printer circuit comprising: a speech recognizer comprising: a first input that receives a remotely generated voice message; a second input that receives speech patterns and vocabulary words from a speech bank; logic for comparing the remotely generated voice message to the speech patterns and vocabulary words from the speech bank and translating the remotely generated voice message into text data, wherein the speech recognizer is operable to have its operation corrected based on corrective input from a user; and printing logic that causes the text data to be printed.

doc_id: 20030018469 | appl_id: 09909530 | flag_patent: 0
1. A method of generating a sentence from a semantic representation, the method comprising: (A) mapping the semantic representation to an unordered set of syntactic nodes; (B) using grammar rules from a generation grammar and statistical goodness measure values from a corresponding analysis grammar to create a tree structure to order the syntactic nodes; and (C) generating the sentence from the tree structure.
7932895
11137093
1
1. A method comprising: recording with a gesture input device a gesture, wherein the gesture is defined by a moving point having a trajectory broken into partial gestures which are contiguous touch gestures, and where waypoints separate the partial gestures and the trajectory of the moving point is uninterrupted from a starting point to an end point; and detecting, within the gesture, each one of a sequence of the partial gestures that form the gesture; and performing a same first command on detecting each partial gesture, where the gesture input device comprises part of an electronic device that comprises a display, and where the first command is at least one of a command to perform a display scroll up function or scroll down function, and a command to perform a display page back function or page forward function, wherein the method further comprises: a) detecting a potential waypoint separating partial gestures; b) testing to confirm that the potential waypoint is a waypoint separating partial gestures; c) if the test is positive, performing the first command; and repeating steps a), b), and c) until the gesture ends.
20140379347
13926659
0
1. An electronic system configured to perform speech processing comprising: an audio detection system configured to receive a signal including speech; a memory having stored therein a database of keyword models forming an ensemble of filters associated with each keyword in the database; a processor configured to: i) receive the signal including speech from the audio detection system; ii) decompose the signal including speech into a sparse set of phonetic impulses; iii) access the database of keywords and convolve the sparse set of phonetic impulses with the ensemble of filters; iv) identify keywords within the signal including speech based on iii); and v) control operation of the electronic system based on the keywords identified in iv).
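The convolution-based keyword spotting in steps ii)-iv) can be sketched with toy data. The phone inventory, box-filter ensemble, frame spacing, and jitter tolerance below are all assumptions for illustration.

```python
import numpy as np

phones = ["k", "ae", "t", "d", "aa", "g"]   # toy phone inventory
T = 50
impulses = np.zeros((len(phones), T))       # sparse phonetic impulses
impulses[phones.index("k"), 10] = 1.0
impulses[phones.index("ae"), 14] = 1.0
impulses[phones.index("t"), 18] = 1.0

def keyword_score(impulses, sequence, spacing=4, tol=2):
    # Convolve each phone channel with a short box filter (one member of the
    # keyword's filter ensemble), shift it back by the phone's expected
    # offset, and sum: a tall peak marks the keyword.
    total = np.zeros(T)
    for i, ph in enumerate(sequence):
        ch = impulses[phones.index(ph)]
        smoothed = np.convolve(ch, np.ones(2 * tol + 1), mode="same")
        total += np.roll(smoothed, -i * spacing)   # roll wraps; fine for a demo
    return total

scores = keyword_score(impulses, ["k", "ae", "t"])
print("peak", float(scores.max()), "at frame", int(scores.argmax()))  # 3.0 at frame 8..12
```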
20170301342
15203758
0
1. A device configured to identify phonemes within audible signal data, the device comprising: one or more audio sensors configured to receive the audible signal data; a spectral feature characterization module configured to generate a first feature stream and a plurality of targeted feature streams from the received audible signal data, wherein the first feature stream is generated primarily in order to provide audio information indicative of non-problematic phonemes, wherein each of the plurality of targeted feature streams is generated in order to provide audio information indicative of a corresponding problematic phoneme; an ensemble phoneme recognition neural network configured to assess which of a plurality of phonemes is present within the received audible signal data based on inputs including the first feature stream and a plurality of detection indicator values, wherein each of the plurality of detection indicator values characterizes a respective probability that a corresponding problematic phoneme is present within the received audible signal data; a phoneme-specific experts system having a plurality of problematic phoneme-specific expert neural networks (PPENNs) each configured to generate a respective one of the plurality of detection indicator values from a corresponding one of the plurality of targeted feature streams, wherein each of the plurality of targeted feature streams is associated with a respective problematic phoneme; and synthesizing, by the ensemble phoneme recognition neural network, one or more phoneme candidates as recognized within the received audible signal data based on the first feature stream and the plurality of detection indicator values.
20040181520
10784768
0
1. A document search system for the retrieval of documents related to a search key that is entered, said system comprising: a word sense associative network presenting portion for presenting a meaning of said search key together with its related meaning in a network; a search portion for conducting a search using said search key; and a filtering portion for selecting documents from a set of documents obtained as a result of said search that matches selected word senses.
8937537
13097146
1
1. A method of operating an audio system in an automobile, comprising the steps of: identifying a user of the audio system; identifying an audio recording playing on the audio system; sensing an audio setting entered into the audio system by the identified user while the audio recording is being played by the audio system; storing the sensed audio setting in memory in association with the identified user and the identified audio recording; retrieving the audio recording from memory with the sensed audio setting being embedded in the retrieved audio recording as a watermark signal; playing the retrieved audio recording on the audio system with the embedded sensed audio setting being automatically implemented by the audio system during the playing; providing a set of audio recordings; identifying which of the audio recordings in the set the user skips while listening to the set; storing in memory the audio recordings in the set that the user skips while listening to the set; and automatically skipping the audio recordings previously skipped by the user when playing the set of audio recordings.
20080096726
11848988
0
1. An athletic performance control system, comprising: a display system that presents workout information to a user, wherein the workout information includes information relating to content of a user's workout routine; a user interface system that prompts the user for a first input relating to a desired workout intensity parameter for the workout routine, wherein the user interface system prompts the user for the first input after the workout routine has begun; an input system for receiving the first input; and a processing system programmed and adapted to provide, under at least some circumstances, a revised or modified workout routine based, at least in part, on the first input.
8817775
12715754
1
1. An access gateway comprising: an IAD interface for receiving IP packets comprised of control packets and voice packets both sent from an integrated access device (IAD) together, to perform IP telephone service functions for each subscriber selectively in the access gateway, instead of performing IP telephone service functions for subscribers in the integrated access device (IAD); a control signal converter receiving, as input, said control packets from said IAD interface and converting the control packets to control information; a voice signal converter receiving, as input, said voice packets from said IAD interface and converting the voice packets to voice information; and an allocating circuit, connected to both soft switch (SS) at the IP network side and local exchange (LE) at the PSTN side, allocating said IP packets to either the soft switch (SS) or the local exchange (LE) upon receiving both said converted control information and said converted voice information applied from said control signal converter and voice signal converter, respectively, switching a current speech path, when a notification of trouble from the soft switch (SS) is received, to a PSTN speech path automatically or switching a current speech path, when a notification of trouble from the local exchange (LE) is received, to an IP speech path automatically, wherein the access gateway is operative to perform protocol conversion between a first protocol defining transfer control of said IP packets and a second protocol defining transfer control from said allocating circuit to said soft switch (SS), protocol conversion between said first protocol and a third protocol defining transfer control from said allocating circuit to said local exchange (LE), address conversion between the IP addresses of the IP packets transferred at said integrated access device (IAD) side and the IP addresses of the IP packets transferred at the soft switch (SS) side, and number-address conversion between the IP addresses of said IP packets transferred at said integrated access device (IAD) side and telephone numbers of TDM signals transferred at the local exchange (LE) side.
20010055370
09835237
0
1. Voice portal hosting system, intended to be connected to a first voice telecommunication network in order for a plurality of users in said network to establish a connection with said system using a voice equipment, said system comprising a memory in which a plurality of interactive voice response applications have been independently uploaded through a second telecommunication network by a plurality of independent value-added service providers, wherein at least a plurality of said interactive voice response applications uses a common speech recognition module run on said system.
20140129222
14126567
0
1. A speech recognition system comprising: a first speech recognition device; a second speech recognition device; and an acoustic model identifier series generation apparatus, wherein the first speech recognition device comprises: a sound input unit configured to obtain sound and to output sound data of the obtained sound; a first recognition dictionary configured to store recognition data formed of a combination of information on a character string, and an acoustic model identifier series based on a first type of feature, the acoustic model identifier series corresponding to the information on the character string; a first speech recognition processing unit configured to extract the first type of feature from a piece of the sound data outputted by the sound input unit, and to perform a speech recognition process on the piece of sound data using the first type of feature and the first recognition dictionary; and a recognition data registration unit, the second speech recognition device comprises: a second recognition dictionary configured to store recognition data formed of a combination of information on a character string, and an acoustic model identifier series based on a second type of feature corresponding to the information on the character string and different from the first type of feature; and a second speech recognition processing unit configured to extract the second type of feature from the piece of sound data, and to perform a speech recognition process on the piece of sound data using the second type of feature and the second recognition dictionary, and to transmit information on a character string corresponding to the piece of sound data to an outside, the acoustic model identifier series generation apparatus comprises an acoustic model identifier series generation unit configured to extract the first type of feature from the piece of sound data, and to generate an acoustic model identifier series based on the first type of feature corresponding to the piece of sound data, and to transmit the acoustic model identifier series, the recognition data registration unit of the first speech recognition device: receives the acoustic model identifier series based on the first type of feature corresponding to the piece of sound data transmitted by the acoustic model identifier series generation unit, and the information on the character string corresponding to the piece of sound data transmitted by the second speech recognition processing unit; and registers, in the first recognition dictionary, the recognition data to be stored in the first recognition dictionary, the recognition data being formed of a combination of the received acoustic model identifier series based on the first type of feature and the information on the character string.
6052491
08788141
1
1. A method of converting a monotonic sequence of input words to a sequence of digital output words, said method comprising: translating said sequence of consecutively valued input words to produce a range of output words, said translating step producing an output word for each input word, wherein said range of output words is a function of said sequence of consecutively valued input words; locating regions in said sequence of output words in which consecutive input words translate to a common output word; calculating the number of consecutive input words that translate to a common output word in each of said regions; producing an error signal for each output word in said regions, said error signal representing the number of consecutive input words that translate to said output word and the relative location of each output word within said regions; and adding said error signals to said output word corresponding to said error signal.
9941900
15724265
1
1. A method performed by a computing system comprising one or more processors and memory, the method comprising: compressing an original content item to a baseline lossless compressed data format; binarizing the baseline lossless compressed data format to a binarized format; arithmetically coding the binarized format based on probability estimates from a recurrent neural network probability estimator; and wherein the recurrent neural network probability estimator generates probability estimates for current symbols of the binarized format to be arithmetically coded based on symbols of the binarized format that have already been arithmetically coded during the arithmetically coding the binarized format.
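A sketch of the estimator-driven coding step this claim describes. A Laplace-smoothed bit counter stands in for the recurrent neural network probability estimator, and instead of a full arithmetic coder the snippet sums the ideal code length of -log2 p per bit, which is the size arithmetic coding approaches. Note that the estimator only ever sees already-coded bits, as the claim requires.

```python
import math

class CountingEstimator:
    # Stand-in for the RNN probability estimator: predicts the next bit's
    # probability from counts of previously coded bits (an RNN would use
    # the full history through its hidden state).
    def __init__(self):
        self.ones, self.total = 1, 2   # Laplace-smoothed counts

    def p_one(self) -> float:
        return self.ones / self.total

    def update(self, bit: int) -> None:
        self.ones += bit
        self.total += 1

def ideal_code_length(bits) -> float:
    # Arithmetic coding spends ~ -log2 p(symbol) bits per symbol, so summing
    # the model's negative log-probabilities gives the compressed size.
    est, total = CountingEstimator(), 0.0
    for b in bits:                      # only already-coded bits drive updates
        p = est.p_one() if b else 1 - est.p_one()
        total += -math.log2(p)
        est.update(b)
    return total

binarized = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1] * 20   # toy binarized format
print(f"{ideal_code_length(binarized):.1f} bits for {len(binarized)} input bits")
```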
20090254348
12099041
0
1. A method for voice enabling a Web page with free form input field support comprising: receiving speech input for an input field in a Web page; parsing a core attribute for the input field and identifying an external statistical language model (SLM) referenced by the core attribute of the input field; posting the received speech input and the SLM to an automatic speech recognition (ASR) engine; and, inserting a textual equivalent to the speech input provided by the ASR engine in conjunction with the SLM into the input field.
8090093
11988005
1
1. An echo canceller comprising: a pseudo echo generation means including an adaptive filter, the pseudo echo generation means generating a pseudo echo signal in accordance with a receiving-speech signal; an echo cancellation means which subtracts the pseudo echo signal from a sending-speech signal, thereby canceling an echo signal from the sending-speech signal; a smoothed sending-speech signal calculation means which calculates a smoothed sending-speech signal from the sending-speech signal, the smoothed sending-speech signal being obtained by smoothing the sending-speech signal; a smoothed receiving-speech signal calculation means which calculates a smoothed receiving-speech signal from the receiving-speech signal, the smoothed receiving-speech signal being obtained by smoothing the receiving-speech signal; a delay time information generation means which obtains delay time information reflecting delay characteristics of an echo path, in accordance with a correlation between the smoothed sending-speech signal and the smoothed receiving-speech signal; an update information generation means which obtains update information indicating execution of updating of the tap coefficients of the adaptive filter or suspension of updating of the tap coefficients of the adaptive filter, in accordance with the sending-speech signal, the receiving-speech signal, and the delay time information; a sending-speech band limiting means which limits a frequency band of the sending-speech signal to supply the sending-speech signal having the limited frequency band to the smoothed sending-speech signal calculation means; and a receiving-speech band limiting means which limits a frequency band of the receiving-speech signal to supply the receiving-speech signal having the limited frequency band to the smoothed receiving-speech signal calculation means; wherein if the update information indicates the execution of updating, the pseudo echo generation means updates the tap coefficients and receives the delay time information as information reflecting the delay characteristics of the echo path to perform processing of the received delay time information; and wherein the pseudo echo generation means generates the pseudo echo signal and updates the tap coefficients by using the sending-speech signal having the frequency band limited by the sending-speech band limiting means and the receiving-speech signal having the frequency band limited by the receiving-speech band limiting means.
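The pseudo-echo generation and subtraction at the heart of this claim is classically an adaptive filter. Below is a minimal NLMS sketch in Python; the claim's band limiting, smoothed-signal correlation for delay estimation, and update gating are deliberately omitted, and the tap count, step size, and synthetic echo path are illustrative assumptions.

```python
import numpy as np

def nlms_echo_canceller(rx, tx, taps=64, mu=0.5, eps=1e-8):
    # Cancel the echo of the receiving-speech signal `rx` from the
    # sending-speech signal `tx`: the adaptive filter output is the
    # pseudo echo, which is subtracted from `tx`.
    w = np.zeros(taps)          # adaptive filter tap coefficients
    buf = np.zeros(taps)        # most recent rx samples, newest first
    out = np.zeros_like(tx)
    for n in range(len(tx)):
        buf = np.roll(buf, 1)
        buf[0] = rx[n]
        pseudo_echo = w @ buf
        e = tx[n] - pseudo_echo                  # echo-cancelled sending signal
        out[n] = e
        w += mu * e * buf / (buf @ buf + eps)    # NLMS tap update
    return out

rng = np.random.default_rng(0)
rx = rng.standard_normal(4000)               # far-end (receiving) speech
echo_path = rng.standard_normal(32) * 0.1    # unknown room/line response
tx = np.convolve(rx, echo_path)[:4000]       # near-end signal = pure echo here
residual = nlms_echo_canceller(rx, tx)
print("echo power before/after:", float(np.mean(tx**2)), float(np.mean(residual**2)))
```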
20030001760
10233458
0
1. A method of converting a stream of databits of a binary information signal into a stream of databits of a constrained binary channel signal, wherein the stream of databits of the binary information signal is divided into n-bit information words, said information words being converted into m1-bit channel words in accordance with a channel code C1, or m2-bit channel words in accordance with a channel code C2, where m1, m2 and n are integers for which it holds that m2 > m1 ≥ n, wherein the m2-bit channel word is chosen from at least two m2-bit channel words, at least two of which have opposite parities, the concatenated m1-bit channel words and the m2-bit channel words complying with a runlength constraint of the binary channel signal, characterized in that the method comprises the repetitive and/or alternate steps of: selecting the m1-bit channel word from a set out of a plurality of sets of m1-bit channel words, each set comprising only m1-bit channel words having a beginning part out of a subset of beginning parts of the m1-bit channel words, each set being associated with a coding state of channel code C1, the coding state being established in dependence upon an end part of the preceding channel word, or: selecting the m2-bit channel word from a set out of a plurality of sets of m2-bit channel words, each set comprising only m2-bit channel words having a beginning part out of a subset of beginning parts of the m2-bit channel words belonging to said set, each set being associated with a coding state of channel code C2, the coding state being established in dependence upon an end part of the preceding channel word, the end parts of the m1-bit channel words in a coding state of channel code C1 and the beginning parts of the m2-bit channel words in a set of channel code C2 being arranged to comply with said runlength constraint.
9020245
13367425
1
1. A training device, comprising: a memory configured to store computer executable instructions; a processor configured to execute the computer executable instructions to perform operations comprising: storing data which is relevant to a training course for a user to train operations of an input device other than the training device, wherein the training course leads the user to train, on a user interface of the input device on which a plurality of buttons for inputting the operations to the input device are provided, an operation to one of the buttons; regenerating at least one of an image and a voice for training during the training course, the at least one of the image and the voice for training showing a plurality of phenomena that an opportunity that the user should perform an operation to one of the buttons is changed along with time, the at least one of the image and the voice for training not identifying one of the buttons to be normally operated by the user and the at least one of the image and the voice for training not instructing the user to operate the identified one of the buttons; accepting an operation to one of the buttons by the user in response to the at least one of the image and the voice for training from a simulated user interface which simulates the user interface of the input device during training; regenerating the at least one of the image and the voice for training when the training is ended; and instructing a normal operation to one of the buttons by the user to the user by outputting at least one of an image and a voice indicating the normal operation to one of the buttons by the user in response to the at least one of the image and the voice for training, in parallel to the regenerating when the training is ended.
20120226644
13041253
0
1. A method of accurate neural network training for library-based critical dimension (CD) metrology, the method comprising: optimizing a threshold for a principal component analysis (PCA) of a spectrum data set to provide a principal component (PC) value; estimating a training target for one or more neural networks; training, based both on the training target and on the PC value provided from optimizing the threshold for the PCA, the one or more neural networks; and providing a spectral library based on the one or more trained neural networks.
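A rough sketch of the PCA-threshold-plus-neural-network flow, assuming the "threshold" is an explained-variance cutoff; the synthetic spectra, targets, and scikit-learn library choices are illustrative, not drawn from the claim.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
spectra = rng.standard_normal((200, 128))   # toy spectrum data set
cd_targets = rng.standard_normal(200)       # toy training target (CD values)

# "Optimizing a threshold" read here as: keep enough principal components
# to explain a chosen fraction of variance; PCA then yields the PC values.
pca = PCA(n_components=0.95)                # threshold on explained variance
pc_values = pca.fit_transform(spectra)
print("components kept:", pca.n_components_)

# Train a neural network on the PC values against the training target.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(pc_values, cd_targets)              # basis for a spectral library
```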
9335899
14095025
1
1. A method for executing a function executing command through a gesture input, the method comprising: displaying a keyboard window for inputting text and a text input field for displaying the text inputted by the keyboard window on a touch screen display; recognizing one of gesture inputs performed in the text input field, wherein the gesture inputs include gesture inputs from an upper area, a lower area, a left area, a right area, an upper right area, a lower right area, an upper left area, or a lower left area of the text input field; and executing a function executing command corresponding to the gesture input only when the text displayed in the text input field is not selected, wherein each of the gesture inputs corresponds to a different function; wherein the text input field for displaying the input text and the keyboard window for inputting the text are formed in different areas from each other, a display area hidden by a menu window or an icon on the touch screen display is minimized, the gesture input is a drag input, and the function executing command is a command for replacing a command corresponding to the menu window or the icon of an application being currently run.
10049666
14989642
1
1. A method comprising: receiving a voice input; determining a transcription for the voice input, wherein determining the transcription for the voice input includes, for a plurality of segments of the voice input: maintaining a plurality of contexts and respective base weights associated with the plurality of contexts; obtaining a first candidate transcription for a first segment of the voice input; determining one or more contexts, from the plurality of contexts, associated with the first candidate transcription; identifying one or more base weights respectively corresponding to the one or more contexts; adjusting a respective base weight of the one or more base weights for each of the one or more contexts based on the first candidate transcription; and determining a second candidate transcription for a second segment of the voice input based in part on the adjusted base weights; and providing the transcription of the plurality of segments of the voice input for output.
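One plausible reading of the base-weight adjustment across segments, sketched with toy data: a candidate transcription for the first segment boosts the contexts it matches, and the boosted weights then rescore candidates for the second segment. The context term sets, boost value, and scoring rule are invented for illustration.

```python
base_weights = {"navigation": 1.0, "music": 1.0, "shopping": 1.0}
context_terms = {
    "navigation": {"drive", "route", "exit"},
    "music": {"play", "song", "album"},
}

def adjust(weights, candidate, boost=0.5):
    # Boost each context whose terms appear in the first candidate transcription.
    words = set(candidate.lower().split())
    return {ctx: w + boost if context_terms.get(ctx, set()) & words else w
            for ctx, w in weights.items()}

def rescore(candidates, weights):
    # Score each second-segment candidate by the weights of matching contexts.
    def score(cand):
        words = set(cand.lower().split())
        return sum(w for ctx, w in weights.items()
                   if context_terms.get(ctx, set()) & words)
    return max(candidates, key=score)

weights = adjust(base_weights, "drive to the office")        # first segment
second = rescore(["play some jazz", "take the next exit"], weights)
print(second)   # boosted navigation context favors "take the next exit"
```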
20090244033
12479678
0
1. A method of performing operations on a computing system having a touch sensitive surface, the method comprising: tracking the paths of multiple distinguishable contacts, the contacts corresponding to touch devices as they move on or near the surface at the same time, wherein tracking is based on at least shape and position data corresponding to the contacts; determining translation motion information corresponding to one or more of the multiple contacts based on the tracked paths of the contacts; and generating a translation gesture control signal based on the translation motion information.
7725309
11422406
1
1. A method for performing speech recognition, the method being performed by one or more processors that perform steps comprising: identifying a plurality of entries from a sequence of phonetic units that are recognized from a spoken input, wherein the plurality of entries individually match to one or more items that comprise one or more records of a database that comprises a plurality of records; accessing a search node for the database, wherein the search node includes, for each record of at least some of the plurality of records, multiple phonetic representations, including at least one phonetic representation that contains at least one of (i) an abbreviation of an entry that is included in the record, or (ii) a null use of an entry that is included in the record when that entry is combined with one or more other entries; and selecting, from the search node, a matching record of the database for the spoken input, wherein selecting the matching record includes comparing the plurality of entries identified from the spoken input to the phonetic representations of the individual records, including to the multiple phonetic representations of records in the at least some of the plurality of records.
9443254
14665860
1
1. A method for selecting at least one product record for embedding into a document and display with the document in a user interface, the method comprising: analyzing, with a computing device, at least a portion of the document, the analysis including at least a frequency of words in the document; constructing, with a computing device, a query search string based on the analysis of the document, the query search string at least partially based on words of the document having the highest frequencies; applying, with a computing device, the query search string to a products database, the products database including a plurality of product records which include information regarding products, to identify at least one product record in the products database that satisfies the query search string; selecting, with a computing device, at least one of the identified product records for embedding into the document and display in the user interface, and embedding, with a computing device, at least one of the selected product records into the document for display in the user interface, wherein the document is not stored within the products database.
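The frequency-driven query construction reduces to counting words and keeping the top ones. A minimal sketch, with an assumed stopword list; a real system would also weight by rarity and apply the query to an actual products database.

```python
from collections import Counter

STOPWORDS = {"the", "a", "and", "of", "to", "in", "is", "for", "this"}

def build_query(document: str, k: int = 5) -> str:
    # Build a product-search query from the document's most frequent words.
    words = [w.strip(".,!?").lower() for w in document.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return " ".join(w for w, _ in counts.most_common(k))

doc = ("The trail camera survived rain and snow. A good trail camera "
       "needs long battery life, and this camera's battery lasted weeks.")
print(build_query(doc))   # highest-frequency words become the query string
```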
20140136200
14059813
0
1. A method of adapting a speech system, comprising: processing a spoken command with one or more models of one or more model types to achieve model results; evaluating a frequency of the model results; and selectively updating the one or more models of the one or more model types based on the evaluating.
9064495
13889277
1
1. A system comprising: a user device configured to: receive a voice signal corresponding to an utterance of a user; determine a first time corresponding to a first point in the voice signal, wherein the first time corresponds to a user-device time; transmit the voice signal to a server device over a network, wherein the server device is configured to: perform speech recognition using the voice signal; determine a second point corresponding to an end of the utterance; determine a time offset corresponding to a difference in time between the second point and the first point; determine a response to the utterance using results of the speech recognition; and transmit the time offset and the response to the user device; receive the time offset and the response from the server device; present the response to the user; determine a second time corresponding to a time at which the user device presents the response to the user, wherein the second time corresponds to a user-device time; and determine a latency using the first time, the time offset, and the second time.
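The latency arithmetic in the final step can be read as: elapsed user-device time minus the server-reported offset, so the two clocks never need to be synchronized. A tiny sketch under that reading; the numbers are made up.

```python
def user_perceived_latency(t_first: float, time_offset: float, t_response: float) -> float:
    # Latency from end of utterance to presented response, in user-device
    # time only: the server contributes just the offset from t_first to the
    # utterance's end point.
    return (t_response - t_first) - time_offset

# Device marks t_first at a point in the voice signal; the server reports
# the utterance ended 1.8 s after that point; the device presents the
# response at t_first + 2.5 s.
print(user_perceived_latency(10.0, 1.8, 12.5))   # 0.7 s perceived latency
```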
20170287485
15624935
0
1. (canceled)
20130117018
13288594
0
1. A method comprising: receiving, via one or more computing devices, an indication to provide one or more real-time voice content-to-text content transcriptions to a first collaboration session participant, the one or more real-time voice content-to-text content transcriptions corresponding to voice content of a second collaboration session participant in one or more collaboration sessions including the first collaboration session participant and the second collaboration session participant; defining, via the one or more computing devices, a preference for the first collaboration session participant to receive the one or more real-time voice content-to-text content transcriptions corresponding to the voice content of the second collaboration session participant in the one or more collaboration sessions including the first collaboration session participant and the second collaboration session participant based upon, at least in part, the indication; applying, via the one or more computing devices, the preference to a first collaboration session including the first collaboration session participant and the second collaboration session participant; and providing, via the one or more computing devices, a first real-time voice content-to-text content transcription to the first collaboration session participant during the first collaboration session including the first collaboration session participant and the second collaboration session participant, the first real-time voice content-to-text content transcription corresponding to first voice content of the second collaboration session participant in the first collaboration session.
20050171926
10768675
0
1. A method for recognizing information from an information source comprising the steps of: determining portions of information from a first ambiguous information source; determining portions of context information from a second information source temporally associated with the portions of information from the first information source; determining at least one recognition model based on the portions of information from the first information source and the temporally associated portions of context information from the second information source; determining output information based on at least one of the determined recognition models.
20160294751
15184863
0
1. A machine implemented method of communicating, comprising: (i) composing an electronic message, via a first device having a processing unit and program code stored on a storage device of said first device; (ii) selecting a well-known animation character, via the first device; (iii) transmitting the electronic message, via the first device; (iv) transmitting the well-known animation character, via the first device; (v) receiving the electronic message, via a server having a processing unit and program code stored on a storage device of said server; (vi) receiving the well-known animation character, via the server; (vii) transmitting the electronic message, via the server; (viii) transmitting the well-known animation character, via the server; (ix) receiving the electronic message, via a second device having a processing unit and program code stored on a storage device of said second device; (x) receiving the well-known animation character, via the second device; (xi) converting the electronic message into speech using one of synthesized voice of the well-known animation character and actual voice of the well-known animation character, via the second device; (xii) generating moving images of the well-known animation character, via the second device; (xiii) outputting the speech, via the second device; and (xiv) displaying the moving images, via the second device.
9807217
15134705
1
1. A computer-implemented method, comprising: determining, by one or more computer processors, that a particular computing device received a message; determining, by the one or more computer processors and based on a determination whether the particular computing device is connected to a user earpiece, whether to cause the particular computing device to present an audible notification that the particular computing device received the message; and causing, by the one or more computer processors and responsive to determining that the particular computing device is connected to the user earpiece, the particular computing device to present the audible notification that the particular computing device received the message by way of the user earpiece, wherein the one or more computer processors are configured to prevent the particular computing device from presenting the audible notification that the particular computing device received the message by way of a speaker in communication with the particular computing device, when the particular computing device is not connected to the user earpiece, responsive to the one or more computer processors determining that the particular computing device is not connected to the user earpiece.
4856067
07082211
1
1. A speech-recognition system in which time-series patterns of characteristic quantities are extracted periodically at each frame defined as a predetermined time interval from an input utterance within a voiced interval, i.e. within a time interval from the start point of the utterance until its end point, the similarities are calculated between these time-series patterns and reference patterns prepared in advance, and the similarity is calculated for each category to be recognized, and the category having the largest similarity among all the categories to be recognized is used as the result of the recognition, the speech-recognition system comprising: (a) a spectrum normalizer which performs frequency analysis in a plurality of channels (numbered by their central frequencies) and logarithmic conversion and extracts the frequency spectra, and then calculates normalized spectrum patterns by normalizing the frequency spectra with least square fit lines; (b) a consonantal pattern extractor which makes judgement as to whether each frame has consonantal properties and creates consonantal patterns by processing in sequence the frames within a voiced interval, extracting the consonantal patterns in those frames which are judged to have consonantal properties, and not extracting consonantal patterns in those frames which are judged to lack consonantal properties (i.e., in which the value is set at 0 in all channel components); (c) a local-peak pattern extractor, which creates local peak patterns by processing all the frames within a voiced interval, assigning number 1 to those channel components in which the value of the normalized spectrum pattern is positive and reaches a maximum, and assigning number 0 to all the other channel components; (d) a consonantal similarity degree calculator which calculates the similarity between the consonantal patterns calculated by the extractor in (b) and consonantal reference patterns prepared in advance, and calculates the consonantal similarity for each category to be recognized; (e) a memory unit for the consonantal reference patterns; (f) a local-peak similarity calculator which calculates the similarity between the local-peak patterns calculated by the extractor in (c) and local-peak reference patterns prepared in advance, and calculates the local-peak similarity for each category to be recognized; and (g) an identifier which references both the consonantal similarity and the local-peak similarity and calculates the comprehensive similarity for each category to be recognized, and selecting, among all the categories to be recognized, the category which has the largest comprehensive similarity as the result of recognition.
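Extractor (c) is easy to state in code: mark channels whose normalized value is positive and a local maximum, 0 elsewhere. A sketch with a toy frame; the boundary handling at the first and last channel is an assumption.

```python
import numpy as np

def local_peak_pattern(norm_spectrum: np.ndarray) -> np.ndarray:
    # Per extractor (c): 1 where the normalized spectrum value is positive
    # and reaches a maximum across neighboring channels, else 0.
    s = norm_spectrum
    peaks = np.zeros_like(s, dtype=int)
    for ch in range(len(s)):
        left = s[ch - 1] if ch > 0 else -np.inf
        right = s[ch + 1] if ch < len(s) - 1 else -np.inf
        if s[ch] > 0 and s[ch] >= left and s[ch] >= right:
            peaks[ch] = 1
    return peaks

frame = np.array([-0.2, 0.5, 1.1, 0.4, -0.1, 0.3, 0.9, 0.2])
print(local_peak_pattern(frame))   # [0 0 1 0 0 0 1 0]
```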
8396878
13245840
1
1. A computer-implemented method of generating automated tags for a video file, the method comprising: receiving one or more manually generated tags associated with a video file; based at least in part on the one or more manually entered tags, determining a preliminary category for the video file; based on the preliminary category, generating a targeted transcript of the video file, wherein the targeted transcript includes a plurality of words; generating an ontology of the plurality of words based on the targeted transcript; ranking the plurality of words in the ontology based on a plurality of scoring factors; based on the ranking of the plurality of words, generating one or more automated tags associated with the video file; and generating a heat map for the video file, wherein the heat map comprises a graphical display which indicates offset locations of words within the video file with the highest rankings, wherein the plurality of scoring factors consists of two or more of: distribution of words throughout the targeted transcript of the video file, words related to the plurality of words throughout the targeted transcript of the video file, occurrence age of the related words, information associated with the one or more manually entered tags, vernacular meaning of the plurality of words, or colloquial considerations of the meaning of the plurality of words.
20020175930
09863895
0
1. A method for assisting a speaker of a first language in operating a remote control device designed for a speaker of a second language, the method comprising: storing an icon for each of a plurality of interactive options periodically available within an interactive television system, each interactive option corresponding to a button on the remote control device, each icon sharing a common visual characteristic with a button on the remote control device; storing with each icon a description, in the first language, of a corresponding interactive option; displaying an icon on a display device associated with the interactive television system in response to a corresponding interactive option becoming available; and presenting with the icon the description of the interactive option in the first language.
9569698
14434723
1
1. A method for classifying a multimodal test object, termed a multimedia test object, described according to at least one first modality and one second modality, said method comprising: constructing a recoding matrix X of representatives of the first modality forming a dictionary of the first modality including a plurality K_T of words of the first modality, wherein each of the components of the recoding matrix X forms information representative of the frequency of each word of the second modality of a dictionary of the second modality including a plurality K_v of words of the second modality, for each word of the first modality, an offline construction, by unsupervised classification, of a multimedia dictionary W_m, defined by a plurality K_m of multimedia words, on the basis of the recoding matrix X, a classification of a multimedia test object comprising: recoding of each representative of the first modality, relating to the multimedia test object, on the multimedia dictionary W_m base, and aggregating the representatives of the first modality coded in the recoding step in a single vector BoMW representative of the multimedia test object.
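A sketch of the offline multimedia-dictionary construction and BoMW aggregation, with k-means standing in for the unsupervised classification and random data in place of a real recoding matrix X; the dimensions K_T, K_v, K_m are arbitrary here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
K_T, K_v, K_m = 40, 16, 8
X = rng.random((K_T, K_v))   # recoding matrix: one row per modality-1 word,
                             # modality-2 word frequencies as columns

# Offline step: cluster the rows of X to obtain the multimedia dictionary W_m.
km = KMeans(n_clusters=K_m, n_init=10, random_state=0).fit(X)
W_m = km.cluster_centers_

def bomw(representatives: np.ndarray) -> np.ndarray:
    # Recode each modality-1 representative on the W_m base, then aggregate
    # into a single bag-of-multimedia-words vector for the test object.
    codes = km.predict(representatives)
    hist = np.bincount(codes, minlength=K_m).astype(float)
    return hist / hist.sum()

test_obj = rng.random((12, K_v))   # representatives of a multimedia test object
print(bomw(test_obj))
```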
20120197636
13018973
0
1. A method for processing a single-channel input including speech and noise, comprising: receiving, by a processor, the single-channel input captured via a microphone; for processing a current frame of the single-channel input: performing, by the processor, a time-frequency transformation on the single-channel input over L frames including the current frame to obtain an extended observation vector of the current frame, data elements in the extended observation vector representing coefficients of the time-frequency transformation of the L frames of the single-channel input; computing, by the processor, second-order statistics of the extended observation vector; if the current frame of the single-channel input does not include detectable human voice activity, computing, by the processor, second-order statistics of noise contained in the single-channel input; constructing, by the processor, a noise reduction filter for the current frame of the single-channel input based on the second-order statistics of the extended observation vector and the second-order statistics of noise; and applying the noise reduction filter to the single-channel input to reduce an amount of noise; wherein L>1.
20110099013
12604628
0
1. A method for improving speech recognition accuracy using textual context, the method causing a computing device to perform steps comprising: retrieving a recorded utterance; retrieving text captured from a device display associated with a spoken dialog and viewed by one party to the recorded utterance; identifying words in the captured text that are relevant to the recorded utterance; adding the identified words to a dynamic language model; and recognizing the recorded utterance using the dynamic language model.
20110281253
12953724
0
1. A computer-aided learning system with adaptive optimization, comprising: a storage module configured for storing learning data; a man-machine interface configured for providing the learning data to at least one learner; an information collection module configured for tracking and recording an interactive learning process, the interactive learning process representing a plurality of interactions between the at least one learner and the man-machine interface; a learning process analysis module configured for receiving the interactive learning process provided by the information collection module, the learning process analysis module further configured for analyzing the interactive learning process and forming a control signal; and a learning strategy generation module configured for receiving the control signal from the learning process analysis module, the learning strategy generation module further configured for generating a learning strategy signal based on the control signal, the learning strategy signal being at least one of: a recommended daily learning duration for the at least one learner, a recommended time allocation for learning a new item, a recommended time allocation for reviewing an old item, a recommended number of new items to be learned in a day, a recommended time interval between two review sessions, and a recommended number of old items to be reviewed in a day; wherein the man-machine interface is configured to provide the learning data to the at least one learner based on the learning strategy signal.
20030081677
10284280
0
1. A method for determining entropy of a pixel of a real time streaming digital video image signal, comprising the steps of: (a) receiving and characterizing the streaming digital video image input signal during a pre-determined time interval; (b) assigning and characterizing a local neighborhood of neighboring pixels to each input image pixel of the streaming digital video image input signal, in a temporal interlaced sequence of three consecutive fields in a global input grid of pixels included in the streaming digital video input image signal, said three consecutive fields being a previous field, a next field, and a current field; and (c) determining the entropy of each virtual pixel, of each previous pixel, and of each next pixel, in said temporal interlaced sequence of said three consecutive fields, relative to said assigned and characterized local neighborhoods of said neighboring pixels, said determining comprising the steps of: (i) calculating values of pixel inter-local neighborhood parameters for each said previous pixel in said previous field, and for each said next pixel in said next field, whereby each said value of each said pixel inter-local neighborhood parameter represents a regional sum of inter-local neighborhood weighted distances measured between said neighboring pixels located in subsets of said assigned and characterized local neighborhood of each said virtual pixel in said current field, and said assigned and characterized local neighborhood of each said previous pixel in said previous field, and of each said next pixel, in said next field, respectively; (ii) calculating a value of a virtual-pixel intra-local neighborhood parameter, for each said virtual pixel in said current field; (iii) adjusting a value of a pixel entropy counter for each said previous pixel in said previous field, for each said next pixel in said next field, and for each said virtual pixel in said current field; and (iv) calculating a value of the entropy of each said previous pixel in said previous field, of each said next pixel in said next field, and of each said virtual pixel in said current field, from said values of said pixel entropy counters of said pixels.
20120166370
13412871
0
1. A method of identifying attributes in a sentence, the method comprising: representing the sentence as a first feature vector; training a binary classifier based upon the first feature vector; obtaining a vector of probability for the sentence; concatenating the first feature vector with the vector of probability to obtain a second feature vector; and training the binary classifier based upon the second feature vector.
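The two-pass training this claim outlines (features, then features concatenated with the first pass's probability vector) is straightforward in scikit-learn. A minimal sketch with synthetic data; in practice the stage-1 probabilities should come from cross-validation to avoid leaking the labels into stage 2.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 20))            # first feature vectors (sentences)
y = (X[:, 0] + 0.2 * rng.standard_normal(300) > 0).astype(int)

# Stage 1: train on the first feature vector and obtain probabilities.
clf1 = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf1.predict_proba(X)                 # vector of probability per sentence

# Stage 2: concatenate first features with the probability vector, retrain.
X2 = np.hstack([X, proba])
clf2 = LogisticRegression(max_iter=1000).fit(X2, y)
print("stage-2 training accuracy:", clf2.score(X2, y))
```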
20080300871
11754814
0
1. A method of identifying an acoustic environment of a caller, the method comprising: analyzing acoustic features of a received audio signal from a caller; receiving meta-data information; classifying a background environment of the caller based on the analyzed acoustic features and the meta-data; selecting an acoustic model matched to the classified background environment from a plurality of acoustic models, each of the plurality of acoustic models being generated for a particular predefined background environment; and performing speech recognition on the received audio signal using the selected acoustic model.
20120022952
12893939
0
1. A computer-implemented method for combining probability of click models in an online advertising system, said method comprising: receiving, at a computer, at least one feature set slice; training, in a computer, a plurality of slice predictive models, the slice predictive models corresponding to at least a portion of the features in the at least one feature set slice; weighting, in a computer, at least two of the plurality of slice predictive models by overlaying a weighted distribution model over the plurality of slice predictive models; and calculating, in a computer, a combined predictive model based on the weighted distribution model and the at least two of the plurality of slice predictive models.
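In the simplest reading, overlaying a weighted distribution over the slice predictive models reduces to a normalized weighted average of their click probabilities. A sketch with made-up numbers; real weights would be learned from held-out data.

```python
import numpy as np

def combine_click_models(slice_probs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Combined probability of click: weighted average of per-slice predictions.
    w = weights / weights.sum()        # normalize the weighted distribution
    return slice_probs @ w

# Three slice models' click probabilities for four ad impressions.
slice_probs = np.array([[0.02, 0.05, 0.01],
                        [0.10, 0.08, 0.12],
                        [0.03, 0.02, 0.04],
                        [0.20, 0.15, 0.25]])
weights = np.array([0.5, 0.3, 0.2])    # assumed slice weights
print(combine_click_models(slice_probs, weights))
```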
9285907
14108655
1
1. A method for recognizing a multiple input point gesture comprising: detecting initial contact as an unordered set of point locations, the unordered set of point locations indicating that simultaneous contact was detected at a plurality of locations in a planar input surface without regard for the order in which contact was detected, the plurality of locations including at least three individual locations, including a first location, a second location and a third location, where a user touched the planar input surface with one or more fingers or other objects; determining a relative orientation of the at least three individual locations on the planar input surface where contact was detected in the planar input surface, wherein the number and orientation of the simultaneous contacts determines which input gesture was provided; and after initial contact is released, identifying the input gesture based on the number of locations where simultaneous contact was initially detected, and based on the determined relative orientation of the at least three locations in the planar input surface.
8233592
10705328
1
1. A method for remotely requesting information and/or services from at least one remote service server through a home personal computer, the method comprising steps of: receiving, at the home personal computer, a telephone call from a user that is registered with the home personal computer, wherein the telephone call is a call originating from a device remote from the home personal computer, wherein the home personal computer is directly connected to the user's subscriber line; receiving a user spoken utterance over the telephone call; performing speech recognition on the user spoken utterance to determine a request for information and/or a service; formatting an electronic message according to the request; sending the electronic message over a communication network from the home personal computer to the at least one remote service server; receiving content at the home personal computer from the at least one remote service server; converting the content to speech audio at the home personal computer; and playing the speech audio to the user over the telephone call.
20100272351
12765190
0
1. An information processing apparatus comprising: a learning unit configured to sequentially execute learning with respect to weak discriminators based on learning data held in a storage device; a calculating unit configured to calculate an evaluation value for the weak discriminator at the time of the learning; a discriminating unit configured to discriminate whether or not the learning is overlearning based on a shift of the evaluation value; and an adding unit configured to add new learning data to the storage device if it is determined by the discriminating unit that the learning is overlearning.
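Discriminating overlearning "based on a shift of the evaluation value" can be sketched as patience-based early stopping: flag the point where the evaluation value stops improving. The patience and tolerance values below are assumptions.

```python
def detect_overlearning(eval_values, patience=3, tol=1e-4):
    # Flag overlearning once the evaluation value has not improved for
    # `patience` consecutive weak-discriminator rounds.
    best, stale = float("inf"), 0
    for step, v in enumerate(eval_values):
        if v < best - tol:
            best, stale = v, 0
        else:
            stale += 1
            if stale >= patience:
                return step   # stop adding weak discriminators here
    return None               # no overlearning detected

val_loss = [0.9, 0.7, 0.55, 0.50, 0.49, 0.50, 0.52, 0.55]
print(detect_overlearning(val_loss))   # 7: third non-improving round after step 4
```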
7742920
12101465
1
1. A variable voice rate method of controlling a reproduction rate of voice, comprising: generating voice data from the voice; generating, in a text data generating unit, text data indicating a content of the voice data, based on the generated voice data; generating, in a division information generating unit, division information used for dividing the text data into a plurality of linguistic units each of which is characterized by a linguistic form; generating, as reproduction information concerning reproduction control of the voice for each of the linguistic units in a reproduction information generation unit, information indicating a probability with which preset ones of the linguistic units are combined in a preset order; storing, in a first storage, the reproduction information; selecting, from the linguistic units, combinations of linguistic units each having a probability lower than a preset value, based on the stored reproduction information and the division information; and controlling, in a voice reproduction controller, reproduction of the voice data corresponding to the selected combinations.
7827239
10832035
1
1. A method for delivering dynamic media content to collaborators, the method comprising: providing collaborative event media content, wherein the collaborative event media content further comprises a grammar and a structured document including: creating, in dependence upon original media content, a structured document, the structured document further comprising one or more structural elements; creating a grammar for the collaborative event media content, wherein the grammar includes grammar elements each of which includes an identifier for at least one structural element of the structured document; and classifying a structural element of the structured document according to a presentation attribute including identifying a presentation attribute for the structural element; identifying a classification identifier in dependence upon the presentation attribute; and inserting the classification identifier in association with the structural element in the structured document; wherein the grammar comprises a data structure associating key phrases with presentation actions that facilitates a collaborator navigating the structured document of collaborative event media content using speech commands; and wherein the method further comprises: acquiring data representing a client's environmental condition including receiving asynchronously from environmental sensors data representing a client's environmental condition; storing, in a context server in a data structure comprising a dynamic client context for the client, the data representing a client's environmental condition, wherein dynamic client context includes network addresses for environmental sensors for a client and wherein acquiring data representing a client's environmental condition further comprises the context server's polling of the environmental sensors for the client; and wherein the method further comprises: detecting an event in dependence upon the dynamic client context including detecting a change in a value of a data element in the dynamic client context; identifying one or more collaborators in dependence upon the dynamic client context and the event including identifying a collaborator in dependence upon collaborator presence on an instant messaging network; and selecting from the structured document a classified structural element in dependence upon an event type and a collaborator classification; and transmitting the selected structural element to the collaborator including selecting a data communications protocol for communications with a collaborator, inserting the selected structural element in a data structure appropriate to the data communications protocol, and transmitting the data structure to the collaborator according to the data communications protocol.
9361890
14032973
1
1. An apparatus comprising: a microphone array; a processor; a memory storing computer readable code executable by the processor, the computer readable code comprising: a type module that determines if a recipient process of an audio signal from the microphone array is a speech recognition recipient type else that determines if the recipient process is a human destination recipient type; a filter module that selects a diction audio filter in response to determining the speech recognition recipient type.
20110046435
12528296
0
1. A sound enrichment system for provision of tinnitus relief, the sound enrichment system comprising: a noise generator; an environment classifier that is configured to determine a classification of an ambient sound environment of the sound enrichment system; a processing system for adjusting a noise signal based at least in part on the classification, wherein the noise signal is obtained using the noise generator; and an output transducer for conversion of the adjusted noise signal to an acoustic signal for presentation to a user.
10071575
15795353
1
1. A printer comprising: a thickness detection module comprising: a pinch arm assembly comprising: a pinch arm having a first end and a second end; an encoder disposed at the second end of the pinch arm, the encoder configured to rotate in response to engagement of the pinch arm with at least a portion of the print media; a dual channel encoder sensor proximate the encoder and configured to detect a rotation direction and an encoder count indicative of rotation movement of the encoder with respect to the dual channel encoder sensor; and a processor communicatively coupled to the thickness detection module to calculate a print media thickness of the portion of the print media based on the encoder count and adjust a print head pressure based on the calculated print media thickness.