Dataset columns (name, type, observed range):
doc_id       string   length 7–11
appl_id      string   length 8
flag_patent  int64    values 0–1
claim_one    string   length 13–18.3k
Each record below lists, in order: doc_id, appl_id, flag_patent, claim_one.
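As a quick orientation, the sketch below shows one way rows with this schema could be loaded and split by grant status. It is illustrative only: the file name patent_claims.csv is a placeholder, and reading flag_patent as 1 = granted patent / 0 = pre-grant publication is inferred from the doc_id formats in the records, not stated by the source.

```python
# Minimal loading sketch, assuming the records are stored as a CSV file with
# the four columns listed above. "patent_claims.csv" is a placeholder name;
# the flag_patent interpretation (1 = granted, 0 = pre-grant publication) is
# an inference from the doc_id formats below, not a documented fact.
import pandas as pd

df = pd.read_csv(
    "patent_claims.csv",
    dtype={"doc_id": str, "appl_id": str, "flag_patent": int, "claim_one": str},
)

granted = df[df["flag_patent"] == 1]
published = df[df["flag_patent"] == 0]
print(f"{len(granted)} granted, {len(published)} pre-grant publications")
print("longest claim_one:", df["claim_one"].str.len().max(), "characters")
```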
7827034
11952005
1
1. A character animation and speech tool, comprising: a character library, comprising an assortment of characters, each character being capable of being rendered lifelike by computer graphic rendering image generation, and being fully interchangeable among the library assortment in relation to all other characters; a sampled human voice library, comprising one or more of male, female, child, and exaggerated human caricature voice samples, each voice sample being standardized in sampling modules and time duration, each of said voice samples being fully interchangeable among the library assortment in relation to all other of said voice samples, and fully integrated therewith; a text-to-speech assembly apparatus, whereby any word text entered by a keyboard into the application is translated to its syllabic components and both a voice syllabic component and an animation motion graph data component are introduced into an assembly timeline for editing; a syllabic speech editing apparatus, wherein all entered text words may be edited with respect to volume, pace and pitch in syllabic units consistent with natural speech patterns and conscious human speech logic; and an animation assembly apparatus, whereby the edited animation motion graph data components drive the character to create facial motions consistent with natural speech, and each edited voice syllabic component is simultaneously assembled into a composite audio speech track fully synchronized with character facial motions; wherein said speech tool operates at a syllable level of speech manipulation, both in its internal architecture and at its user tool set, thereby replicating natural human choices in giving dramatic emphasis and clear enunciation in speech presentations.
20140090050
14026944
0
1. A method for identifying an unauthorized user of an electronic device, the method comprising: monitoring with the electronic device usage of at least one memory of the electronic device; detecting with the electronic device a sudden increase in the monitored memory usage; and determining with the electronic device that a current user of the electronic device is the unauthorized user in response to the detecting.
20090063461
11849136
0
1. A method comprising: identifying search session logs of a user; determining a first keyword set and a second keyword set from a search session in the search session logs; calculating semantic relevance between the first and second keyword sets based on frequencies at which the first and second keyword sets occur; and displaying one or more semantically relevant keyword sets based on the calculation.
20080140741
11869699
0
1. A method comprising: (a) receiving one or more sequences of digital data into a provided memory; (b) the computation of a numerical similarity signature, referred to as the equivalence signature, for a sequence of digital data, as a sum over the number of subsequences that constitute said sequence of digital data with each summand being positive one if the value of the fundamental homotopy group's invariant for the values of the elements of a subsequence is even, and negative one if said value of the fundamental homotopy group's invariant for said values of the elements of said subsequence is odd; (c) the computation of said fundamental homotopy group's invariant of a subsequence as the difference between the last and first values of said subsequence; and (d) the computation of a similarity distance between any two sequences of digital data as the absolute value of the difference of their equivalence signatures so that said similarity distance will either be equal or differ by a bounded value that is less than the lesser of the two numbers of subsequences in said sequences of digital data.
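The claim above (20080140741) specifies the equivalence-signature computation concretely enough to sketch. The Python below is illustrative only: the claim does not say how a sequence is divided into subsequences, so the contiguous fixed-length chunking and the chunk_size parameter are assumptions.

```python
# Illustrative sketch of the equivalence signature and similarity distance
# described in the claim above. Assumption (not stated in the claim):
# subsequences are contiguous fixed-length chunks; chunk_size is hypothetical.
def equivalence_signature(seq, chunk_size=4):
    signature = 0
    for start in range(0, len(seq), chunk_size):
        chunk = seq[start:start + chunk_size]
        invariant = chunk[-1] - chunk[0]              # difference of last and first values
        signature += 1 if invariant % 2 == 0 else -1  # +1 if even, -1 if odd
    return signature

def similarity_distance(seq_a, seq_b, chunk_size=4):
    # absolute value of the difference of the two equivalence signatures
    return abs(equivalence_signature(seq_a, chunk_size)
               - equivalence_signature(seq_b, chunk_size))

print(similarity_distance([3, 1, 4, 1, 5, 9, 2, 6], [2, 7, 1, 8, 2, 8, 1, 8]))
```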
20070282872
11742244
0
1. A method for associating a plurality of attributes and a plurality of values for a product within at least one natural language document to define attribute-value pairs, the method comprising: determining correlations between two or more attributes of the plurality of attributes; identifying at least one attribute phrase based on the correlations between the two or more attributes; determining correlations between two or more values of the plurality of values; identifying at least one value phrase based on the correlations between the two or more values; associating an attribute of the plurality of attributes or an attribute phrase of the at least one attribute phrase with a value of the plurality of values or a value phrase of the at least one value phrase based on syntactic dependency therebetween; and storing the attribute or attribute phrase and the associated value or value phrase as an attribute-value pair.
9807269
14713077
1
1. A method for low light image capture of a document image using a plurality of flash images from a single supplemental light source, the method comprising: first capturing a first image of a document with the supplemental light source wherein the first image has a first flash spot in a first flash spot region; second capturing a second image of the document with the supplemental light source wherein the second image has a second flash spot spaced from a position in the document of the first flash spot by a movement of the supplemental light source from a first position to a second position; and fusing the first and second images for an alignment of the first and second images to form a fused image, wherein the first flash spot region is replaced in the fused image with a corresponding portion of the second image wherein a boundary of the corresponding portion of the second image is selectively expanded to avoid splitting of characters and words by the fusing.
20110113054
12738350
0
1. A computer system comprising: a computer database comprising a plurality of data tables; a computer device that executes an application that requires data from the database; and a code generation engine in communication with the computer device and the database, wherein the code generation engine comprises a processor circuit, a memory circuit, and a metadata database comprising computer database metadata, wherein the code generation engine is programmed to translate a data request from the application, the data request being in a first language, to one or more data queries of the data tables of the computer database, wherein the one or more data queries are in a second language that is different from the first language.
20140101081
14099566
0
1. A method of training a target classifier to categorize textual data, the method comprising: matching a trained classifier to the target classifier, the trained classifier sharing at least one common attribute with the target classifier; selecting identifiers from the trained classifier, the identifiers being predictors of a sentiment of the textual data as one of a positive opinion or a negative opinion; and associating the identifiers with the target classifier.
20050038655
10639974
0
1. A method for constructing acoustic models for use in a speech recognizer, comprising: partitioning speech data from a plurality of training speakers according to at least one speech related criteria; grouping together the partitioned speech data from training speakers having a similar speech characteristic; and training an acoustic bubble model for each group using the speech data within the group.
20170094704
15272720
0
1. An electronic apparatus configured to perform wireless peer-to-peer (wireless p2p) connection with an external device, the electronic apparatus comprising: communication circuitry; a microphone configured to receive a user voice; and a processor configured to control the communication circuitry to select the external device as a target device for the wireless p2p connection in response to voice data received from the external device in a process of probing for the wireless p2p connection being consistent with voice data input through the microphone.
9229568
13541203
1
1. A method for employing touch gestures to control a web-based application, the method comprising: employing a browser running on a device with a touch-sensitive display to access content provided via a website; determining default touch gestures used by the device to manipulate the content; determining a context associated with the content, including ascertaining one or more user interface controls to be presented via a display screen used to present the content, and providing a first signal in response thereto; determining from the context associated with the content and the default touch gestures a set of touch gestures that are non-conflicting with the default touch gestures when operating the one or more user interface controls, wherein the set of touch gestures includes at least one common touch gesture configured to operate at least one of the one or more user interface controls also operable by a default touch gesture; providing at least one function responsive to at least one of the one or more user interface controls receiving a touch gesture input from a touch-sensitive display; in response to the touch gesture input operating the at least one function, providing a second signal in response thereto; and using the second signal to manipulate the display screen in accordance with the context associated with the content presented via the display screen and the at least one function operated by the touch gesture input.
8731908
12974120
1
1. A method executed in a receiver in response to packets representing encoded speech of a speech signal, comprising: determining whether a first packet of the packets is an expected packet or an unexpected packet, wherein an expected packet includes a packet that is not lost, corrupted, erased or delayed, and wherein an unexpected packet includes a packet that is lost, corrupted, erased or delayed; when the determining concludes that the first packet is an expected packet, decoding the first packet to create a plurality of speech samples; delaying the plurality of speech samples by a delay; and sending the plurality of speech samples that has been delayed to an output port; when the determining concludes that a second packet is an unexpected packet, computing a pitch period estimate, using a number of speech samples that correspond to a most recent 20 msec span of speech samples of the speech signal; obtaining a segment of the plurality of speech samples in accordance with the pitch period estimate; performing an Overlap-Add process on the segment with an Overlap-Add segment, wherein the performing generates a first synthesized speech segment; delaying the first synthesized speech segment by the delay; and sending the first synthesized speech segment that has been delayed to the output port.
20180052828
15401126
0
1. A machine translation method comprising: converting a source sentence written in a first language to language-independent information using an encoder for the first language; and converting the language-independent information to a target sentence corresponding to the source sentence and written in a second language different from the first language using a decoder for the second language; wherein the encoder for the first language is trained to output language-independent information corresponding to the target sentence in response to an input of the source sentence.
20120143860
12959840
0
1. At a computer system including one or more processors and system memory, a method for identifying key phrases within a document, the method comprising: an act of accessing a document; an act of calculating the frequency of occurrence of a plurality of different textual phrases within the document, each textual phrase including one or more individual words of a specified language; an act of accessing a language model for the specified language, the language model defining expected frequencies of occurrence at least for individual words of the specified language; for each textual phrase in the plurality of different textual phrases, an act of computing a cross-entropy value for the textual phrase, the cross-entropy value computed from the frequency of occurrence of the textual phrase within the document and the frequency of occurrence of the textual phrase within the specified language; an act of selecting a specified number of statistically significant textual phrases from within the document based on the computed cross-entropy values; and an act of populating a key phrase data structure with data representative of each of the selected specified number of statistically significant textual phrases.
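The claim above (20120143860) ranks phrases by a cross-entropy combining in-document frequency with an expected language-model frequency. The exact formula is not given in the claim; the sketch below uses one common reading (document probability times negative log of the language-model probability) over single-word "phrases", both of which are assumptions.

```python
import math
from collections import Counter

# Hedged sketch of cross-entropy-style key-phrase scoring as one possible
# reading of the claim above. The per-phrase formula, the unigram "phrases",
# and the toy language model are assumptions, not the patent's method.
def key_phrases(doc_tokens, language_model, top_k=3):
    counts = Counter(doc_tokens)
    total = sum(counts.values())
    scores = {}
    for phrase, count in counts.items():
        p_doc = count / total                       # frequency within the document
        p_lang = language_model.get(phrase, 1e-8)   # expected frequency in the language
        scores[phrase] = p_doc * -math.log(p_lang)  # cross-entropy contribution
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

toy_lm = {"the": 0.05, "of": 0.03, "decoder": 0.0002, "neural": 0.0004}
print(key_phrases("the neural decoder beats the baseline decoder".split(), toy_lm))
```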
7885817
11170584
1
1. A computer-implemented dialog system training environment, comprising: a processor to execute components of a dialog system; a memory coupled to the processor; a user simulator that during the dialog system training provides at least one text to speech training output associated with an utterance, the output having variable qualities; and a dialog system that comprises: a speech model having a plurality of modifiable speech model parameters, the speech model receives the at least one text to speech training output as a speech model input related to the utterance and produces related speech model output features; a dialog action model having a plurality of modifiable dialog action model parameters, the dialog action model receives the related speech model output features from the speech model and produces related output actions, the plurality of modifiable speech model parameters, the plurality of modifiable dialog action model parameters, or a combination thereof, are based, at least in part, upon the utterance, the action taken by the dialog action model, or a combination thereof; and the dialog system identifies the utterance that is in need of clarification by initiating a repair dialog, wherein the utterance associated with the repair dialog is identified includes: determining what states of the repair dialog are reached from other states, the dialog system learns which states to go to when observing an appropriate speech and dialog features by trying all repair paths using the user simulator where a user's voice is generated using various text-to-speech (TTS) engines at adjustable levels; and determining which states of the repair dialog are failures or successes.
20090112581
12259857
0
1. A method of transmitting an encoded speech signal, the method comprising: predicting a current speech sample based on a previous speech sample using a weighted synthesis filter; determining an innovation sequence based on a prediction error between the predicted current speech sample and an actual current speech sample; selecting at least one codebook code associated with the innovation sequence; identifying an index of the selected codebook code; and transmitting an encoded speech signal including the codebook code index.
20070043571
11204510
0
1. A method used in conjunction with an automated speech response system comprising the steps of: establishing an interactive dialog session between a user and an automated speech response system, wherein an error score is established when the interactive dialog session is initiated; during said interactive dialog session, determining a plurality of responses to dialog prompts; detecting whether each of said responses is a valid response; assigning error weights to non-valid responses, wherein different non-valid responses are assigned different error weights; for each non-valid response, adjusting said error score based upon the assigned error weight of an associated non-valid response; and when a value of said error score exceeds a previously established error threshold, automatically transferring said user from the automated speech response system to a human agent.
20100095210
12637512
0
1. An apparatus for providing media content to a user over a computer network, the apparatus comprising a server in communication with the user's computer over the computer network, the server being configured to: a) convert the media content of one or more original files into one or more audio files prior to the user requesting the audio files; and b) provide access to the audio files to the user over the computer network.
8923829
13729786
1
1. A method comprising: receiving, by one or more devices, an indication, from a caller associated with a call, that speech of the caller is to be modified to deemphasize an accent of the caller; modifying, by the one or more devices and based on the received indication, the speech of the caller to deemphasize the accent of the caller; transmitting, by the one or more devices, the modified speech to a callee associated with the call; receiving a second indication from the caller that speech of the callee is to be modified to deemphasize an accent of the callee; modifying, in response to the second indication, the speech of the callee to deemphasize the accent of the callee; and outputting the modified speech of the callee to the caller.
8837835
14159110
1
1. A computer-implemented system for grouping documents, the system comprising: a non-transitory document storage system comprising computer memory configured to store a plurality of documents, wherein each document of the plurality of documents comprises distinct character types; a computerized matching unit comprising one or more hardware processors, wherein the computerized matching unit is configured to access the non-transitory document storage system and generate: a first indicator of a common character count between a first document of the plurality of documents and a second document of the plurality of documents, wherein the common character count corresponds to a number of distinct character type occurrences in both the first document and the second document; a second indicator of a character variance count between the first document and the second document, wherein the character variance count corresponds to differences in a number of occurrences of distinct character types in both the first document and the second document; a third indicator of a missing character count between the first document and the second document, wherein the missing character count corresponds to a number of distinct character type occurrences in the first document and not in the second document; a single indicator by combining at least the first indicator, the second indicator, and the third indicator, wherein the computerized matching unit is configured to compare the single indicator to a threshold indicator to determine whether there is a match between the first document and the second document; and a grouping based at least in part on the single indicator; and a matching reporting unit configured to report the grouping generated by the computerized matching unit to a user.
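The claim above (8837835) defines three character-count indicators and a combined match indicator. A minimal sketch follows; how the three indicators are weighted into the single indicator, and the threshold value, are not specified in the claim, so the unweighted combination below is an assumption.

```python
from collections import Counter

# Minimal sketch of the three indicators in the claim above. The way they are
# combined into a single indicator (and the threshold) is an assumption.
def single_indicator(doc_a: str, doc_b: str) -> int:
    counts_a, counts_b = Counter(doc_a), Counter(doc_b)
    shared = set(counts_a) & set(counts_b)
    common = len(shared)                                            # first indicator: common character count
    variance = sum(abs(counts_a[c] - counts_b[c]) for c in shared)  # second: character variance count
    missing = len(set(counts_a) - set(counts_b))                    # third: missing character count
    return common - variance - missing                              # assumed unweighted combination

THRESHOLD = 0  # hypothetical threshold indicator
print(single_indicator("invoice 2023-04", "invoice 2023-05") >= THRESHOLD)
```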
8218751
12240119
1
1. A conference call noise identification and reduction system comprising: a block module adapted to block audio from one or more conference call participants, the blocking occurring at one or more of a near-end and a conference bridge to allow a conference call participant to identify a source of noise; and one or more of a tune module, filter module and mute module selectively operable at the conference bridge for each conference call participant identified by the blocking to reduce the source of noise associated with the conference call participants identified by the blocking, wherein when a first conference call participant selectively operates one or more of the one or more of the tune module, filter module and mute module, the selective operation only affects the audio to the first conference call participant, with the audio to other conference call participants remaining unchanged, and, if the first conference call participant sounds acceptable to a second conference call participant, but does not sound acceptable to a third conference call participant, then the third conference call participant can adjust the first conference call participant to the third conference call participant transmission parameters without affecting the first conference call participant to the second conference call participant transmissions.
5570528
08488647
1
1. A voice activated locking apparatus for a weapon having handgrips and a trigger that when pulled activates a discharge assembly causing said weapon to discharge, said apparatus comprising: a microphone, attached to one of said handgrips on said weapon, wherein said microphone is positioned to receive an operator's voice having a speech pattern; locking means, connected to the discharge assembly of said weapon, for preventing the activation of said weapon when said trigger is pulled; voice recognition means, connected to said microphone and said locking means, for evaluating the voice received by said microphone to verify that the speech pattern of the voice corresponds to only that of an authorized operator; operator interface means, connected to said voice recognition means, for initiating recording of the authorized operator's voice pattern by said voice recognition means, such that if the voice pattern received by said microphone is authenticated by said voice recognition means, said voice recognition means causes said locking means to unlock said weapon.
9691401
15446524
1
1. A method for reconstructing a wideband audio signal, the method comprising: decomposing a lowband audio signal into a plurality of complex subband signals with an L-channel analysis filterbank, each of the plurality of complex subband signals representing a frequency channel of the analysis filterbank; generating a highband audio signal by patching a number of consecutive complex subband signals, wherein the generating includes: frequency translating a complex subband signal in a source area channel of the lowband audio signal having an index i to a reconstruction range channel having an index j of the highband audio signal, and frequency translating a complex subband signal in a source area channel of the lowband audio signal having an index i+1 to a reconstruction range channel having an index j+1 of the highband audio signal; adjusting a spectral envelope of the highband audio signal to a desired level; combining the lowband audio signal and the highband audio signal with a Q·L-channel synthesis filterbank to generate the wideband audio signal, wherein the lowband audio signal has frequency components below a crossover region and the highband audio signal has frequency components above the crossover region, and wherein Q is chosen so that Q·L is an integer value.
20160293158
15185304
0
1. A method comprising: nominating, via a processor configured to use a partially observable Markov decision process in parallel with a conventional dialog state, a set of dialog actions and a set of contextual features; and generating an audible response in a dialog between a user and a spoken dialog system based at least in part on the set of contextual features.
8126561
12613094
1
1. A method of operating an implantable medical device chronically implanted in a vessel wherein the device includes a pressure sensor, a neural stimulator and a power supply, the method comprising: sensing a pressure within the vessel using the pressure sensor; stimulating a neural target using the neural stimulator; and recharging the power supply using ultrasound signals.
20090030680
11781285
0
1. A method of indexing speech data, the method comprising: indexing word transcripts, including a timestamp for a word occurrence; indexing sub-word transcripts, including a timestamp for a sub-word occurrence; wherein a timestamp in the index indicates the time of occurrence of the word or sub-word in the speech data; and wherein word and sub-word occurrences can be correlated using the timestamps.
20020091489
09756314
0
1. A method of ordering a set of kernels in a multi-dimensional data space, wherein the method includes: placing an ordered set of neurons in initial positions in the multi-dimensional data space; training the ordered set of neurons on the set of kernels; determining a number of kernels attached to each neuron in the ordered set; augmenting the ordered set of neurons by replacing each neuron with an ordered replacement set of neurons, wherein the replacement set includes a number of neurons that equals a number of kernels attached to the neuron being replaced; repeating said training, determining, and augmenting with the augmented set of neurons until each neuron has no more than one kernel attached.
8879761
13302673
1
1. A method for outputting audio from a plurality of speakers associated with an electronic device, comprising: determining an orientation of video being output for display by the electronic device, wherein the orientation of video is independent of an orientation of the electronic device; using the determined orientation of video to determine a first set of speakers generally on a left side of the video being output for display by the electronic device; using the determined orientation of video to determine a second set of speakers generally on a right side of the video being output for display by the electronic device; routing left channel audio to the first set of speakers for output therefrom; and routing right channel audio to the second set of speakers for output therefrom.
9924451
14957336
1
1. A method for communicating half-rate encoded voice frames, the method comprising: receiving, by a digital signal processor, a half-rate encoded voice frame; determining, by the digital signal processor, a network access code; encoding, by the digital signal processor, a network identifier based on the network access code; scrambling, by the digital signal processor, the network identifier to generate a scrambled network identifier; generating, by the digital signal processor, an erasure pattern including a deliberately-introduced error; generating, by the digital signal processor, a half-rate embedded voice code word based on the erasure pattern and the half-rate encoded voice frame; and generating, by the digital signal processor, a half-rate embedded logical data unit based on the half-rate embedded voice code word and the scrambled network identifier.
20170104876
15388471
0
1. A non-transitory processor-readable medium storing code representing instructions to be executed by a processor, the code comprising code to cause the processor to: receive, at a first time and via an asynchronous communication mode, a first network communication from an electronic device of a user, the first network communication being associated with a customer service transaction; select a first agent from a set of agents based on the first network communication and a set of characteristics stored as associated with the first agent in a database; route the first network communication to an electronic device of the first agent based on the selecting the first agent; update a set of asynchronous communication sessions of a work list of the first agent to include a session of the customer service transaction, the work list of the first agent includes an indication of the set of asynchronous communication sessions and an indication of a set of live communication sessions actively assigned to the first agent, the work list including an indication of the first network communication and a context of the customer service transaction as associated with the session associated with the customer service transaction; receive, at a second time after the first time, a request to initiate a communication with the first agent via a live communication mode; route the request to a second agent based on a number of live communication sessions from the set of live communication sessions actively assigned to the first agent at the second time being greater than a threshold; and update a set of live communication sessions of a work list of the second agent to include the session associated with the customer service transaction.
9913038
15394815
1
1. A multi-channel headphone, comprising: a housing having multiple audio output holes formed in the housing; multiple speaker units mounted inside the housing, each having a sound-generating part mounted thereon, wherein sound produced by one of the sound-generating parts directly travels through one of the audio output holes, and sound produced by the rest of sound-generating parts indirectly travels through the rest of the audio output holes; wherein the housing has a first compartment, a second compartment and a third compartment defined inside the housing, wherein the second compartment and the third compartment are located at two sides of the first compartment; a first sound wall formed on the inner wall of the second compartment; and a second sound wall formed on the inner wall of the third compartment; the audio output holes include a first audio output hole, a second audio output hole and a third audio output hole respectively communicating with the first compartment, the second compartment and the third compartment; and the speaker units include a first speaker unit mounted inside the first compartment, wherein sound produced by a sound-generating part of the first speaker unit directly travels through the first audio output hole; a second speaker unit mounted inside the second compartment, spaced apart from an inner wall of the second compartment and corresponding to the first sound wall, wherein sound produced by a sound-generating part of the second speaker unit is refracted by the first sound wall; and a third speaker unit mounted inside the third compartment, spaced apart from an inner wall of the third compartment corresponding to the second sound wall, wherein sound produced by a sound-generating part of the third speaker unit is refracted by the second sound wall.
8935347
13651323
1
1. A method for presenting notifications, comprising: at a computer system: obtaining message information, the message information representing a set of messages, and an importance score associated with each respective message in the set of messages, wherein the importance score is generated based at least in part on a global importance prediction model and a user importance prediction model; in accordance with a determination that the set of messages includes one or more unread priority messages, wherein priority messages comprise messages with which the associated importance score satisfy one or more predefined message importance criteria: presenting a new mail notification, wherein the global importance prediction model includes a social graph-related weight, the user importance prediction model is based on information associated with a single user, and the global importance prediction model is based on information associated with a plurality of users.
20160300557
15187056
0
1. A method for processing information, comprising: triggering a first operation; downloading a first audio file and a first text file matching the first audio file in response to the first operation; partly truncating the first audio file to obtain a first audio clip according to first indication information for identifying a truncating start position and second indication information for identifying a truncating end position; triggering a second operation; playing the first audio clip and dynamically displaying a text information part in the first text file corresponding to the first audio clip synchronously in response to the second operation; acquiring voice information of a user while playing the first audio clip; and synthesizing the first audio clip and the voice information into a first acquisition result.
20120192143
13011448
0
1. A computer, comprising: a processor configured to execute computer instructions; and memory comprising: a transformation software application configured for receiving an input model conforming to a modeling language in which model elements playing pattern roles are detected, and producing an output model that concisely reports occurrences of those patterns by utilizing transformation logic of the transformation software application and a pattern specification configured as a transformation relation; the processor is configured to execute the transformation software application by utilizing the pattern specification to detect pattern occurrences in the input model; and the processor is configured to report the pattern occurrences in the output model, wherein the pattern occurrences are instances of detecting patterns in the input model.
8737518
12940329
1
1. A method of compressing data for transmission in a first direction across a communications channel, the data representing channel conditions for the communications channel in a direction opposite to the first direction, comprising: arranging the data as a matrix comprising a number of orthonormal vectors derived from a channel matrix; determining a singular value decomposition of a subset of the orthonormal matrix to generate matrices respectively of left and right singular vectors, the number of vectors in the subset being equal to the order of the vectors; and right multiplying the remainder orthonormal vectors not included in the singular value decomposition by a matrix product of the matrix of right singular vectors and the matrix of left singular vectors to generate a matrix of compressed data.
8706493
13179671
1
1. A controllable prosody re-estimation system implemented in a computer system having at least a processing device and an input device, comprising: a controllable prosody parameter interface responding to the input device for loading a controllable parameter set; and a speech/text to speech (STS/TTS) core engine, said core engine including at least a prosody prediction/estimation module, a prosody re-estimation module and a speech synthesis module, at least one of which is executed by said processing device, wherein said prosody prediction/estimation module predicts or estimates prosody information according to the input text/speech, and transmits the predicted or estimated prosody information to said prosody re-estimation module; said prosody re-estimation module produces new prosody information according to said input controllable parameter set and predicted/estimated prosody information, after which said prosody re-estimation module transmits said new prosody information to said speech synthesis module to generate synthesized speech, wherein said system further constructs a prosody re-estimation model, and said prosody re-estimation module uses said prosody re-estimation model to re-estimate said prosody information so as to produce said new prosody information, wherein said prosody re-estimation model is expressed in the following form: X_rst = Δμ + [μ_src + (X_src − μ_src)ρ×γ], wherein X_src is prosody information generated by a source speech, X_rst is the new prosody information, μ_src is the mean of prosody of a source corpus, and (Δμ, ρ, γ) are three controllable parameters.
9704486
13711510
1
1. A system comprising: an audio input module; an audio detection module in communication with the audio input module; a speech detection module in communication with the audio detection module; a wakeword recognition module in communication with the speech detection module; and a network interface module in communication with the wakeword recognition module, wherein: the audio detection module is configured to: receive audio input from the audio input module; determine a volume of at least a portion of the audio input; cause the audio input module to increase a sampling rate of the audio input based at least in part on the volume exceeding a threshold; and cause activation of the speech detection module based at least in part on the volume exceeding the threshold; the speech detection module is configured to determine a first score indicating a likelihood that the audio input comprises speech and cause activation of the wakeword recognition module based at least in part on the score; and the wakeword recognition module is configured to: determine a second score indicating a likelihood that the audio input comprises a wakeword; and cause activation of a network interface module based on the second score by providing power to the network interface module; and the network interface module is configured to transmit at least a portion of the obtained audio input to a remote computing device.
20020052870
09863424
0
1. An apparatus for identifying one or more portions of data in a database for comparison with a query input by a user, the query and the portions of data each comprising a sequence of sub-word units, the apparatus comprising: a memory for storing data defining a plurality of sub-word unit classes, each class comprising sub-word units that are confusable with other sub-word units in the same class; a memory for storing an index having a plurality of entries, each of which comprises: (i) an identifier for identifying the entry; (ii) a key associated with the entry and which is related to the identifier for the entry in a predetermined manner; and (iii) a number of pointers which point to portions of data in the database which correspond to the key for the entry; wherein each key comprises a sequence of sub-word unit classifications which is derived from a corresponding sequence of sub-word units appearing in the database by classifying each of the sub-word units in the sequence into one of the plurality of sub-word unit classes; means for classifying each of the sub-word units in the input query into one of the plurality of sub-word unit classes and for defining one or more sub-sequences of query sub-word unit classifications; means for determining a corresponding identifier for an entry in said index for each of said one or more sub-sequences of query sub-word unit classifications; means for comparing the key associated with each of the determined identifiers with the corresponding sub-sequence of query sub-word unit classifications; and means for retrieving one or more pointers from said index in dependence upon the output of said comparing means, which one or more pointers identify said one or more portions of data in the database for comparison with the input query.
6012158
08770643
1
1. A decoding apparatus for decoding an error correction code interleaved on a transmitting side and deinterleaved on a receiving side, comprising: a determining circuit for checking each word of the error correction code to make determination as to whether decoding is acceptable or not according to a result of decoding; an error position detector for obtaining error position data by detecting an error position in decoding bit by bit according to the result of decoding; an estimator for estimating an error position according to error position data detected by said error position detector in a word adjoining a correction object word when said determining circuit determines that decoding is not acceptable; a correcting circuit for correcting said correction object word according to the error position estimated by said estimator; and a redecoder for executing redecoding processing according to a result of correction by said correcting circuit.
20140244262
14185448
0
1. A voice synthesizing method comprising: a determining step of determining a manipulation position which is moved according to a manipulation of a user; and a generating step of generating, in response to an instruction to generate a voice in which a second phoneme follows a first phoneme, a voice signal so that vocalization of the first phoneme starts before the manipulation position reaches a reference position and that vocalization from the first phoneme to the second phoneme is made when the manipulation position reaches the reference position.
20060150097
11026629
0
1. A method for processing a received message, the method comprising the steps of: associating the message with a default language, wherein the default language corresponds to a default codepage; identifying portions of the message having an associated language key, wherein each of the language keys corresponds to a unique language key codepage; and converting a language of each of the identified portions of the message with the corresponding language key codepage and converting a language of portions of the message not having an associated language key with the default codepage.
20150100157
14390746
0
1. A humanoid robot, comprising: i) at least one sensor selected from a group including first sensors of the sound type and second sensors, of at least one second type, of events generated by at least one user of said robot, ii) at least one event recognition module at the output of said at least one sensor and, iii) at least one module for generating events towards said at least one user, a module for dialog with said at least one user, said dialog module receiving as input the outputs of said at least one recognition module and producing outputs to said event generation module selected from a group including speech, movements, expressions and emotions, wherein said robot further includes an artificial intelligence engine configured for controlling the outputs of the event generation module according to a context of dialog and variables defining a current and a forecast configuration of the robot.
8452597
13621068
1
1. A method comprising: determining whether a mobile computing device is receiving operating power from an external power source, wherein the mobile computing device has a trigger word detection subroutine that is activatable by a user input and automatically in response to determining that the mobile computing device is receiving external power; and in response to determining that the mobile computing device is receiving operating power from the external power source, activating a trigger word detection subroutine, wherein the trigger word detection subroutine includes: receiving spoken input via a microphone of the mobile computing device, based on speech recognition performed on the spoken input, obtaining text, determining whether the text includes one or more trigger words associated with a voice command prompt application, and in response to determining that the text includes the one or more trigger words associated with the voice command prompt application, launching the voice command prompt application, wherein the voice command prompt application is configured to receive via the microphone additional spoken input that causes the mobile computing device to launch one or more other applications, and wherein launching the voice command prompt application comprises displaying a voice command prompt on the mobile computing device.
20090171949
12347240
0
1. A method for determining a linguistic preference between two or more phrases, comprising: submitting each of the phrases as a search string to at least one search engine; receiving search results from each of the at least one search engine for each submitted search string; comparing total hit values of each search result; and displaying, to a user, one of the phrases associated with a greatest total hit value as a preferred phrase.
20060238520
11428515
0
1. A method for mapping gestures performed on a multi-touch surface to graphical user interface commands, the method comprising generating a pan command in response to whole hand translation.
8180633
12039965
1
1. A method for semantic extraction using neural network architecture, comprising: indexing an input sentence and providing position information for a word of interest and a verb of interest; converting words into vectors using features learned during training; integrating verb of interest and word of interest position relative to the word to be labeled by employing a linear layer that is adapted to the input sentence; and applying linear transformations and squashing functions to the vectors to predict semantic role labels.
8180627
12166647
1
1. An apparatus for clustering process models each consisting of model elements comprising a text phrase which describes in a natural language a process activity according to a process modeling language grammar and a natural language grammar, wherein said apparatus comprises: (a) a process object ontology memory for storing a process object ontology; (b) a distance calculation unit for calculating a distance matrix employing said process modeling language grammar and said natural language grammar, wherein said distance matrix consists of distances each indicating a dissimilarity of a pair of said process models; and (c) a clustering unit which partitions said process models into a set of clusters based on said calculated distance matrix.
5587718
08260817
1
1. A method for automatically discovering and designating targets for an operator of a weapons system, comprising: (a) automatically scanning for existence and location of a target and thereby acquiring signals indicative of existence and location of the target; (b) transmitting signals indicative of existence and location of a target, to a portable receiver that is collocated with a weapons system operator who is wearing a headset unit which includes stereophonic headphones operatively connected with the receiver, so that the operator receives audio signals, including digitized voice signals, which vary depending on the existence and location of the target as well as on spatial orientation of the headset unit; (c) the operator orienting his or her head towards a direction in which the weapons system is to be fired at said target by moving so as to achieve a predetermined modification of said audio signals, including digitized voice signals; (d) while conducting steps (a) and (b), acquiring signals indicative of existence and location of a higher priority target and transmitting signals indicative of existence and location of said higher priority target to said receiver; and (e) the operator interrupting step (c) by rapid large-scale head rotation in response to audio signals received in step (d), and then conducting step (c) in regard to the audio signals received in step (d), for orienting their head towards the direction in which the weapons system is to be fired at said higher priority target.
9363372
14320743
1
1. A method for personalizing voice assistant, comprising steps of: activating a voice module having a personal name; inputting a voice message to said voice module, said voice message including said personal name; extracting voiceprint parameters after said step of inputting a voice message to said voice module; recognizing said voice message to recognize said voice message according to said voiceprint parameters for producing recognition data, said recognition data is configured to build system setup; and said voice module converting said personal name of said recognition data to an intelligent conversion name; wherein said intelligent conversion name triggering an intelligent conversion module of a server.
8914398
13223209
1
1. A computer-implemented method, comprising: receiving text input, the text input including content associated with an input source; providing the text input to a keyword suggestion tool, wherein the keyword suggestion tool generates one or more keywords based on the text input; applying a text reduction function to the text input to generate a reduced text that is a subset of the text input, wherein the text reduction function is based on a term importance score of terms in the text input; providing the reduced text to the keyword suggestion tool, wherein the keyword suggestion tool generates one or more keywords based on the reduced text, the one or more keywords generated based on the reduced text generated independently from the one or more keywords generated based on the text input; and generating a keyword set output from a combination of the one or more keywords based on the text input and the one or more keywords based on the reduced text.
20050197833
11095605
0
1. A CELP-based speech encoder that performs encoding by decomposing one frame into a plurality of subframes, comprising: an LPC synthesizer that obtains synthesized speech by filtering an adaptive excitation vector and a stochastic excitation vector stored in an adaptive codebook and in a stochastic codebook using LPC coefficients obtained from input speech; a gain calculator that calculates gains of said adaptive excitation vector and said stochastic excitation vector; a parameter coder that performs vector quantization of the adaptive excitation vector and the stochastic excitation vector obtained by comparing distortions between said input speech and said synthesized speech, and a pitch analyzer that performs pitch analyses of a plurality of subframes in the frame respectively, before performing an adaptive codebook search for the first subframe, calculating correlation values and finding a value most approximate to the pitch period using said correlation values.
20090228126
12395265
0
1. An apparatus for annotating a line-based document, wherein said line-based document comprises audio data, said apparatus comprising: an audio codec coupled to an audio output device; a voice recognition function coupled to an audio input, said voice recognition function configured to detect one or more audible document navigation commands and one or more audible annotation commands; a navigation function responsive to a detected document navigation command received from said voice recognition function, said detected document navigation command comprising a desired line identifier, said navigation function configured to determine a desired audio time code associated with said desired line identifier and to direct said audio codec to play back said audio data from said desired audio time code; an annotation function responsive to a detected annotation command received from said voice recognition function, said annotation function configured to capture an audible annotation via said audio input and to store said audible annotation as an audio annotation file; and an index generator configured to add to an index file an annotation link having a first reference to said audio annotation file and a second reference to an associated line of said line-based document.
20120330651
13530149
0
1. A voice data transferring device which intermediates between a terminal device and a voice recognition server, in which the terminal device: records a voice of a user when the user is speaking; transmits the speech voice as a voice data; receives a recognition result of the transmitted voice data; and outputs the recognition result to the user, and in which the voice recognition server: receives the voice data from the terminal device; recognizes the voice data; and transmits the recognition result of the voice data, the voice data transferring device comprising: a storage unit that stores therein a first parameter value used for performing a data manipulation processing on the voice data and a voice data for evaluation used for evaluating voice recognition performance of the voice recognition server; a data processing unit that performs a data manipulation processing on the voice data for evaluation using the first parameter value, synthesizes a first voice data from the voice data for evaluation, performs a data manipulation processing on the voice data received from the terminal device using the first parameter value, and synthesizes a second voice data from the voice data received from the terminal device, a server communication unit that transmits the first voice data to the voice recognition server, receives a first recognition result from the voice recognition server, transmits the second voice data to the voice recognition server, and receives a second recognition result from the voice recognition server; a terminal communication unit that transmits the second recognition result of the second voice data to the terminal device; and a parameter change unit that updates the first parameter value stored in the storage unit, based on the received first recognition result of the first voice data.
20130246322
13441138
0
1. A computer-implemented method of training a neural network, comprising: training a first neural network of a self organizing map type with a first set of first text documents each containing one or more keywords in a semantic context to map each document to a point in the self organizing map by semantic clustering; determining, for each keyword occurring in the first set, all points in the self organizing map to which first documents containing said keyword are mapped, as a pattern and storing said pattern for said keyword in a pattern dictionary; forming at least one sequence of keywords from a second set of second text documents each containing one or more keywords in a semantic context; translating said at least one sequence of keywords into at least one sequence of patterns by using said pattern dictionary; and training a second neural network with said at least one sequence of patterns.
20110082684
12820061
0
1. A method for providing a trusted translation, comprising receiving a human-generated translation of a document from a source language to a target language; generating a trust level prediction of the human-generated translation by executing a quality-prediction engine stored in memory, the trust level associated with translational accuracy of the human-generated translation; and outputting the human-generated translation and the trust level.
20020026746
09849756
0
1. A method of using micro-variations of a biological living plant organism to generate music, such method comprising the steps of: detecting a plant microvoltage across a varying resistance of the biological living plant organism within a Wheatstone bridge; generating a feedback signal from an output of an external MIDI sound generator; subtracting the feedback signal from the plant microvoltage to provide a difference signal; and providing the difference signal as a drive signal to the MIDI sound generator to generate musical tones.
20100332428
12781939
0
1. In a computing system, a method for: analyzing an electronic document to generate document identifying data; classifying the electronic document in one or more categories by applying a classification rule to the document identifying data; displaying the classified electronic document in the one or more categories; and updating the classification rule based on input from a user.
20030197630
10127643
0
1. A method for encoding data transmitted over a communications channel, comprising: pre-loading an encoder dictionary with a set of character strings expected to appear in input data to be encoded; and encoding the input data with the set of expected character strings pre-loaded in the encoder dictionary.
9531995
14746497
1
1. A system comprising: a processor; a camera; and memory, accessible by the processor and storing instructions that are executable by the processor to perform acts comprising: causing presentation of first content on a display surface; receiving, from the camera, an image that includes at least a portion of the display surface and a reflector having known dimensions to reflect a representation of a face of a user, the user being in a field of view of the camera; analyzing the image; determining a portion of the reflector associated with the face of the user; and displaying second content associated with the face of the user.
8954319
14338550
1
1. A method comprising: at each turn in a dialog, nominating via a processor, using a partially observable Markov decision process in parallel with a conventional dialog state, a set of allowed dialog actions and a set of contextual features; and generating a response based on the set of contextual features and a dialog action selected, via a machine learning algorithm, from the set of allowed dialog actions.
10062378
15441973
1
1. A computer-implemented method performed by a speech recognition system having at least a processor, the method comprising: obtaining, by the processor, a frequency spectrum of an audio signal data; extracting, by the processor, periodic indications from the frequency spectrum; inputting, by the processor, the periodic indications and components of the frequency spectrum into a neural network; estimating, by the processor, sound identification information from the neural network; and performing, by the processor, a speech recognition operation on the audio signal data to decode the audio signal data into a textual representation based on the estimated sound identification information, wherein the neural network includes a plurality of fully-connected network layers having a first layer that includes a plurality of first nodes and a plurality of second nodes, and wherein the method further comprises training the neural network by initially isolating the periodic indications from the components of the frequency spectrum in the first layer by setting weights between the first nodes and a plurality of input nodes corresponding to the periodic indications to 0.
20090265303
12104168
0
1. A computer-implemented method for identifying superphrases in a set of candidate phrases with reference to a set of seed phrases, each of the candidate phrases comprising one or more candidate phrase words, and each of the seed phrases comprising one or more seed phrase words, the method comprising: sorting all distinct ones of the seed phrase words in the set of seed phrases; indexing each seed phrase in the set of seed phrases by sorting the corresponding seed phrase words, and indexing the seed phrase with reference to the sorted distinct seed phrase words; and determining whether each candidate phrase is a superphrase of one or more of the seed phrases by sorting only the corresponding candidate phrase words included among the distinct seed phrase words, and determining whether all of the seed phrase words of any of the indexed seed phrases are included among the sorted candidate phrase words.
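A compact reading of the indexing and superphrase test in this claim, using Python sets and sorting: restrict each candidate's words to the distinct seed vocabulary and check whether they cover all words of any indexed seed phrase. This is a simplification for illustration, not the patented data structure.

    def find_superphrases(candidates, seeds):
        """Return candidates that contain every word of at least one seed phrase."""
        seed_vocab = sorted({w for s in seeds for w in s.split()})   # sorted distinct seed words
        indexed_seeds = [sorted(set(s.split())) for s in seeds]      # each seed indexed by its sorted words
        results = []
        for cand in candidates:
            cand_words = sorted(set(cand.split()) & set(seed_vocab)) # only words among the seed vocabulary
            if any(set(seed) <= set(cand_words) for seed in indexed_seeds):
                results.append(cand)
        return results

    print(find_superphrases(["fast red sports car", "slow blue truck"], ["red car", "green bike"]))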
20140344279
14201134
0
1. A computer-implemented method comprising: analyzing a cluster of conceptually-related portions of text to develop a model; calculating a novelty measurement between a first identified conceptually-related portion of text and the model; and transmitting a second identified conceptually-related portion of text and a score associated with the novelty measurement.
20150050633
14092165
0
1. A computer-implemented method comprising: selecting, by a computer-based system for displaying applications in a graphical user interface (“GUI”), a plurality of applications to be displayed in the GUI; determining, by the computer-based system, a relative size for each of the plurality of applications; and formatting, by the computer-based system, the GUI such that the plurality of applications substantially fills the GUI.
20090175509
12074985
0
1. A personal computing device comprising: a user interface for i) generating one or more user information outputs and ii) receiving one or more user information inputs, an image sensor for capturing one or more images, and a processor for i) detecting one or more faces in the one or more images and ii) controlling at least one of the generation of the one or more user information outputs and the receiving of the one or more user information inputs in response to detecting the one or more faces.
8666895
13401117
1
1. An apparatus comprising: an input device, configured to receive input from a user; a communication device configured to transmit wireless signals to a transaction device; a memory comprising predetermined payment information stored therein comprising a user defined action for authorizing a wireless payment; and a processor communicably coupled to the input device, the communication device and the memory, wherein the processor is configured to operate computer instruction code to: receive a parameter from the user, wherein the parameter comprises a maximum time since a previous authentication; receive transaction information from the transaction device related to a transaction, wherein the transaction information comprises a current time; determine a time of a most recent authentication; determine a duration between the time of the most recent authentication and the current time; compare the duration to the parameter to determine when the duration is less than the maximum time since the previous authentication; receive a first input from the user; determine if the first input matches the user defined action for authorizing a wireless payment; and use the communication device to wirelessly transmit the predetermined payment information and authorize payment when the duration is less than the maximum time since the previous authentication and the first input matches the user defined action stored in the memory.
9984685
14535764
1
1. A method for accepting or rejecting hypothesis words in a hypothesis part using an adjustable acceptance threshold as part of a speech recognition system, the method comprising: receiving a single speech input from a user, the speech input comprising a first speech input part and a second speech input part, the first speech input part and the second speech input part each having information independent from the other speech input part; processing the single speech input to generate a single hypothesis comprising a sequence of a first hypothesis part corresponding to the first input part and a second hypothesis part corresponding to the second input part, each of the first hypothesis part and the second hypothesis part having one or more hypothesis words, and each hypothesis word having a corresponding confidence score; independently comparing each of the first hypothesis part and the second hypothesis part with a first expected response part and a second expected response part, respectively, the first expected response part and the second expected response part having information different and independent from the other expected response part, and the first expected response part being independently compared with the first hypothesis part and the second expected response part being independently compared with the second hypothesis part, and using boundaries between the first or the second expected response parts to determine boundaries between the first or the second hypothesis parts respectively; adjusting an acceptance threshold for each hypothesis word in the first hypothesis part if the first hypothesis part matches word-for-word the first expected response part, otherwise not adjusting the acceptance threshold for each hypothesis word in the first hypothesis part, and independently adjusting an acceptance threshold for each hypothesis word in the second hypothesis part if the second hypothesis part matches word-for-word the second expected response part, otherwise not adjusting the acceptance threshold for each hypothesis word in the second hypothesis part; comparing the confidence score for each hypothesis word in each of the first hypothesis part and second hypothesis part to its acceptance threshold; and accepting or rejecting each hypothesis word in each of the first hypothesis part and the second hypothesis part based on the results of the comparison.
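The per-part threshold logic in this claim can be summarized as: compare each hypothesis part word-for-word with its expected response part and, only on an exact match, relax the acceptance threshold for that part's words. The threshold values in the sketch are invented; a real system would also derive part boundaries from the expected response, which is omitted here.

    def accept_hypothesis_parts(hyp_parts, expected_parts, confidences,
                                base_threshold=0.70, relaxed_threshold=0.40):
        """Accept/reject each hypothesis word, relaxing the threshold only for a
        part that matches its expected response part word-for-word."""
        decisions = []
        for hyp, expected, scores in zip(hyp_parts, expected_parts, confidences):
            threshold = relaxed_threshold if hyp == expected else base_threshold
            decisions.append([(word, score >= threshold) for word, score in zip(hyp, scores)])
        return decisions

    hyp = [["one", "two", "three"], ["seven", "B"]]
    exp = [["one", "two", "three"], ["seven", "C"]]
    conf = [[0.55, 0.62, 0.45], [0.65, 0.95]]
    print(accept_hypothesis_parts(hyp, exp, conf))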
10114619
14864866
1
1. A computer implemented method to develop a data model, the computer implemented method comprising: displaying contents of a data file as a text-based data object via a text editor interface of an integrated development environment (IDE), the displayed text-based data object including text values of one or more elements, one or more attributes, and one or more attribute values; and in response to detecting an input modifying the one or more attribute values of the text-based data object via the text editor interface, transforming the text-based data object into a graphical model including the one or more attributes modified via the text editor interface and visually representing relationships between the modified one or more attribute values, the one or more attributes, and the one or more elements, and displaying the graphical model via a graphical editor interface of the IDE, wherein the displaying the graphical model comprises simultaneously displaying the graphical editor interface and the text editor interface within a window of the IDE.
8285536
12533519
1
1. A computer-implemented method comprising: accessing, at a computing device including a processor, a translation hypergraph that represents a plurality of candidate translations, the translation hypergraph including a plurality of paths including nodes connected by edges; calculating, at the computing device, first posterior probabilities for each edge in the translation hypergraph; calculating, at the computing device, second posterior probabilities for each n-gram represented in the translation hypergraph based on the first posterior probabilities; and performing, at the computing device, decoding on the translation hypergraph using the second posterior probabilities to convert a sample text from a first language to a second language, where calculating the second posterior probabilities includes calculating: P(w|Ψ) = ∑_{E∈Ψ} ∑_{e∈E} f(e,w,E)·P(E|F), where P(w|Ψ) is the posterior probability of the n-gram w in the translation hypergraph; Ψ is a space defined by the translation hypergraph; E is a candidate translation; F is the sample text in the first language; e is an edge; and f(e,w,E) = 1 when w ∈ e, P(e|Ψ) > P(e′|Ψ), and e′ is an edge that precedes e on E; otherwise, f(e,w,E) = 0.
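A toy numeric illustration of the quantity defined by the formula: each candidate translation contributes its posterior P(E|F) once for every edge on it that carries the n-gram w and whose edge posterior exceeds that of the preceding edge. The explicit list-of-paths data structure below is an assumption for readability; the patent operates on the packed hypergraph, and the convention that an edge with no predecessor satisfies the condition is also an assumption.

    def ngram_posterior(w, candidates):
        """P(w|Psi) = sum over candidates E, and over edges e on E with f(e,w,E)=1,
        of P(E|F); f(e,w,E)=1 when w is on e and e's posterior exceeds its predecessor's."""
        total = 0.0
        for path_posterior, edges in candidates:        # edges: list of (ngrams_on_edge, edge_posterior)
            prev_posterior = float("-inf")              # first edge: condition taken as satisfied
            for ngrams, edge_posterior in edges:
                if w in ngrams and edge_posterior > prev_posterior:
                    total += path_posterior
                prev_posterior = edge_posterior
        return total

    candidates = [
        (0.6, [({"the cat"}, 0.9), ({"cat sat"}, 0.7)]),
        (0.4, [({"a cat"}, 0.5), ({"cat sat"}, 0.8)]),
    ]
    print(ngram_posterior("cat sat", candidates))       # only the second path's edge qualifies -> 0.4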
8874653
14270065
1
1. A method comprising: detecting, by extension circuitry, a guest personal mobile communication device that is positioned to wirelessly communicate with at least one of one or more wireless transceivers, wherein: the guest personal mobile communication device is a communication device of a guest user that is attempting to use a vehicle, the one or more wireless transceivers are located in the vehicle and are configured to communicate with personal mobile communication devices that are located within a passenger compartment of the vehicle, the extension circuitry is electrically connected to one or more road contact transceivers and the one or more wireless transceivers, the extension circuitry is configured to manage communications between the one or more road contact transceivers, the one or more wireless transceivers, in-pavement vehicle detection systems, and personal mobile communication devices, the one or more road contact transceivers are arranged so that at least one of the one or more road contact transceivers is within a predetermined distance from a first in-pavement vehicle detection system of a first type based on rotatable wheels of the vehicle being located on pavement at a position above the first in-pavement vehicle detection system, and the one or more road contact transceivers are configured to transmit information to and receive information from the first in-pavement vehicle detection system based on at least one of the one or more road contact transceivers being within the predetermined distance from the first in-pavement vehicle detection system; based on detection of the guest personal mobile communication device, monitoring, by the extension circuitry, for presence of an owner personal mobile communication device in position to wirelessly communicate with at least one of the one or more wireless transceivers, the owner personal mobile communication device having been registered to the extension circuitry as a device that is able to use the vehicle and authorize guests to use the vehicle; based on the monitoring for presence of the owner personal mobile communication device, detecting, by the extension circuitry, the owner personal mobile communication device in position to wirelessly communicate with at least one of the one or more wireless transceivers; based on detection of the owner personal mobile communication device, authorizing, by the extension circuitry, use of the vehicle by the guest personal mobile communication device and receiving, from the guest personal mobile communication device through at least one of the one or more wireless transceivers, guest vehicle settings stored by the guest personal mobile communication device, the guest vehicle settings defining preferences of the guest user for settings associated with use of the vehicle and settings associated with communications exchanged with in-pavement vehicle detection systems by the guest personal mobile communication device; after authorization of use of the vehicle by the guest personal mobile communication device, detecting, by the extension circuitry, a passenger personal mobile communication device that is positioned to wirelessly communicate with at least one of the one or more wireless transceivers, the passenger personal mobile communication device being a communication device of a passenger user who is located in the passenger compartment of the vehicle; based on detection of the passenger personal mobile communication device, receiving, from the passenger personal mobile communication 
device through at least one of the one or more wireless transceivers, passenger vehicle settings stored by the passenger personal mobile communication device, the passenger vehicle settings defining preferences of the passenger user for settings associated with use of the vehicle and settings associated with communications exchanged with in-pavement vehicle detection systems by the passenger personal mobile communication device; accessing, from electronic storage, vehicle rules that define permissible settings for the vehicle, the accessed vehicle rules having been defined based on communication with the owner personal mobile communication device through at least one of the one or more wireless transceivers; evaluating, by the extension circuitry, the received guest vehicle settings and the received passenger vehicle settings with respect to the accessed vehicle rules; based on the evaluation, determining, by the extension circuitry, current vehicle settings for settings associated with use of the vehicle by the guest user and the passenger user and settings associated with communications exchanged with in-pavement vehicle detection systems by the guest personal mobile communication device and the passenger personal mobile communication device, the current vehicle settings meeting a subset of the preferences of the guest user and a subset of the preferences of the passenger user; monitoring, by the extension circuitry, for an ability to connect with the first in-pavement vehicle detection system through at least one of the one or more road contact transceivers, the first in-pavement vehicle detection system being able to simultaneously connect with multiple vehicles through road contact transceivers; based on the monitoring for the ability to communicate with the first in-pavement vehicle detection system, detecting, by the extension circuitry, the ability to communicate with the first in-pavement vehicle detection system; based on detection of the ability to communicate with the first in-pavement vehicle detection system, adding, by the extension circuitry and in accordance with the current vehicle settings, the vehicle to a first ad hoc social group that includes the multiple vehicles simultaneously connected to the first in-pavement vehicle detection system through road contact transceivers; determining, by the extension circuitry, that the first in-pavement vehicle detection system has the first type; based on the current vehicle settings and the determination that the first in-pavement vehicle detection system has the first type, enabling, by the extension circuitry, the passenger personal mobile communication device to interact with the first ad hoc social group without revealing identifying information associated with the passenger personal mobile communication device; automatically, without user intervention, disconnecting, by the extension circuitry, from the first ad hoc social group based on the vehicle moving to a position in which the one or more road contact transceivers are outside of the predetermined distance from the first in-pavement vehicle detection system; detecting, by the extension circuitry, the ability to communicate with a second in-pavement vehicle detection system of a second type, the second in-pavement vehicle detection system being different than the first in-pavement vehicle detection system and the second type being different than the first type; based on detection of the ability to communicate with the second in-pavement vehicle detection system, adding, 
by the extension circuitry and in accordance with the current vehicle settings, the vehicle to a second ad hoc social group that includes multiple vehicles simultaneously connected to the second in-pavement vehicle detection system through road contact transceivers; determining, by the extension circuitry, that the second in-pavement vehicle detection system has the second type; and enabling, by the extension circuitry, the guest personal mobile communication device to interact with the second ad hoc social group based on the determination that the second in-pavement vehicle detection system has the second type.
8755506
11770969
1
1. A method comprising: joining a first participant, configured over an integrated call and chat conference platform, in a conference, wherein the first participant communicates over a voice session; converting the voice session, including a first communication by the first participant, into a text stream; storing the text stream; and joining a second participant, configured over the integrated call and chat conference platform, in the conference after an occurrence of the first communication, wherein the second participant communicates over a chat session and a presence server determines that the second participant is online, wherein presence information is updated periodically, and wherein the stored text stream is presented to the second participant during the conference for viewing.
9971769
15673694
1
1. A translation result providing method using a computer, the method comprising: generating, by a processor, candidate translation sentences by translating a source sentence of a source language into a target language using a machine translation model; classifying, by the processor, the candidate translation sentences into semantic categories, respectively, based on attributes of the candidate translation sentences; generating, by the processor, information regarding a personality of a user by analyzing user information on the Internet, the personality of the user being a service type or a writing style suitable for the user; predicting and automatically setting, by the processor, a specific semantic category, from among the semantic categories, based on the analyzed user information; and providing, by the processor, at least one of the classified candidate translation sentences as a translation result, wherein the providing includes displaying a first classified candidate translation sentence, from among the classified candidate translation sentences, which corresponds to the information in a first region of a screen and displaying a second classified candidate translation sentence, from among the classified candidate translation sentences, which does not correspond to the information in a second region of the screen, and the first region and the second region are visually distinguished from each other on the screen.
8782518
13101501
1
1. A computer-implemented medical diagnosis system, comprising: a. a memory and a display; b. a plurality of multilingual forms stored in the memory; c. the multilingual forms containing a plurality of diagnostic information in a plurality of predetermined diagnostic-information locations; d. two or more language selection buttons; e. a plurality of audio activation buttons, each audio activation button located adjacent to a diagnostic-information location; f. activation of one of said audio activation buttons causes the system to play an audio clip, wherein i. the audio clip explains the diagnostic information adjacent to the activated audio-activation-button, and ii. the audio clip explains the diagnostic information in the most recently selected output language.
10152533
13654976
1
1. A computer-implemented method comprising: receiving a user query comprising one or more query keywords; in response to receiving the user query, automatically determining, using a processor, a set of segment candidates based on the user query and an indexing structure, the indexing structure comprising a plurality of segment constraints associated with a corresponding segment candidate, the one or more segment constraints being one of a critical segment constraint, an exclusionary segment constraint and a supplemental segment constraint, wherein each of the one or more segment constraints comprises a listing of one or more critical keywords and at least one of one or more exclusionary keywords or one or more supplemental keywords, said critical segment constraint comprising one or more textual words conveying a concept synonymous to the corresponding segment candidate, said exclusionary segment constraint comprising one or more textual words conveying a concept not synonymous to the corresponding segment candidate; said determining comprising matching the one or more query keywords with the one or more critical keywords and the at least one exclusionary keyword or supplemental keyword of the corresponding segment candidate, said determining further comprising: generating a critical word group count and an exclusionary count for each of the one or more critical keywords, and at least one exclusionary keyword or supplemental keyword; generating a segment candidate based on a sum of the critical word group count and the exclusionary count; ranking, using the processor, the set of segment candidates based on a total number of critical and supplemental segment constraints for each of the set of segment candidates to generate a ranked set of segment candidates stored in a memory and retrievable by a set of program code executed by the processor; and providing a result associated with the ranked set of segment candidates.
9361881
14312116
1
1. A method comprising: analyzing acoustic features of a received audio signal from a communication device; identifying a repeating pattern of meta-data associated with the acoustic features, wherein the repeating pattern of meta-data comprises a speed of a caller associated with the communication device; classifying a background environment of the caller based on the acoustic features and the repeating pattern of meta-data, to yield a background environment classification; selecting an acoustic model matched to the background environment classification from a plurality of acoustic models; and performing speech recognition on the received audio signal using the acoustic model.
20080040693
11340288
0
1. A computer-generated graphical user interface for users with limited reading skills, comprising: a screen display operable for displaying graphical controls in one or more text modes; a plurality of pages, operable to be displayed on the screen display; a plurality of controls operable to be displayed on one or more of the plurality of pages, wherein each control is represented by an icon comprising a recognizable image, and each control has a related control help message, each control operationally able to respond with the control help message upon activation; a text-free mode implementer, operable to ensure that no text appears within the screen display; and a page help feature, operable to appear on each page of the plurality of pages, wherein the page help feature appears at substantially the same location on each page of the plurality of pages.
20130166287
13724700
0
1. A method for dual modes pitch coding implemented by an apparatus for speech/audio coding, the method comprising: coding pitch lags of a plurality of subframes of a frame of a voiced speech signal using one of two pitch coding modes according to a pitch length, stability, or both, wherein the two pitch coding modes include a first pitch coding mode with relatively high pitch precision and reduced dynamic range and a second pitch coding mode with relatively high pitch dynamic range and reduced precision.
4441201
06342311
1
1. A speech synthesis system comprising: input means for receiving frames of speech data, said frames of speech data comprising binary representations of pitch data, energy data, reflection coefficient data and coded frame rate data, wherein said coded frame rate data is indicative of a variable time interval between the start of a current frame of speech data and the start of the next successive frame of speech data; decoding means coupled to said input means for decoding said frame rate data; interpolator means coupled to said input means and to said decoding means for providing a variable number of interpolation calculations to define interpolated speech values between adjacent frames of speech data from last implemented speech data in which the number of interpolation calculations and the time interval between the respective starts of adjacent frames of speech data in a given instance are determined by said frame rate data; speech synthesizer means coupled to said interpolator means for selectively converting said frames of speech data and interpolated values thereof into analog speech signals representative of human speech; and audio means coupled to said speech synthesizer means for converting said analog signals representative of human speech into audible sounds.
7596767
11156873
1
1. A multimodal system for controlling electronic components, comprising: a general purpose computing system which is in communication with said electronic components via a computer network, said electronic components being separate from the computing system; a computer program comprising program modules executable by the computing system, said program modules comprising: an object selection module that identifies an object selected by a user via a pointing device associated with at least one camera and at least one light-emitting diode (LED), a gesture recognition module that recognizes one or more motions of the pointing device in three-dimensional space, the pointing device associated with at least one accelerometer, and a speech control module that identifies a component selected by a user, each of the object selection module, the gesture recognition module, and the speech control module providing inputs to an integration module that integrates said inputs to arrive at a unified interpretation of what object the user wants to control and what control action is desired.
6015947
09239102
1
1. A method of teaching music comprising: a) teaching rote understanding of musical skills to the student; b) teaching basic structural elements of music to the student by using a two-line staff; c) teaching a five-line staff to the student; d) teaching rhythm to the student; and e) teaching the integration of steps c) and d) to the student.
9269350
13956313
1
1. A method comprising: obtaining a plurality of audio channels provided by a plurality of microphones, the plurality of audio channels comprising at least one audio control channel and at least one audio output channel; performing voice recognition on the at least one audio control channel; detecting, based on the performed voice recognition on the at least one audio control channel, a voice keyword; and performing adaptive filtering, using at least one adaptive filter, to attenuate the detected voice keyword from the at least one audio output channel.
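One generic way to realize the attenuation step is a normalized-LMS canceller: estimate, from the control channel, the component of the output channel that correlates with it (e.g., the detected keyword) and subtract that estimate. The filter length, step size, and test signals below are assumptions; this is a standard adaptive canceller, not the patent's specific filter.

    import numpy as np

    def nlms_attenuate(control, output, taps=16, mu=0.05, eps=1e-8):
        """Remove from `output` whatever is linearly predictable from `control`."""
        w = np.zeros(taps)
        cleaned = np.zeros_like(output)
        for n in range(len(output)):
            x = control[max(0, n - taps + 1): n + 1][::-1]      # most recent control samples first
            x = np.pad(x, (0, taps - len(x)))
            estimate = w @ x
            error = output[n] - estimate                         # residual = attenuated output sample
            w += (mu / (eps + x @ x)) * error * x                # NLMS weight update
            cleaned[n] = error
        return cleaned

    rng = np.random.default_rng(0)
    keyword = np.sin(2 * np.pi * 0.05 * np.arange(400))          # stand-in for the keyword waveform
    speech = rng.normal(0, 0.3, 400)                             # stand-in for the wanted audio
    print(round(float(np.std(nlms_attenuate(keyword, speech + keyword))), 3))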
20130013304
13539380
0
1. A method of environmental noise compensation of a speech audio signal, the method comprising: estimating a fast audio energy level and a slow audio energy level in an audio environment, wherein the speech audio signal is not part of the audio environment; and applying a gain to the speech audio signal to generate an environment compensated speech audio signal, wherein the gain is updated based on the estimated slow audio energy level when the estimated fast audio energy level is not indicative of an audio event in the audio environment and the estimated gain is not updated when the estimated fast audio energy level is indicative of an audio event in the audio environment.
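The gating in this claim can be sketched with two exponential averages of the environment energy: a fast one that flags transient audio events and a slow one that tracks the noise floor, with the speech gain updated only while no event is flagged. The smoothing constants, the event test, and the noise-to-gain mapping are all invented for the sketch.

    def compensate(speech, environment, alpha_fast=0.5, alpha_slow=0.02, event_ratio=2.0):
        """Apply a noise-dependent gain to `speech`, freezing gain updates during events."""
        fast = slow = 1e-6
        gain = 1.0
        out = []
        for s, e in zip(speech, environment):
            energy = e * e
            fast = (1 - alpha_fast) * fast + alpha_fast * energy   # fast energy estimate
            slow = (1 - alpha_slow) * slow + alpha_slow * energy   # slow energy estimate
            if fast <= event_ratio * slow:                         # no audio event detected
                gain = 1.0 + 5.0 * slow                            # hypothetical noise-to-gain mapping
            out.append(gain * s)                                   # gain held constant during events
        return out

    print(compensate([0.1, 0.1, 0.1, 0.1], [0.01, 0.01, 0.9, 0.01]))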
20100332199
12494709
0
1. A method for analyzing a target electromagnetic signal radiating from a monitored system, comprising: monitoring the target electromagnetic signal using a set of antennas to obtain a set of received target electromagnetic signals from the monitored system; calculating a weighted mean of the received target electromagnetic signals using a first pattern-recognition model; subtracting the received target electromagnetic signals from the weighted mean of the received target electromagnetic signals to obtain a set of noise-reduced signals for the monitored system; and assessing the integrity of the monitored system by analyzing the noise-reduced signals using a second pattern-recognition model.
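The noise-reduction step amounts to: form a weighted mean across the antenna channels and take the difference between it and each received signal, leaving the part of each channel not explained by the ensemble. The fixed, uniform weights in the NumPy sketch stand in for the claim's first pattern-recognition model.

    import numpy as np

    def noise_reduce(signals, weights=None):
        """signals: (num_antennas, num_samples). Return per-antenna residuals of the
        weighted ensemble mean minus each received signal."""
        signals = np.asarray(signals, dtype=float)
        if weights is None:
            weights = np.full(signals.shape[0], 1.0 / signals.shape[0])  # stand-in for learned weights
        weighted_mean = weights @ signals                                 # (num_samples,)
        return weighted_mean - signals                                    # subtract each signal from the mean

    x = np.array([[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [0.9, 1.9, 3.1]])
    print(noise_reduce(x).round(3))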
7610199
11217912
1
1. A method for recognizing speech in an audio stream comprising a sequence of audio frames, the method comprising the steps of: continuously recording said audio stream to a buffer; receiving a command to recognize speech in a first portion of said audio stream, where said first portion of said audio stream occurs between a user-designated start point and a user-designated end point, and where said command is distinct from said audio stream; augmenting said first portion of said audio stream with one or more audio frames of said audio stream that do not occur between said user-designated start point and said user-designated end point to form an augmented audio signal; and outputting a recognized speech in accordance with said augmented audio signal.
7478042
10432237
1
1. A stationary noise period detecting apparatus comprising: a pitch history analyzer that classifies pitch periods of a plurality of past subframes into one or more classes in a way in which different pitch periods are classified to different classes, groups classes where a difference between the pitch periods classified to those classes is less than a predetermined first threshold into one group when there are a plurality of classes, and obtains a number of the groups as an analysis result; and a determiner that determines that a signal period where the analysis result is less than a predetermined second threshold is a speech period.
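The pitch-history analysis reduces to: take the distinct pitch periods of the recent subframes, merge values whose difference is below a first threshold into groups, and report a stationary-noise period when the group count is below a second threshold. The thresholds and the single-linkage grouping over sorted values below are illustrative choices.

    def is_stationary_noise(pitch_periods, diff_threshold=3, group_threshold=2):
        """True when the grouped pitch-period classes number fewer than group_threshold."""
        classes = sorted(set(pitch_periods))       # one class per distinct pitch period
        groups, prev = 0, None
        for p in classes:
            if prev is None or p - prev >= diff_threshold:
                groups += 1                        # start a new group of nearby pitch periods
            prev = p
        return groups < group_threshold

    print(is_stationary_noise([40, 41, 42, 41, 40]))   # one group    -> True (noise-like)
    print(is_stationary_noise([40, 55, 70, 41]))       # three groups -> False (speech-like)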
20120166199
13372241
0
1. A method, comprising: receiving a selection of an application at a device; receiving audio input at the device; transmitting to a server a first transmission comprising the audio input; receiving an identifier of the audio input from the server; transmitting to the server a second transmission comprising the identifier of the audio input and a request for results; receiving from the server a transcription of at least a portion of the audio input; and processing at least a portion of the transcription with the application.
20080167857
12051973
0
1. A computer-implemented optimization method for instance-based sentence boundary determination comprising the steps of setting an initial upper bound (UB) of a cost associated with an optimized solution to a lowest cost derived by several greedy algorithms; identifying all corpus instances stored in an electronic database that contains one or more of a plurality of desired propositions; forming a search tree structure with branches for each of plurality of identified corpus instances that contain one or more of said plurality of desired propositions; deleting one or more of a plurality of undesired propositions from said identified corpus instances; updating an overall cost with one or more deletion costs; inserting one or more of said plurality of desired propositions that were not contained in said corpus instance into said corpus instance; updating the overall cost with one or more insertion costs; calculating a lower bound (LB) of a cost associated with a current solution or partial solution; pruning a current search branch if the LB is greater than the UB; recursively computing a best solution associated with generating one or more additional sentences to convey the rest of said plurality of desired propositions that were not contained in said corpus instance; updating the overall cost with a boundary cost plus a cost associated with the best solution found by the recursively computing procedure; updating UB if the current overall cost is lower than UB; and outputting a solution that has the lowest overall cost using a set of said identified corpus instances with a set of said deletion, insertion and sentence break operations.
20160078022
14483527
0
1. A method comprising: obtaining a document; determining, using a trained classifier, a candidate label for the document from a plurality of labels; selecting one or more linguistic structures from the document; displaying a user interface that presents data from the document, including at least a portion of the one or more linguistic structures, and the candidate label, wherein the portion of the one or more linguistic structures are displayed by the user interface, wherein the user interface includes one or more user interface controls which present a first option to accept the candidate label for the document and a second option to select a different label for the document, the one or more user interface controls further presenting an element for highlighting the one or more linguistic structures within the document; receiving, via the one or more user interface controls, input representing selection of the first option or the second option, and further input comprising a highlighted section of the one or more linguistic structures that was important to the selection of the first option or the second option; associating the document with a verified label; changing one or more weights assigned to the highlighted section relative to a non-highlighted section during retraining of the trained classifier; wherein the method is performed by one or more computing devices.
9055147
11848958
1
1. A method comprising: initiating, at one or more computing devices, a finite state machine; receiving, at the one or more computing devices, a voice input; interpreting, at the one or more computing devices, the received voice input, comprising: transitioning to a domain state functionality of the finite state machine, selecting a generic prompt corresponding to the domain state functionality, and selecting a specific prompt corresponding to the generic prompt, wherein the specific prompt comprises a variant of the generic prompt and also corresponds to the domain state functionality; and transmitting, at the one or more computing devices, the specific prompt in a response.
7774294
11682693
1
1. A computer-implemented user-interface method of selecting and presenting a collection of content items in which the presentation is ordered at least in part based on learning periodicities of user selections of content items, the method comprising: providing access to a set of content items, each content item having at least one associated descriptive term to describe the content item; receiving incremental input entered by the user for incrementally identifying desired content items; in response to the incremental input entered by the user, presenting a subset of content items to the user; receiving actions from the user resulting in the selection of content items from the subset; analyzing the descriptive terms associated with the selected content items to identify sets of actions resulting in the selection of similar content items, wherein similarity is determined by comparing the descriptive terms associated with any one of the selected content items with any of the previously selected content items; analyzing the date, day, and time of at least two of the individual selection actions of the sets of actions to learn periodicities of user actions resulting in the selections of similar content items, wherein the periodicity corresponding to a particular set of actions for selecting similar content items indicates the amount of time between the user actions of the set; associating the learned periodicities of the sets of actions resulting in the selection of similar content items with the corresponding descriptive terms associated with the similar content items that were selected; and in response to receiving subsequent incremental input entered by the user, selecting and ordering a collection of content items wherein content items associated with descriptive terms similar to the subsequent incremental input and associated with descriptive terms further associated with periodicities similar to the date, day, and time of the subsequent incremental input are presented on a display device as more relevant content.
7921113
11497357
1
1. A dictionary creation device that creates a dictionary which is used for searching, classifying, or filtering information written as text and in which keywords are registered per category, the dictionary creation device comprising: a classification information acquisition unit that acquires classification information regarding categories and text information from at least a first information source and a second information source which differ from an information source for information written as text and searched; a keyword extraction unit that extracts a keyword from the acquired text information; a dictionary registration and deletion unit that registers or deletes the extracted keyword in dictionaries corresponding to the first information source and the second information source, in accordance with a category of the first information source and a category of the second information source, respectively, based upon the classification information acquired by said classification information acquisition unit and the keyword extracted by said keyword extraction unit; a keyword database that stores the extracted keyword, said keyword database being a non-transitory computer-readable storage medium; and a dictionary combining and editing unit that edits the category of the first information source in the dictionary corresponding to the first information source and the category of the second information source in the dictionary corresponding to the second information source to create, as a category level structure of a combined dictionary, a new category level structure including the category of the first information source and the category of the second information source, based on a degree of overlap between characteristic keywords that are keywords characterizing classification information regarding the category of the first information source and characteristic keywords that are keywords characterizing classification information regarding the category of the second information source, wherein said dictionary combining and editing unit (i) compares a first set, which is a set of characteristic keywords in a first category included in the first information source, with a second set, which is a set of characteristic keywords in a second category included in the second information source, and (ii) edits and combines the dictionaries corresponding to the first information source and the second information source such that the second category is placed in a lower level subordinate to the first category as an intersecting set of the first set and the second set is less common to the first set and more common to the second set.
8682241
12464421
1
1. An analysis system for a classroom, comprising: a monitoring device configured to capture audio events of classroom participants comprising at least one teacher participant and student participant in a learning environment; a transcription device coupled to the monitoring device to transcribe speech dialogue of the classroom participants in the audio events; a processing device coupled to the transcription device to identify and classify at least the transcribed speech dialogue and to define educational strategies and mechanisms selected from the group consisting of problem solving, prompting for student participant, enforcing classroom discipline and establishing educational goals, in accordance with a context of the learning environment based on at least the transcribed speech dialogue, wherein scores are computed for each of the educational strategies and mechanisms in accordance with observed audio events and processed to decide on a preferred action in the learning environment, wherein the scores are determined based on at least one of input from the monitoring system, output of the transcription device and output of the processing device; and a reporting mechanism configured to produce at least one annotated evaluation report based on observed data and the scores for the identified educational strategies and mechanisms to the at least one teacher participant.
20110066428
12559329
0
1. A system for automatically adjusting a voice intelligibility enhancement applied to an audio signal, the system comprising: an enhancement module configured to receive an input voice signal comprising formants and to apply an audio enhancement to the input voice signal to provide an enhanced voice signal, the audio enhancement configured to emphasize one or more of the formants in the input voice signal; an enhancement controller comprising one or more processors, the enhancement controller configured to adjust the amount of the audio enhancement applied by the enhancement module based at least partly on an amount of detected environmental noise; an output gain controller configured to: adjust an overall gain of the enhanced voice signal based at least partly on the amount of environmental noise and the input voice signal, and apply the overall gain to the enhanced voice signal to produce an amplified voice signal; and a distortion control module configured to reduce clipping in the amplified voice signal by at least mapping one or more samples of the amplified voice audio signal to one or more values stored in a sum of sines table, the sum of sines table being generated from a sum of lower-order sine harmonics.
20140188856
13733858
0
1. A method for translation of a medical database query from a first language into a second language, the method executed by one or more computer processors programmed to perform the method, the method comprising: receiving, at a processor from a user via a network, a query for a medical database, wherein the query is in the first language; transmitting, by a processor, the received query to a plurality of translation engines; receiving, at a processor, a plurality of translations for the query from the first language into the second language, wherein a respective translation is received from each translation engine in the plurality of translation engines; determining, by the processor, a respective ranking score for each received translation of the plurality of received translations; selecting, by the processor and based on the determined ranking scores, a translation from the plurality of translations; and performing one or both of: (i) transmitting, by a processor and via the network, the selected translation to the user, and (ii) utilizing, by a processor, the selected translation to search the medical database to obtain search results for the query, and transmitting, by a processor and via the network, the obtained search results to the user.
9454760
14103144
1
1. A method, comprising: accessing a message, by a computer, of a contact center from a sender; formulating, by the computer, a portion of a response to the message; storing, by the computer, the portion of the response in a memory; accessing, by the computer, a user context of the sender; selecting, by the computer, an embellishment in accord with the user context; retrieving, by the computer, the stored portion of the response from the memory; updating, by the computer, the retrieved portion of the response to include banter associated with the embellishment thereby creating an updated response; and sending, via a communications network, the updated response to the sender.
20080115163
11937848
0
1. A system for providing television advertisements based on a telephone conversation between two or more persons comprising: a speech recognition system for a telephone service configured to monitor a telephone conversation between two or more persons and to recognize key words and phrases spoken by one or more of the persons during the conversation; a database having one or more advertisements indexed by words and phrases; a search engine for querying the database based on key words and phrases recognized during the conversation; a television broadcast for a television service configured to integrate at least one advertisement from the database into the video feed to the television of at least one of the persons based on key words and phrases recognized during the conversation.
20150088509
14495391
0
1. System for classifying whether audio data received in a speaker recognition system is genuine or a spoof using a Gaussian classifier.
20100179801
12353155
0
1. A method implemented at least in part by a computing device, the method comprising: creating an index that maps each of multiple keywords to one or more phrases that each comprise at least one word; determining one or more statistically improbable phrases associated with a user, each of the one or more statistically improbable phrases comprising at least two contiguous words that occur in a corpus of text more than a threshold number of times; determining the words that compose the one or more statistically improbable phrases that are associated with the user; inputting, into the index as keywords, the determined words that compose the one or more statistically improbable phrases to determine phrases that are associated with the determined words that compose the one or more statistically improbable phrases; and outputting the determined phrases for suggestion to the user.
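A small end-to-end sketch of the lookup flow: build (or receive) an inverted index from keywords to phrases, find the user's "statistically improbable phrases" (bigrams occurring more than a threshold number of times, as the claim defines them), split them into words, and feed those words into the index to get phrases to suggest. The tokenization, the bigram-only definition, and the toy index are simplifications.

    from collections import Counter

    def suggest_phrases(index, user_corpus, threshold=1):
        """index: keyword -> list of phrases. Return phrases suggested for the user."""
        tokens = user_corpus.lower().split()
        bigrams = Counter(zip(tokens, tokens[1:]))
        sips = [bg for bg, n in bigrams.items() if n > threshold]   # statistically improbable phrases
        words = {w for bg in sips for w in bg}                      # words composing the SIPs
        return sorted({phrase for w in words for phrase in index.get(w, [])})

    index = {"machine": ["machine learning basics"], "learning": ["deep learning papers"]}
    text = "machine learning is fun and machine learning is useful"
    print(suggest_phrases(index, text))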
20100082347
12240433
0
1. A method for concatenating words in a text string, the method comprising: obtaining phonemes for a text string, the text string comprising at least a preceding word and a succeeding word to be concatenated; identifying a last letter of the preceding word to be concatenated, and identifying a first letter of the succeeding word to be concatenated; selecting a connector term and a connector term type based on the identified last letter and the identified first letter; and creating a modified text string for speech synthesis including the selected connector term and the selected connector type.
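The concatenation step is essentially a table lookup keyed on the boundary letters. The rule table below is purely invented to make the shape of the claim concrete; the claim itself does not specify which connector terms correspond to which letter pairs.

    def concatenate_for_tts(preceding, succeeding):
        """Pick a connector term and type from the boundary letters and build the
        modified text string for speech synthesis (rule table is an example only)."""
        last, first = preceding[-1].lower(), succeeding[0].lower()
        vowels = set("aeiou")
        if last in vowels and first in vowels:
            connector, ctype = " and ", "conjunction"
        elif last == first:
            connector, ctype = ", then ", "pause"
        else:
            connector, ctype = " ", "space"
        return preceding + connector + succeeding, ctype

    print(concatenate_for_tts("Ava", "Eli"))    # vowel/vowel boundary -> conjunction connector
    print(concatenate_for_tts("bus", "stop"))   # matching boundary letters -> pause connector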
20080147403
12041427
0
1. A computer readable storage medium containing a program which, when executed, performs an operation, comprising: receiving a voice input; determining a number of sound fragments to be processed in a first set of sound fragments of the voice input by: monitoring a load of a first processing system and a load of a second processing system, and determining the number of sound fragments based on the load of the first processing system and the load of the second processing system, wherein the number of sound fragments is increased when the load of the second processing system exceeds a predefined threshold; using the number of sound fragments, determining, by the first processing system, whether the first set of sound fragments of the voice input matches with a set of sound fragments of a voice command; and if the first set of sound fragments matches with the set of sound fragments of the voice command, then determining, by the second processing system, whether one or more remaining sound fragments matches with one or more remaining sound fragments of the voice command.