Columns:
- doc_id: string, 7 to 11 characters
- appl_id: string, 8 characters
- flag_patent: int64, values 0 to 1
- claim_one: string, 13 to 18.3k characters
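Each record below pairs a document number (doc_id) and its application number (appl_id) with a grant flag (flag_patent) and the text of claim 1 (claim_one). As a quick way to work with the table, here is a minimal sketch using pandas; the file name claims.csv is a placeholder assumption, and reading flag_patent as "granted patent vs. pre-grant publication" is inferred only from the sample rows shown below.

```python
# Minimal sketch, assuming the rows previewed below have been exported to a
# CSV file named "claims.csv" (hypothetical name) with the four columns above.
import pandas as pd

df = pd.read_csv(
    "claims.csv",
    dtype={"doc_id": str, "appl_id": str, "flag_patent": "int64", "claim_one": str},
)

# In the sample rows, flag_patent == 1 lines up with 7-digit granted-patent
# numbers and flag_patent == 0 with 11-digit pre-grant publication numbers.
granted = df[df["flag_patent"] == 1]
applications = df[df["flag_patent"] == 0]

print(f"{len(granted)} granted patents, {len(applications)} applications")
print(granted[["doc_id", "appl_id"]].head())

# Length of claim 1 per row (claim_one ranges from 13 to roughly 18.3k chars).
print(df["claim_one"].str.len().describe())
```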
doc_id: 9418158 | appl_id: 14528400 | flag_patent: 1 | claim_one:
1. A method implemented in a computer infrastructure, comprising: receiving a search query containing one or more transliterated words; determining a source language corresponding to a particular transliterated word of the one or more transliterated words, wherein the determining the source language is based solely on the particular transliterated word, wherein the determining the source language comprises: determining a weighted score for each one of a plurality of candidate languages, and designating the candidate language with the highest weighted score as the source language; converting the particular transliterated word to a word in the source language; translating the word in the source language to a word in a target language; performing a search using the word in the target language; and displaying, by a user computer device, search results in the target language.
doc_id: 8131536 | appl_id: 11998663 | flag_patent: 1 | claim_one:
1. A method for automatically translating a document from a first language to a second language comprising: receiving the document in the first language; processing the document to extract elements of information; determining, using a processor, a plurality of potential translations for each of the extracted elements of information using a first translation process and a likelihood value for each of the potential translations of the elements of information; determining a plurality of potential translations of a remainder of the document using a second, different translation process and a likelihood value for each potential remainder translation; generating a plurality of combinations by combining a plurality of the potential translations of the elements of information with a plurality of the potential remainder translations; determining a likelihood value for respective ones of a plurality of the combinations based on a model of the second language and corresponding likelihood values of each of the potential element of information translations and remainder translations included in the respective combinations; and forming a translated version of the document based on the likelihood values of the combinations.
doc_id: 20070015486 | appl_id: 11475847 | flag_patent: 0 | claim_one:
1. A multimedia device integration system comprising: a car audio system having a display associated therewith; a portable device external to the car audio system; a first wireless interface in communication with the car audio system; a second wireless interface in communication with the portable device, the first and second wireless interfaces establishing a wireless communications link between the car audio system and the portable device; and an integration subsystem for generating a device presence signal for maintaining the car audio system in a state responsive to the portable device, wherein the integration subsystem transmits the device presence signal to the car audio system, channels audio from the portable device to the car audio system using the wireless communications link, processes video information generated by the portable device into a format compatible with the car audio system, and transmits the processed video information to the car audio system using the wireless communications link for displaying the processed video information on the display of the car audio system.
doc_id: 20130346086 | appl_id: 13975901 | flag_patent: 0 | claim_one:
1. A method comprising: upon verifying an identity of a user: identifying, via a processor, a template for a domain associated with the user; receiving input speech from the user, the input speech comprising a substantive portion and an instructional portion, the instructional portion related to navigation between fields in the template; transcribing the substantive portion of the input speech to text, to yield transcribed text; inserting the transcribed text into the template, to yield a completed template; and storing the completed template in a database; and upon receiving a request to play a dictation for a particular word in the completed template, playing the dictation of the particular word.
doc_id: 20080059145 | appl_id: 11508032 | flag_patent: 0 | claim_one:
1. A method for teaching a foreign language to a user who has knowledge of a base language, comprising: receiving the text entirely in the base language; receiving the text entirely in the target foreign language; preparing a set of mixed language texts by at least one of substituting target foreign language words for base language words in the base language text and substituting base language words for target foreign language words in the target foreign language text, each text in the set of mixed language texts having a respectively decreasing amount of base language words and a respectively increasing amount of target foreign language words; assessing the user's proficiency in the target foreign language; based on the user's assessed proficiency, choosing one of the set of mixed language text as the text for presentation to the user; and presenting the text to the user that includes both base language words and target foreign language words, where the amount of target foreign language words in the text depends on the user's assessed proficiency in the target foreign language.
doc_id: 20170132204 | appl_id: 14934250 | flag_patent: 0 | claim_one:
1. A method for automating multilingual indexing, the method comprising: receiving, by one or more computer processors, text of a first conversation between a first user and at least one second user; detecting, by the one or more computer processors, at least one language associated with the text of the first conversation; determining, by the one or more computer processors, whether the at least one language associated with the text of the first conversation is detected with a first confidence level that exceeds a first pre-defined threshold; responsive to determining the at least one language associated with the text of the first conversation is not detected with the first confidence level that exceeds the first pre-defined threshold, retrieving, by the one or more computer processors, text from one or more previous conversations between the first user and the at least one second user; detecting, by the one or more computer processors, at least one language associated with the text of the one or more previous conversations between the first user and the at least one second user; determining, by the one or more computer processors, whether the at least one language associated with the text of the one or more previous conversations between the first user and the at least one second user is detected with a second confidence level that exceeds a second pre-defined threshold; responsive to determining the at least one language associated with the text of the one or more previous conversations between the first user and the at least one second user is detected with the second confidence level that exceeds the second pre-defined threshold, analyzing, by the one or more computer processors, the text of the first conversation using the at least one detected language associated with the text of the one or more previous conversations between the first user and the at least one second user to create one or more index terms, wherein index terms are included in the text of the first conversation; indexing, by the one or more computer processors, the one or more index terms, wherein indexing serves as a mapping from the index terms to the text of the first conversation; and storing, by the one or more computer processors, the second confidence level of the at least one detected language associated with the text of the one or more previous conversations between the first user and the at least one second user associated with each of the one or more index terms.
doc_id: 20170147281 | appl_id: 14997320 | flag_patent: 0 | claim_one:
1. A personal audio system, comprising: a processor to generate a personal audio stream by processing an ambient audio stream in accordance with an active processing parameter set, wherein the active processing parameter set indicates a type and degree of one or more processes to be performed on the ambient audio stream; a buffer memory to store a most recent snippet of the ambient audio stream; an event detector to detect a trigger event, wherein the trigger event includes receiving, via a user interface, a command to modify the active processing parameter set; and a controller configured to, in response to detection of the trigger event: extract audio feature data from the most recent snippet of the ambient audio stream; and transmit the audio feature data and associated metadata to a device remote from the personal audio system, wherein the associated metadata includes the modified processing parameter set associated with the active processing parameter set.
doc_id: 20100161683 | appl_id: 12339429 | flag_patent: 0 | claim_one:
1. A method for generating customized event notifications, said method including the following steps: processing a communication event and related data; associating said communication event with an identification; storing said communication event, said related data and said identification in memory; receiving a second communication event; associating said second communication with a second identification; comparing said identification and said second identification; reconveying all or a portion of said communication event if said identification and said second identification are related.
doc_id: 9213693 | appl_id: 13438812 | flag_patent: 1 | claim_one:
1. A method comprising: receiving, at a language interpretation system, a request for a real time interpretation performed by a human language interpreter of a voice communication between a first voice communication participant speaking a first language and a second voice communication participant speaking a second language during the voice communication, the request being received from the first voice communication participant; providing, at the language interpretation system, the request to the human language interpreter; translating, with a machine language interpreter, the voice communication into a set of text data, the set of text data having a plurality of translated sentences translated in real time during the voice communication; and sending the text data to a display device that displays the set of text data during a verbal human language interpretation of the voice communication performed by the human language interpreter in real time during the voice communication so that the human language interpreter utilizes the set of text data to perform the verbal human language interpretation, the verbal human language interpretation being communicated by the human language interpreter to the second voice communication participant without the machine language interpreter, the verbal human language interpretation being unmodified prior to and during the communication of the human language interpreter to the second voice communication participant.
doc_id: 7762816 | appl_id: 11358795 | flag_patent: 1 | claim_one:
1. A computer-implemented method of developing a translation exercise, the method comprising: receiving a grammatical structure; for each of a plurality of text segments in a first language, translating the text segment in the first language into a corresponding text segment in a second language using a processing system; and selecting with the processing system a selected text segment from the plurality of text segments as a prompt for a translation exercise based on whether the text segment in the second language that corresponds to the selected text segment has said grammatical structure; and storing the selected text segment in a computer-readable memory.
doc_id: 8706747 | appl_id: 10676724 | flag_patent: 1 | claim_one:
1. A computer-implemented method, the method comprising: receiving a search query from a user device, wherein the search query includes one or more terms, each term being written in a first format; translating, using a probabilistic dictionary, the one or more terms of the search query into a group of translated search queries, each translated search query having one or more terms in a second format, wherein the probabilistic dictionary includes a mapping of terms from the first format to the second format according to a respective calculated probability that a particular term in the first format corresponds to a term in the second format; using a search engine to identify a plurality of documents written in the second format that are responsive to the group of translated search queries; providing search results written in the second format to the user device, the search results referencing one or more of the identified documents; obtaining click data from the user device indicative of user selections of one or more of the search results written in the second format; and modifying the probabilistic dictionary of term mappings based at least in part on the obtained click data indicative of user selections of one or more of the search results written in the second format and adjusting at least one probability associated with at least one mapping in the probabilistic dictionary.
doc_id: 7779353 | appl_id: 11437259 | flag_patent: 1 | claim_one:
1. A method for identifying text errors within a web page, the method comprising: determining text for error checking within content used in generating the web page; determining where the text is located within the web page; assigning an identifier to the text; packaging the text into a package using a schema, wherein delimiters separate the text in the package from other text in the package based on the location of the text within the web page, the other text having an assigned identifier, wherein the text and the other text is editable text; sending the package to an error checking module, wherein the package is disassembled by the error checking module into discrete text pieces and associated assigned identifiers for determining errors; determining errors within the text; displaying the errors within the text to a user, wherein the user is permitted to edit the text based on the errors returned by the error checking module, wherein the text is contained within at least one of an image, a web page link, and Hypertext Markup Language (HTML); and saving the text with error corrections to a computing device, wherein saving the text with error corrections comprises: reapplying previously removed text formatting; saving a first field in the web page, the first field comprising a field visible to the user, the field comprising the error corrections and the previously removed text formatting, wherein the first field is associated with a first attribute having a unique identification tag; and saving a second field in the web page, the second field comprising a field hidden to the user, the field comprising a redundant copy of the text located within the web page for use by the computing device, wherein the second field is associated with a second attribute having a unique identification tag.
doc_id: 9936914 | appl_id: 15670064 | flag_patent: 1 | claim_one:
1. A computer-implemented method for presenting a measurement of a physical or psychological disorder of a subject determined from the subject's production of a speech signal, the method comprising: receiving audio data representing an audio signal produced by a microphone in response to the speech signal produced by the subject; using a computer-implement speech recognizer to segment the audio data into a plurality of segments of the audio data, each segment of the audio data representing a corresponding time interval of the speech signal, wherein each segment of the audio data is associated is a corresponding speech unit of a predefined plurality of speech units, at least one speech unit corresponds to multiple segments of the plurality of segments, and a represented plurality of speech units comprises speech units of the plurality of speech units that correspond to at least one of the segments of the audio data; processing the segments of the audio data to produce respective values of segment features, the segment features for a segment characterizing the subject's production of the speech unit correspond to the segment; for each represented speech unit of the represented plurality of speech units, combining the values of the segment features for segments of the audio data corresponding to the represented speech unit to determine values of speech-unit features corresponding to the represented speech unit; forming a feature representation of the audio data from the values of the speech-unit features corresponding to each of the represented speech units; processing the feature representation of the audio data according to values of a plurality of numerical configuration parameters to provide one or more disorder indicators, wherein the numerical configuration parameters are formed from audio data for a plurality speech signals, each speech signal produced by a corresponding subject and data indicating presence of one or more disorders of the subject corresponding to each of the speech signals, each of the disorder indicators corresponds to a physical or psychological disorder; and determining output data from the one or more numerical disorder indicators and outputting the data to a user to indicate presence of one or more disorders of the plurality of disorders in the subject.
doc_id: 20070106977 | appl_id: 11270014 | flag_patent: 0 | claim_one:
1. A computer-implemented method of generating a dynamic corpus, the method comprising: (a) generating a plurality of web threads, based upon a corresponding plurality of sets of words dequeued from a word queue, to obtain web thread resulting URLs; (b) enqueueing the web thread resulting URLs in a URL queue; (c) generating a plurality of text extraction threads, based upon documents downloaded using URLs dequeued from the URL queue, to obtain text files, the text files providing the dynamic corpus; (d) randomly obtaining new words from the text files; (e) enqueueing the randomly obtained words in the word queue; and (f) iteratively repeating the steps (a), (b), (c), (d) and (e).
doc_id: 7711568 | appl_id: 10406368 | flag_patent: 1 | claim_one:
1. A method of processing speech data received from a mobile device, the method comprising: receiving at a speech server a speech request from a mobile device to transmit an audio segment; notifying a session object communicating with the mobile device regarding the arrival of the audio segment; generating from the session object a handler to process the audio segment, the handler acquiring a decoder proxy for the audio segment from a decoder proxy cache; obtaining an automatic speech recognition (ASR) decoder result associated with the audio segment, the ASR decoder result being passed to the decoder proxy; communicating a recognized phrase associated with the ASR decoder result or a failure code from the decoder proxy to the handler; and issuing from the handler a query to a web server using the ASR decoder result.
doc_id: 7542787 | appl_id: 11354198 | flag_patent: 1 | claim_one:
1. A method for providing hands-free operation of a communication device, comprising: monitoring a headset interface for a command prefix and a subsequent voicemail command, the command prefix being at least one spoken word that is used to identify subsequently spoken voice commands that control the device, the voicemail command only being effective when preceded by the spoken command prefix; detecting the command prefix in spoken speech; treating the next spoken word as the subsequent voicemail command; in response to detecting the voicemail command and command prefix, generating a device command that emulates a user's interaction with the communication device to answer an incoming call; providing the device command to the communication device via a feature connector interface; playing an outgoing announcement to a caller; and recording a message from the caller in response to the spoken voicemail command.
doc_id: 20130066845 | appl_id: 13669584 | flag_patent: 0 | claim_one:
1. A concept bridge employable with a search engine, comprising: an extractor configured to derive concept terms by extracting significant terms from search text and inferring relevant terms therefrom in accordance with a concept matrix; and a query generator configured to generate a query consistent with an index of a search engine as a function of said concept terms.
doc_id: 7954044 | appl_id: 12126507 | flag_patent: 1 | claim_one:
1. An apparatus for linking an original representation including text and a realization of the representation in non-text form, comprising: a processor; a structural analyzer for automatically separating a plain representation and structural information pertaining to structure of the contents of said text from the original representation; a temporal analyzer for automatically generating a time-stamped first representation from the realization; a time aligner for creating a time-stamped aligned representation by aligning the plain representation and the time-stamped first representation; a link generator for creating hyper links between elements of the original representation and the realization by combining the aligned representation and the structural information; a first converter for converting the original representation from any native data format to an operating data format for representations, and a second converter for converting the realization from any native data format to an operating data format for realizations, wherein the first converter is connected to the structural analyzer, and the second converter is connected to the temporal analyzer, wherein the hyper links can be used for performing search operations in audio data, wherein the original representation is a descriptive mark-up document and the realization is an audio stream, and wherein the audio stream includes an audio recording and/or the audio track of a video recording.
doc_id: 8046211 | appl_id: 11977133 | flag_patent: 1 | claim_one:
1. At least one memory device storing instructions that, when executed by a computer, cause the computer to perform a method of statistical machine translation (SMT), said method comprising: receiving a word string in a first natural language; parsing said word string into a parse tree comprising a plurality of child nodes, the parse tree representing a syntactic structure of the word string; reordering said plurality of child nodes resulting in a plurality of reordered word strings; evaluating each of said plurality of reordered word strings using a reordering knowledge, wherein said reordering knowledge is based on a syntax of said first natural language and on a plurality of alignment matrices that map first sample sentences in the first natural language with second sample sentences in a second natural language; translating a plurality of preferred reordered word strings from said plurality of reordered word strings to the second natural language based on said evaluating; and selecting a statistically preferred translation of said word string from among translations of said plurality of preferred reordered word strings.
doc_id: 20040107107 | appl_id: 10309794 | flag_patent: 0 | claim_one:
1. A method for processing a speech utterance, comprising: communicating between a local computer and a remote computer using a hyper text communication session, including sending a recording of a speech utterance from the local computer to the remote computer in the session, and receiving a result from the remote computer, the result based on a processing of the recording at the remote computer.
doc_id: 20050041786 | appl_id: 10794551 | flag_patent: 0 | claim_one:
1. A method for conducting a non-real time group interaction comprising: a. a group of at least one facilitator and one member; b. a programmable voice messaging system providing authentication to each individual facilitator and member; c. an initial interaction phase wherein the facilitators provide a voice message addressed to the group members at a first predetermined scheduled time; d. a second interaction phase wherein the members access and listen to the facilitators voice message at a second predetermined scheduled time, and the members optionally respond to the facilitators voice message by providing a voice message addressed to the facilitator or the entire group or both; e. a feedback loop interaction phase, wherein i. at a third predetermined scheduled time, the facilitators access and listen to the messages left by the group members, and optionally respond and provide feedback on the voice messaging system to the group as a whole, to one or more individual members, or to one or more specific messages, or to any combination thereof, and ii. the facilitators have the capability to optionally edit specific voice messages, and iii. at a fourth predetermined scheduled time, each group member accesses the voice message system and listens to the messages left by the others in the group or by the facilitators or both, all optionally edited by the facilitators, and iv. the facilitator and members optionally repeat the feedback loop, wherein the members respond to the set of messages left at the fourth predetermined time, and wherein the events at the third and fourth predetermined scheduled times are repeated; f. an optional termination phase, wherein i. at a fifth predetermined scheduled time, the facilitator summarizes the contents of the voice messages of the group and facilitator's provided during the loop interaction phase, by leaving summary voice messages for the group, and ii. at a sixth predetermined scheduled time the group members listen to the summary voice messages, and iii. at a seventh predetermined scheduled time, the group interaction ends.
doc_id: 7778834 | appl_id: 12189506 | flag_patent: 1 | claim_one:
1. A computer-implemented method of assessing pronunciation difficulties of a non-native speaker, the method comprising: determining one or more sources of pronunciation difficulties between a language of a non-native speaker and a second language; assigning a weight to each source; calculating using a processor a phonetic difficulty score based on the one or more sources and the weight assigned to each source; calculating using the processor a language model score based on a sound comparison between the language of the non-native speaker and the second language, including calculating a language model for the language of the non-native speaker and the second language and calculating a cross-entropy of an utterance with respect to the language model of the language of the non-native speaker inversely weighted by the cross-entropy of the utterance with respect to the language model of the second language, wherein calculating the cross-entropy of an utterance with respect to the language model of the language of the non-native speaker comprises assigning a lower score to utterances of the second language that are similar to sounds of the language of the non-native speaker, and assigning a higher score to utterances of the second language that are not similar to sounds of the language of the non-native speaker; normalizing using the processor the phonetic difficulty score and the language model score; and calculating a metric from the normalized phonetic difficulty score and the normalized language model score.
doc_id: 7953746 | appl_id: 11952770 | flag_patent: 1 | claim_one:
1. A computer implemented method for contextual query processing, comprising: receiving a current search query during a current search session, the current search query comprising one or more current search tokens; identifying a set of previous search tokens from previous queries received during the current search session; comparing the one or more current search tokens to the set of previous search tokens; identifying, as a potentially inaccurate search token, at least one of the one or more current search tokens that is not included in the set of previous search tokens; identifying a possible replacement token for the potentially inaccurate search token, wherein the possible replacement token is a token with which the potentially inaccurate search token was previously replaced with at least a minimum specified rate; identifying related tokens from a query log based upon previous search queries associated with the current search session; determining whether the possible replacement token for the potentially inaccurate search token is included in the group of related tokens; and in response to determining that the possible replacement token is among the group of related tokens, generating a modified search query that includes the possible replacement token.
doc_id: 8903719 | appl_id: 12948292 | flag_patent: 1 | claim_one:
1. One or more non-transitory computer-readable media having computer-executable instructions embodied thereon that, when executed by a computing device, facilitate a method of providing context-sensitive writing assistance, the method comprising: determining a context of a textual communication that a user is composing, wherein the context comprises a specific recipient to which the textual communication is addressed; selecting one or more dictionaries from a plurality of dictionaries, wherein the one or more dictionaries include words that are consistent with a communication style used in previous textual communications addressed to the specific recipient or written by the specific recipient; and providing, by way of the computing device, writing assistance that utilizes the one or more dictionaries, thereby tuning the writing assistance to match the communication style, wherein the writing assistance comprises a text slang to proper English conversion function that is activated only when a text-slang dictionary is not one of the one or more dictionaries, wherein the writing assistance is provided while the textual communication is being composed.
doc_id: 20050197828 | appl_id: 11036872 | flag_patent: 0 | claim_one:
1. A method for preprocessing a natural language database query, the method comprising: a) accepting an alphanumeric string related to the query; b) parsing the alphanumeric string to generate query words; c) determining whether any of the query words, or any phrases formed by at least two adjacent query words, match any of a plurality of indexed annotations; d) if a query word or phrase matches one or more of the plurality of indexed annotations, for each of the plurality of indexed annotations, adding a pattern associated with the indexed annotation to a group associated with the query word or phrase; e) selecting a pattern from each group of patterns to generate a selection of patterns; and f) combining the patterns of the selection of patterns to generate a single, connected, lowest cost pattern.
doc_id: 7957975 | appl_id: 11502030 | flag_patent: 1 | claim_one:
1. A voice controlled wireless communication device system comprising: a wireless communication device that records a voice command, recited by a user, and that executes a software based application resident on the wireless communication device; one or more server computers that communicate with the wireless communication device, comprising at least one server based module for creation of a command to be executed on the wireless communication device; wherein said software based application communicates the voice command to said server computer; wherein the server computer initiates at least one speech recognition process to identify the voice command, constructs an application command based on the voice command and communicates the application command to the wireless communication device; wherein said software based application directs the application command communicated from the server computer to a corresponding application on the wireless communication device for execution; an additional server computer, wherein based on a type of voice command, the server computer directs the voice command to the additional server computer for processing; wherein the wireless communication device maintains a contact list, and the contact list is periodically transmitted and stored on the server computer; wherein the contact list stored in the server is accessible to the speech recognition process to assist in automatic translation of a given voice command that requires input from the contact list; and the contact list stored on the server is also provided to an interface for presenting the given voice command for manual review and identification, wherein the contact list stored on the server is automatically displayed via the interface in response to presenting the voice command for manual review.
doc_id: 9723125 | appl_id: 15251032 | flag_patent: 1 | claim_one:
1. A mobile device for accessing voicemail messages, the mobile device comprising: a processing unit; and a memory coupled to the processing unit and storing computer-readable instructions that when executed by the processing unit cause the mobile device to: receive a voicemail message generated from a call; display a text transcription of the voicemail message, wherein voicemail speech that is not recognized by a transcription application is displayed differently in the text transcription than voicemail speech that is recognized by the transcription application; and provide an audio recording of the voicemail message upon activation of a control.
doc_id: 9288597 | appl_id: 14159155 | flag_patent: 1 | claim_one:
1. A device comprising: at least one computer memory that is not a transitory signal and that comprises instructions which when executed by at least one processor result in: determining that one or more audio speakers are present on a network of audio speakers in a speaker arrangement, each speaker being associated with a respective network address so that each speaker may be addressed by a computer accessing the network; receiving dimensions of at least one enclosure in which the network at least partially is disposed; receiving at least a desired listening position and/or a number of listeners; determining whether the speaker arrangement meets at least one acoustic requirement; responsive to a determination that the speaker arrangement does not meet the acoustic requirement, indicating on a computerized display device that the speaker arrangement does not meet the acoustic requirement and prompting the user to adjust one or more of speaker location, orientation, frequency assignation, speaker parameters, or automatically adjusting one or more of frequency assignation, speaker parameters; and determining whether a basic setup is complete, and responsive to a determination that the basic setup is complete, launching a speaker control user interface on the display device.
doc_id: 20140379349 | appl_id: 14479980 | flag_patent: 0 | claim_one:
1. A method comprising: receiving, from an automatic speech recognition system, a word lattice based on a speech query; composing a triple comprising a query word from the speech query, an indexed document, and a weight; generating an N-best path through the word lattice based on the triple; re-ranking automatic speech recognition output based on the N-best path, to yield re-ranked automatic speech recognition output; and returning search results to the speech query based on the re-ranked automatic speech recognition output.
doc_id: 7512568 | appl_id: 11109400 | flag_patent: 1 | claim_one:
1. A computer-accessible medium having executable instructions to manage collective interactions between autonomous entities, the computer-accessible medium comprising: computer executable program code to generate algorithms to manage said collective interactions between autonomous entities, said algorithms including; a first plurality of neural basis functions controlling a corresponding robotic device in turn controlling at least one action in response to sensory input data; and a first evolvable neural interface operably coupled to each of the first plurality of neural basis functions to selectively establish communication between said first plurality of neural basis functions; wherein a first one of said plurality of neural basis functions is autonomously reconfigured during a learning process in response to said selectively established communication with a second one of said plurality of neural basis functions and thereafter change the control of said robotic device.
doc_id: 20020165873 | appl_id: 10079741 | flag_patent: 0 | claim_one:
1. A method comprising the steps of: creating a document stack from at least one word in a handwritten document; creating a query stack from a query; and determining a measure between the document stack and the query stack.
doc_id: 20060214924 | appl_id: 11299822 | flag_patent: 0 | claim_one:
1. A storage medium storing a touch input program to be executed by a processor of a touch input device having a first display and a second display arranged on respective left and right sides with respect to a predetermined axis and a touch panel provided on said second display, said touch input program comprising: a setting step setting a reverse input mode in response to a predetermined operation; a determining step, determining whether or not the reverse input mode is set in said setting step; a first reversing step, reversing first character data vertically and horizontally when result of determination in said determining step is affirmative; a first displaying step of displaying an image based on the first character data reversed in said first reversing step; a first accepting step, accepting through said touch panel a handwriting input operation associated with the image displayed in said first displaying step; and a second displaying step, displaying on said second display an image based on handwriting input data corresponding to the handwriting input operation accepted in said first accepting step.
doc_id: 20130132832 | appl_id: 13675221 | flag_patent: 0 | claim_one:
1. A method for editing text, comprising: in response to an instruction to apply editing to at least one sentence within a document that is displayed on a display screen, changing a first word or phrase in the at least one sentence for a second word or phrase while maintaining semantic content of the first word or phrase and such that the at least one sentence falls within a predetermined range, wherein the changing the first word or phrase comprises one of: in response to the second word or phrase having more characters or words than the first word or phrase, changing a third word or phrase within the at least one sentence including the second word or phrase for a fourth word or phrase, such that the at least one sentence including the second word or phrase falls within the predetermined range; and in response the second word or phrase having fewer characters or words than the first word or phrase, changing a fifth word or phrase within the at least one sentence including the second word or phrase for a sixth word or phrase, such that the at least one sentence including the second word or phrase falls within the predetermined range; and displaying the at least one sentence including the second word or phrase, and one of the fourth word or phrase and the sixth word or phrase, on the display screen.
doc_id: 20080144927 | appl_id: 12000590 | flag_patent: 0 | claim_one:
1. A nondestructive inspection apparatus comprising: a sensor unit for detecting vibrations transmitted through a test object from a vibration generator; a signal input unit for extracting a target signal from an electric signal outputted from the sensor unit; an amount of characteristics extracting unit for extracting multiple frequency components from the target signal as an amount of characteristics; and a decision unit having a competitive learning neural network for determining whether the amount of the characteristics belongs to a category, wherein the competitive learning neural network has been trained by using training samples belonging to the category representing an internal state of the test object, wherein distributions of membership degrees of the training samples are set in the decision unit, the distributions being set with respect to neurons excited by the training samples based on samples and weight vectors of the excited neurons, and wherein the decision unit determines that the amount of characteristics belongs to the category, if one of the excited neurons is excited by the amount of characteristics and the distance between the amount of characteristics and a weight vector each of one or more of the excited neurons, corresponds to a membership degree equal to or higher than a threshold determined by the distributions.
doc_id: 8209333 | appl_id: 13174296 | flag_patent: 1 | claim_one:
1. A computer-implemented method of assessing the suitability of particular key phrases for use in providing contextually-relevant content to users, the method comprising: identifying a key phrase that appears on a page of a site; and generating a score for the key phrase based at least partly on view counts of social media content items associated with the key phrase, said social media content items being accessible to users on one or more social media sites that are separate from said site, said score representing a suitability of the key phrase for selecting contextually relevant content to present on the page, wherein generating the score comprises assessing, based at least partly on the view counts of social media content items, a rate of change in a popularity level of the key phrase; said method performed by a computer system that comprises one or more computers.
doc_id: 20140095145 | appl_id: 13629885 | flag_patent: 0 | claim_one:
1. A system comprising: at least one processor; an indexer which, if executed, causes the at least one processor to determine which keywords are likely to appear in a natural language query and to associate each likely keyword with a module of a plurality of modules; a query translator which, if executed, causes the at least one processor to determine whether at least one of the likely keywords determined by the indexer appears in a received natural language query; and a results generator which, if executed, causes the at least one processor to respond to the received natural language query with information generated by each module associated with a likely keyword appearing in the received natural language query.
doc_id: 8401251 | appl_id: 12863132 | flag_patent: 1 | claim_one:
1. A face pose estimation device estimating a face pose representing at least an orientation of a face from a face image in which the face is captured in a time series manner, the device comprising: a face organ detector that detects a face organ from the face image; a face pose candidate set generator that generates a face pose candidate set, which is a set of face pose candidates to be estimated; a first similarity estimator that computes a first similarity according to a first parameter corresponding to respective positions of each face organ of each element of the face pose candidate set generated by the face pose candidate set generator and an actual face organ detected by the face organ detector; a second similarity estimator that computes a second similarity according to a second parameter corresponding to a pixel value according to displacements of each face image of the face pose candidate set generated by the face pose candidate set generator and an actual face image detected as a detection target by the face organ detector with respect to each predetermined reference pose; a first likelihood estimator that computes a first likelihood corresponding to the first similarity computed by the first similarity estimator; a second likelihood estimator that computes a second likelihood corresponding to the second similarity computed by the second similarity estimator; an integrated likelihood estimator that computes an integrated likelihood representing a degree of appropriateness of each element of the face pose candidate set by using the first and second likelihoods; and a face pose estimator that estimates the face pose on the basis of the face pose candidate having the highest integrated likelihood computed by the integrated likelihood estimator, the integrated likelihood being considered by the face pose candidate set generator for generating a face pose candidate set in the next time step.
doc_id: 20100161317 | appl_id: 12715968 | flag_patent: 0 | claim_one:
1. A tangible computer-readable medium having instructions stored thereon, the instructions comprising: instructions to receive an input sequence of symbols, the input sequence of symbols being in a sequence order, the input sequence of symbols having a natural language meaning determined from a context node filter and a best contextual distance function; instructions to store the input sequence of symbols as a new set of semantic network nodes in a semantic network, the new set of semantic network nodes being stored as a linked list having the sequence order, wherein the semantic network has a plurality of semantic network links, and wherein the semantic network is stored in a memory of a computer system; instructions to store the natural language meaning as a set of meaning nodes in the semantic network; instructions to store the natural language meaning of the new set of semantic network nodes and of the meaning nodes in the semantic network, at least one of the semantic network nodes having a semantic network link to at least one node selected from the set of meaning nodes; and instructions to retrieve, from the semantic network, the natural language meaning for the received input sequence of symbols.
doc_id: 8290253 | appl_id: 12609590 | flag_patent: 1 | claim_one:
1. A computer-implemented method, comprising: quantizing each color channel for a region of an n-channel digital image to determine m representative values for each of the n color channels; generating an m^n adaptive lookup table for the region, wherein each entry in the lookup table corresponds to a different combination of the representative values from the n color channels; and for each pixel in the region: for each color channel, determining a closest representative value to the value of the color channel for the pixel from among the m representative values of the color channel; locating an entry in the adaptive lookup table according to the n determined closest representative values for this pixel, wherein the located entry corresponds to the n determined closest representative values for this pixel; determining a metric for this pixel according to the located entry in the adaptive lookup table; and outputting the determined metric for this pixel.
doc_id: 8442820 | appl_id: 12628514 | flag_patent: 1 | claim_one:
1. A combined lip reading and voice recognition multimodal interface system, comprising: a voice recognition module, executed by an audio signal processor, recognizing an instruction through performing voice recognition; and a lip reading module, executed by a video signal processor, performing lip reading recognition and providing an image, wherein the lip reading module comprising: a lip detector that detects lip features using the input image from a lip video image input unit; a lip model generator that generates a shape model and an appearance model using an active appearance model (AAM) lip model; a lip tracker that tracks lip feature points obtained as a result of the AAM fitting after lip detection using the shape model generated by the lip model generator and a Lucas-Kanade (LK) algorithm; a speech segment detector that inputs frame data of a predetermined period into a neural net recognizer to determine whether the segment is a speech segment or a silence segment based on a series of lip model parameters obtained as the result of lip tracking on consecutive input images; a system mode determiner that determines whether the system is in a learning mode in which the label of lip feature data is known or in a recognition mode in which the label thereof is not known; a lip reading recognition learning unit that learns a k-nearest neighbor (K-NN) learner using feature data and an input label if the system is in the learning mode; an instruction recognition unit that finds a learning pattern most similar to the feature data through the learned K-NN recognizer and outputs a result instruction as a feature value if the system is in the recognition mode; and a lip feature database that stores patterns for each instruction that are learned offline or online.
doc_id: 20120233207 | appl_id: 13480400 | flag_patent: 0 | claim_one:
1. A method for processing natural language queries, the method comprising: receiving two or more natural language libraries from service providers via a network, where each natural language library comprises: natural language queries for interacting with a client application; and responses for the natural language queries; generating an aggregated natural language library from the received natural language libraries; receiving a search query via the network; comparing the search query to the aggregated natural language library to determine at least one natural language query that corresponds to the search query; and providing a response to the search query from the aggregated natural language library to a client device.
doc_id: 4679177 | appl_id: 06831204 | flag_patent: 1 | claim_one:
1. An underwater communication system comprising: a transmitter including message inputting means having keys each being assigned to each word, code converting means for converting a key input entered through said inputting means to a code assigned to the key input, modulator means responsive to the code, for performing modulation, and transmit transducer means for converting an output of said modulator means to an acoustic wave, and a receiver including receive transducer means for reconverting the incoming acoustic wave from said transmitter to an electric signal, demodulator means for demodulating the code from the electric signal, speech synthesis means responsive to the demodulated code for producing a word corresponding to the code, and speaker means for outputting the synthesized speech.
doc_id: 20110239112 | appl_id: 12785802 | flag_patent: 0 | claim_one:
1. A computer readable storage medium having an input program stored therein, the input program causing a computer of an information processing apparatus that includes storage means for storing an option character string database that defines a combination of at least one character and at least one option character string corresponding to the at least one character, to function as: character input reception means for receiving an input of a character by a user; first output means for outputting, as an unfixed character, the character received by the character input reception means; option character string obtaining means for obtaining, from the option character string database, at least one option character string as a respective at least one first option character string that corresponds to the unfixed character; first preceding/following identification means for identifying at least one of a character string preceding the unfixed character and a character string following the unfixed character; fixed character string determination means for determining, among the at least one first option character string, a first option character string to be a fixed character string, the first option character string satisfying a predetermined condition for a character string to be connectable to the at least one of the character strings, which has been identified by the first preceding/following identification means; and second output means for outputting the fixed character string.
doc_id: 8108203 | appl_id: 12108134 | flag_patent: 1 | claim_one:
1. A translation system comprising: a storage device; a processor; a bilingual data storage section, a plurality of pieces of first language simple sentence data corresponding to a plurality of first language simple sentences in a first language and a plurality of pieces of second language simple sentence data corresponding to a plurality of second language simple sentences in a second language being stored in the bilingual data storage section while being associated with each other so that the first language simple sentences and the second language simple sentences respectively make pairs; and a target language simple sentence data output section which outputs target language simple sentence data corresponding to a target language simple sentence which is a translation of a given source language simple sentence based on source language simple sentence data corresponding to the source language simple sentence, the target language simple sentence data output section receiving first-language-source-language simple sentence data corresponding to a first-language-source-language simple sentence in the first language, and selecting first language simple sentence data from the plurality of pieces of the first language simple sentence data stored in the bilingual data storage section based on the received first-language-source-language simple sentence data; and the target language simple sentence data output section outputting the second language simple sentence data associated with the selected first language simple sentence data as the target language simple sentence data; wherein the first language simple sentence data is stored in the bilingual data storage section while being classified into a plurality of groups; wherein one piece of the first language simple sentence data classified into each of the groups is designated as representative data; and wherein the target language simple sentence data output section selects one piece of the first language simple sentence data designated as the representative data.
doc_id: 20040221235 | appl_id: 10007299 | flag_patent: 0 | claim_one:
1. 2.(New)A method in a computer system for transforming a document of a data set into a canonical representation, the document having a plurality of sentences, each sentence having a plurality of terms, comprising: for each sentence, parsing the sentence to generate a parse structure having a plurality of syntactic elements; determining a set of meaningful terms of the sentence from the syntactic elements; determining from the structure of the parse structure and the syntactic elements a grammatical role for each meaningful term; determining an additional grammatical role for at least one of the meaningful terms, such that the at least one meaningful term is associated with at least two different grammatical roles; and storing in an enhanced data representation data structure a representation of each association between a meaningful term and its determined grammatical roles, in a manner that indicates a grammatical relationship between a plurality of the meaningful terms and such that at least one meaningful term is associated with a plurality of grammatical relationships.
doc_id: 20120304100 | appl_id: 13559495 | flag_patent: 0 | claim_one:
1. A method, comprising: at a portable electronic device having a touch screen display: displaying a current character string being input by a user with a soft keyboard in a first area of the touch screen display; displaying a suggested replacement character string for the current character string in a second area of the touch screen display, wherein the second area includes a suggestion rejection icon adjacent to the suggested replacement character string; replacing the current character string in the first area with the suggested replacement character string in response to detecting user activation of a key on the soft keyboard associated with a delimiter; and keeping the current character string in the first area and ceasing to display the suggested replacement character string and the suggestion rejection icon in response to detecting a finger gesture on the suggested replacement character string displayed in the second area.
doc_id: 20100241350 | appl_id: 12727206 | flag_patent: 0 | claim_one:
1. A blind traveler navigational data system having a server, the server comprising: means for receiving information identifying first and second landmarks from a set of predefined landmarks within a predefined geographic region; means, responsive to the identified first and second landmarks, for accessing and retrieving from a database, blind-ready wayfinding instructions for guiding a blind or visually impaired traveler from the first identified landmark to the second identified landmark; and means for outputting the retrieved blind-ready wayfinding instructions to a blind or visually impaired user.
doc_id: 8219406 | appl_id: 11686722 | flag_patent: 1 | claim_one:
1. A computer-implemented interface, comprising: a set of parsers configured to parse information received from a plurality of sources including a mixed modality of inputs; a discourse manager configured to: identify correlations in the information; interpret the mixed modality of inputs based on environmental data associated with at least one of the mixed modality of inputs; based on the identified correlations and the interpreted mixed modality of inputs, at least one of determine or infer an intent associated with the information; and generate a confidence level for the intent as a function of the environmental data; and a response manager configured to: evaluate a first input of the mixed modality of inputs, the first input having a first modality initially employed as a primary modality; based on the generated confidence level, provide feedback to request a second input having a second modality different from the first modality; and substitute the second modality for the first modality as the primary modality until the environmental data changes.
doc_id: 7912714 | appl_id: 12060469 | flag_patent: 1 | claim_one:
1. A method for forming discrete segment clusters of one or more sequential sentences from a corpus of communication transcripts of transactional communications, each communication transcript including a sequence of sentences spoken between a caller and a responder, the method comprising: dividing the communication transcripts of the corpus into a first set of sentences spoken by the caller and a second set of sentences spoken by the responder; using a processor, generating a set of sentence clusters by grouping the first and second sets of sentences according to a measure of lexical similarity using an unsupervised partitional clustering method; generating a collection of sequences of sentence types by assigning a distinct sentence type to each sentence cluster and representing each sentence of each communication transcript of the corpus with the sentence type assigned to the sentence cluster into which the sentence is grouped; and generating a specified number of discrete segment clusters of one or more sequential sentences by successively merging sentence clusters according to a proximity-based measure between the sentence types assigned to the sentence clusters within sequences of the collection.
9357071
14308555
1
1. A non-transitory, computer readable medium that controls an executable computer readable program code embodied therein, the executable computer readable program code for implementing a method of analyzing electronic communication data and generating behavioral assessment data therefrom, which method comprises: receiving, by a control processor, an electronic communication in text form from a communicant; analyzing the text of the electronic communication by mining the text of the electronic communication and applying a predetermined linguistic-based psychological behavioral model to the text of the electronic communication; and generating behavioral assessment data including a personality type corresponding to the analyzed text of the electronic communication.
20130054609
13220967
0
1. A method for accessing a specific location in voice site audio content, wherein the method comprises: indexing, in a voice site index, a specific location in the voice site that contains the audio content; mapping the audio content with information regarding the location and adding the mapped content to the index of the voice site; using the index to determine content and location of an input query in the voice site; automatically marking the specific location in the voice site that contains the determined content and location of the input query; and automatically transferring to the marked location in the voice site; wherein at least one of the steps is carried out by a computer device.
9053207
11672736
1
1. An adaptive query handling method comprising: receiving an initial query in a database driven application executing in a host computing platform; parsing the initial query to identify a query expression key; matching the query expression key to an adaptive query expression, the adaptive query expression specifying a data query in addition to annotations indicating points of variability in the adaptive query expression, each annotation being replaced with a static sub-expression consistent with a configured query language for a final query expression; transforming the adaptive query expression to the final query expression through a replacement of the annotations in the adaptive query expression with static expressions conforming to the query language for the final query expression; and, applying the final query expression to a database subsystem for the database driven application.
20150294017
14748918
0
1. A method of searching a collection of electronic documents, the method comprising: replacing a set of synonymous terms with a set of standardized paragraph terms, wherein each standardized paragraph term has an associated term weight; generating standardized search terms in response to a search query; generating paragraph scores for paragraphs of a document based at least in part on the associated weights of standardized paragraph terms that match one or more of the standardized search terms; determining overall document scores for the electronic documents based at least in part on a combination of the paragraph scores; and determining a set of matching documents, wherein the set of matching documents is ordered using the overall document scores.
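The scoring scheme in this claim, where synonymous terms are mapped to weighted standardized terms, paragraphs are scored against the standardized query, and paragraph scores are combined into a document score, can be illustrated with a small sketch. The synonym table, the weights, and the use of a plain sum as the combination rule are assumptions; the claim does not fix any of them.

```python
# Illustrative sketch: weighted paragraph/document scoring as described above.
# The synonym map, term weights, and summation rule are assumptions.
SYNONYMS = {"automobile": "car", "vehicle": "car"}   # synonymous terms -> standardized term
TERM_WEIGHTS = {"car": 2.0, "engine": 1.0}           # weight per standardized term

def standardize(tokens):
    return [SYNONYMS.get(t, t) for t in tokens]

def paragraph_score(paragraph_tokens, search_terms):
    return sum(TERM_WEIGHTS.get(t, 0.0)
               for t in standardize(paragraph_tokens) if t in search_terms)

def document_score(paragraphs, query_tokens):
    search_terms = set(standardize(query_tokens))
    return sum(paragraph_score(p, search_terms) for p in paragraphs)

docs = {"doc1": [["the", "car", "engine"], ["a", "vehicle"]]}
ranked = sorted(docs, key=lambda d: document_score(docs[d], ["automobile"]), reverse=True)
```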
20070143284
11633190
0
1. A method for constructing learning data, comprising the steps of: (a) generating learning models by performing machine learning with respect to learning data; (b) attaching tags to a raw corpus automatically using the generated learning models to thereby generate learning data candidates; (c) calculating confidence scores of the generated learning data candidates, and selecting a learning data candidate by using the calculated confidence scores; and (d) allowing a user to correct an error in the selected learning data candidate through an interface and adding the error-corrected learning data candidate to the learning data, thereby adding new learning models incrementally.
20170249297
15055868
0
1. A method for training a model to accurately determine whether two phrases are conversationally connected, the method comprising: detecting a first phrase and a second phrase; translating the first phrase to a first string of word types by determining what type of word each word of the first phrase represents, and replacing each word of the first phrase with its respective type; translating the second phrase to a second string of word types by determining what type of word each word of the second phrase represents, and replacing each word of the second phrase with its respective type; generating a third string of word types by appending the second string to the end of the first string; determining a first degree to which the first string and the second string matches any singleton template of a plurality of singleton templates by comparing both the first string and the second string to the plurality of singleton templates; determining a second degree to which the third string matches any conversational template of a plurality of conversational templates; determining whether the first degree exceeds the second degree; in response to determining that the first degree exceeds the second degree: decreasing a strength of association between the first string and a conversational category, and decreasing a strength of association between the second string and the conversational category; and in response to determining that the second degree exceeds the first degree: increasing the strength of association between the first string and the conversational category, and increasing the strength of association between the second string and the conversational category.
20130283168
13449927
0
1. One or more computer-readable media storing computer-executable instructions that, when executed on one or more processors, cause the one or more processors to perform acts comprising: causing display of a conversation user interface in conjunction with a site of a service provider; receiving input from a user while the user engages in a session on the site of the service provider, the user input comprising one of audio input, keypad input, or touch input; representing the user input in the conversation user interface; determining a response to the user input; representing the response in the conversation user interface; enabling the user to interact with the conversation user interface to ascertain how the response was determined and to modify assumptions used to determine the response; determining a revised response based on the modified assumptions; and representing the modified response in the conversation user interface.
20100241426
12729379
0
1. A method for noise reduction, comprising: beamforming audio signals sampled by a microphone array to get a signal with an enhanced target voice; locating a target voice in the audio signal sampled by the microphone array; determining a credibility of the target voice when the target voice is located; weighing a voice presence probability by the credibility; and enhancing the signal with the enhanced target voice according to the weighed voice presence probability.
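The front end of this claim, beamforming the microphone-array signals toward a located target voice and then weighting a voice-presence probability by a credibility term, can be sketched as below. Delay-and-sum beamforming and the multiplicative weighting are assumptions, since the claim does not fix either, and the inputs are placeholder data.

```python
# Illustrative sketch: delay-and-sum beamforming plus credibility-weighted
# voice-presence probability. The specific beamformer and weighting rule are assumptions.
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    # mic_signals: array of shape (num_mics, num_samples); delays align the located target voice
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays_samples)]
    return np.mean(aligned, axis=0)              # signal with an enhanced target voice

def weighted_voice_presence(p_voice, credibility):
    return p_voice * credibility                 # weigh the probability by the credibility

mics = np.random.randn(4, 16000)                 # 4 microphones, 1 s at 16 kHz (placeholder data)
enhanced = delay_and_sum(mics, delays_samples=[0, 2, 4, 6])
gain = weighted_voice_presence(p_voice=0.8, credibility=0.9)
output = gain * enhanced                         # enhance according to the weighted probability
```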
8103007
11319917
1
1. A method comprising: providing a plurality of voice output units and a plurality of microphones in a region; sensing the ambient sound via the plurality of microphones in the region for a predetermined time interval; analyzing the sensed ambient sound; overlaying the ambient sound with a plurality of test audio signals injected into the region having predetermined characteristics via the voice output units; sensing the overlaid ambient sound via the plurality of microphones; determining if speech intelligibility in the region has been degraded beyond an acceptable standard; and upon determining that the speech intelligibility has degraded beyond an acceptable level based upon maximum attainable remediation values for at least one of frequency spectral and sound pressure level adjusting at least some of pace, pitch, frequency spectra and sound pressure level of audio from at least some of the plurality of voice output units.
20130159320
13329345
0
1. A computer-implemented method for ranking documents, comprising: identifying a plurality of query-document pairs based on clickthrough data for a plurality of documents; building a latent semantic model based on the plurality of query-document pairs, wherein the plurality of query-document pairs comprises a plurality of query-title pairs, and wherein building the latent semantic model comprises building a bilingual topic model by using the plurality of query-title pairs to learn a semantic representation of a query based on a likelihood that the query is a semantics-based translation of each of the plurality of documents; and ranking the plurality of documents for a Web search based on the latent semantic model.
20150039303
14314182
0
1. A speech processing system comprising: an input for receiving an input signal from at least one microphone, a first signal path for connecting the input to an output; a second signal path for connecting the input to the output, selection circuitry for selecting either the first signal path or the second signal path for carrying the input signal from the input to the output; wherein the first signal path contains a first buffer for storing the input signal, and the second signal path contains a noise reduction block for receiving the input signal and supplying a noise reduced signal to the output, and a second buffer; wherein the second buffer imparts a delay in the second signal path such that the noise reduced signal is substantially time-aligned with the output of the first buffer.
8064887
11981433
1
1. A system for delivering a pre-recorded audio message in response to the changing position of at least one object, said system comprising, in combination: a memory unit for storing: at least one spoken audio message recorded as an audio file stored as a member of a collection of different audio files stored in said digital memory, location data produced by one or more location detectors, said location data indicating the position and identity of said at least one object at different times, and data specifying one or more rules, each of said rules containing a condition part and an action part, said condition part defining at least one relative position condition to be satisfied by said at least one object and each action part specifying the particular audio file in said collection to be delivered when said condition part is satisfied and further specifying the destination to which said particular audio file is to be delivered, a processor coupled to said memory unit for evaluating said location data in accordance with said one or more rules, and a message transmitter responsive to said processor for delivering said particular audio file in the manner specified by the action part of each of said rules whose condition part is satisfied by said location data.
20120019683
12843805
0
1. A method comprising particular steps of: determining a first intensity value for a first region of an image; determining a second intensity value for a second region of the image; generating a first feature value for a first feature at least in part by dividing the first intensity value by the second intensity value; determining whether the first feature value falls within a specified range that is associated with the first feature; and in response to determining that the first feature value falls within the specified range that is associated with the first feature, storing data that indicates that the image contains a face; wherein the steps are performed by an automated device.
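The feature test in this claim is simple enough to show directly: divide the intensity of one image region by that of another and check whether the ratio falls within the range associated with the feature. In the sketch below, the region coordinates, the use of the mean as the intensity value, and the accepted range are assumptions for illustration only.

```python
# Illustrative sketch: an intensity-ratio feature test as described in the claim.
# Region coordinates, the mean as the "intensity value", and the range are assumptions.
import numpy as np

def region_mean(image, top, left, height, width):
    return float(image[top:top + height, left:left + width].mean())

def contains_face(image, feature_range=(1.2, 2.5)):
    first = region_mean(image, 20, 10, 10, 60)    # hypothetical first region
    second = region_mean(image, 40, 10, 10, 60)   # hypothetical second region
    ratio = first / max(second, 1e-9)             # first feature value
    lo, hi = feature_range
    return lo <= ratio <= hi                      # would trigger storing "image contains a face"

img = np.random.randint(0, 256, (64, 80)).astype(float)
print(contains_face(img))
```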
9201868
13794800
1
1. A method implemented on a computer comprising a processor, and for performing actions on a multi-term text unit based on a derived semantic attribute or attribute value, the method comprising: receiving a text content comprising multiple text units, each text unit comprising at least a portion of a phrase or a sentence consisting of multiple terms, each term comprising a word or a phrase in a language; identifying, in the text content, a text unit, wherein the text unit comprises a first term and a second term, wherein neither the first term nor the second term includes a grammatically defined negator or negation word; obtaining a derived semantic attribute or attribute value for the text unit as a whole based on the first term and the second term; and performing an action on the text unit based on the derived semantic attribute or attribute value, wherein the action includes extracting the text unit for display or storage, marking the text unit for display in a format that is different from the display format of the text elements adjacent to the text unit in the text content, or displaying the text unit in a format that is different from the display format of the text elements adjacent to the text unit in the text content; wherein the steps for obtaining the derived semantic attribute or attribute value for the text unit as a whole include the following: receiving a name or description of a semantic attribute, wherein the embodiment of the semantic attribute includes an attribute name or description, an attribute type or attribute value, wherein the semantic attribute comprises a first value and a second value each representing a meaning carried by a term in the language, wherein an example of the semantic attribute comprises a sentiment or opinion, and when the semantic attribute is a sentiment or opinion, each of the first value and the second value is either a positive value or a negative value, but not a neutral value; identifying the first term in the text unit, wherein the first term is associated with the first value; identifying the second term in the text unit, wherein the second term is associated with the second value; determining the derived semantic attribute or attribute value for the text unit as a whole based on the first term and the second term, and the first value and the second value.
7840175
11257584
1
1. A computer-implemented method performed by one or more processors for presenting a training course stored in memory to a learner, the method comprising the following steps performed by one or more processors: retrieving a default learning strategy associated with the learner to apply to a training course, the training course comprising a plurality of nodes defining a path for the learner; determining the default learning strategy is a valid strategy for the training course; applying the default learning strategy to the training course, wherein applying the default learning strategy includes presenting the plurality of nodes of the training course to the learner in an order based on the default learning strategy, the order different from the path defined by the plurality of nodes of the training course; receiving a request from the learner to replace the default learning strategy with a disparate learning strategy while the training course is in progress; applying the disparate learning strategy to the training course by modifying the order through the plurality of nodes for the learner such that the learner does not revisit previously visited nodes; and presenting the training course to the learner in order based on the disparate learning strategy, the order of the presented training course at least partially determined by unvisited nodes of the training course during the prior presentation.
8006157
11863704
1
1. A method for outlier detection, comprising: receiving a plurality of real sample vectors, each real sample vector representing a detected real event; synthesizing a plurality of random state vectors, each random state vector having a randomly generated value; forming a learning set of candidate sample vectors, consisting of a plurality of said random state vectors and a plurality of said synthesized random state vectors; generating a first classifier for classifying said candidate sample vectors between being an outlier or a non-outlier; forming a set of classifiers for classifying candidate sample vectors from among said set of candidate sample vectors between being an outlier or a non-outlier, said forming including initializing said set of classifiers as said first classifier, and adding additional classifiers to said set by repeated iterations, each iteration including generating, for each of said candidate sample vectors, a set of classification results based on said set of classifiers, generating, for each of said candidate sample vectors, a classification uncertainty value, said value reflecting a comparative number, if any, of the classification results indicating the candidate sample as being an outlier to a number, if any, of the classification results indicating the candidate sample as being a non-outlier, updating the learning set of candidate sample vectors by accepting candidate sample vectors for keeping in the updated learning set based on the vector's classification uncertainty value, wherein said accepting is such that a candidate sample vector's probability of being accepted into said updated learning set is proportional to the candidate sample vector's classification uncertainty value, and wherein the population of said updated learning set of candidate sample is substantially lower than the population of the learning set of candidate prior to the updating, generating another classifier based on said learning data set, updating said set of classifiers to include said another classifier, and repeating said iteration until said set of classifiers includes at least t members; generating an outlier detection algorithm based on said set classifiers; and classifying subsequent sample vectors based on said outlier detection algorithm.
20160104485
14859258
0
1. A method implemented by an information handling system that includes a memory and a processor, the method comprising: generating, by the processor, a plurality of information elements based upon a voice conversation between a first entity and a second entity over a communication network; constructing a current conversation pattern from the plurality of information elements; identifying one or more deceptive conversation properties of the current conversation pattern based upon analyzing the current conversation pattern against one or more domain-based conversation patterns; and sending an alert message to the first entity based upon the identified one or more deceptive conversation properties.
20150364139
14302137
0
1. A system, comprising: a memory that stores instructions; a processor that executes the instructions to perform operations, the operations comprising: obtaining visual content associated with a user and an environment of the user; obtaining, from the visual content, metadata associated the user and the environment of the user; determining, based on the visual content and metadata, if the user is speaking; obtaining, if the user is determined to be speaking, audio content associated with the user and the environment; adapting, based on the visual content, audio content, and metadata, an acoustic model corresponding to the user and the environment; and enhancing, by utilizing the acoustic model, a speech recognition process utilized for processing speech of the user.
8249857
12108738
1
1. A method for multilingual administration of enterprise data, the method comprising: retrieving, by at least one device, enterprise data comprising text and metadata; extracting, by the at least one device, the text and the metadata from the enterprise data for rendering from a digital media file, the extracted text and the extracted metadata being in a source language; selecting, by the at least one device, a target language from among a plurality of target languages based on a data type for the enterprise data; translating, by the at least one device, the extracted text and the extracted metadata in the source language to translated text and translated metadata in the target language; converting, by the at least one device, the translated text to synthesized speech in the target language; recording, by the at least one device, the synthesized speech in the target language in a digital media file; and storing the translated text as metadata associated with the digital media file.
8781811
13278622
1
1. A method comprising: receiving, by a server, a first plurality of language preferences from a user; storing the first plurality of language preferences to a database connected to the server; storing the first plurality of language preferences to a first computer readable medium on the first device as a second plurality of language preferences; receiving, from a first device, a request to resolve a first language preference for a first application, wherein the first application utilizes an API to access the first plurality of language preferences stored to the database; comparing the first plurality of language preferences to languages available in the first application; determining a most preferred language for the first application operating on the first device based upon the first comparison of the first plurality of language preferences with the languages available for the first application; providing an indication of the most preferred language for the first application to the first device; receiving, from the first device, a request to resolve a second language preference for a second application, wherein the second application cannot obtain the first plurality of language preferences stored to the database; obtaining the second plurality of language preferences from the first computer readable medium of the first device; determining a most preferred language for the second application on the first device based upon a second comparison of the second plurality of language preferences to languages available for the second application; and providing an indication of the most preferred language for the second application to the first device.
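The preference-resolution step in this claim amounts to picking the highest-ranked user language that an application actually supports, using the server-stored list when the application can reach it and the device-local copy otherwise. The sketch below assumes ordered preference lists and a simple first-match rule; the names are illustrative.

```python
# Illustrative sketch: resolving a most-preferred language for an application.
# Ordered preference lists and the first-match rule are assumptions.
def resolve_language(preferences, available, fallback="en"):
    for lang in preferences:                   # preferences: most to least preferred
        if lang in available:
            return lang
    return fallback

server_prefs = ["de", "fr", "en"]              # stored in the server-side database
local_prefs = ["de", "fr", "en"]               # copy cached on the device
app_one_langs = {"fr", "en"}                   # app that can query the server via the API
app_two_langs = {"en", "es"}                   # app that only sees the local copy

print(resolve_language(server_prefs, app_one_langs))   # -> "fr"
print(resolve_language(local_prefs, app_two_langs))    # -> "en"
```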
9990434
14726467
1
1. A method implemented by an information handling system that includes a memory and a processor, the method comprising: analyzing a plurality of posts included in one or more threads of an online forum, wherein the analyzing further comprises: identifying a main topic related to a parent post of the thread; selecting a plurality of child posts of the thread, wherein the parent post is a parent to each of the child posts; identifying any referential types corresponding to a plurality of words included in the parent post; identifying any anaphora types corresponding to a plurality of words included in each of the child posts; associating each of the plurality of child posts with the parent post as a relationship; resolving the anaphora types included in the child posts with at least one of the referential types included in the parent post; determining a sentiment between each of the child posts and the parent post; identifying a plurality of child topics, wherein each of the plurality of child topics corresponds to one of the child posts and the determined sentiment; and determining a relevance of each of the child posts by comparing the identified main topic to each of the identified child topics; selecting one or more of the child posts based on the relevance of the child posts; ingesting data from the parent post into a corpus utilized by a question answering (QA) system; ingesting data from the one or more selected child posts into the corpus; and building a forum tree corresponding to the online forum, wherein the forum tree includes the parent post and the selected one or more child posts, the relationships between the parent post and each of the selected child posts, and the resolved anaphora types included in each of the selected child posts.
8073700
11422093
1
1. A method carried out by at least one computer, the method comprising acts of: receiving a request comprising speech data from a mobile device; querying a network service using query information obtained from the speech data, whereby search results are received from the network service; formatting the search results for presentation on a display of the mobile device; generating a voice grammar based at least in part on the search results; and sending the search results and the voice grammar generated from the search results to the mobile device.
9282359
13830986
1
1. A method comprising: (a) maintaining, by a computer system comprising one or more computers, one or more databases stored on computer-readable media comprising: (1) first electronic data comprising one or more digitally created reference compact electronic representations for each of a plurality of reference electronic works, wherein each digitally created reference compact electronic representation comprises one or more extracted feature vectors of at least one of the plurality of reference electronic works; and (2) second electronic data associated with one or more of the reference electronic works and related to action information comprising displaying an advertisement corresponding to each of the one or more reference electronic works; (b) obtaining, by the computer system, a first digitally created compact electronic representation comprising one or more extracted feature vectors of a first electronic work; (c) identifying, by the computer system, a matching reference electronic work that matches the first electronic work by comparing the first digitally created compact electronic representation of the first electronic work with the first electronic data using an approximate nearest neighbor search, which is a sub-linear search of the first electronic data that identifies a match to the first digitally created compact electronic representation within a threshold but does not guarantee to identify the closest match to the first digitally created compact electronic representation; (d) determining, by the computer system, the action information corresponding to the matching reference electronic work based on the second electronic data; and (e) associating, by the computer system, the determined action information with the first electronic work.
8393962
11581011
1
1. A non-transitory computer-readable storage medium storing a game program to be executed by a computer of a game device including a voice input element, voice output units, a display, and memory locations, the game program instructing the computer to perform: notification for notifying that a player is prompted to input a voice; acquisition for repeatedly acquiring, after the notification, voice data representing a voice signal, having a predetermined time length, which is inputted to the voice input element; determination for determining, each time the voice data is acquired in the acquisition, whether or not the acquired voice data satisfies a predetermined selecting condition; first memory control for storing a collection comprising a partial subset of the voice data, which is determined to satisfy the predetermined selecting condition in the determination, in the memory locations as a piece of selected voice data; and voice output for outputting, when a game image showing a game character speaking is displayed on the display, a sound effect representing a voice of the game character from the voice output units by using a partial portion of a plurality of pieces of the selected voice data, wherein the sound effect representing a voice of the game character is meaningless.
20100114556
12609647
0
1. A computer-implemented speech translation method, comprising the steps of: receiving a source speech; extracting non-text information in the source speech; translating the source speech into a target speech; and adjusting the translated source speech according to the extracted non-text information to make a final target speech to preserve the non-text information in the source speech.
8417522
12763438
1
1. A speech recognition method, comprising: receiving a speech input signal in a first noise environment which comprises a sequence of observations; determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, comprising, providing an acoustic model for performing speech recognition on an input signal which comprises a sequence of observations, wherein said model has been trained to recognise speech in a second noise environment, said model having a plurality of model parameters relating to the probability distribution of a word or part thereof being related to an observation, and adapting the model trained in the second environment to that of the first environment; the speech recognition method further comprising, determining the likelihood of a sequence of observations occurring in a given language using a language model; and combining the likelihoods determined by the acoustic model and the language model and outputting a sequence of words identified from said speech input signal, wherein adapting the model trained in the second environment to that of the first environment comprises using second order or higher order Taylor expansion coefficients derived for a group of probability distributions and wherein the same expansion coefficient is used for the whole group; the speech recognition method further comprising estimating noise parameters used to determine the Taylor expansion coefficients, wherein the noise parameters comprise a component for additive noise and a component for convolutional noise, and an observation in the first noise environment is related to an observation in the second noise environment by: y = x + h + g(x, n, h) = x + h + C ln(1 + e^(C^(-1)(n − x − h)))  (1) where y is the observation in the first noise environment, x is the observation in the second noise environment, n is the additive noise, h is the convolutional noise in the first environment with respect to the second environment and C is the discrete cosine transformation matrix.
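Equation (1) relates an observation in the noisy first environment to the corresponding clean-environment observation through the additive noise n and convolutional noise h, mapped through the DCT matrix C. In the scalar case (C = 1, an assumption made here purely to keep the example small) it reduces to y = x + h + ln(1 + e^(n − x − h)), and a quick numeric check shows the expected behaviour: y stays near x + h when noise is low and approaches n when noise dominates.

```python
# Illustrative numeric check of the mismatch function in equation (1),
# in the scalar case where the DCT matrix C reduces to 1 (an assumption).
import math

def noisy_observation(x, n, h):
    # y = x + h + ln(1 + exp(n - x - h))
    return x + h + math.log1p(math.exp(n - x - h))

x, h = 10.0, 0.5                            # clean observation and convolutional term
print(noisy_observation(x, n=2.0, h=h))     # low noise: y is close to x + h (about 10.50)
print(noisy_observation(x, n=15.0, h=h))    # high noise: y approaches n (about 15.01)
```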
9548066
14456620
1
1. A system comprising: one or more server computers; one or more server applications that have been selected by a user for execution on the one or more server computers, wherein the one or more server applications operate in conjunction with a speech interface device located in premises of the user to provide services for the user; a speech processing component configured to receive, from the speech interface device, an audio signal that represents user speech, wherein the user speech expresses a user intent, the speech processing component being further configured to perform automatic speech recognition on the audio signal to identify the user speech and to perform natural language understanding on the user speech to determine the user intent; and an intent router configured to perform acts comprising: identifying a first server application of the one or more server applications corresponding to the user intent; providing a first indication to the first server application to invoke an action corresponding to the user intent; providing a second indication of the user intent to the speech interface device, wherein the speech interface device is responsive to the user intent to perform the action corresponding to the user intent; receiving, at the one or more server computers, a confirmation from the speech interface device that at least one of (i) the speech interface device will perform the action in response to the user intent or (ii) the speech interface device has performed the action in response to the user intent; and providing a third indication, based at least in part on receiving the confirmation, to the first server application to cancel responding to the user intent.
20070214418
11486122
0
1. A video summarization method comprising: providing a video wherein the video has a plurality of sentences and a plurality of frames; applying a key frame extraction step to the frames of the video to acquire a plurality of key frames, wherein the key frame extraction step comprises: computing the similarity between each frame to obtain a plurality of similarity values; and choosing the key frames from the frames, wherein the sum of the similarity values between the key frames is the minimum; applying a key sentence extraction step to the sentences of the video to acquire a plurality of key sentences, wherein the key sentence extraction step comprises: converting the sentences into a plurality of corresponding sentence vectors; computing the distance between each sentence vector to obtain a plurality of distance values; according to the distance values, dividing the sentences into a plurality of clusters, wherein the clusters are members of a set; computing the importance of each sentence of each cluster to obtain the importance of each cluster; applying a splitting step to split a most important member with the highest importance in the cluster into a plurality of new clusters, wherein the new clusters replace the original most important member and join the set as members of the set; repeating the splitting step until the number of the clusters reaches a predetermined value; and choosing at least one key sentence from each members of the set, wherein the sum of the importance of the key sentences is the maximum; and outputting the key frames and the key sentences.
20160379635
15105755
0
1. A method of processing received data representing speech comprising the steps of: monitoring the received data to detect the presence of data representing a first portion of a trigger phrase in said received data; sending, on detection of said data representing the first portion of the trigger phrase, a control signal to activate a speech processing block, and monitoring the received data to detect the presence of data representing a second portion of the trigger phrase in said received data, and if said control signal to activate the speech processing block has previously been sent, maintaining, on detection of said data representing the second portion of the trigger phrase, the activation of said speech processing block.
9444928
14740644
1
1. A method, comprising: generating a voice assist message in a device; queueing the voice assist message responsive to a state of a microphone in the device indicating an active state; and executing the queued voice assist message on the device responsive to identifying the state of the microphone in the device indicating an inactive state.
20140171149
14030034
0
1. An apparatus for controlling a mobile device, the apparatus comprising: a conversation recognition unit configured to recognize a conversation between users through mobile devices; a user intent verification unit configured to verify an intent of at least one user among the users based on the recognition result; and an additional function control unit configured to execute an additional function corresponding to the verified user's intent in a mobile device of the user.
20070271509
11383970
0
1. A computer implemented method of performing an operation, comprising: analyzing a document for one or more document attributes; communicating the one or more document attributes via a user interface; selecting at least one of the one or more document attributes; and performing the operation on one or more components of the document associated with the selected at least one document attributes.
10035643
14145864
1
1. A food packaging customization system, comprising: at least one processing device; and one or more instructions which, when executed by the at least one processing device, cause the at least one processing device to be configured as at least: circuitry configured for acquiring user information associated with one or more users for preparing one or more customized food items for the one or more users, including at least utilizing one or more network credentials of the one or more users to obtain at least a portion of the user information at least in part via at least one network connection; circuitry configured for controlling preparation of the one or more customized food items in accordance with the acquired user information; circuitry configured for directing generation of one or more customized packagings for holding the one or more customized food items, the one or more customized packagings having one or more bar codes indicative of the one or more users and one or more features that are customized based, at least in part, on the acquired user information; circuitry configured for controlling one or more components of a robotic packaging system to pack one or more portions of the one or more customized food items into the one or more customized packagings; and circuitry configured for storing at least one indication of at least one dietary activity, including at least circuitry configured for receiving data acquired by at least one bar code reading arrangement of at least one refuse receptacle to obtain at least one identity of at least one user and at least one indication of at least one customized food item having been packaged in at least one customized packaging discarded into the at least one refuse receptacle.
20100036663
12515536
0
1. A method for reducing total bandwidth requirement for communication of audio signals in a voice over internet protocol application, comprising the steps of: sampling said audio signals and converting said sampled audio signals into sampled digital signals of frames of predetermined sizes; computing spacings of order statistics of said frames and deriving the entropy of each of the frames; setting a threshold for a first set of frames derived from said entropy of said first set of frames, wherein said first set of frames comprises one or more frames, and setting the threshold of a second set of frames that is subsequent to said first set of frames to be equal to the threshold of the first set of frames plus an increment, wherein said second set of frames comprises one or more frames; marking the second set of frames, wherein the step of marking comprises marking the second set of frames as inactive speech frames when the entropy of the second set of frames is greater than the threshold of the first set of frames, and marking the subsequent frames as active speech frames when the entropy of the subsequent frames is lesser than the threshold of the first set of frames; and transmitting only the active speech frames.
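The claim marks frames as active or inactive speech by comparing a per-frame entropy, derived from spacings of order statistics, against a threshold that is carried forward with an increment. The sketch below uses a Vasicek-style spacing estimator and a simple threshold-plus-increment loop; the frame handling, the spacing parameter m, and the increment value are assumptions for illustration.

```python
# Illustrative sketch: spacing-of-order-statistics entropy per frame and the
# threshold/increment marking rule. Frame handling, m, and the increment are assumptions.
import numpy as np

def spacing_entropy(frame, m=4):
    x = np.sort(frame)
    n = len(x)
    upper = np.minimum(np.arange(n) + m, n - 1)
    lower = np.maximum(np.arange(n) - m, 0)
    spacings = np.maximum(x[upper] - x[lower], 1e-12)
    return float(np.mean(np.log(n * spacings / (2 * m))))

def mark_frames(frames, increment=0.1):
    threshold = spacing_entropy(frames[0])          # threshold derived from the first frame set
    marks = []
    for frame in frames[1:]:
        h = spacing_entropy(frame)
        marks.append("inactive" if h > threshold else "active")   # only "active" frames are sent
        threshold = threshold + increment           # threshold for the subsequent frame set
    return marks
```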
20100239166
12566072
0
1. A character recognition device comprising: an acquiring unit that acquires image data describing pixel values representing colors of pixels constituting an image; a binarizing unit that binarizes the pixel values described in the image data acquired by the acquiring unit; an extracting unit that extracts boundaries of colors in the image represented by the image data acquired by the acquiring unit; a delimiting unit that carries out a labeling processing on the image represented by the image data acquired by the acquiring unit to delimit a plurality of image areas in the image; a specifying unit that specifies, with regard to first image areas arranged according to a predetermined rule among the plurality of image areas delimited by the delimiting unit, pixels binarized by the binarizing unit, corresponding to the first image areas, as a subject for character recognition, and specifies, with regard to second image areas not arranged according to the predetermined rule among the plurality of image areas delimited by the delimiting unit, pixels of areas surrounded by boundaries extracted by the extracting unit, corresponding to the second image areas, as a subject for character recognition; and a character recognition unit that recognizes characters represented by the pixels specified by the specifying unit as a subject for character recognition.
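This claim combines binarization, boundary extraction, and connected-component ("labeling") processing to decide which pixels are handed to character recognition. A sketch of the binarization and labeling stages is below; the fixed threshold is an assumption, and scipy's connected-component labeling stands in for the claimed labeling processing.

```python
# Illustrative sketch: binarization plus connected-component labeling, the first
# stages described in the claim. The fixed threshold is an assumption; scipy's
# labeling stands in for the claimed labeling processing.
import numpy as np
from scipy import ndimage

def binarize(gray_image, threshold=128):
    return (gray_image < threshold).astype(np.uint8)    # dark pixels become foreground

def label_regions(binary_image):
    labels, count = ndimage.label(binary_image)          # delimit candidate image areas
    return labels, count

gray = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder image data
binary = binarize(gray)
labels, count = label_regions(binary)
print(f"{count} candidate image areas for character recognition")
```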
9390712
14223468
1
1. A method performed by a computer processor for recognizing mixed speech from a source, comprising: training a first neural network to recognize a speech signal spoken by a speaker with a higher level of a speech characteristic from a mixed speech sample; training a second neural network to recognize a speech signal spoken by a speaker with a lower level of the speech characteristic from the mixed speech sample, wherein the lower level is lower than the higher level; and decoding the mixed speech sample with the first neural network and the second neural network by optimizing the joint likelihood of observing the two speech signals.
4604737
06514317
1
1. An electronic diving apparatus for producing indication sounds to assist a diver carrying the apparatus, comprising: means for sensing ambient pressure in a body of water; means responsive to said means for sensing for generating a depth signal having a frequency related to said ambient pressure; timing means for gating said depth signal to produce periodic audio signal pulses separated by intervals of silence having a greater time duration than said pulses; and transducer means for receiving said gated depth signal and producing a corresponding sound in said body of water for indicating the depth of said apparatus within said body of water.
8074171
12134293
1
1. A method for providing notification of content potentially omitted from within an active document in a document preparation application executing on a computer system having a display, the method comprising: defining a natural language model for a set of phrasal forms associating each phrasal form in the set of phrasal forms with a content type; parsing a textual content of the active document to generate one or more natural language tokens; accessing the natural language model to identify each of the one or more natural language tokens that matches with a phrasal form in the set of phrasal forms; generating a list of expected content items having an expected content item for each of the one or more natural language tokens that matches with a phrasal form in the set of phrasal forms, each expected content item being generated based upon the content type associated with the corresponding matching phrasal form in the natural language model; scanning the active document to attempt to locate each expected content item in the list of expected content items; and displaying a notification of each expected content item in the list of expected content items not located within the active document on the display.
20060111904
10996811
0
1. A method for spotting an at least one target speaker within at least one call interaction, the method comprising: generating from the at least one call interaction an at least one speaker model of the at least one speaker based on an at least one speaker speech sample; and searching for the at least one target speaker using the at least one speech sample of the target speaker and the at least one speaker model.
9280538
14292498
1
1. A sentence hiding and displaying system comprising: an image creating interface receiving an input of information on an original image, a plurality of sentences corresponding to the original image, and information on at least one language to be hidden; an image creator configured to determine at least one sentence to be hidden expressed in the language to be hidden from the plurality of sentences, and to create a hidden image of the original image, wherein the at least one sentence is hidden in the hidden image; a sentence display interface configured to receive an input of information on a language selected by a user; and a sentence extractor and displayer configured to extract at least one sentence expressed in the selected language from the at least one sentences hidden in the hidden image, and to display the at least one sentence expressed in the selected language on the original image.
9568327
14417998
1
1. A navigation system comprising: a GPS module; an image recognition module having a line recognition function; a roadmap storage module configured to store roadmap information and route change possible section information through which a route of a vehicle is changed; a roadmap receiving module configured to receive the roadmap information; and an arithmetic processing module configured to determine whether the route of the vehicle is changed, based on the route change possible section information and line recognition information recognized by the image recognition module, wherein the roadmap information stored in the roadmap storage module comprises line characteristic information, and wherein when determining that a route change is dangerous because a number of lanes which are to be changed from a current lane of the vehicle is large in comparison to a remaining forward distance to a route change impossible section from the vehicle, the arithmetic processing module cancels a current route guide function and searches for a next route.
4802228
06923004
1
1. An instrument for use in the clinical therapeutic correction of defects in speech of a user which comprises: means for generating an electrical signal related to said speech, said speech having an acoustic spectrum; at least one input for receiving said electrical signal related to said speech; a broad band amplifier for receiving and selectively increasing said electrical signal; a first one-octave bandpass amplifier for receiving said electrical signal, said first bandpass amplifier having means for passing and selectively amplifying a first preselected portion of said acoustic spectrum in said electrical signal, said first preselected portion having a center frequency; a second one-octave bandpass amplifier for receiving said electrical signal having means for passing and selectively amplifying a second preselected portion of said acoustic spectrum present in said electrical signal, said second preselected portion having a center frequency; means for individually selecting said portion of said acoustic spectrum to be passed and amplified by said first and second bandpass amplifiers, respectively; a mixer for selectively combining outputs of said broad band amplifier and said first and second bandpass amplifiers and for providing an output signal corresponding to said combined outputs from said mixer; and means for converting said output signal from said mixer into audible sound, said audible sound including said speech with said portions of said acoustic spectrum enhanced in amplitude.
8463612
11557047
1
1. A system for monitoring and collection of at least a portion of a voice conversation handled by a Voice over Internet Protocol (“VoIP”) application the system comprising: an audio stream capture component in operable communication with a computer system operably coupled to a communications network and further configured to monitor and capture at least a portion of a voice conversation that originates from or terminates at the computer system, wherein the computer system comprises a processor, a memory, an audio input device, an audio output device, a device driver providing an application programming interface (“API”) for the audio input device and the audio output device, and a VoIP application for handling voice conversations using the computer system, and wherein the audio stream capture component comprises a first set of instructions executable by the processor, the first set of instructions operable when executed by a processor to perform operations comprising: intercepting a first API function call from the VoIP application to the device driver, the first API function call indicating that the VoIP application has processed a first set of data at an audio input buffer configured to hold a first set of audio data received at the audio input device, the first set of data representing an audio input stream for the VoIP application, wherein the first set of data comprises a first set of raw data; identifying, based at least in part on information contained in the first API function call, the location of the audio input buffer for the VoIP application; capturing a copy of the first set of data stored in the audio input buffer before the first set of data is purged from the audio input buffer; propagating the intercepted first API function call for reception by the device driver; permitting the first set of raw data stored in the audio input buffer to be processed in accordance with at least the first API function call; intercepting a second API function call from the VoIP application to the device driver, the second API function call comprising information about a second set of data representing an audio output stream from the VoIP application, wherein the second set of data corresponds to the output from the VoIP application and comprises a second set of raw data; capturing a copy of the second set of data; propagating the intercepted second API function call for reception by the device driver; permitting the second set of raw data to be processed in accordance with at least the second API function call; and transmitting the captured copies of the first set of data and the second set of data for reception by an audio mixer component; an audio mixer component in operable communication with the computer system, the audio mixer component comprising a second set of instructions executable by the processor, the second set of instructions operable when executed by the processor to perform operations comprising: receiving the copy of the first set of data; receiving the copy of the second set of data; and synchronizing the copy of the first set of data with the copy of the second set of data to re-create a voice conversation handled by the VOIP application; and an audio storage component in operable communication with the computer system, the audio storage component comprising a third set of instructions executable by the processor, the third set of instructions operable when executed by the processor to perform operations comprising: compressing the re-created voice conversation; saving the re-created voice conversation to a storage medium; and transmitting the re-created voice conversation for reception by a monitoring server.
8996359
13462488
1
1. A system, comprising: a computer; a first set of categories; a second set of words, where each word in the second set of words is a member of one or more of the first set of categories; a word identifier on the computer, the word identifier configured to identify words in a text; and a signature generator configured to generate a signature for said text using said words identified by the word identifier in said text and said category memberships for each word.
7809548
11075625
1
1. A method of processing at least one natural language text using a graph, comprising: selecting, using a processing unit, a plurality of text units from said at least one natural language text; associating, using the processing unit, the plurality of text units with a plurality of graph nodes such that each graph node corresponds to one of the text units selected from said at least one natural language text; determining, using the processing unit, at least one connecting relation between at least two of the plurality of text units; associating, using the processing unit, the at least one connecting relation with at least one graph edge connecting at least two of the plurality of graph nodes; constructing, using the processing unit, a graph using only the plurality of graph nodes that correspond to one of the text units selected from said at least one natural language text and said at least one graph edge; and determining, using the processing unit, at least one ranking by applying a graph-based ranking algorithm to the graph, wherein determining the at least one ranking comprises ranking the plurality of graph nodes based upon the at least one graph edge so that the ranking represents the relative importance, within the natural language text, of the text units associated with the graph nodes, and wherein ranking the plurality of graph nodes based upon the at least one graph edge comprises: assigning a plurality of first scores to the plurality of graph nodes; defining a relationship between a second score of each graph node and second scores of graph nodes coupled to each graph node by a graph edge; and determining a plurality of second scores associated with the plurality of graph nodes by applying an iterative recursive algorithm starting with the first plurality of scores and iterating until the relationship is satisfied.
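The graph-based ranking described in this claim is essentially a PageRank-style iteration over text-unit nodes: assign initial scores, define each node's score in terms of its neighbors' scores, and iterate until the relationship holds. Below is a minimal sketch on an unweighted, undirected graph; the damping factor and convergence tolerance are assumptions, since the claim only requires iterating until the score relationship is satisfied.

```python
# Illustrative sketch: iterative graph-based ranking of text-unit nodes
# (PageRank-style). The damping factor and tolerance are assumptions.
def rank_nodes(edges, damping=0.85, tol=1e-6, max_iter=100):
    nodes = sorted({n for edge in edges for n in edge})
    neighbors = {n: set() for n in nodes}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)

    scores = {n: 1.0 for n in nodes}                 # first scores assigned to the nodes
    for _ in range(max_iter):
        new = {n: (1 - damping) + damping * sum(scores[m] / len(neighbors[m])
                                                for m in neighbors[n])
               for n in nodes}
        converged = max(abs(new[n] - scores[n]) for n in nodes) < tol
        scores = new
        if converged:                                # the defined relationship is satisfied
            break
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: text units A..D connected by co-occurrence edges.
print(rank_nodes([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]))
```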
20110078191
12567920
0
1. A method for training a handwritten document categorizer comprising: for each of a set of categories, extracting a set of discriminative words, from a training set of typed documents labeled by category; for each keyword in a group of keywords, synthesizing a set of samples of the keyword with a plurality of different type fonts, the keywords comprising at least one discriminative word for each category in the set of categories; generating a keyword model for each keyword in the group, parameters of the model being estimated based on features extracted from the synthesized samples of that keyword; computing keyword statistics for each of a set of scanned handwritten documents labeled by category by applying the generated keyword models for the group of keywords to word images extracted from the scanned handwritten documents; and with a computer processor, training a handwritten document categorizer with the keyword statistics computed for the set of handwritten documents and the respective handwritten document labels.
20030115055
10251734
0
1. A speech recognizer comprising: a microphone for receiving speech utterances and pauses; models for recognizing speech; a model adjustor responsive to said models for providing adjusted models; said adjusted models having their parameters simultaneously adjusted to handle distortion introduced by convolutive and additive components; and a recognizer coupled to said microphone and responsive to adjusted models for recognizing speech.
20030002632
09895091
0
1. In a voice mail system having a central voice message server for delivering voice messages to a remote voice message receiving device, a method comprising: storing a voice signal as a voice message on the central voice message server; and selecting, using the remote voice message receiving device, whether the central voice message server will transmit the stored voice message in a digital voice stream format at a playback transmission rate or in a computer file format at a file storage rate; and transmitting the stored voice message from the central voice message server to the remote voice message receiving device in the selected format.
8935159
13864935
1
1. A noise removing system in a voice communication, comprising: a spectral subtraction apparatus configured to perform a spectral subtraction (SS) for voice signals; and a noise removing apparatus configured to perform clustering of the voice signals, for which the spectral subtraction has been performed and which are consecutive on a frequency axis of a spectrogram, to designate one or more clusters, and determine continuity of each of the designated clusters on the frequency axis and a time axis of the spectrogram to extract musical noises.
9020244
13312558
1
1. A method comprising: generating a plurality of model-generated scores; wherein each model-generated score of the plurality of model-generated scores corresponds to a candidate image from a plurality of candidate images for a particular video item; wherein generating the plurality of model-generated scores includes, for each candidate image of the plurality of candidate images, using a set of input parameter values with a trained machine learning engine to produce the model-generated score that corresponds to the candidate image, wherein the set of input parameter values include at least one input parameter value for an activity feature that reflects one or more actions that one or more users have performed, during playback of the particular video item, relative to a frame that corresponds to the particular candidate image; establishing a ranking of the candidate images, from the plurality of candidate images, for the particular video item based, at least in part, on the model-generated scores that correspond to the candidate images; selecting a candidate image, from the plurality of candidate images, as a representative image for the particular video item based, at least in part, on the ranking; wherein the method is performed by one or more computing devices.
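The selection logic in this claim, scoring each candidate frame with a trained model whose inputs include a playback-activity feature, ranking the candidates, and picking a representative image, can be sketched briefly. The linear scoring function and the feature values below are assumptions standing in for the unspecified trained machine learning engine; all names are illustrative.

```python
# Illustrative sketch: ranking candidate thumbnail frames for a video by
# model-generated scores. The linear "model" and feature values are assumptions.
def model_score(features, weights={"sharpness": 1.0, "activity": 2.0}):
    # "activity" reflects user actions (seeks, replays, shares) at this frame's timestamp
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

candidates = {
    "frame_0012": {"sharpness": 0.8, "activity": 0.1},
    "frame_0450": {"sharpness": 0.6, "activity": 0.9},
    "frame_0920": {"sharpness": 0.9, "activity": 0.4},
}

ranking = sorted(candidates, key=lambda f: model_score(candidates[f]), reverse=True)
representative_image = ranking[0]      # "frame_0450" here, favored by playback activity
```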
8249525
12564717
1
1. A mobile electronic device, comprising: a storage system; at least one processor; and one or more programs stored in the storage system to be executed by the at least one processor, the one or more programs comprising: a setting module operable to receive a standard voice command input via a voice command input unit by a user of the mobile electronic device, receive a voice command identification standard input by the user, and store the standard voice command and the voice command identification standard in the storage system, wherein the voice command identification standard defines a preset similarity degree between characteristics of any voice command that satisfies the voice command identification standard and characteristics of the standard voice command; a detecting module operable to detect a missed call received by the mobile electronic device; and an activation module operable to automatically activate a voice identification module in response that the missed call is detected, where said voice identification module is activated only upon receiving a missed call while said mobile device is in a silent mode, the voice identification module operable to determine if a voice command input via the voice command input unit satisfies the voice command identification standard by determining if a similarity degree between each characteristic of the voice command and a corresponding characteristic of the standard voice command satisfies the preset similarity degree, and the activation module further operable to activate a ringing circuit of the mobile electronic device to play a predetermined ring tone in response that the voice command satisfies the voice command identification standard, so as to help the user locate the mobile electronic device according to the ring tone.