Columns:
doc_id: string, 7 to 11 characters
appl_id: string, 8 characters
flag_patent: int64, values 0 or 1
claim_one: string, 13 to 18.3k characters

Each record below is listed as a metadata line (doc_id | appl_id | flag_patent) followed by its claim_one text.
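As a minimal sketch of how a table with this schema can be loaded and split on flag_patent, assuming the records are exported to a hypothetical CSV file named patent_claims.csv (the actual storage format and location are not specified in this listing):

```python
import pandas as pd

# Hypothetical file name; the storage format/location of this table is an assumption.
df = pd.read_csv(
    "patent_claims.csv",
    dtype={"doc_id": str, "appl_id": str, "flag_patent": "int64", "claim_one": str},
)

# Sanity checks against the schema above.
assert {"doc_id", "appl_id", "flag_patent", "claim_one"} <= set(df.columns)
assert df["flag_patent"].isin([0, 1]).all()

# Split records flagged 1 from records flagged 0.
flagged = df[df["flag_patent"] == 1]
unflagged = df[df["flag_patent"] == 0]

print(len(flagged), "records with flag_patent == 1")
print(len(unflagged), "records with flag_patent == 0")
print(df["claim_one"].str.len().describe())  # claim lengths span roughly 13 to 18.3k characters
```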
doc_id: 8595175 | appl_id: 13179601 | flag_patent: 1
1. A computer system comprising: a logical processor; a memory in operable communication with the logical processor; an object-relational mapping session residing in the memory; a mapped persistence ignorant object residing in the memory and having at least one state as part of the session; a fluent interface residing in the memory; and a developer code containing an API Pattern and residing in the memory, which upon execution of the developer code manipulates the mapped persistence ignorant object using calls to the fluent interface in the API Pattern.
doc_id: 8867891 | appl_id: 13269742 | flag_patent: 1
1. A method for determining a semantic concept classification for a digital video clip including a temporal sequence of video frames and a corresponding audio soundtrack, the method comprising: analyzing, by a processing device, the temporal sequence of video frames to determine a set of visual features; analyzing, by the processing device, the audio soundtrack to determine a set of audio features; determining, by the processing device, similarity scores between the digital video clip and each of a plurality of audio-visual grouplets from an audio-visual dictionary, wherein the plurality of audio-visual grouplets includes distinct visual background codewords representing visual background content, distinct visual foreground codewords representing visual foreground content, distinct audio background codewords representing audio background content, and distinct audio foreground codewords representing audio foreground content, wherein the distinct visual background codewords and the distinct visual foreground codewords are separate and distinct from each other, wherein the distinct audio background codewords and the distinct audio foreground codewords are separate and distinct from each other, and wherein the distinct visual background codewords and the distinct visual foreground codewords are separate and distinct from the distinct audio background codewords and the distinct audio foreground codewords, and wherein the determining similarity scores comprises: comparing the set of visual features to distinct visual background codewords and distinct visual foreground codewords associated with a particular audio-visual grouplet; and comparing the set of audio features to distinct audio background codewords and distinct audio foreground codewords associated with the particular audio-visual grouplet; determining, by the processing device, one or more semantic concept classifications using trained semantic classifiers responsive to the determined similarity scores; and storing, by the processing device, indications of the one or more semantic concept classifications in a processor-accessible memory.
doc_id: 8712775 | appl_id: 13755016 | flag_patent: 1
1. A method, comprising: receiving input of a plurality of sample phrases each comprising a plurality of words; representing each sample phrase as a node in a tree; forming a mathematical expression for each pair of nodes in the tree, the mathematical expression comprising a plurality of words found in the sample phrases of a pair of nodes and an indication of whether a word is a common word that occurs in each of the plurality of phrases or an optional word that occurs in some of the plurality of phrases for the pair of nodes; and generating a compact mathematical expression by comparing mathematical expressions, wherein the compact mathematical expression includes each of the plurality of words found in the sample phrases and an indication of whether each of the plurality of words is a common word or an optional word.
doc_id: 9367490 | appl_id: 14304174 | flag_patent: 1
1. A method implemented by a host computing device comprising: detecting connection of a connector to an accessory port of the host computing device based on signals conveyed via a pair of detection pins allocated in the connector; ascertaining an orientation of the connection of the connector to the accessory port based on the signals, the signals indicating a logic state combination for the pair of detection pins, the logic state combination indicating whether an accessory device connected via the connector is a one wire device or a two wire device; and configuring a switching mechanism of the host computing device to route signals according to the ascertained orientation and the logic state combination.
doc_id: 20130311505 | appl_id: 13223209 | flag_patent: 0
1. A computer-implemented method, comprising: receiving text input; providing the text input to a keyword suggestion tool, wherein the keyword suggestion tool generates one or more keywords based on the text input; applying a text reduction function to the text input to generate a reduced text that is a subset of the text input, wherein the text reduction function is based on a term importance scoring of terms in the text input; providing the reduced text to the keyword suggestion tool, wherein the keyword suggestion tool generates one or more keywords based on the reduced text, the one or more keywords generated based on the reduced text generated independently from the one or more keywords generated based on the text input; and generating a keyword set output from a combination of the one or more keywords based on the input text and the one or more keywords based on the reduced text.
doc_id: 8396856 | appl_id: 12945105 | flag_patent: 1
1. A method of building, managing, and sharing a searchable personalized database, the method comprising: enabling users with personal computers having a local storage system and access to the Internet to create selectively shareable personalized databases of a plurality of selected source files, including files originating from the user's local storage system, files located in access-restricted databases, accessed through the Internet, to which the users have obtained personalized access permission, and selectively shareable files the users create; enabling users to annotate files in their personalized databases and incorporating those annotations into the personalized databases; generating one or more word level inverted indices of the personalized databases to support text searching of database source files and in-context highlighting of search terms during display of database source files; enabling users to register selected ones of a plurality of selectively shareable personalized databases; enabling users to unregister selected ones of a plurality of selectively shareable personalized databases; selectively searching registered ones of the plurality of personalized databases, using the one or more indices, according to a search criterion, to locate words and phrases in the source files of the registered databases; and sending information for the display of at least portions of files in the plurality of selected source files that meet the search criterion with in-context highlighting of search terms consistent with the search criterion.
doc_id: 8935390 | appl_id: 13515079 | flag_patent: 1
1. A method for categorizing URLs (Uniform Resource Locators) of web pages accessed by users over an IP (Internet Protocol) based data network, the method comprising: collecting by means of at least one monitoring probe real time data from IP data traffic occurring on the IP based data network; extracting from said collected real time data parameters related to a web page, said parameters including an URL of the web page; processing said URL with a rule based categorization engine, to associate a matching category to the URL of said web page, the matching category being inferred from a pre-defined list of categories; when no matching category is inferred, transferring said URL of said web page to a semantic based categorization engine; and processing said transferred URL by the semantic based categorization engine, said processing consisting in: extracting textual content from content of said web page associated to said URL, performing a semantic analysis of said textual content, and associating a matching category to the transferred URL of the web page based on the semantic analysis of the textual content extracted from the web page, the matching category being inferred from a pre-defined list of categories, wherein the URLs for which no matching category has been inferred by the rule based categorization engine over a determined period of time are memorized, wherein only the N URLs having the highest occurrence for which no matching category has been inferred by the rule based categorization engine over the determined period of time are transferred to the semantic based categorization engine, and wherein N is a pre-defined number of URLs.
doc_id: 20180150457 | appl_id: 14593427 | flag_patent: 0
1. A method of sequence recognition, comprising the steps of: receiving an input sequence; converting the input sequence to an input sequence SSM Matrix; comparing the input sequence SSM Matrix to a plurality of known SSM Matrices representing a plurality of known sequences; matching the input sequence to the known sequence based on the step of comparing.
doc_id: 9380161 | appl_id: 14010471 | flag_patent: 1
1. A computer-implemented system for user-controlled processing of audio signals comprising: a computing device comprising a processor and a memory coupled to the processor, the computing device configured to connect to a voicemail server via at least one communication path, the computing device further comprising: a signal module to receive from the voicemail server via the at least one communication path an audio signal comprising a reference segment and a segment preceding the reference segment; a value module to receive a value q from a user for determining starting points of audio buffers comprised in the preceding segment; a buffer module to define the audio buffers in the preceding segment, each having a width of N audio samples and the starting point that is a unique number of samples away from a start of the preceding segment, based on a result of a division of N by q; a transformation module to transform one or more of the buffers into discrete Fourier transform (DFT) buffers; a signature module to generate a signature of the audio signal using at least a portion of the reference segment and at least one of the DFT buffers, comprising: an identification module to identify a plurality of values comprised in the preceding segment DFT buffers; a mean module to calculate a mean of the values in the preceding segment DFT buffers; a multiplication module to multiply the mean by a mean factor; a comparison module to compare each of the values to a result of the multiplication; and a use module to use those of the values that exceed the result of the multiplication in the generation of the signature; a receipt module to receive from the voicemail server via the at least one communication path a new audio signal and to generate at least one DFT for the new audio signal; a matching module to determine that the new audio signal matches the audio signal based on a comparison of the new audio signal DFT to the signature; and an action module to perform a predefined action with respect to the voicemail server upon determining that the new audio signal matches the signature.
doc_id: 20150161096 | appl_id: 14624929 | flag_patent: 0
1. An apparatus for detecting grammatical errors, the apparatus comprising: a sentence analyzer configured to break up an input sentence into units of morphemes; an example builder configured to break up example text into units of morphemes and build an example-based index database (DB); and an error detector configured to generate morpheme sequences by binding the morphemes broken up from the input sentence in a preset window (n-window) size, the generated morpheme sequences comprising forward morpheme sequences and backward morpheme sequences, determine frequencies of appearance of morpheme sequences identical to the forward morpheme sequences and backward morpheme sequences by searching the example-based index DB, and detect grammatical errors in the input sentence by combining the determined frequencies of appearance.
doc_id: 8015141 | appl_id: 12032702 | flag_patent: 1
1. A method of programming an entity in an environment with a rule set: respective rules grouped according to at least one rule group having a rule group name; at least one group designated as a start rule group for the entity; the entity configured to accept input from an entity controller; at least one rule specified according to a rule-based programming language comprising: at least one language condition, comprising: at least one entity controller input condition; and at least one environment test comprising: a sensory condition comprising at least one of: an entity type condition; an entity status condition; an entity possessory condition; an entity sensory input condition; and an environment status condition; a language verb parameter representing a sensory object; and at least zero language adjectives representing the sensory object; at least one language verb, comprising at least one rule group transition verb; at least one language verb parameter, comprising: names associated with the respective rules; and a sensory object reference representing the sensory object of the language condition of the at least one rule; at least one language adjective; and at least one Boolean logic connector; the at least one rule comprising a rule priority, at least one language condition representing an action condition, at least one language verb representing an action, and at least one language verb parameter representing an action object; the entity comprising a rule group identifier; and the method comprising: receiving the rule set comprising the at least one rule; and programming the entity to: upon initialization of the entity, set the rule group identifier to one of the start rule groups; and for a rule cycle, to: evaluate one or more action conditions of respective rules of the one of the start rule groups identified by the rule group identifier and in descending priority order, within the environment to identify a satisfied rule having satisfied action conditions according to Boolean logic connectors of the satisfied rule; upon identifying a first satisfied rule, perform the first satisfied rule within the environment; and upon failing to identify a second satisfied rule, remain idle for the rule cycle.
doc_id: 9874914 | appl_id: 14281518 | flag_patent: 1
1. A method implemented by a host computing device comprising: maintaining a data structure configured to associate authorized accessory devices with power contract settings for each authorized accessory device, the power contract settings including at least a power exchange direction and current limits; detecting connection of an accessory device to the host computing device via an accessory interface; determining whether the accessory device is an authorized accessory device by comparing identity data of the accessory device to known data that indicates respective identities of the authorized accessory devices; when the accessory device is determined as an authorized accessory device, setting an active power contract for power exchange between the host computing device and the authorized accessory device, including: obtaining, from the data structure, the power contract settings associated with the authorized accessory device; and setting active power contract settings for the authorized accessory device based on the power contract settings obtained from the data structure; monitoring power exchange conditions between the host computing device and the authorized accessory device; detecting a change in the power exchange conditions prompting modification of the active power contract settings including modifying the data structure to reflect the modified active power contract settings; and communicating an update message that includes the modified active power contract settings to the authorized accessory device.
doc_id: 9767387 | appl_id: 14921372 | flag_patent: 1
1. A device for selectively performing an object recognition operation, the device comprising: one or more processors to: receive a plurality of images for the object recognition operation; combine the plurality of images into a stitched image; determine a mean luminosity of an overlapped area of two images, of the plurality of images, that share the overlapped area; convert the two images to a hue-saturation-value (HSV) color space to generate two HSV color space converted images; obtain a composite image based on the mean luminosity of the overlapped area and based on hue, saturation, and value parameters associated with the two HSV color space converted images; determine a mean hue parameter, a mean saturation parameter, and a mean value parameter for the composite image; determine a reliability score for the two images based on the mean hue parameter, the mean saturation parameter, and the mean value parameter for the composite image, the reliability score predicting a quality of the stitched image that includes the two images, of the plurality of images, to which the reliability score corresponds; determine whether an accuracy associated with performance of the object recognition operation is likely to satisfy a threshold based on the reliability score; and selectively perform the object recognition operation based on whether the accuracy associated with the performance of the object recognition operation is likely to satisfy the threshold, the object recognition operation identifying one or more objects in the plurality of images.
doc_id: 20130040706 | appl_id: 13654043 | flag_patent: 0
1. A mobile communication terminal, comprising: a display unit to display a call history comprising a first call distinguishing icon corresponding to a counterpart and a second call distinguishing icon corresponding to the counterpart, the first call distinguishing icon and the second call distinguishing icon being arranged in a chronological order according to call history information, the first call distinguishing icon comprising a voice call icon and the second call distinguishing icon comprising a message icon; a memory unit to store the call history information generated in response to a voice call, or a message being sent from or received by the mobile communication terminal; and a controller to generate a voice call if the first call distinguishing icon is selected, and to control the display unit to display a message creating screen if the second call distinguishing icon is the message icon and is selected.
doc_id: 9292183 | appl_id: 13922969 | flag_patent: 1
1. An apparatus, comprising: at least one computer processor; at least one communications interface configured to transmit information to a web server over a network; and at least one computer-readable medium configured to store a plurality of computer program instructions that, when executed by the at least one computer processor, perform a method comprising: receiving from the web server, by the apparatus, a web page for a multimodal application wherein the web page includes a plurality of input fields; presenting the web page on a display of the apparatus; monitoring user input to determine a mode of interaction used by a user when interacting, via the apparatus, with at least one input field of the web page; storing information indicating the determined mode of interaction used by the user when interacting with the at least one input field; evaluating, by the at least one computer processor, a user modal preference based, at least in part, on the stored information indicating the determined mode of interaction used by the user when interacting with the at least one input field; and sending an indication of the user modal preference from the apparatus to the web server via the at least one communications interface.
doc_id: 7774203 | appl_id: 11589772 | flag_patent: 1
1. An audio signal segmentation algorithm comprising: providing an audio signal; applying an audio activity detection (AAD) step to divide the audio signal into at least one first audio segment and at least one second audio segment, wherein the audio activity detection step further comprises: dividing the audio signal into a plurality of frames; applying a frequency transformation step to signals in each of the frames to obtain a plurality of bands in each frame; performing a likelihood computation step to the bands and a noise parameter to obtain a likelihood ratio therebetween; performing a comparison step to the likelihood ratio and a noise threshold, if the noise threshold is greater than the likelihood ratio, the bands belonging to a first frame, and if the likelihood ratio is greater than the noise threshold, the bands belonging to a second frame wherein the first frame belongs to the first audio segment and the second frame belongs to the second audio segment; and when a distance between two adjacent second frames is smaller than a predetermined value, combining the two adjacent second frames to compose the second audio segment, performing an audio feature extraction step on the second audio segment to obtain a plurality of audio features of the second audio segment; applying a smoothing step to the second audio segment after the audio feature extraction step; and discriminating a plurality of speech frames and a plurality of music frames from the second audio segment wherein the speech frames and the music frames compose at least one speech segment and at least one music segment, respectively.
doc_id: 20110258191 | appl_id: 12935679 | flag_patent: 0
1. A search result providing system, comprising: a registration keyword unit to determine whether an additional keyword is required to be registered based on at least one of information associated with a registration of a keyword, wherein the additional keyword is registered as a registration keyword based on the determination.
doc_id: 6118064 | appl_id: 09390548 | flag_patent: 1
1. A karaoke system acoustically outputting karaoke musical accompaniment together with a singing voice input from a microphone, and in the conjunction therewith, performing video output for image of lyric in synchronism with progress of karaoke musical accompaniment, comprising: sound volume integrating means for sampling singing voice input volume from said microphone at a given interval and sequentially integrating respective sampled values; converting means for converting an integrated value derived by said sound volume integrating means into a calorie consuming amount of a physical exercise by singing a song according to a predetermined algorithm; and announcing means for announcing said calorie consuming amount derived by said converting means to a user.
doc_id: 20110010180 | appl_id: 12500029 | flag_patent: 0
1. A method of speech enabled media sharing in a multimodal application, the method implemented with the multimodal application and a multimodal browser, a module of automated computing machinery operating on a multimodal device supporting multiple modes of user interaction, the modes of user interaction including a voice mode and one or more non-voice modes, wherein the voice mode includes accepting speech input from a user, digitizing the speech, and providing digitized speech to a speech engine, and wherein the non-voice mode includes accepting input from a user through physical user interaction with a user input device for the multimodal device; the method comprising: parsing, by the multimodal browser, one or more markup documents of a multimodal application; identifying, by the multimodal browser, in the one or more markup documents a web resource for display in the multimodal browser; loading, by the multimodal browser, a web resource sharing grammar that includes keywords for modes of resource sharing and keywords for targets for receipt of web resources; receiving, by the multimodal browser, an utterance matching a keyword for the web resource, a keyword for a mode of resource sharing and a keyword for a target for receipt of the web resource in the web resource sharing grammar thereby identifying the web resource, a mode of resource sharing, and a target for receipt of the web resource; and sending, by the multimodal browser, the web resource to the identified target for the web resource using the identified mode of resource sharing.
doc_id: 9594831 | appl_id: 13531493 | flag_patent: 1
1. A method implemented by one or more computer processing devices, the method comprising: receiving and storing a list of multiple different named entities, the multiple different named entities homogenously pertaining to a particular subject matter domain; determining and storing a set of candidate mentions of the multiple different named entities, each candidate mention being an occurrence of a corresponding named entity in the list of multiple different named entities, the set of candidate mentions including true mentions and false mentions occurring in a collection of documents; identifying particular candidate mentions as the true mentions within the set of candidate mentions by leveraging homogeneity in the list of multiple different named entities, each true mention corresponding to a valid occurrence of an individual named entity in the collection of documents, the identifying including assigning scores to individual candidate mentions of the set of candidate mentions and identifying the particular candidate mentions as the true mentions using the scores; and outputting the true mentions.
doc_id: 8583415 | appl_id: 11771542 | flag_patent: 1
1. One or more computer-storage devices having computer-executable instructions embodied thereon that, when executed, perform a method for generating a normalized string based on a native string, wherein the native string comprises one or more native character-sets associated with an Indian writing system, the method comprising: identifying one or more native character-sets within the native string using an optimization attribute that takes into account size of the one or more character-sets being analyzed, wherein one of the one or more native character-sets comprises an initial native character-set having a greatest number of characters, including at least the first character of the native string, that matches a first predetermined native character-set, and each of the one or more native character-sets subsequent to the initial native character-set, if any, comprises the greatest number of characters, including at least the first character following a previous native character-set, that matches a corresponding predetermined native character-set; associating each of the one or more native character-sets with one or more phonetically corresponding normalized character-sets based on an English writing system; generating a query normalized string, wherein the query normalized string comprises the one or more phonetically corresponding normalized character-sets based on the English writing system; and utilizing the query normalized string to identify search content related to the native string input by a user, wherein search content related to the native string input by the user is identified based on the query normalized string matching at least one normalized string associated with the search content.
doc_id: 9251371 | appl_id: 14793435 | flag_patent: 1
1. A method, comprising: at a multitenant computing platform system: setting a data retention policy of an account at the computing platform system; generating data through operation of the computing platform system on behalf of the account; moderating the generated data of the account according to the data retention policy of the account; and storing the moderated data, wherein the computing platform system moderates the generated data by: securing sensitive information of the generated data from access by the computing platform system, and providing operational information from the generated data, the operational information being accessible by the computing platform system during performance of system operations.
doc_id: 20080270116 | appl_id: 11739187 | flag_patent: 0
1. A computer readable medium embodying instructions executable by a processor to perform a method for determining a sentiment lexicon associated with an entity, the method steps comprising: inputting a plurality of texts associated with the entity; labeling seed words in the plurality of texts as positive or negative; determining a score estimate for the plurality of words based on the labeling; re-enumerating paths of the plurality of words and determining a number of sentiment alternations; determining a final score for the plurality of words using only paths whose number of alternations is within a threshold; converting the final scores to corresponding s-scores for each of the plurality of words; and outputting the sentiment lexicon associated with the entity.
doc_id: 20030004719 | appl_id: 10216189 | flag_patent: 0
1. A method for developing an automatic speech recognition (ASR) vocabulary for a voice activated service, the method comprising: a. posing, to at least one respondent, a hypothetical task to be performed; b. asking each of the at least one respondent for a word that the respondent would use to command the hypothetical task to be performed; c. receiving, from each of the at least one respondent, a command word; d. developing a list of command words from the received command word; e. rejecting the received command word, if the received command word is acoustically similar to another word in the list of command words.
doc_id: 20090012785 | appl_id: 11772992 | flag_patent: 0
1. A sampling-rate-independent method of automated speech recognition (ASR), comprising the steps of: comparing speech energies of a plurality of codebooks generated from training data created at an ASR sampling rate to speech energies in a current frame of acoustic data generated from received audio created at an audio sampling rate below the ASR sampling rate; selecting from the plurality of codebooks, a codebook having speech energies that correspond to speech energies in the current frame over a spectral range corresponding to the audio sampling rate; copying from the selected codebook, speech energies above the spectral range; and appending the copied speech energies to the current frame.
doc_id: 8806320 | appl_id: 12181314 | flag_patent: 1
1. A method, comprising: receiving, via a network, a media selection in connection with a first media associated with a media file; receiving, via the network, a media selection in connection with a second media associated with a media file; receiving, via the network, a multi-sync request associated with the media selection for the first media and the media selection for the second media; and when the multi-sync request is a time-based multi-sync request, then receive, via the network, a selection of a segment of the first media and a selection of a segment of the second media; automatically detect whether a duration of the segment of the first media is equal to a duration of the segment of the second media; when the duration of the segment of the first media is detected as being equal to the duration of the segment of the second media, then automatically enable time-based synching as a default to generate a dynamic media link and multi-sync data based on the selection of the segment of the first media and the selection of the segment of the second media, without affecting an integrity of the first media and an integrity of the second media, the dynamic media link being a hyperlink; send the multi-sync data such that the multi-sync data is stored in a relational database at a relational database server after the multi-sync data is generated; and send, via the network, the dynamic media link such that the segment of the first media and the segment of the second media are displayed and synchronously played side-by-side in a user-editable form based on the multi-sync data stored in the relational database, after receiving an indication that the dynamic media link was selected, the user-editable form being received from a media server, the user-editable form allowing a user to edit synchronization points between the first media and the second media.
doc_id: 20180122361 | appl_id: 15340319 | flag_patent: 0
1. A computer-implemented method comprising: receiving, using one or more microphones, an audio signal from a user associated with a user device; determining, by one or more processors and based on the audio signal received using the one or more microphones, (i) a tone of voice of the user associated with a user device, and (ii) a proximity indicator indicative of a distance between the user and the user device; obtaining, by the one or more processors, data to be audibly output using a computer-synthesized voice; selecting, by the one or more processors, a tone of voice of the computer-synthesized voice that corresponds to the tone of voice of the user, and a volume level of the computer-synthesized voice based on the tone of voice of the user and the distance between the user and the user device indicated by the proximity indicator; generating, by the one or more processors, an audio signal based on (i) the data, (ii) the selected tone of voice that corresponds to the tone of voice of the user, and (iii) the selected volume level of the computer-synthesized voice; and providing, by the one or more processors, the generated audio signal for output by one or more speakers.
doc_id: 8888497 | appl_id: 12723400 | flag_patent: 1
1. A method comprising: using one or more computers, obtaining and storing a first set of information comprising a set of emotional states with which online elements may be associated; using one or more computers, obtaining and storing a second set of information comprising information relating to a set of online elements; using one or more computers, based at least in part on the second set of information, assigning each of the set of online elements to at least one associated emotional state, of the set of emotional states; using one or more computers, obtaining and storing a third set of information comprising information relating to online activity of a user in association with at least one online element of the set of online elements, and comprising an emotional state to which the at least one online element of the set of online elements is assigned; using one or more computers, based at least in part on the third set of information, classifying the user into at least one emotional state of the set of emotional states; presenting the user with an online advertisement based at least in part on the at least one emotional state, of the set of emotional states, into which the user is classified; and based at least in part on at least one direct online activity of the user and at least one indirect online activity of the user, predicting an emotional state that the user is likely to be in at a particular time at which, or during a particular period of time during which, the online advertisement is anticipated to be served, wherein the at least one direct online activity of the user includes one or more of usage, usage frequency, extent of personalization, sharing of emoticons, and sharing of emoticlips, and wherein the at least one indirect online activity of the user includes one or more of visits to specific user post, user review or blog entry domains or Web sites, and engagement with specific user post, user review or blog entry domains or Web sites.
doc_id: 7895036 | appl_id: 10688802 | flag_patent: 1
1. A system for suppressing wind noise from a voiced or unvoiced signal, comprising: a first noise detector that is adapted to detect a wind buffet from an input signal by deriving and analyzing an average wind buffet model comprising attributes of a line fit to a portion of the input signal, where the first noise detector is adapted to identify whether the input signal contains the wind buffet based on a correlation between the line and the portion of the input signal; and a noise attenuator electrically connected to the first noise detector to substantially remove the wind buffet from the input signal.
doc_id: 20140244254 | appl_id: 13775643 | flag_patent: 0
1. A development framework, implemented by one or more computer devices, for developing a spoken natural language interface, comprising: a development system, comprising: a developer interface module configured to provide a development interface, the developer interface module comprising: logic configured to receive a set of seed templates, each seed template identifying a command phrasing for use in invoking a function performed by a program, when spoken; and logic configured to collect a set of added templates, each added template identifying another command phrasing for use in invoking the function, the set of seed templates and the set of added templates forming an extended set of templates; a resource interface module configured to interact with one or more development resources to provide the set of added templates; and a data store for storing the extended set of templates associated with the function, the extended set of templates being for use in training one or more models for use on a user device, and said one or more models being for use in interpreting commands spoken by end users.
doc_id: 9529449 | appl_id: 14096100 | flag_patent: 1
1. A computer-implemented method, comprising: displaying, at a computing device having one or more processors, a first virtual keyboard configured for an Indic script via a touch display of the computing device; receiving, at the computing device, a first touch input from a user indicating a selection of a character from the first virtual keyboard to obtain a selected character; in response to receiving the first touch input: (a) when the selected character is a consonant, displaying, at the computing device, a modified first virtual keyboard instead of the first virtual keyboard, the modified first virtual keyboard including diacritic forms of vowels from the first virtual keyboard, wherein the diacritic forms of the vowels (i) are different than the vowels from the first virtual keyboard, (ii) were not part of the first virtual keyboard, and (iii) replace at least some of the characters of the first virtual keyboard such that the diacritic forms of the vowels are part of the modified first virtual keyboard, (b) when the selected character is a vowel and a duration of the first touch input is greater than or equal to a predetermined duration, displaying, at the computing device, a second virtual keyboard including at least one of (i) diacritic forms of the selected character and (ii) vowels having similar sounds as the selected character, and (c) when the selected character is the vowel and the duration of the first touch input is less than the predetermined duration, selecting and displaying, by the computing device, the vowel from the first virtual keyboard; and receiving, at the computing device, a second touch input from the user indicating a selection of and causing the displaying of (i) a vowel from the modified first virtual keyboard when the modified first virtual keyboard is displayed, or (ii) a vowel from the second virtual keyboard when the second virtual keyboard is displayed.
doc_id: 6014134 | appl_id: 08702335 | flag_patent: 1
1. A method for performing a software tutoring application distributed between an Internet client node and an Internet server node comprising: providing, on an Internet client node, a model-based user interface generating module for generating direct manipulation graphical user interfaces displayable at said Internet client node; sending a first specification corresponding with a software tutoring application from an Internet server node to said model-based user interface generating module on said Internet client node, wherein said first specification is received at said Internet client node using an Internet browser, and wherein said first specification includes a collection of entity representations of said software tutoring application, each said entity representation having an associated predetermined entity definition for defining a structure and semantics corresponding with said entity representation; transforming said collection of entity representations of said first specification, at said Internet client node, into a corresponding set of tutoring application programming entity data types having direct manipulation graphical representations, wherein: (A1) for at least some of said tutoring application subject matter programming entity data types, said user interface generating module on said Internet client node determines a display characteristic not provided in said first specification; and (A2) for at least a portion of said entity representations of said collection, prior to receipt of said first specification at said model-based user interface generating module on the Internet client node, said associated predetermined entity definitions are undefined for said user interface generating module, and wherein at least some of said associated predetermined entity definitions for said portion of the entity representations are utilized-in the transformation step; displaying on said Internet client node a first user interface generated by said user interface generating module using said first specification; activating said user interface generating module for responding to each of a plurality of user inputs to said first user interface on said Internet client node, each said input resulting in one of: creating and deleting instantiations of said tutoring application subject matter programming entity data types, wherein each response to each of the user inputs is independent of communication with said Internet server node; and receiving at the Internet server node selected information entered or created at the Internet client node by a user, performing an analysis of the selected information, automatically generating performance information relating to the user, and transmitting the performance information from the Internet server node to the Internet client node and said user interface generating module.
doc_id: 20010047290 | appl_id: 09782873 | flag_patent: 0
1. A system for creating and maintaining information in a database of subjects, available to a population of users, comprising: a) describing a database subject using a plurality of natural-language terms, each of such plurality of natural-language terms having relevance to the subject according to an involved subset of such population of users; b) rating the degree of relevance of each of such plurality of natural-language terms to such database subject according to each of such involved subset of such population of users; c) associating, in such database, such respective natural-language terms and respective degrees of relevance with each such database subject; and d) computing, for such involved subset of such population of users, in such database, an overall degree of relevance of each of such plurality of natural-language terms to such database subject.
doc_id: 20060161434 | appl_id: 11037750 | flag_patent: 0
1. A method for improving spoken language, comprising: accepting a speech input from by a speaker using a language; identifying the speaker with a predetermined speaker category; and correcting an error in the speech input using an error model that is specific to the speaker category.
doc_id: 9245015 | appl_id: 13790864 | flag_patent: 1
1. A method comprising: analyzing, by a device, first text to identify a pair of terms, within the first text, that are alias terms, the analyzing the first text including performing two or more of: a latent semantic analysis of the pair of terms, based on the pair of terms being associated with a particular tag; a tag-based analysis that determines that the pair of terms are associated with compatible tags; a transitive analysis that determines that a pair of neighbor terms, associated with the pair of terms, are associated with compatible tags; or a co-location analysis based on a distance between the pair of terms in the first text; and the analyzing the first text further including performing one or more of: a misspelling analysis to determine that a first term, of the pair of terms, is a misspelling of a second term, of the pair of terms, a short form analysis to determine that the first term is a short form of the second term, or an explicit alias analysis to determine that the first term is an explicit alias of the second term; calculating, by the device and based on analyzing the first text, a first alias score for the pair of terms; calculating, by the device and using the first alias score for the pair of terms, a second alias score for the pair of terms; determining, by the device, that the second alias score satisfies a threshold; generating, by the device and based on determining that the second alias score satisfies the threshold, a glossary that includes the pair of terms identified as alias terms, the glossary being generated based on the performing the one or more of the misspelling analysis, the short form analysis, or the explicit alias analysis; and replacing terms, by the device and using the glossary, within at least one of: the first text, or a second text that is different from the first text.
doc_id: 8712757 | appl_id: 11621729 | flag_patent: 1
1. A method for managing communications during one or more conference calls, the method comprising: receiving, by a computer at a central location, a first keyword corresponding to a phrase having a high degree of interest to a user, the first keyword having a weight determined by a first priority ranking assigned to the first keyword and representative of user preference and/or communication type; receiving, by the computer, a second keyword having a weight determined by a second priority ranking assigned to the second keyword and representative of user preference and/or communication type, wherein the first priority ranking of the first keyword is greater than the second priority ranking of the second keyword; receiving, by the computer, a replay time span associated with the first keyword; identifying an instantiation of the first keyword by directing communication generated during the one or more conference calls to a speech recognition engine and monitoring output of the speech recognition engine; upon identifying the instantiation of the first keyword, replaying a section of the communication corresponding to the first keyword to the user for a period of time equal to the replay time span; converting the section of the communication into text and formatting the text to highlight text portions that correspond to the first keyword; and transmitting the formatted text to the user, further comprising receiving the first priority ranking, wherein the first keyword and the first priority ranking assigned to the first keyword are received from the user, wherein first and second conference calls are included in the one or more conference calls, wherein the output of the speech recognition engine includes first output corresponding to the first conference call and second output corresponding to the second conference call, wherein identifying the instantiation of the first keyword comprises identifying the instantiation of the first keyword in the first output corresponding to the first conference call, and wherein the method further comprises: identifying an instantiation of the second keyword in the second output corresponding to the second conference call; and determining, based, at least in part, on the first priority ranking assigned to the first keyword and the second priority ranking assigned to the second keyword, how to display the formatted text corresponding to the first keyword and formatted text corresponding to the second keyword.
doc_id: 20160170968 | appl_id: 14566808 | flag_patent: 0
1. A method, in a natural language processing (NLP) system comprising a processor and a memory, the method comprising: receiving, by the NLP system, performance data for a performance to be provided by a human performer, wherein the performance data comprises at least one objective to be achieved by the performance; monitoring, by the NLP system, one or more channels of communication to identify natural language statements exchanged over the one or more channels of communication directed to the performance while the performance is being presented; extracting, by the NLP system, feedback information from the natural language statements based on natural language processing of the natural language statements; generating, by the NLP system, aggregate feedback information based on the identified feedback information; evaluating, by the NLP system, an alignment of the aggregate feedback information with the at least one objective in the performance data; and outputting, by the NLP system, a guidance output based on results of evaluating the alignment of the aggregate feedback information with the at least one objective in the performance data, wherein the guidance output guides the performer to modify presentation of the performance to more likely achieve the at least one objective based on the aggregate feedback information.
doc_id: 20160292266 | appl_id: 15182300 | flag_patent: 0
1. A non-transitory computer readable medium storing code that, when executed by one or more processors, causes the one or more processors to: send an audio query to a server; responsive to the server matching the query with a reference item in a database, receive from the server an audio fingerprint sequence and an audio identifier associated with the matched reference audio item, the audio fingerprint sequence representing some portion of the matched reference audio item; update a cache with the audio fingerprint sequence and the associated audio identifier; extract an input audio fingerprint from a frame of an audio signal; and match the input audio fingerprint to the audio fingerprint sequence stored in the cache to identify an audio item in the audio signal.
doc_id: 9661142 | appl_id: 13348402 | flag_patent: 1
1. A method comprising: at one or more processing devices: determining that textual information of a conference transmitted over a network includes a predetermined term, the predetermined term having associated supplemental information that includes a definition of the predetermined term; marking the textual information to display with the textual information to notify a plurality of participants of the conference that the supplemental information is available based on the determination; and forwarding the textual information having the marking over the network to the one or more participants for presentation during the conference.
doc_id: 8588073 | appl_id: 13467209 | flag_patent: 1
1. A method, comprising: receiving a plurality of voice packets in a voice queue for a depacketizing engine, the plurality of voice packets having been encoded at an encoding interval by a far-end voice encoder; identifying a plurality of packet arrival times indicating when the plurality of voice packets arrived at the depacketizing engine, the plurality of packet arrival times being identified at a time resolution, the time resolution being not less than the encoding interval; timestamping each of the plurality of voice packets with a respective one of the plurality of packet arrival times; and transforming the plurality of voice packets into a plurality of frames of a plurality of digital voice samples.
doc_id: 7669111 | appl_id: 09628727 | flag_patent: 1
1. A method for using a computer system to provide a user interface to an electronic text, comprising in sequence the steps of: a. presenting, on a display controlled by the computer system, a portion of an outline of said electronic text, wherein: i. an element of the text comprises at least one phrase appearing in said electronic text, said at least one phrase comprising at least one word; ii. said outline comprises a plurality of elements, wherein elements of the outline comprise copies of elements of said electronic text; iii. each element of the outline represents a portion of said electronic text; iv. the combined elements of the outline comprise substantially less text than the entire said electronic text; v. substantially all portions of said electronic text are represented by at least one element of the outline; and vi. the relative positional and hierarchical relationships of elements of the outline correspond to the relative positional and hierarchical relationships of the portions of said electronic text represented by said elements of the outline; b. in response to user action, said user action consisting only of indicating at least one element of said outline, selecting for the operation of step (c) the entire portion of said electronic text represented by said at least one outline element; and c. performing an operation exclusively on the portion of electronic text selected in step (b), wherein said operation does not cause the display of said selected electronic text and wherein said operation processes all components of said selected electronic text.
doc_id: 4639557 | appl_id: 06781412 | flag_patent: 1
1. A test system for testing from a test site a selectable one of a plurality of electrical circuits, comprising: testing means, connectible to the plurality of electrical circuits, for receiving a control signal from the test site and for generating and applying a test signal to the one of the plurality of electrical circuits selected in response to the control signal from the test site; and synthesized voice means, connected to said testing means, for generating audible speech signals to verbally communicate a result from said testing means to the test site.
doc_id: 7783637 | appl_id: 10674834 | flag_patent: 1
1. A computer-implemented method of creating a new label in a computer-implemented business integration system, wherein the new label is a computer-implemented user interface element configured to identify a control within a user interface associated with the business integration system, the method comprising: receiving data at an interface indicating a desired text for the new label; searching a label database for indications of existing labels that include text matching the desired text, wherein existing labels represented in the label database are computer-implemented user interface elements; and returning to a user, based at least in part on the results of the search of the label database, a list of existing labels that include text matching the desired text.
doc_id: 20040234050 | appl_id: 10874405 | flag_patent: 0
1. A voice mail system comprising: a network based voice mail system including a user voice mail box; a remote answering device coupled to a user's incoming telephone line; a telephone network that is to provide a three-way calling service, such that upon answering an incoming call after a predetermined number of rings, the remote answering device conferences in the user's voice mail box in the network based voice mail system with the incoming call using the three-way calling service; and a synchronization device for synchronizing a network message count, indicating a number of messages in the user's network based voice mail box, with a message count at the remote answering device when a new message is received at the user's voice mail box before the new message is retrieved by the user through the remote answering device.
doc_id: 9117446 | appl_id: 13221953 | flag_patent: 1
1. A method for achieving emotional Text To Speech (TTS), the method comprising: receiving a set of text data; organizing each of a plurality of words in the set of text data into a plurality of rhythm pieces; generating an emotion tag for each of the plurality of rhythm pieces, wherein each emotion tag is expressed as a set of emotion vectors, each emotion vector comprising a plurality of emotion scores, where each of the plurality of emotion scores is assigned to a different emotion category in a plurality of emotion categories; determining, for each of the plurality of rhythm pieces, a final emotion score for the rhythm piece based on at least each of the plurality of emotion scores; determining, for each of the plurality of rhythm pieces, a final emotional category for the rhythm piece based on at least each of the plurality of emotion categories; and performing, by at least one processor of at least one computing device, TTS of the set of text data utilizing each of the emotion tags, where performing TTS comprises decomposing at least one rhythm piece in the plurality of rhythm pieces into a set of phones; and determining for each of the set of phones a speech feature based on: F i =(1− P emotion )* F i-neutral +P emotion *F i-emotion wherein: F i is a value of an i th speech feature of one of the plurality of phones, P emotion is the final emotion score of the rhythm piece where one of the plurality of phones lies, F i-neutral is a first speech feature value of an i th speech feature in a neutral emotion category, and F i-emotion is a second speech feature value of an i th speech feature in the final emotion category.
doc_id: 8838454 | appl_id: 11010054 | flag_patent: 1
1. A method of processing a call in a voice-command platform, comprising the steps of: transferring the call from the voice-command platform to a second voice-command platform; and transmitting, either directly or indirectly, grammar information from the voice command platform to the second voice-command platform for use by a voice command application executing in the second voice-command platform in processing the call, the grammar information comprising information as to allowed spoken utterances from a user in response to a prompt.
doc_id: 20150286888 | appl_id: 14659935 | flag_patent: 0
1. An optical text recognition method, comprising the steps of: acquiring at least one set of multiple images of a document having texts thereon; determining whether two or more of said at least one image of said document overlap; combining said at least one set of multiple images via image fusion to form at least one merged image; processing each of said at least one merged image through single frame document recognition to produce at least one set of text and metadata.
20070208568
11276542
0
1. A method for managing an interactive speech recognition system, the method comprising: determining whether a voice input relates to expected input, at least partially, of any one of a plurality of menus different from a current menu; and if the voice input relates to the expected input, at least partially, of any one of the plurality of menus different from a current menu, skipping to the one of the plurality of menus, wherein the plurality of menus different from the current menu include menus at a plurality of hierarchical levels.
9866865
15581301
1
1. A moving picture decoding device that decodes a bitstream obtained by coding a moving picture using a motion vector in units of blocks obtained by partitioning each picture, comprising: a decoding unit that decodes information representing a motion vector predictor index to be selected in a motion vector predictor candidate list; a motion vector predictor candidate constructing unit that derives first and second motion vector predictor candidates from a motion vector of one of decoded blocks neighboring a decoding target block in a same picture as the decoding target block, and derives a third motion vector predictor candidate from a motion vector of one of blocks of a decoded picture different from the decoding target block; a motion vector predictor candidate adding unit that adds the first to third motion vector predictor candidates satisfying a certain condition to a motion vector predictor candidate list; a redundant motion vector predictor candidate determining unit that deletes the second motion vector predictor candidate from the motion vector predictor candidate list when the first and second motion vector predictor candidates added to the motion vector predictor candidate list by the motion vector predictor candidate adding unit have the same values; a motion vector predictor candidate limiting unit that repeatedly adds a motion vector predictor candidate having a same value to the motion vector predictor candidate list until the number of motion vector predictor candidates reaches a certain number when the number of the motion vector predictor candidates added to the motion vector predictor candidate list is smaller than the certain number (a natural number larger than or equal to 2); and a motion vector predictor selecting unit that selects a motion vector predictor of the decoding target block from the motion vector predictor candidate list based on the decoded information representing the motion vector predictor index to be selected, wherein the redundant motion vector predictor candidate determining unit does not compare the value of the first motion vector predictor candidate and the value of the second motion vector predictor candidate with the third motion vector predictor candidate.
20120290521
13168973
0
1. (canceled)
10114817
14820466
1
1. A method comprising: storing a plurality of multi-language profiles of a plurality of users; identifying one or more multilingual cognates in each profile of the plurality of multi-language profiles; based on the one or more multilingual cognates identified in each profile of the plurality of multi-language profiles, generating one or more translation models; receiving input that indicates a selection, by a second user, of data that is associated with a first user that is different than the second user, wherein the plurality of users includes users other than the second user and the first user; determining a first language that is associated with the first user; determining a second language that is different than the first language and that is associated with the second user; wherein a plurality of data items in a profile of the first user are in the first language; translating the plurality of data items into the second language using the one or more translation models; in response to receiving the input, causing a translated version of the plurality of data items to be displayed to the second user, wherein the translated version is in the second language; wherein the method is performed by one or more computing devices.
9152881
14026295
1
1. A computer-implemented method, comprising: learning, by a computing system, a sparse overcomplete feature dictionary for classifying and/or clustering a remote sensing image dataset; building, by the computing system, a local sparse representation of the image dataset using the learned sparse overcomplete feature dictionary; and applying, by the computing system, a local maximum pooling operation on the local sparse representation to produce a translation-tolerant representation of the image dataset.
8942979
13192902
1
1. A hardware acoustic processing apparatus comprising: a first extracting unit configured to extract a first acoustic model that corresponds with a first position among positions set in a speech recognition target area; a second extracting unit configured to extract at least one second acoustic model that corresponds with, respectively, at least one second position in proximity to the first position; and an acoustic model generating unit configured to generate a third acoustic model based on the first position of the first acoustic model or a combination of the first position of the first acoustic model and the second acoustic model.
20100135474
12325885
0
1. A method for telephone service technicians to retrieve telephone line assignment information, said method comprising: receiving a telephone call from a telephone service technician; receiving a telephone number assigned to a customer of a communications carrier, the telephone number associated with a telephone operating on a wired communications network; requesting telephone line assignment information, the telephone line assignment information including cable and line pair information; converting the telephone line assignment information into speech synthesized audible signals; and communicating the speech synthesized audible signals to the telephone service technician during the telephone call.
20020107851
09776469
0
1. A method for specifying using a data processing system comprising the steps of: reading a first list from a dictionary database; attempting to match a set of user input to the first list to select a first element; if a definitive match is not made, displaying a list of possible first elements from the first list and permitting selection of a member of the possible first elements list; reading a second list from the dictionary database based upon the selected first element; attempting to match the set of user input to the second list to select a second element; if a definitive match is not made, displaying a list of possible second elements from the second list and permitting selection of a member of the possible second element list; reading a third list from the dictionary database based upon the selected first element and the selected second element; attempting to match the set of user input to the third list to select a set of third elements and corresponding third element values; if a definitive match is not made, displaying a list of possible third elements from the third list and permitting selection of a set of third elements of the possible third element list and entry of corresponding third element values; composing a specification from the selected member of the first list, the selected member of the second list, and the selected set of third elements and corresponding third element values.
9858263
15147222
1
1. A method comprising: providing a neural network model which has been trained to predict a canonical form, containing a sequence of words, for an input text sequence, containing a sequence of words, the neural network model comprising: an encoder which generates a first representation of the input text sequence based on a representation of n-grams in the text sequence, the encoder including a first neural network which reads the input text sequence and generates a second representation of the input text sequence, and a decoder which sequentially predicts a next term of the canonical form, based on the first and second representations and a predicted prefix of the canonical form, the prefix containing a sequence of at least one word; receiving an input text sequence, containing a sequence of words; with a processor, predicting a canonical form, containing a sequence of words, for the input text sequence with the trained neural network model; and outputting information based on the predicted canonical form.
9430455
12582342
1
1. A method for generating a plurality of decision trees comprising one or more atoms of a form description language, the method comprising: providing an authoring tool that is configured as a graphical editor that provides an author with an interface to input a plurality of questions related to at least one form via a first plurality of shapes, wherein a position of the first plurality of shapes dictates a desired order of the plurality of questions and possible answers that a user might provide, wherein at least a portion of the plurality of questions are different than a field in the form; receiving the questions and an arrangement of the first plurality of shapes from an author, by the authoring tool via the first plurality of shapes, wherein the questions include a first question that is related to selection of the form selected from a plurality of forms to be used, and a second question that is related to data requested in the form identified by the user in answering the first question; determining an order of the questions based on the arrangement of the shapes made by the author; automatically generating, by the computing device utilizing the authoring tool, a first decision tree based on the questions, the data, the possible answers, and the order of the questions; assigning a plurality of document templates to the first decision tree; utilizing the authoring tool to automatically generate, by the computing device, a second decision tree, wherein the second decision tree is generated from a second plurality of shapes; receiving, from the user, user profile data through a second decision tree, wherein the second decision tree is utilized for populating a different form; providing at least a portion of the plurality of questions to a user; receiving, from the user, at least one answer associated with the provided questions; interpreting the answers associated with the questions to determine a value for populating the different form; generating a populated decision tree from the answers; determining a conclusion based on the answers; providing details about the conclusion to the user; and populating the different form based on the populated decision tree, the plurality of document templates, and the user profile data that was received via the second decision tree.
8023983
12758152
1
1. A method in a Push-To-Talk (PTT) capable mobile station for use in buffering PTT voice communications, the method comprising: sending a PTT voice communication request in a wireless communication network; prior to receiving a floor grant in response to the PTT voice communication request: receiving voice input signals at the mobile station and buffering digital voice data corresponding to the voice input signals, wherein the digital voice data is produced by encoding and compressing the voice input signals; receiving the floor grant in response to the PTT voice communication request; and upon receiving the floor grant: retrieving the buffered digital voice data and continuing to buffer digital voice data corresponding to the voice input signals, until all of the buffered digital voice data has been sent from the mobile station.
7822603
12115034
1
1. A method of performing automatic speech recognition on a device, the method comprising: receiving speech from a user on the device; and recognizing the received speech using automatic speech recognition adaptation parameters transmitted from a remote device, the automatic speech recognition adaptation parameters being derived at the remote device based at least in part on automatic speech recognition data provided from the device.
8027831
11729272
1
1. An information display control apparatus comprising: an example sentence storage unit which stores a plurality of example sentences; an input unit which accepts a user's operation of inputting a string of characters; a split form of split verb distinguishing unit which distinguishes whether or not a plurality of words is input in split form of a split verb via the input unit; a split verb example sentence search unit which, when the input plurality of words is distinguished to be in split form of a split verb by the split form of split verb distinguishing unit, searches the example sentences in the example sentence storage unit for an example sentence containing the plurality of words in combined form of the split verb and an example sentence containing the plurality of words in split form of the split verb; and a split verb example sentence display control unit which displays the example sentences searched by the split verb example sentence search unit.
7702614
11694789
1
1. A computer implemented method of maintaining a phrase index for a plurality of documents in a document collection, the method comprising: providing a set of phrase posting lists, each phrase posting list associated with a phrase; establishing a plurality of segments, each segment associated with a subset of the plurality of the documents; periodically updating each segment by: for documents associated with the segment, identifying phrases contained in the document, and updating the phrase posting list for each such phrase to include the document; sharding the phrase posting lists for the identified phrases into a plurality of segment shards, each segment shard containing a disjoint subset of the list of documents in the segment that contain the phrase associated with the phrase posting list; associating each segment shard with an index shard, such that at least one index shard is associated with a plurality of segment shards, each index shard being served by an index server; determining a recently updated segment having updated segment shards; for at least one index shard being served: determining the index shard's associated updated segment shards, and merging the updated segment shards with the index shard to form an updated index shard; and replacing the index shard with the updated index shard.
9589012
14815884
1
1. A computer-implemented method, comprising: receiving from a user a selection of an object among one or more objects included in a data model, the selection made through an object-selection interface; retrieving from computer memory a previously stored object definition that corresponds to the selected object, the previously stored object definition includes: an object query that, when executed, retrieves a set of time stamped events from a data store on a computing device, each event including a portion of raw machine data reflecting activity in an information technology environment; and an object schema identifying a set of one or more fields, each field defined by an extraction rule or regular expression that locates the field in the raw machine data and can be used to extract a field value from the field location from the raw machine data in each event in a subset of the set of time stamped events, each extraction rule or regular expression operating on the raw machine data in an event without modifying the event's raw machine data; and executing, against events in the data store that meet filtering criteria of the object query, a search query that references only field values that are extracted using the object schema and that produces a result based at least in part on the data reflecting the activity of the information technology environment.
8918395
13528197
1
1. A computer-implemented method to associate a semantic cluster with one or more categories of a predefined taxonomy, the method comprising: a) accepting, by a computer system including at least one computer, a plurality of semantic clusters of re-occurring terms within a document, and having a frequency based on the reoccurrence of the term; b) identifying, by the computer system based on the accepted clusters of re-occurring terms within the document, one or more concepts for the document, each concept identifying different re-occurring terms having identical meanings; c) scoring, by the computer system, the identified one or more concepts, the score of each of the one or more concepts weighted by cluster frequency of each of the re-occurring terms identified by said concept; d) identifying, by the computer system, a set of one or more categories using at least some of the one or more scored concepts to look up one or more categories in a concept-category index, wherein a category corresponds to a node of the predefined taxonomy, which defines a structured set of categories; and e) associating, by the computer system, at least some of the one or more categories with the semantic cluster.
8457606
12916703
1
1. A method in a mobile communications device for placing a telephone call, comprising: receiving a destination phone number to call; automatically determining what types of communications interfaces are available to connect to the destination phone number; automatically determining first and second call routing methods of the available call routing methods that should be used to place a call to the destination phone number, wherein the first call routing method utilizes a voice connection protocol and the second call routing method utilizes a data connection protocol; connecting to the destination phone number using a calling card calling method as the first call routing method, the calling card calling method including the steps of: connecting to a calling card platform using a data connection; sending to the calling card platform, using the data connection, a phone number of the mobile communications device that will connect to the calling card platform using the voice connection and at least one of an account number and a personal identification number (PIN) and the destination phone number; connecting to the calling card platform using a voice connection; and sending to the calling card platform, using the voice connection, the phone number of the mobile communications device that will connect to the calling card platform using the voice connection and at least one of the account number, the personal identification number (PIN) and the destination phone number not sent using the data connection; and automatically, when the call to the destination phone number fails using the first call routing method, placing the call and connecting to the destination phone number using the second call routing method.
8595016
13336639
1
1. A method performed by at least one computer processor executing computer program instructions stored on a non-transitory computer-readable medium, wherein the method comprises: (A) identifying, from among a plurality of content selection data associated with a user, first content selection data associated with the user; (B) identifying a first content source associated with the first content selection data; (C) identifying a first selection-specific rule set associated with the first content selection data; (D) receiving first original content from the first content source; (E) applying the first selection-specific rule set to the first original content to produce first rule output; and (F) changing a state of at least one first component of a human-machine dialogue system based on the first rule output.
8704874
13143556
1
1. A method for transmitting a three dimensional (3D) caption signal, the method comprising: preparing a 3D image signal for displaying a 3D image; generating 3D caption data based on a code space, wherein the 3D caption data includes 3D caption information and caption text, wherein the caption data is formatted within picture user data, and wherein the picture user data is inserted at any of Sequence level, Group of Pictures (GOP) level, and Picture Data level; and inserting the 3D caption information and the caption text into a video picture header region to code the image signal, and transmitting the same, such that a caption image including 3D caption text disposed in a 3D caption box is generated based on the 3D caption information and the caption text in a 3D display device, wherein the code space contains base code sets and extended code sets, and wherein the 3D caption information is delivered in at least one extended code set and the at least one extended code set is accessed by using an ‘EXT1’ code in a base code set, such that a caption image including 3D caption text disposed in a 3D caption box is generated based on the 3D caption information and the caption text in a 3D display device.
10146995
15388231
1
1. A non-transitory controller readable medium storing a program causing a controller in a computer to execute steps including: receiving print information from a printer with which the computer can communicate, the print information being text data written as text; deconstructing the text data and generating multiple words; acquiring keyword information identifying a keyword, and the relation information identifying a relationship between the keyword information and a word to detect; and detecting from the multiple words, based on the keyword information and the relation information, the word to detect.
9865250
14499489
1
1. A computer-implemented method for navigating secondary content during a text-to-speech process, the method comprising: outputting first audio including an audio tone preceded by first synthesized speech and followed by second synthesized speech, the audio tone corresponding to an indicator of a first footnote located in a string of text, the first synthesized speech associated with a portion of the string of text prior to the indicator and the second synthesized speech associated with a portion of the string of text following the indicator; detecting first contact on a touch-screen of a computing device within a first period of time following output of the audio tone; determining that the first contact corresponds to a predefined first arc gesture, the first contact extending along both a horizontal axis and a vertical axis from a first point to a second point, a difference between a first horizontal coordinate associated with the first point and a second horizontal coordinate associated with the second point exceeding a horizontal threshold in a first direction relative to the first point, and a difference between a first vertical coordinate associated with the first point and a second vertical coordinate associated with a midpoint of the contact exceeding a vertical threshold; selecting the first footnote in response to the first arc gesture; identifying supplemental text associated with the first footnote; and outputting third synthesized speech corresponding to the supplemental text associated with the first footnote.
20160147744
14893008
0
1. An on-line voice translation method, comprising: conducting voice recognition on first voice information input by a first user, so as to obtain first recognition information; prompting the first user to confirm the first recognition information; translating the confirmed first recognition information to obtain and output first translation information; extracting, according to second information which is fed back by a second user, associated information corresponding to the second information; and correcting the first translation information according to the associated information and outputting the corrected translation information.
8712759
12910408
1
1. A method of semantically parsing a natural language expression, comprising: constructing, by a processor, a first ambiguous meaning representation for a first natural language expression; fully or partially disambiguating, by a processor, the first meaning representation by specializing it by replacing a first semantic descriptor in it by a second, more specific semantic descriptor; associating with at least one semantic descriptor in the meaning representation a weight indicating an evaluation of how good an alternative it is; and adjusting at least one such weight in response to a later parsing or disambiguation action.
20150120336
14523391
0
1. A method of analyzing audio signals for a drive monitoring system, the method comprising: recording an audio signal from a mobile device, the recorded audio signal including a background audio stream and a residual audio signal; communicating with an audio database to obtain a reference signal, wherein the communicating uses a location identifier to determine input sources for the audio database, and wherein the location identifier is chosen from the group consisting of a global positioning system (GPS), cellular network, Wifi signature, and internet protocol address; determining if the background audio stream in the recorded audio signal matches the reference signal; if a match between the background audio stream and the reference signal is confirmed, computing a time alignment between the background audio stream and the reference signal; aligning at least a portion of the recorded audio signal with the reference signal using the time alignment; canceling the background audio stream from the recorded audio signal, wherein the remaining portion of the recorded audio signal, after cancellation of the background audio stream, comprises the residual audio stream; and determining, with a computer processor, a driving behavior factor from the residual audio stream, wherein the driving behavior factor is chosen from the group consisting of: identification of a vehicle where the recorded audio signal was recorded, location of the mobile device within the vehicle, and speech recognition to identify the presence of passengers in the vehicle.
9311394
11590386
1
1. A system, comprising: a TV communicating with the Internet; at least one remote control device wirelessly communicating with the TV; at least one microphone on the remote control device, the remote control device digitizing speech signals of a viewer of the TV and representing a viewer desired video site or video subject from the microphone and sending the signals to the TV; at least one processor coupled to the TV and implementing speech recognition on received speech signals representing a desired video site or video subject to generate recognized speech; and the processor accessing at least one database containing at least one index correlating speech with Internet addresses using the recognized speech to return at least one Internet address of an Internet site, wherein the database includes at least one index derived by the processor from closed captioned text in a televised video program received by the TV and provided to the processor.
8161131
12058672
1
1. A method for delivering dynamic media content to collaborators, the method comprising: providing collaborative event media content, wherein the collaborative event media content further comprises a grammar and a structured document, wherein the grammar is a data structure associating key phrases with presentation actions that facilitates a collaborator navigating the structured document of the collaborative event media content using speech commands; providing data identifying a client's location; storing, in the context server in a data structure comprising a dynamic client context for the client, the data identifying the client's location; detecting an event in dependence upon the dynamic client context, said event being characterized by an event type; identifying one or more collaborators in dependence upon the dynamic client context and the event, the one or more collaborators each being characterized by a collaborator classification; selecting from the structured document a classified structural element in dependence upon the event type and the collaborator classification for each of the one or more collaborators; and transmitting the selected structural element to the one or more collaborators.
20130007589
13173842
0
1. A method comprising: receiving, by a computational device, a first text message in a text messaging format from a mobile device to access a website that stores information in a markup language format; converting, by the computational device, one or more elements of the stored information from the markup language format to the text messaging format; and sending, by the computational device to the mobile device, a second text message that indicates how to interact with the website in the text messaging format.
20020128836
10052145
0
1. A method for speech recognition comprising: a feature-amount extracting step for extracting a feature amount based on a frame of an input utterance; a storing step for determining whether a current processing frame is within or at an end of a candidate word previously registered, and storing the candidate word on the basis of a first hypothesis-storage determining criterion when within a word and on the basis of a second hypothesis-storage determining criterion when at a word end; a developing step for developing a hypothesis by extending utterance segments expressing the word when a stored candidate word is within a word and by joining a word to follow according to an inter-word connection rule when at a word end; an operating step of computing a similarity between the feature amount extracted from the input utterance and a frame-based feature amount of an acoustic model of the developed hypothesis, and calculating a new recognition score from the similarity and a recognition score of the hypothesis of up to an immediately preceding frame calculated from the similarity; and a step of repeating the storing step, the developing step and the operating step until the processing frame becomes a last frame of the input utterance, and outputting, as a recognition result approximate to the input utterance, at least one of hypotheses in the order of higher recognition score due to processing the last frame.
20060136207
11095555
0
1. A method for two stage utterance verification method, comprising the steps of: a) performing a first utterance verification function based on a support vector machine (SVM) pattern classification method by using feature data inputted from a search block of a speech recognizer; b) determining whether a confidence score, which is a result value of the first utterance verification function, is a misrecognition level for deciding rejection of a speech recognition result; c) performing a second utterance verification function based on a classification and regression tree (CART) pattern classification method by using heterogeneity feature data including meta data extracted from a preprocessing module, intermediate results from function blocks of the speech recognizer and the result of the first utterance verification function when the speech recognition result is accepted by the first utterance verification function, and returning when the speech recognition result is rejected by the first utterance verification function; and d) determining whether the speech recognition result is misrecognition based on a result of the second utterance verification function, transferring the speech recognition result to a system response module when the speech recognition result is accepted by the second utterance verification, and returning when the speech recognition result is rejected by the second utterance verification.
9116989
11319779
1
1. A method comprising: receiving, via a first display device, a spoken content-based free-form natural language query to search content of a plurality of segments within a media presentation that has been processed for content-based searching, the media presentation comprising a series of slides; displaying, via the first display device, the media presentation, text from a speech recognition process applied to the spoken content-based free-form natural language query, and a scrollable search result set in response to the spoken content-based free-form natural language query, the scrollable search result set comprising a portion of the content of the plurality of segments which is associated with the spoken content-based free-form natural language query, while simultaneously transmitting the media presentation to a second display device for display at the second display device without the text and without the scrollable search result set; receiving, via the first display device, a selection from the scrollable search result set, to yield a selected segment of the plurality of segments, wherein the selection is based on a motion input; and transmitting the selected segment to the second display device for display at the second display device as part of the media presentation.
20040165736
10410736
0
1. A method for attenuating wind noise in a signal, comprising: performing time-frequency transform on said signal to obtain transformed data; performing signal analysis on said transformed data to identify spectra dominated by wind noise; attenuating wind noise in said transformed data; constructing a time series from said transformed data.
10025390
15331438
1
1. A computer-implemented method of interacting with a user interface comprising: detecting a plurality of users in a first image obtained from a camera; selecting a first user of the plurality of users based on a priority of the first user among the plurality of the users, the priority being assigned based on an identity of one or more of the plurality of users; providing control of the user interface to the first user of the plurality of users based on the selection; recognizing, from at least a second image obtained from the camera, a gesture of the first user; and interacting with the user interface based on the recognized gesture.
8731919
12288261
1
1. A system for capturing voice files and rendering them searchable, comprising: (a) a database system having a plurality of grammars stored therein; (b) at least one device that electronically captures audio speech for a conversation between two or more participants; (c) a recorder coupled to said at least one device, the recorder capturing audio speech from the device for storage as audio speech data in said database system; and (d) a speech recognition engine adapted to transcribe the audio speech data into machine-readable text data in a plurality of transcription passes using grammars selected from said plurality of stored grammars, and store the machine-readable text data as well as data associating the machine-readable text data with the corresponding audio speech data in the database system for subsequent retrieval by a search application; wherein the speech recognition engine is adapted to select a grammar from said database system prior to performing a first transcription pass, the grammar for a first transcription pass selected on the basis of information pertaining to the subject matter or purpose of the conversation, and information pertaining to one or more of the participants, and further wherein the recognition engine is adapted to revise the machine-readable text data for the conversation by performing a subsequent transcription pass on the audio speech data using a grammar which was not used in the first transcription pass.
7634066
10882703
1
1. A computing device having a memory and a processor for enhancing media processing of a media stream containing speech data, comprising: a terminal data structure to support instantiating terminal objects, each terminal object adhering to a uniform interface, providing a telephony service, and having a terminal class name and a media type; a speech recognition terminal data structure that extends the terminal data structure; a terminal manager to instantiate, based on the terminal data structure and the speech recognition terminal data structure, terminal objects including a speech recognition terminal object to recognize speech having a speech recognition media type; and a TAPI application component providing a telephony API to form a connection with a client and to process the speech data by: registering terminal objects including a speech recognition terminal object; selecting, with the processor, the speech recognition terminal object from among a group of registered terminal objects based on the media type of the registered terminal objects; instantiating the selected speech recognition terminal object using the terminal manager by providing the terminal class name, the media type, and a method of signaling events; and providing the speech data to the instantiated speech recognition terminal object for recognition of the speech data.
20160277701
15014936
0
1. An information processor comprising: a recognition section that recognizes a feature of a viewer of content; an acquisition section that acquires a recognition error that occurs when the feature of the viewer is recognized by the recognition section; and a determination section that determines content to be output, on the basis of the acquired recognition error.
8214387
11097089
1
1. A computer-implemented method of providing a media presentation associated with a rendered document, the method comprising: optically or acoustically capturing a portion of the rendered document containing human-readable text using a portable data capture device; generating a digest of the captured portion based at least in part on content of the text of the captured portion using the portable data capture device; locating a document identifier associated with an electronic counterpart to the rendered document based at least in part on the digest of the captured portion; sending an enhancement package request including the document identifier to a media server; receiving from the media server an enhancement package associated with the document identifier, wherein the enhancement package includes multiple media presentations associated with multiple words of the rendered document, wherein each word of the multiple words is associated with a respective media presentation of the multiple media presentations; optically or acoustically capturing another portion of the rendered document containing human-readable text using the portable data capture device; locating within the enhancement package a media presentation associated with one or more identified words within the another captured portion; and presenting the associated media presentation using a display or speaker of the portable data capture device.
20110153050
13060032
0
1. A method for deriving a media fingerprint from a portion of audio content, comprising the steps of: categorizing the audio content portion; wherein the audio content portion comprises an audio signal; and wherein the categorizing step is based, at least in part, on one or more features of audio content portion, which comprise: a component of the content portion that relates to a first sound category, wherein the component related to the first sound category is mixed with the audio signal; or a component of the content portion that relates to a second sound category, wherein the component related to the second sound category is mixed with the audio signal; upon categorizing the audio content as free of the components that relate to the first sound category or the second sound category, processing the audio signal component; and upon categorizing the audio content as comprising one or more of the components that relate to the first sound category or the second sound category: separating the components that relate to the first sound category or the second sound category from the audio signal; and processing the audio signal independent of the components that relate to the first sound category or the second sound category; wherein the processing steps comprise the step of computing the media fingerprint; and wherein the media fingerprint reliably corresponds to the audio signal.
8644488
12428947
1
1. A method for a computer apparatus for automatically generating a customer interaction log for an interaction between a customer and agent at a contact center comprising the steps of: receiving input comprising at least one spoken utterance from said customer and said agent and processing said received input to generate a call transcript; automatically analyzing said received input comprising analyzing at least said call transcript and, based on results of the analyzing, selecting at least a portion of the received input to generate a customer interaction log using at least one model; displaying said generated customer interaction log for agent review at a graphical user interface of an agent computer; receiving agent feedback to the displayed generated customer interaction log at said graphical user interface; and automatically updating said customer interaction log and said at least one model based on said agent feedback.
9870351
14863829
1
1. A method for extracting and annotating text, the method comprising the steps of: performing, by one or more processors, a copying operation on a first source; extracting, by one or more processors, text data from the copy operation, wherein the text data comprises a stream of characters in a first text format; applying, by one or more processors, heuristics to extracted text data from the first source, wherein the heuristics detect readability of the text data; encoding, by one or more processors, the text data into a second text format, wherein encoding the text data into the second text format maintains structures and elements of the text data; transforming, by one or more processors, the second text format into annotations with the first text format; performing, by one or more processors, a pasting operation on the transformed annotations in the first text format while maintaining a respective arrangement of the structure and elements of the first text format; responsive to transforming the second text format into the annotations with the first text format, resolving, by one or more processors, offsets into the first source to maintain a format, structure, and elements of the annotations within a matrix structure; applying, by one or more processors, validating analytics on the extracted text data, the annotations with the first text format, and search queries within the matrix structure, wherein the validating analytics comprise at least: delimitation and position logic; responsive to analyzing the search queries by applying the validation analytics, determining, by one or more processors, if a line of the text data corresponds with a token header, wherein the line is contained within the matrix structure; determining, by one or more processors, character and feature correspondence, wherein a token on a first line has a same character and feature correspondence as another token on a second line; and utilizing, by one or more processors, the positional logic to identify missing values within the text data, as contained within the matrix structure.
8209182
11565194
1
1. An emotion recognition system for automatically assessing human emotional behavior from physical phenomena indicative of the human emotional behavior, the system comprising: one or more sensors configured to sense the physical phenomena; and a computer processing system configured to: receive a time series of signals from the one or more sensors; identify features in the time series of signals that are indicative of the human emotional behavior; and output a gradient, multiple-perspective assessment of the human emotional behavior based on the identified features that includes a gradient representation of each of multiple emotional states indicated by the human emotional behavior.
8452341
13486261
1
1. An input processing method of a mobile terminal, comprising: collecting a touch event and a voice signal; at least one of sensing the voice signal to generate voice sensing data and generating visual information text data corresponding to the touch event; correcting at least one of the voice sensing data by comparing the voice sensing data with the visual information text data and correcting the visual information text data by comparing the visual information text data with the voice sensing data; and displaying at least one of the corrected voice sensing data and the corrected visual information text data.
8527262
11767104
1
1. A program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform a method for processing natural language text, comprising: receiving as input a natural language text sentence comprising a sequence of white-space delimited words including inflected words that are formed of morphemes including a stem and one or more affixes; automatically parsing the inflected words into their constituent morphemes; grouping the parsed morphemes of the inflected words with the same syntactic role into constituents; identifying a plurality of verb-constituent pairs in the text sentence; predicting potential arguments for each constituent of the grouped morphemes, wherein the constituents are associated with a verb by the verb-constituent pairs and each prediction is weighted for a respective argument and grouped morpheme being considered; assigning a probability to each of the potential arguments, wherein the probability indicates a probability that the potential argument applies to a respective constituent; and outputting a plurality of semantic roles for a given verb/constituent pair as the potential arguments with corresponding probabilities, wherein predicting potential arguments for each constituent of the grouped morphemes and assigning the probability to each of the potential arguments includes: performing lexical/surface analysis; performing morphological analysis; performing semantic analysis; performing syntactic analysis; and integrating results of the lexical/surface analysis, the morphological analysis, the semantic analysis, and the syntactic analysis into a statistical model based on Maximum Entropy to produce a probability model for predicting potential arguments for each constituent of the grouped morphemes and assigning the probability to each of the potential arguments.
5579444
08384397
1
1. A vision based controller for use with an effector for controlling movement of the effector in the execution of a task having a predetermined task definition, the controller comprising: at least one electronic camera arranged for providing a plurality of images relating to different views of objects or features in a defined workspace; image processing means for processing images received from said at least one camera and corresponding to different views of said workspace to extract information relating to features in the images, said image processing means comprising an image segmenting means for segmenting images received from said at least one camera into regions of substantial uniformity and reducing the segmented images into a two-dimensional contour map representing edges of objects or features detected in the images; information comparison means for comparing information extracted from at least two processed images corresponding to different views of the workspace with information held in a knowledge base to derive a three-dimensional internal model of the workspace; planning means for planning a sequence of actions to be performed by said effector in the execution of said task, the sequence being derived from said predetermined task definition and from the derived three-dimensional internal model of the workspace; monitoring means for monitoring actions performed by said effector; and dynamic comparing means for dynamically comparing said performed actions with planned actions of said sequence, and for interrupting the sequence if the performed action deviates to a predetermined extent from the planned action and for requesting amendment to the sequence.
7946959
10413366
1
1. A training device comprising: at least one sensor configured to measure at least one physical performance characteristic of a user during a workout; a first computing device that is portable and selectively attachable to the user, including: a receiver configured to electronically receive an electronic training script defining a sequence in which the user is instructed to perform a plurality of activities, and a presentation unit configured to prompt the user to perform a next activity of the plurality of activities in the sequence based on a measurement of the at least one physical performance characteristic indicating completion of a previous activity in the sequence.
9697700
14085142
1
1. A system comprising: a detector having a housing that carries an ambient condition sensor; an audio output device carried by the housing that generates audio in response to a predetermined condition; an audio input transducer carried by the housing that receives speech audio from a user; control circuits carried by the housing and coupled to the ambient condition sensor, wherein the control circuits include signal processing circuits coupled to the audio input transducer, and wherein the signal processing circuits output a processed speech signal based on the speech audio; and speech recognition circuitry that receives the processed speech signal and recognizes selected speech to implement predetermined functions, wherein, responsive to the processed speech signal indicating a request to silence the audio output device, the control circuits silence the audio output device for a first predetermined time period or a second predetermined time period, wherein the predetermined condition is an alarm condition detected by the ambient condition sensor or a low battery condition detected by the control circuits, wherein the control circuits silence the audio output device for the first predetermined time period responsive to the request to silence the audio output device when the ambient condition sensor detects the alarm condition, wherein the control circuits silence the audio output device for the second predetermined time period responsive to the request to silence the audio output device when the control circuits detect the low battery condition, and wherein the first predetermined time period is shorter than the second predetermined time period.
8595010
12701008
1
1. A computer-readable information storage medium that stores a program for generating Hidden Markov Models to be used for speech recognition with a given speech recognition system, the information storage medium storing a program that renders a computer to function as: a scheduled-to-be-used model group storage section that stores a scheduled-to-be-used model group including a plurality of Hidden Markov Models scheduled to be used by the given speech recognition system; and a filler model generation section that generates Hidden Markov Models to be used as filler models by the given speech recognition system based on all or at least a part of the Hidden Markov Model group in the scheduled-to-be-used model group; wherein the filler model generation section classifies a plurality of probability density functions composing all or at least a part of the Hidden Markov Model group in the scheduled-to-be-used model group into a plurality of clusters, obtains a given parameter for defining probability density functions composing Hidden Markov Models to be used as filler models, based on the obtained probability density functions of each of the clusters.
9460078
14092518
1
1. A device, comprising: one or more processors to: receive, using an input component, a request to process text of a document to identify glossary terms included in the text; determine, using the one or more processors and based on the request, a plurality of sections of the text to process; and process a first section, of the plurality of sections, in parallel with a second section, of the plurality of sections, to identify the glossary terms included in the text, when processing the first section in parallel with the second section, the one or more processors are, for each of the first section and the second section, to: determine a linguistic unit analysis technique based on a file format of a file that includes the text; perform, using the linguistic unit analysis technique, a linguistic unit analysis on a linguistic unit, included in the text, to generate a plurality of ambiguous linguistic units from the linguistic unit, the one or more processors, when performing the linguistic unit analysis on the linguistic unit to generate the plurality of ambiguous linguistic units, being to: perform at least one of: a coordinating conjunction analysis that generates the plurality of ambiguous linguistic units from the linguistic unit when the linguistic unit includes a coordinating conjunction, an adjectival modifier analysis that generates the plurality of ambiguous linguistic units from the linguistic unit when the linguistic unit includes an adjective, or a headword analysis that generates the plurality of ambiguous linguistic units from the linguistic unit when the linguistic unit includes an abstract noun; resolve the plurality of ambiguous linguistic units to generate a set of potential glossary terms that includes a subset of the plurality of ambiguous linguistic units; perform a glossary term analysis on the set of potential glossary terms to generate a set of glossary terms that includes a subset of the set of potential glossary terms; identify a set of included terms, of the set of potential glossary terms, that are included in the set of glossary terms; identify a set of excluded terms, of the set of potential glossary terms, that are excluded from the set of glossary terms; determine a semantic relatedness score between at least one excluded term, of the set of excluded terms, and at least one included term, of the set of included terms; selectively add the at least one excluded term to the set of glossary terms to form a final set of glossary terms based on the semantic relatedness score; and output, using an output component, the final set of glossary terms for the document for presentation via a user interface.
8402036
13167695
1
1. A computer-implemented method for generating a snippet for an entity, wherein each snippet comprises a plurality of sentiments about the entity, the method comprising: selecting one or more textual reviews associated with the entity; identifying a plurality of sentiment phrases based on the one or more textual reviews, wherein each sentiment phrase comprises a sentiment about the entity; selecting one or more sentiment phrases from the plurality of sentiment phrases; generating a snippet based on the selected one or more sentiment phrases; and storing the snippet.
9361360
13814940
1
1. A method for retrieving information from a semantic database having a plurality of semantic data in response to a query, comprising: translating, by one or more processors, each of the plurality of semantic data to a first-order logic formula constructed by one or more atomic symbols and operators; selecting, by one or more processors, a first semantic data as a hub in an offline environment from the plurality of semantic data, wherein the first semantic data is resolved with a number of semantic data based on a resolution rule, and the number of the semantic data resolved with the hub is greater than a threshold, wherein a first standard formula transformed from the translated first-order logic formula of the first semantic data is resolved with a second standard formula transformed from the translated first-order logic formula of any of the number of semantic data, and further wherein one atomic symbol of the atomic symbols exists in the first standard formula and the negation of the atomic symbol exists in the second standard formula; calculating, by one or more processors, the semantic dataset by calculating in a first level of a searching approach, a first resolvent of (1) the hub and (2) a second semantic data which directly links to the hub based on a resolution rule, and in response to the second semantic data being resolved with the hub, selecting the second semantic data as a part of the semantic data set in the offline environment; calculating, by one or more processors, the semantic dataset by calculating in a second level of the searching approach, a second resolvent of (1) the semantic data set resulted in the first level of the searching approach and (2) a third semantic data which is within a predetermined distance from the hub, and in response to the third semantic data being resolved with any semantic data of the semantic data set resulted in the first level of the searching approach, selecting the third semantic data as a part of the semantic data set in the offline environment, wherein the calculating of the semantic data set is continuously executed in a background of the semantic database until a particular calculation limit is reached; indexing, by one or more processors, the semantic data set in the offline environment; modifying, by one or more processors, the semantic database to include the indexed semantic data set in the offline environment; and retrieving, by one or more processors, information from the semantic data set in an online environment in response to the query.
20070061320
11224195
0
1. A keyphrase extraction system comprising: a probability computation component that calculates probability values of a joint probability of a candidate term and a document, a marginal probability of the candidate term and a marginal probability of the document; a partial mutual information metric computation component that computes a partial mutual information metric for the candidate terms based on the probability values; and, a summarization component that identifies one or more summary keyphrases based, at least in part, upon the partial mutual information metric.
9489577
12804518
1
1. A method comprising: receiving a stream of video content; generating speech to text for the received video content; generating passage level annotations from the generated text using natural language processing (NLP); associating the passage level annotations with a timeline; and associating imagery with the text to generate thumbnails at periodic time intervals resulting in a database of annotations to imagery and imagery to annotations.
20020176028
10155309
0
1. A television receiving apparatus for receiving a digital broadcasting at a selected channel, comprising: storage means for storing a voice output switching table registering a voice setting record having a voice output format associated with a broadcasting language of the received digital broadcasting; table updating means for updating said voice output switching table by extracting the broadcasting language of the received digital broadcasting, when the channel of the received digital broadcasting is switched; voice setting record selection changing means for changing the selection of said voice setting record in the order registered in said voice output switching table; voice signal output means for outputting a voice signal on the basis of said voice setting record selected by said voice setting record selecting means; and output means for outputting a video signal for displaying the contents of said voice setting record selected by said voice setting record selection changing means for a fixed period of time.
9886160
13843721
1
1. A method comprising: executing, by a processor of a computing device, at least a portion of a first application that includes a first tab and a second tab from a plurality of tabs, each of the plurality of tabs being associated with a respective document that is configured to be rendered for display by the first application; accessing a permission setting for the first tab, the permission setting being a permission to record an audio and/or visual signal; determining the permission to record the audio and/or visual signal for the first tab of the plurality of tabs based on the permission setting; in response to determining the first tab of the plurality of tabs has the permission to record and a second application is recording the audio and/or visual signal, providing a graphical indication, associated with the first tab and visible when a label handle for the first tab is visible, visible when the second tab is in focus and visible when the first tab is not in focus, the graphical indication indicates to a user of the computing device that the first tab is recording the audio and/or visual signal; and triggering a display of the graphical indication within a notification center associated with an operating system executed by the processor of the computing device when the label handle for the first tab is hidden.