1. A method for deep learning training, comprising: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1. 2. The method of claim 1, wherein the candidate units are detection bounding boxes within an image or phones of an input audio feature. 3. The method of claim 1, wherein the candidate units are detection bounding boxes, and wherein soft labelling comprises: providing a label of a class for a detection bounding box based at least partially on an overlap of the detection bounding box with a ground-truth bounding box for the class. 4. The method of claim 3, wherein providing a label of a class comprises: assigning a class label whose value is derived using the area of the overlap of the detection bounding box with the ground-truth bounding box for the class. 5. The method of claim 3, wherein providing a label of a class comprises: assigning a class label whose value is derived from a ratio involving the area of the overlap of the detection bounding box with the ground-truth bounding box for the class. 6. The method of claim 5, wherein assigning a class label comprises: calculating the ratio of the area of the overlap of the detection bounding box with the ground-truth bounding box for the class to the entire area of the detection bounding box. 7. The method of claim 3, wherein providing a label of a class is also based on one or more threshold values. 8. 
The method of claim 7, wherein providing a class label comprises: assigning a class label of 0 if a value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class is below a first threshold value; assigning a class label of 1 if the value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class is above a second threshold value; and if the value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class is the first threshold value, the second threshold value, or between the first and second threshold values, assigning a class label of the value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class. 9. The method of claim 8, wherein the value based on the area of the overlap of the detection bounding box with a ground-truth bounding box for the class is a ratio of the area of the overlap of the detection bounding box with the ground-truth bounding box for the class to the entire area of the detection bounding box. 10. The method of claim 3, wherein providing a label of a class for a detection bounding box is also based on one or more confidence levels provided by a detection stage which also provided the detection bounding box. 11. The method of claim 3, wherein providing a label of a class for a detection bounding box comprises: providing a label of a first class for the detection bounding box based at least partially on the overlap of the detection bounding box with a ground-truth bounding box for the first class; and providing a label of a second class for the detection bounding box based at least partially on the overlap of the detection bounding box with a ground-truth bounding box for the second class. 12. 
The method of claim 11, wherein there is an overlap of the detection bounding box, the ground-truth bounding box for the first class, and the ground-truth bounding box for the second class, and wherein the first class label and the second class label are also based on the overlap of the detection bounding box, the ground-truth bounding box for the first class, and the ground-truth bounding box for the second class. 13. The method of claim 1, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels directly from classification scores from a probability model or neural network. 14. The method of claim 1, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels directly from classification scores from a hidden Markov Model (HMM), a Gaussian mixture model (GMM), or a pretrained neural network. 15. The method of claim 1, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels using maximum likelihood decoding, a distance metric, a soft output decoding algorithm, or a list decoding scheme. 16. An apparatus for deep learning training, comprising: one or more non-transitory computer-readable media; and at least one processor which, when executing instructions stored on one or more non-transitory computer readable media, performs the steps of: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1. 17. The apparatus of claim 16, wherein the candidate units are detection bounding boxes, and wherein soft labelling comprises: providing a label of a class for a detection bounding box based at least partially on an overlap of the detection bounding box with a ground-truth bounding box for the class. 18. 
The apparatus of claim 16, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels directly from classification scores from a probability model or neural network. 19. A method, comprising: manufacturing a chipset capable of deep learning training comprising: at least one processor which, when executing instructions stored on one or more non-transitory computer readable media, performs the steps of: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1; and the one or more non-transitory computer-readable media which store the instructions. 20. A method of testing an apparatus, comprising: testing whether the apparatus has at least one processor which, when executing instructions stored on one or more non-transitory computer readable media, performs deep learning training comprising the steps of: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1; and testing whether the apparatus has the one or more non-transitory computer-readable media which store the instructions.
Please help me write a proper abstract based on the patent claims.
Apparatuses and methods of manufacturing same, systems, and methods for training deep learning machines are described. In one aspect, candidate units, such as detection bounding boxes in images or phones of an input audio feature, are classified using soft labelling, where at least one label has a range of possible values between 0 and 1 based, in the case of images, on the overlap of a detection bounding box and one or more ground-truth bounding boxes for one or more classes.
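The thresholded overlap labelling described in claims 6 through 9 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the box representation as `(x1, y1, x2, y2)` tuples, and the threshold values 0.3 and 0.7 are all assumptions made for the example.

```python
def overlap_ratio(det, gt):
    # Ratio of the overlap area of the detection box with the ground-truth
    # box to the entire area of the detection box (claims 6 and 9).
    # Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1.
    ix1, iy1 = max(det[0], gt[0]), max(det[1], gt[1])
    ix2, iy2 = min(det[2], gt[2]), min(det[3], gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    det_area = (det[2] - det[0]) * (det[3] - det[1])
    return inter / det_area

def soft_label(det, gt, t_low=0.3, t_high=0.7):
    # Claim 8's rule: 0 below the first threshold, 1 above the second,
    # and the ratio itself anywhere on or between the two thresholds.
    r = overlap_ratio(det, gt)
    if r < t_low:
        return 0.0
    if r > t_high:
        return 1.0
    return r
```

For example, a detection box half-covered by the ground-truth box receives the soft label 0.5 rather than a hard 0 or 1, which is the point of the soft-labelling scheme.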
1. A method comprising: determining the firing state of a plurality of neurons of a first neurosynaptic core substantially in parallel; delivering to at least one additional neurosynaptic core the firing state of the plurality of neurons substantially in parallel. 2. The method of claim 1, wherein the first neurosynaptic core and the at least one additional neurosynaptic core are located on a first chip. 3. The method of claim 2, wherein the substantially parallel delivery is via an inter-core network. 4. The method of claim 3, wherein the substantially parallel delivery is performed by a permutation network, a Clos network, or a butterfly network. 5. The method of claim 1, further comprising: pipelining the firing state of the plurality of neurons. 6. The method of claim 1, further comprising: constructing a binary vector corresponding to the firing state of the plurality of neurons; transmitting the binary vector to the at least one additional neurosynaptic core. 7. The method of claim 1, wherein the first neurosynaptic core is located on a first chip, and the at least one additional neurosynaptic core is located on a second chip. 8. The method of claim 7, further comprising: transmitting the firing state of the plurality of neurons via an inter-chip network connecting the first chip and the second chip. 9. The method of claim 8, wherein the inter-chip network comprises an outgoing port of the first chip and an incoming port of the second chip. 10. The method of claim 8, wherein the inter-chip network comprises an outgoing port of the first chip connected to an incoming port of the first chip. 11. The method of claim 7, wherein the first and second chip are located on a first board. 12. The method of claim 7, wherein the first chip is located on a first board and the second chip is located on a second board, the first and second boards being connected. 13. 
The method of claim 12, wherein a plurality of boards comprising the first board and the second board is hierarchically arranged, and wherein the first board and the second board are connected via a hierarchy of routers. 14. A system comprising: a plurality of neurosynaptic cores, the neurosynaptic cores comprising a plurality of axons, a plurality of synapses, and a plurality of neurons; a first inter-core network connecting the plurality of neurosynaptic cores, wherein the first inter-core network is adapted to deliver from a first neurosynaptic core of the plurality of neurosynaptic cores to at least one additional neurosynaptic core the firing state of the plurality of neurons of the first neurosynaptic core substantially in parallel. 15. The system of claim 14, wherein the inter-core network comprises a permutation network, a Clos network, or a butterfly network. 16. The system of claim 14, wherein the first inter-core network is located on a first chip and a second inter-core network is located on a second chip, the at least one additional neurosynaptic core being connected to the second inter-core network. 17. The system of claim 16, wherein the first chip and the second chip are adjacent. 18. The system of claim 16, further comprising: a port connecting the first inter-core network to the second inter-core network. 19. The system of claim 14, further comprising: a port connecting the first inter-core network to itself. 20. The system of claim 16, wherein the first and second chip are located on a first board. 21. The system of claim 16, wherein the first chip is located on a first board and the second chip is located on a second board, the first and second boards being connected. 22. The system of claim 21, wherein a plurality of boards comprising the first board and the second board is hierarchically arranged, and wherein the first board and the second board are connected via a hierarchy of routers. 23. 
A method comprising: simulating a plurality of neurosynaptic cores, the simulated neurosynaptic cores comprising a plurality of simulated axons, a plurality of simulated synapses, and a plurality of simulated neurons; simulating a network connecting the plurality of simulated neurosynaptic cores; simulating the determination of the firing state of the plurality of simulated neurons of a first of the simulated neurosynaptic cores; simulating the delivery to an at least one additional of the simulated neurosynaptic cores the firing state of the plurality of simulated neurons. 24. The method of claim 19, wherein the simulated network comprises an inter-core network. 25. The method of claim 19, wherein the simulated network comprises an inter-chip network. 26. The method of claim 19, wherein the simulated network comprises an inter-board network. 27. A computer program product for operating a neurosynaptic network, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: determining the firing state of a plurality of neurons of a first neurosynaptic core substantially in parallel; delivering to at least one additional neurosynaptic core the firing state of the plurality of neurons substantially in parallel. 28. 
A computer program product for simulating a neurosynaptic network, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: simulating a plurality of neurosynaptic cores, the simulated neurosynaptic cores comprising a plurality of simulated axons, a plurality of simulated synapses, and a plurality of simulated neurons; simulating a network connecting the plurality of simulated neurosynaptic cores; simulating the determination of the firing state of the plurality of simulated neurons of a first of the simulated neurosynaptic cores; simulating the delivery to an at least one additional of the simulated neurosynaptic cores the firing state of the plurality of simulated neurons.
Please help me write a proper abstract based on the patent claims.
A scalable stream synaptic supercomputer for extreme throughput neural networks is provided. The firing state of a plurality of neurons of a first neurosynaptic core is determined substantially in parallel. The firing state of the plurality of neurons is delivered to at least one additional neurosynaptic core substantially in parallel.
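Claim 6's mechanism, constructing a binary vector of the firing states and transmitting it whole, can be illustrated with a small software model. The `Core` class and its `receive` method are hypothetical names invented for this sketch; in the claimed system the delivery happens in hardware over an inter-core or inter-chip network.

```python
class Core:
    """Hypothetical stand-in for a receiving neurosynaptic core."""
    def __init__(self):
        self.inbox = None

    def receive(self, vector):
        # The whole firing-state vector arrives at once, modelling the
        # substantially-parallel delivery of claim 1.
        self.inbox = list(vector)

def firing_vector(neurons):
    # Binary vector corresponding to the firing state of the plurality
    # of neurons (claim 6); `neurons` is a list of booleans.
    return [1 if fired else 0 for fired in neurons]

def deliver(vector, cores):
    # Deliver the vector to each additional core.
    for core in cores:
        core.receive(vector)
```

In hardware, the per-destination loop would be replaced by the claimed parallel interconnect (a permutation, Clos, or butterfly network per claim 4); the vector abstraction is what stays the same.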
1. A computer implemented method comprising: inputting steps of at least two processes into the computer, wherein each step corresponds to a set of input products, a task to be completed, a set of output products, and a set of target output products, and wherein each input product, each output product, each target output product, and each task, comprises a set of property/value pairs that includes product type and product target values, where the properties have unique URL-based names and the property relationships are supported by an underlying ontology including a hierarchical taxonomy of product types, and where the target products specify a desired range for each corresponding property value; storing each input product and each output product in a shared, non-transitory database as they are generated, wherein each input product and each output product is stored with a unique uniform resource locator (URL); establishing an input product filter for each step, wherein the input product filter comprises performance-triggering criteria defined property/value filters, where one step's output product passes the input product filter of another step; periodically searching the shared database with URL queries for input products; determining with the computer whether each step in any of the processes has been executed, is in the process of being executed, or is awaiting execution; dynamically and automatically assembling or altering a set of steps composing any given process depending on the input products available in the shared database; and comparing with a performance status monitoring engine the input/output products property values to the target property values to automatically assess the status, effectiveness and efficiency of the individual steps and of each process as a whole. 2. 
The method of claim 1, wherein a given step is deemed executed if all of the given step's input products are found in the shared database and all the given step's output products have been generated and stored in the shared database, wherein the given step is awaiting execution if all of the given step's input products are found in the shared database but not all of the given step's output products have been generated, and wherein the given step is awaiting execution if all of the given step's input products are not found in the shared database. 3. The method of claim 2, where the URLs are defined according to Representational State Transfer (REST) design principles. 4. A method for providing situational awareness to a computer comprising: identifying a first process and a second process that have an aspect in common, wherein each of the first and second processes comprises a series of steps and an end goal; identifying an input product comprising performance-triggering criteria for each step in the first and second processes; generating a unique output product at the completion of each step in the first and second processes; storing the output products in a cloud-based, shared database, wherein each output product and each input product has a unique uniform resource locator (URL) according to a taxonomy; filtering with the computer the output products in the database for every un-executed step by performing URL queries for the input product of each un-executed step; determining that a given step may be executed based on the presence of the given step's input product in the database; and determining with the computer a situational assessment based on the interaction of the steps of the first and second processes and evaluating how interaction between the two processes will affect the attainment of the goals. 5. 
The method of claim 4, wherein each product is represented as a set of property/value pairs where the properties have unique URL-based names and the property relationships are supported by an underlying ontology including a hierarchical taxonomy of product types. 6. The method of claim 4, wherein output products are defined in a progressive, generic, hierarchical, machine-understandable, and addressable format. 7. The method of claim 6, wherein the progressive, generic, hierarchical, machine-understandable, and addressable format is JavaScript Object Notation for Linked Data (JSON-LD). 8. The method of claim 7, wherein the output products include property values. 9. The method of claim 8, wherein the output products link to parent/child product types and linked data. 10. The method of claim 4, wherein the common aspect is an aspect or a combination of aspects selected from the group consisting of: a location, a time-range, a person, an organization, a field of study, a given topic, a user-entered property value, and a user-entered property value range. 11. The method of claim 4, wherein the situational assessment further includes an identification of remaining steps, identification of obstacles to executing the remaining steps, a time estimate for completion of the remaining steps, and a probability that the goals will be reached by a certain time. 12. The method of claim 4, further comprising the step of reporting the situational assessment to a user. 13. The method of claim 4, further comprising the step of eliminating a step from the first process based on the situational assessment. 14. The method of claim 4, wherein the first and second processes are tasks performed by the computer. 15. 
A method for providing situational awareness to a computer-controlled avatar in a simulated environment, comprising: identifying at least two processes related to the avatar, wherein each process comprises a series of steps and an end goal; identifying an input product comprising performance-triggering criteria for each step in the identified processes; generating a unique output product at the completion of each step in the identified processes; storing the output products in a cloud-based, shared database, wherein each output product and each input product has a unique uniform resource locator (URL) according to a taxonomy; filtering the output products in the database for every un-executed step by performing URL queries for the input product of each un-executed step; determining with the avatar that a given step may be executed based on the presence of the given step's input product in the database; and developing with the avatar a situational assessment based on the interaction of the steps of the identified processes and evaluating how interaction between the processes will affect the attainment of the respective goals. 16. The method of claim 15, further comprising the step of reporting the situational assessment to a human. 17. The method of claim 15, further comprising the step of having the avatar periodically update the situational assessment as further information is available. 18. The method of claim 15, further comprising the step of having the avatar alter its behavior in view of the situational assessment.
Please help me write a proper abstract based on the patent claims.
A computer implemented method comprising: inputting steps of at least two processes into the computer; storing process step input products and process step output products in a shared, non-transitory database as they are generated; establishing an input product filter for each process step with performance-triggering-criteria-defined property/value filters, where one step's output product passes the input product filter of another step; periodically searching the shared database for input products; determining whether each step in any of the processes has been executed, is in the process of being executed, or is awaiting execution; dynamically and automatically assembling or altering a set of steps composing any given process depending on the input products available in the shared database; and comparing the input/output products property values to target property values to automatically assess the status, effectiveness and efficiency of the individual steps and each process as a whole.
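The execution-state rule of claim 2 (of the first independent claim's family) can be sketched directly: a step is executed when all of its input and output products are in the shared database, and awaiting execution otherwise. The function name and the modelling of products as plain URL strings are assumptions for illustration; the claims' third state, "in the process of being executed", is omitted from this sketch.

```python
def step_status(input_products, output_products, shared_database):
    # Claim 2: a step is deemed executed if all of its input products
    # are found in the shared database and all of its output products
    # have been generated and stored there; if inputs or outputs are
    # missing, the step is awaiting execution.
    db = set(shared_database)
    inputs_present = all(p in db for p in input_products)
    outputs_present = all(p in db for p in output_products)
    if inputs_present and outputs_present:
        return "executed"
    return "awaiting execution"
```

Because products carry unique URLs, the membership test here stands in for the periodic URL queries against the shared database that the claims describe.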
1. A method for training a neuron network using a processor in communication with a memory, comprising: determining features of a signal using the neuron network; determining an uncertainty measure of the features for classifying the signal; reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal; comparing the reconstructed signal with the signal to produce a reconstruction error; combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling; labeling the signal according to the rank to produce the labeled signal; and training the neuron network and the decoder neuron network using the labeled signal. 2. The method of claim 1, wherein the labeling comprises: transmitting a labeling request to an annotation device if the rank indicates the necessity of the manual labeling process. 3. The method of claim 1, wherein the determining features are performed by using an encoder neural network. 4. The method of claim 1, wherein the signal is an electroencephalogram (EEG) or an electrocardiogram (ECG). 5. The method of claim 1, wherein the reconstruction error is defined based on a Euclidean distance between the signal and the reconstructed signal. 6. The method of claim 1, wherein the rank is defined based on an addition of an entropy function and the reconstruction error. 7. 
An active learning system comprising: a human machine interface; a storage device including neural networks; a memory; a network interface controller connectable with a network being outside the system; an imaging interface connectable with an imaging device; and a processor configured to connect to the human machine interface, the storage device, the memory, the network interface controller and the imaging interface, wherein the processor executes instructions for classifying a signal using the neural networks stored in the storage device, wherein the neural networks perform steps of: determining features of the signal using the neuron network; determining an uncertainty measure of the features for classifying the signal; reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal; comparing the reconstructed signal with the signal to produce a reconstruction error; combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling; labeling the signal according to the rank to produce the labeled signal; and training the neuron network and the decoder neuron network using the labeled signal. 8. The method of claim 7, wherein the labeling comprises: transmitting a labeling request to an annotation device if the rank indicates the necessity of the manual labeling process. 9. The method of claim 7, wherein the determining features are performed by using an encoder neural network. 10. The method of claim 7, wherein the signal is an electroencephalogram (EEG) or an electrocardiogram (ECG). 11. The method of claim 7, wherein the reconstruction error is defined based on a Euclidean distance between the signal and the reconstructed signal. 12. The method of claim 7, wherein the rank is defined based on an addition of an entropy function and the reconstruction error. 13. 
A non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising: determining features of a signal using the neuron network; determining an uncertainty measure of the features for classifying the signal; reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal; comparing the reconstructed signal with the signal to produce a reconstruction error; combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling; labeling the signal according to the rank to produce the labeled signal; and training the neuron network and the decoder neuron network using the labeled signal. 14. The method of claim 13, wherein the labeling comprises: transmitting a labeling request to an annotation device if the rank indicates the necessity of the manual labeling process. 15. The method of claim 13, wherein the determining features are performed by using an encoder neural network. 16. The method of claim 13, wherein the signal is an electroencephalogram (EEG) or an electrocardiogram (ECG). 17. The method of claim 13, wherein the reconstruction error is defined based on a Euclidean distance between the signal and the reconstructed signal. 18. The method of claim 13, wherein the rank is defined based on an addition of an entropy function and the reconstruction error.
Please help me write a proper abstract based on the patent claims.
A method for training a neuron network using a processor in communication with a memory includes determining features of a signal using the neuron network, determining an uncertainty measure of the features for classifying the signal, reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal, comparing the reconstructed signal with the signal to produce a reconstruction error, combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling, labeling the signal according to the rank to produce the labeled signal; and training the neuron network and the decoder neuron network using the labeled signal.
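The ranking rule in claims 5 and 6, a Euclidean reconstruction error added to an entropy-based uncertainty measure, can be written out as a short sketch. The function names are invented for the example, and using Shannon entropy over class probabilities is one plausible reading of the claims' "entropy function".

```python
import math

def entropy(probs):
    # Uncertainty measure over the classifier's class probabilities;
    # zero when the classifier is certain of one class.
    return -sum(p * math.log(p) for p in probs if p > 0)

def reconstruction_error(signal, reconstructed):
    # Euclidean distance between the signal and its reconstruction
    # from the decoder network (claim 5).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(signal, reconstructed)))

def rank(probs, signal, reconstructed):
    # Claim 6: the rank is the sum of an entropy term and the
    # reconstruction error; higher rank indicates greater necessity
    # of manual labeling.
    return entropy(probs) + reconstruction_error(signal, reconstructed)
```

Signals that are both uncertain to classify and poorly reconstructed get the highest rank, so the annotation budget is spent where the model is weakest on both counts.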
1-12. (canceled) 13. A system comprising: an artificial neural network having an at least one input neuron and an at least one output neuron; and one or more computer processor circuits that are configured to host a bundling application that is configured to: identify a software component having a first value for a first identification attribute and a second value for a second identification attribute; generate an input vector derived from the first value and the second value; load the input vector into the at least one input neuron of the artificial neural network; and obtain a yielded output vector from the at least one output neuron of the artificial neural network. 14. The system of claim 13, wherein the yielded output vector corresponds to a software bundle of a plurality of software bundles, and wherein the bundling application is further configured to: determine, based on the yielded output vector, that the software component is associated with the software bundle. 15. The system of claim 13, wherein the software component is associated with a software bundle of a plurality of software bundles, and wherein the bundling application is further configured to: generate a test output vector derived from the software bundle; compare the yielded output vector with the test output vector; and adjust parameters of the artificial neural network based on the comparison of the yielded output vector with the test output vector. 16. 
The system of claim 15, wherein the bundling application is further configured to: identify a second software component having a third value for the first identification attribute and a fourth value for the second identification attribute; generate a second input vector derived from the third value and the fourth value; load the second input vector into the at least one input neuron of the artificial neural network; obtain a second yielded output vector from the at least one output neuron of the artificial neural network, the second yielded output vector corresponding to a second software bundle of the plurality of software bundles; and determine, based on the second yielded output vector, that the second software component is associated with the second software bundle. 17. A computer program comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to: identify a software component having a first value for a first identification attribute and a second value for a second identification attribute; generate an input vector derived from the first value and the second value; load the input vector into an at least one input neuron of an artificial neural network; and obtain a yielded output vector from an at least one output neuron of the artificial neural network. 18. The computer program product of claim 17, wherein the yielded output vector corresponds to a software bundle of a plurality of software bundles, and wherein the program instructions are executable by the computer to further cause the computer to: determine, based on the yielded output vector, that the software component is associated with the software bundle. 19. 
The computer program product of claim 17, wherein the software component is associated with a software bundle of a plurality of software bundles, and wherein the program instructions are executable by the computer to further cause the computer to: generate a test output vector derived from the software bundle; compare the yielded output vector with the test output vector; and adjust parameters of the artificial neural network based on the comparison of the yielded output vector with the test output vector. 20. The computer program product of claim 19, wherein the program instructions are executable by the computer to further cause the computer to: identify a second software component having a third value for the first identification attribute and a fourth value for the second identification attribute; generate a second input vector derived from the third value and the fourth value; load the second input vector into the at least one input neuron of the artificial neural network; obtain a second yielded output vector from the at least one output neuron of the artificial neural network, the second yielded output vector corresponding to a second software bundle of the plurality of software bundles; and determine, based on the second yielded output vector, that the second software component is associated with the second software bundle.
Please help me write a proper abstract based on the patent claims.
An artificial neural network is used to manage software bundling. During a training phase, the artificial neural network is trained using previously bundled software components having known values for identification attributes and known software bundle associations. Once trained, the artificial neural network can be used to identify the proper software bundles for newly discovered software components. In this process, a newly discovered software component having known values for the identification attributes is identified. An input vector is derived from the known values. The input vector is loaded into input neurons of the artificial neural network. A yielded output vector is then obtained from an output neuron of the artificial neural network. Based on the composition of the output vector, the software bundle associated with this newly discovered software component is determined.
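The flow in the abstract above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the attribute values, bundle names, and single-layer softmax network are all invented; only the claimed flow (input vector from two attribute values, input neurons, yielded output vector, bundle determination by its composition) is taken from the claims.

```python
# Hypothetical sketch of attribute-vector -> bundle classification.
import numpy as np

rng = np.random.default_rng(0)

# Previously bundled components: (vendor, install prefix) -> known bundle.
# All names here are made up for illustration.
TRAINING = [
    (("acme", "/opt/suite"), "acme-suite"),
    (("acme", "/opt/db"), "acme-db"),
    (("beta", "/usr/beta"), "beta-tools"),
] * 50  # repeat so a few gradient steps suffice

attrs = sorted({v for (a, b), _ in TRAINING for v in (a, b)})
bundles = sorted({y for _, y in TRAINING})

def input_vector(first, second):
    """One-hot encode the two identification-attribute values."""
    x = np.zeros(len(attrs))
    x[attrs.index(first)] = 1.0
    x[attrs.index(second)] = 1.0
    return x

X = np.array([input_vector(a, b) for (a, b), _ in TRAINING])
Y = np.array([bundles.index(y) for _, y in TRAINING])

# Single-layer softmax network: one input neuron per attribute value,
# one output neuron per bundle.
W = rng.normal(scale=0.1, size=(len(attrs), len(bundles)))

for _ in range(200):  # training phase: adjust parameters by comparison
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(Y)), Y] -= 1.0      # d(loss)/d(logits)
    W -= 0.1 * X.T @ p / len(Y)         # gradient step

def classify(first, second):
    out = input_vector(first, second) @ W   # yielded output vector
    return bundles[int(out.argmax())]       # bundle from its composition

bundle = classify("acme", "/opt/db")
```

The largest component of the yielded output vector selects the bundle, which stands in for the claims' "determine, based on the yielded output vector" step.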
1-9. (canceled) 10. A computer program product for generating a first answer relationship in a first answer sequence, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising: identifying the first answer sequence, the first answer sequence including a first answer and a second answer; analyzing, using the first answer and the second answer, a corpus to identify a set of influence factors corresponding to both the first answer and the second answer; and generating, based on the set of influence factors, the first answer relationship between the first answer and the second answer. 11. The computer program product of claim 10, wherein the first answer sequence further includes a third answer, and wherein the method further comprises: analyzing, using the third answer, the corpus to identify a second set of influence factors corresponding to both the first answer and the third answer and further to identify a third set of influence factors corresponding to both the second answer and the third answer; generating, based on the second set of influence factors, a second answer relationship between the first answer and the third answer; and generating, based on the third set of influence factors, a third answer relationship between the second answer and the third answer. 12. The computer program product of claim 11, wherein the method further comprises: evaluating, based on the first answer relationship, further based on the second answer relationship, and further based on the third answer relationship, the first answer sequence. 13. 
The computer program product of claim 10, wherein the method further comprises: assigning a relationship score to the first answer relationship, the relationship score calculated based on the set of influence factors; and evaluating, based on the relationship score, the first answer relationship. 14. The computer program product of claim 13, wherein the evaluating the first answer relationship includes determining that the relationship score is below a relationship contraindication threshold, and wherein the method further comprises: identifying, in response to the determining, the first answer sequence as contraindicated. 15. The computer program product of claim 10, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a first characteristic relationship between the first answer and a characteristic; identifying a second characteristic relationship between the second answer and the characteristic; and identifying, based on comparing the first characteristic relationship and the second characteristic relationship, a first influence factor of the set of influence factors. 16. A system for generating a first answer relationship in a first answer sequence, the system comprising: a memory; and at least one processor in communication with the memory, wherein the at least one processor is configured to perform a method comprising: identifying the first answer sequence, the first answer sequence including a first answer and a second answer; analyzing, using the first answer and the second answer, a corpus to identify a set of influence factors corresponding to both the first answer and the second answer; and generating, based on the set of influence factors, the first answer relationship between the first answer and the second answer. 17. 
The system of claim 16, wherein the first answer sequence further includes a third answer, and wherein the method further comprises: analyzing, using the third answer, the corpus to identify a second set of influence factors corresponding to both the first answer and the third answer and further to identify a third set of influence factors corresponding to both the second answer and the third answer; generating, based on the second set of influence factors, a second answer relationship between the first answer and the third answer; and generating, based on the third set of influence factors, a third answer relationship between the second answer and the third answer. 18. The system of claim 17, wherein the method further comprises: evaluating, based on the first answer relationship, further based on the second answer relationship, and further based on the third answer relationship, the first answer sequence. 19. The system of claim 16, wherein the method further comprises: assigning a relationship score to the first answer relationship, the relationship score calculated based on the set of influence factors; and evaluating, based on the relationship score, the first answer relationship. 20. The system of claim 19, wherein the evaluating the first answer relationship includes determining that the relationship score is below a relationship contraindication threshold, and wherein the method further comprises: identifying, in response to the determining, the first answer sequence as contraindicated. 21. The computer program product of claim 10, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a direct influence relationship between the first answer and the second answer; and identifying, based on the direct influence relationship, a first influence factor of the set of influence factors. 22. 
The computer program product of claim 10, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the corpus. 23. The computer program product of claim 10, wherein the method further comprises: receiving, from a user, a question; parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the question, wherein the identifying the first answer sequence is in response to the parsing; assigning a first relationship score to the first answer relationship, the first relationship score calculated based on the set of influence factors; assigning a first confidence score to the first answer sequence, the first confidence score based in part on the first relationship score; and presenting the first answer sequence to the user as a response to the question. 24. The system of claim 16, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a first characteristic relationship between the first answer and a characteristic; identifying a second characteristic relationship between the second answer and the characteristic; and identifying, based on comparing the first characteristic relationship and the second characteristic relationship, a first influence factor of the set of influence factors. 25. 
The system of claim 16, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a direct influence relationship between the first answer and the second answer; and identifying, based on the direct influence relationship, a first influence factor of the set of influence factors. 26. The system of claim 16, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the corpus. 27. The system of claim 16, wherein the method further comprises: receiving, from a user, a question; parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the question, wherein the identifying the first answer sequence is in response to the parsing; assigning a first relationship score to the first answer relationship, the first relationship score calculated based on the set of influence factors; assigning a first confidence score to the first answer sequence, the first confidence score based in part on the first relationship score; and presenting the first answer sequence to the user as a response to the question.
Please help me write a proper abstract based on the patent claims.
In a question-answering (QA) environment, a first answer sequence is identified. As identified, the first answer sequence includes a first answer and a second answer. A corpus is analyzed using the first answer and the second answer. Based on the analysis, a set of influence factors corresponding to both the first answer and the second answer is identified. A first answer relationship between the first answer and the second answer is then generated based on the set of influence factors.
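A hedged sketch of the analysis step described above: the cue-word heuristic and the tiny drug-interaction corpus below are invented stand-ins for the corpus analysis the claims leave abstract; only the flow (two answers in, a set of shared influence factors out, a relationship score derived from them) mirrors the claims.

```python
# Hypothetical influence-factor extraction between two answers.
import re

# Invented influence cue words; a real system would mine these from the corpus.
CUE_WORDS = {"increases", "reduces", "causes", "interacts"}

def influence_factors(corpus, answer_a, answer_b):
    """Return cue words found in sentences mentioning both answers."""
    factors = []
    for sentence in re.split(r"(?<=[.!?])\s+", corpus):
        lower = sentence.lower()
        if answer_a.lower() in lower and answer_b.lower() in lower:
            factors.extend(w for w in sorted(CUE_WORDS) if w in lower)
    return factors

def relationship_score(factors):
    # More shared influence factors -> stronger relationship; a low score
    # could mark the sequence as contraindicated, as in the claims.
    return len(factors)

corpus = ("Drug A interacts with Drug B. "
          "Drug A reduces the efficacy of Drug B when taken together. "
          "Drug C is unrelated.")
factors = influence_factors(corpus, "Drug A", "Drug B")
score = relationship_score(factors)
```

Each matched cue word plays the role of one influence factor corresponding to both answers; the score would then feed the claimed contraindication-threshold check.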
1-18. (canceled) 19. A method for knowledge discovery, the method comprising: plotting, by a processing device, a set of concepts derived from a selected set of fingerprints on a terminology system, wherein the plotting generates a map; selecting, by the processing device, a single concept from the set of concepts; displaying, by the processing device, the map with the set of concepts and the selected single concept; and indicating, by the processing device, a relative importance of the selected single concept with respect to the set of concepts on the map. 20. The method of claim 19, wherein each fingerprint represents a document. 21. The method of claim 19, wherein each fingerprint represents a person. 22. The method of claim 19, wherein each fingerprint represents an organization. 23. The method of claim 19, wherein indicating the relative importance of the selected single concept comprises displaying the selected single concept on the map in a first color and a remaining one or more concepts from the set of concepts in a second color that is different from the first color. 24. The method of claim 19, wherein indicating the relative importance of the selected single concept comprises displaying the selected single concept on the map with a first object and a remaining one or more concepts from the set of concepts with a second object, wherein the first object is larger than the second object. 25. 
A system for knowledge discovery, the system comprising: a processing device; and a non-transitory, processor-readable storage medium, the non-transitory, processor-readable storage medium comprising one or more programming instructions that, when executed, cause the processing device to: plot a set of concepts derived from a selected set of fingerprints on a terminology system, wherein the plotting generates a map; select a single concept from the set of concepts; display the map with the set of concepts and the selected single concept; and indicate a relative importance of the selected single concept with respect to the set of concepts on the map. 26. The system of claim 25, wherein each fingerprint represents a document. 27. The system of claim 25, wherein each fingerprint represents a person. 28. The system of claim 25, wherein each fingerprint represents an organization. 29. The system of claim 25, wherein the one or more programming instructions that, when executed, cause the processing device to indicate the relative importance of the selected single concept further cause the processing device to display the selected single concept on the map in a first color and a remaining one or more concepts from the set of concepts in a second color that is different from the first color. 30. The system of claim 25, wherein the one or more programming instructions that, when executed, cause the processing device to indicate the relative importance of the selected single concept further cause the processing device to display the selected single concept on the map with a first object and a remaining one or more concepts from the set of concepts with a second object, wherein the first object is larger than the second object. 31. 
A non-transitory, processor-readable storage medium with computer executable instructions embodied thereon for knowledge discovery, the computer executable instructions directing a processing device to: plot a set of concepts derived from a selected set of fingerprints on a terminology system, wherein the plotting generates a map; select a single concept from the set of concepts; display the map with the set of concepts and the selected single concept; and indicate a relative importance of the selected single concept with respect to the set of concepts on the map. 32. The computer readable medium of claim 31, wherein each fingerprint represents a document. 33. The computer readable medium of claim 31, wherein each fingerprint represents a person. 34. The computer readable medium of claim 31, wherein each fingerprint represents an organization. 35. The computer readable medium of claim 31, wherein the computer executable instructions directing the processing device to indicate the relative importance of the selected single concept further direct the processing device to display the selected single concept on the map in a first color and a remaining one or more concepts from the set of concepts in a second color that is different from the first color. 36. The computer readable medium of claim 31, wherein the computer executable instructions directing the processing device to indicate the relative importance of the selected single concept further direct the processing device to display the selected single concept on the map with a first object and a remaining one or more concepts from the set of concepts with a second object, wherein the first object is larger than the second object.
Please help me write a proper abstract based on the patent claims.
Provided are methods and systems for knowledge discovery utilizing knowledge profiles. A set of concepts derived from a selected set of fingerprints, each fingerprint representing a document, a person, or an organization, is plotted on a terminology system to generate a map. A single concept is selected from the set of concepts, and the map is displayed with the set of concepts and the selected concept. The relative importance of the selected concept with respect to the set of concepts is indicated on the map, for example by displaying the selected concept in a different color or as a larger object than the remaining concepts.
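The map-building step can be sketched as data preparation. The assumptions here are loudly labeled: fingerprints are modeled as concept-weight dicts, the "terminology system" is reduced to the union of concepts, and importance is the summed weight; the claims do not specify any of these, only the highlight-by-color-and-size behavior.

```python
# Hypothetical sketch: fingerprints -> map entries with size/color highlighting.
from collections import Counter

def build_map(fingerprints, selected):
    weights = Counter()
    for fp in fingerprints:              # each fp: one document/person/org
        for concept, w in fp.items():
            weights[concept] += w        # assumed importance measure
    top = max(weights.values())
    return {
        concept: {
            # larger object = more important, as in the object-size claims
            "size": 10 + 20 * (w / top),
            # selected concept in a first color, the rest in a second color
            "color": "red" if concept == selected else "gray",
        }
        for concept, w in weights.items()
    }

fingerprints = [
    {"genomics": 3, "sequencing": 2},
    {"genomics": 1, "proteomics": 4},
]
knowledge_map = build_map(fingerprints, selected="genomics")
```

A display layer (not shown) would then place these entries on the terminology system's coordinates to render the map.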
1. A computer-implemented method comprising: obtaining frame data representative of a plurality of frames captured by a touch-sensitive device; analyzing the frame data to define a respective blob in each frame of the plurality of frames, the blobs being indicative of a touch event; computing a plurality of feature sets for the touch event, each feature set specifying properties of the respective blob in each frame of the plurality of frames; and determining a type of the touch event via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification. 2. The computer-implemented method of claim 1, wherein the machine learning classification is configured to generate the non-bimodal classification scores such that each non-bimodal classification score is representative of a probability that the touch event is of a respective type. 3. The computer-implemented method of claim 2, wherein each one of the non-bimodal classification scores is generated by a machine learning classifier configured to accept the plurality of feature sets as inputs. 4. The computer-implemented method of claim 3, wherein the machine learning classifier comprises a random decision forest classifier. 5. The computer-implemented method of claim 1, further comprising: defining a track of the blobs across the plurality of frames for the touch event; and computing a track feature set for the track, wherein determining the type comprises applying the track feature set to a machine learning classifier. 6. 
The computer-implemented method of claim 1, wherein computing the plurality of feature sets comprises aggregating data indicative of the plurality of feature sets before application of the plurality of feature sets to a machine learning classifier in determining the type of the touch event. 7. The computer-implemented method of claim 1, wherein each feature set comprises data indicative of an appearance of an image patch disposed at the respective blob in each frame. 8. The computer-implemented method of claim 1, wherein each feature set comprises data indicative of an intensity gradient in the frame data for the respective blob in each frame. 9. The computer-implemented method of claim 1, wherein each feature set comprises data indicative of an isoperimetric quotient or other metric of a roundness of the respective blob in each frame. 10. The computer-implemented method of claim 1, wherein the machine learning classification comprises a lookup table-based classification. 11. The computer-implemented method of claim 1, wherein determining the type comprises applying the feature set for a respective frame of the plurality of frames to multiple look-up tables, each look-up table providing a respective individual non-bimodal classification score of the multiple non-bimodal classification scores. 12. The computer-implemented method of claim 11, wherein determining the type comprises combining each of the individual non-bimodal classification scores for the respective frame to generate a blob classification rating score for the respective frame. 13. 
The computer-implemented method of claim 12, wherein: the multiple look-up tables comprise a first look-up table configured to provide a first rating that the touch event is an intended touch and further comprise a second look-up table to determine a second rating that the touch event is an unintended touch; and determining the type comprises subtracting the second rating from the first rating to determine the blob classification rating score for the respective frame. 14. The computer-implemented method of claim 12, wherein determining the type comprises aggregating the blob classification rating scores across the plurality of frames to determine a cumulative, multi-frame classification score for the touch event. 15. The computer-implemented method of claim 14, wherein determining the type comprises: determining whether the cumulative, multi-frame classification score passes one of multiple classification thresholds; and if not, then iterating the feature set applying, the classification score combining, and the rating score aggregating acts in connection with a further feature set of the plurality of feature sets. 16. The computer-implemented method of claim 14, wherein determining the type further comprises, once the cumulative, multi-frame classification score passes a palm classification threshold for the touch event, classifying a further blob in a subsequent frame of the plurality of frames that overlaps the touch event as a palm touch event. 17. The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises adjusting the blob classification rating score by subtracting a value from the blob classification rating score if the respective blob overlaps an anti-blob. 18. 
The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises, when the blob has an area greater than a threshold area, and when the blob is within a threshold distance of a further blob having bimodal classification scores indicative of a palm, adjusting the blob classification rating score by subtracting a quotient calculated by dividing a blob area of the blob by the threshold area. 19. The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises: determining if a number of edge pixels in the respective blob exceeds a threshold; and if the threshold is exceeded, adjusting the blob classification rating score by subtracting a difference between the number of edge pixels and the threshold from the blob classification rating score. 20. A touch-sensitive device comprising: a touch-sensitive surface; a memory in which blob definition instructions, feature computation instructions, and machine learning classification instructions are stored; and a processor coupled to the memory, configured to obtain frame data representative of a plurality of frames captured via the touch-sensitive surface and configured to execute the blob definition instructions to analyze the frame data to define a respective blob in each frame of the plurality of frames, the blobs being indicative of a touch event; wherein the processor is further configured to execute the feature computation instructions to compute a plurality of feature sets for the touch event, each feature set specifying properties of the respective blob in each frame of the plurality of frames; and wherein the processor is further configured to execute the machine learning classification instructions to determine a type of the touch event via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the 
plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification. 21. The touch-sensitive device of claim 20, wherein each non-bimodal classification score is representative of a probability that the touch event is of a respective type. 22. The touch-sensitive device of claim 20, wherein: each non-bimodal classification score is a blob classification score rating for a respective frame of the plurality of frames; and the processor is further configured to execute the machine learning classification instructions to sum the blob classification score ratings over the plurality of frames. 23. The touch-sensitive device of claim 22, wherein the processor is further configured to execute the machine learning classification instructions to combine lookup table ratings from multiple lookup tables to compute each blob classification score rating. 24. The touch-sensitive device of claim 20, wherein the processor is further configured to execute the blob definition instructions to split a connected component into multiple blobs for separate analysis. 25. The touch-sensitive device of claim 20, wherein the processor is further configured to execute the blob definition instructions to define a track for each blob of the touch event across the plurality of frames. 26. The touch-sensitive device of claim 20, wherein the processor is further configured to execute the blob definition instructions to merge multiple connected components for analysis as a single blob. 27. 
A touch-sensitive device comprising: a touch-sensitive surface; a memory in which a plurality of instruction sets are stored; and a processor coupled to the memory and configured to execute the plurality of instruction sets, wherein the plurality of instruction sets comprises: first instructions to cause the processor to obtain frame data representative of a plurality of sensor images captured by the touch-sensitive device; second instructions to cause the processor to analyze the frame data to define a respective connected component in each sensor image of the plurality of sensor images, the connected components being indicative of a touch event; third instructions to cause the processor to compute a plurality of feature sets for the touch event, each feature set specifying properties of the respective connected component in each sensor image of the plurality of sensor images; fourth instructions to cause the processor to determine a type of the touch event via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification; and fifth instructions to cause the processor to provide an output to a computing system, the output being indicative of the type of the touch event; wherein the fourth instructions comprise aggregation instructions to cause the processor to aggregate information representative of the touch event over the plurality of sensor images. 28. The touch-sensitive device of claim 27, wherein: the fourth instructions are configured to cause the processor to apply the plurality of feature sets to a machine learning classifier; and the aggregation instructions are configured to cause the processor to aggregate the plurality of feature sets for the plurality of sensor images before applying the plurality of feature sets. 29. 
The touch-sensitive device of claim 27, wherein the aggregation instructions are configured to cause the processor to aggregate the multiple non-bimodal classification scores.
Please help me write a proper abstract based on the patent claims.
A method for touch classification includes obtaining frame data representative of a plurality of frames captured by a touch-sensitive device and analyzing the frame data to define a respective blob in each frame, the blobs being indicative of a touch event. A plurality of feature sets is computed for the touch event, each feature set specifying properties of the respective blob in each frame. A type of the touch event is then determined via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification.
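The multi-frame scoring scheme in the claims (per-frame ratings from two lookup tables combined as intended minus unintended, accumulated until a threshold is passed, otherwise iterate on the next frame) can be sketched as follows. The lookup-table contents, the area-bucket feature, and the thresholds are invented; real feature sets would be far richer.

```python
# Hypothetical per-frame ratings indexed by a coarse blob-area bucket
# (a stand-in for the real feature sets).
INTENDED_LUT = {0: 3, 1: 1, 2: -2}     # small blobs look like fingers
UNINTENDED_LUT = {0: -1, 1: 1, 2: 4}   # large blobs look like palms

FINGER_THRESHOLD, PALM_THRESHOLD = 5, -5   # invented classification thresholds

def area_bucket(area):
    return 0 if area < 50 else (1 if area < 200 else 2)

def classify_touch(blob_areas):
    """Aggregate per-frame scores; return 'finger', 'palm', or 'ambiguous'."""
    cumulative = 0
    for area in blob_areas:              # one blob area per frame of the track
        b = area_bucket(area)
        # blob classification rating score: intended rating minus unintended
        cumulative += INTENDED_LUT[b] - UNINTENDED_LUT[b]
        if cumulative >= FINGER_THRESHOLD:
            return "finger"
        if cumulative <= PALM_THRESHOLD:
            return "palm"
    return "ambiguous"                   # no threshold passed: need more frames

print(classify_touch([30, 40, 35]))      # small blobs across frames -> finger
```

The non-bimodal character of the scores shows up as the "ambiguous" outcome: the cumulative score carries how uncertain the classification still is, rather than forcing an immediate binary decision.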
1. A system for managing a rules engine, the system comprising: a computing device implementing a rules engine; a router to interact with the rules engine and rule engine results, the router configured to feed inputs to and receive outputs from the rules engine; and the router further configured to: programmatically route results of rule execution by the rules engine to a hierarchical structure stored in computer storage for access by subscriber devices. 2. The system of claim 1 wherein the rules engine operates with a defined rules format that supports routing information within the rule itself. 3. The system of claim 1 wherein the rules engine operates with a defined rules format that supports routing information that is associated with the rule. 4. The system of claim 1 wherein the router supports a publish/subscribe connectivity protocol. 5. The system of claim 1 wherein the router supports a MQTT connectivity protocol. 6. The system of claim 1 wherein the rules engine, when a rule executes, automatically publishes the output of the rule to a specific publish topic in the hierarchical structure. 7. The system of claim 1 wherein other processes subscribe to access the hierarchical structure to receive and process outputs of rules that fired. 8. The system of claim 1 wherein the hierarchical structure is arranged as sets of topics with a superset of topics being topics that are higher in the hierarchy than the topic subscribed to. 9. The system of claim 1 wherein the processes/devices are sensor processes and sensor devices. 10. A method for managing a network of sensor devices, the method comprising: firing a rule by a rules engine in a device, the firing of the rule providing rule results; and programmatically routing the rule results from the rules engine to a hierarchical structure stored in computer storage for access to the results by subscriber devices that subscribe to the rule. 11. 
The method of claim 10 wherein the rules engine operates with a defined rules format that supports routing information within the rule itself. 12. The method of claim 10 wherein programmatically routing further comprises: automatically publishing by the rules engine, an output of the rule to a specific publish topic in the hierarchical structure. 13. The method of claim 10 further comprising: subscribing to the hierarchical structure by other processes to receive and process outputs of rules that fired. 14. The method of claim 10 wherein the devices are sensor devices.
Please help me write a proper abstract based on the patent claims.
A networked system for managing a physical intrusion detection/alarm includes tiers of devices and a rules engine and router to interact with the rules engine and rule engine results, where the router is configured to feed inputs to and receive outputs from the rules engine, and the router further configured to programmatically route results of rule execution by the rules engine to a hierarchical structure stored in computer storage for access by subscriber devices.
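A minimal in-memory sketch of the routing idea: each rule carries its own publish topic (claim 2's "routing information within the rule itself"), and firing a rule publishes the result into a hierarchical topic space where subscribers pick it up. Topic names and the rule are hypothetical, and plain callbacks stand in for subscriber devices; a real deployment would use an MQTT broker rather than this toy matcher.

```python
# Hypothetical publish/subscribe routing of rule results over topic hierarchy.
from collections import defaultdict

subscriptions = defaultdict(list)   # topic pattern -> callbacks
received = []

def subscribe(pattern, callback):
    subscriptions[pattern].append(callback)

def matches(pattern, topic):
    """MQTT-style '#' wildcard: matches the topic and anything below it."""
    if pattern.endswith("/#"):
        return topic.startswith(pattern[:-2])
    return pattern == topic

def publish(topic, payload):
    for pattern, callbacks in subscriptions.items():
        if matches(pattern, topic):
            for cb in callbacks:
                cb(topic, payload)

def fire_rule(rule, reading):
    # The routing information travels with the rule itself.
    if rule["condition"](reading):
        publish(rule["topic"], {"rule": rule["name"], "value": reading})

motion_rule = {
    "name": "motion-after-hours",
    "topic": "site1/floor2/alarms/motion",
    "condition": lambda reading: reading > 0,
}

# A subscriber higher in the hierarchy receives outputs of rules that fired.
subscribe("site1/floor2/#", lambda t, p: received.append((t, p["rule"])))
fire_rule(motion_rule, 1)
```

Subscribing at `site1/floor2/#` illustrates claim 8's superset-of-topics arrangement: anything published deeper in that subtree reaches the subscriber.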
1. A method to identify anomalous behavior of a monitored entity, the method comprising, by a processing system: extracting features from data related to the operation of an entity; mapping the extracted features to states to generate a state sequence; determining an expected value of a metric based on the state sequence; and comparing the determined expected value of the metric to an observed value of the metric. 2. The method of claim 1, further comprising: presenting, via a user interface, a notification of anomalous behavior of the entity if the observed value of the metric differs from the expected value of the metric by a threshold amount. 3. The method of claim 1, wherein the metric is a performance metric or a sustainability metric. 4. The method of claim 1, wherein the data is reported by sensors monitoring various performance parameters of the entity. 5. The method of claim 4, wherein the data is recorded over the course of at least 24 hours of operation of the entity and the state sequence includes a plurality of distinct states. 6. The method of claim 1, wherein the expected value of the metric is determined using a state machine model previously trained on data related to the operation of one or more other entities of the same type as the entity. 7. The method of claim 1, wherein the expected value of the metric is determined using a mean value comparison technique, a distribution comparison technique, or a likelihood comparison technique. 8. A system to identify anomalous behavior of a monitored entity, the system comprising: sensors to report data regarding at least two parameters of an entity during operation; a feature extraction module to extract features from the reported data; a state sequence module to generate a state sequence by mapping the extracted features to a plurality of states; and an anomaly detection module to compare an expected value of a metric based on the state sequence to an observed value of the metric. 9. 
The system of claim 8, further comprising: a user interface to alert a user of anomalous behavior of the entity if the expected value of the metric differs from the observed value of the metric by a threshold amount. 10. The system of claim 9, wherein the user interface is configured to present a list of detected anomalies ordered by level of importance. 11. The system of claim 8, further comprising: a training module to build a state machine model based on observed operating parameters of one or more other entities of the same type as the entity. 12. The system of claim 8, further comprising: a memory storing a state machine model corresponding to the entity, wherein the anomaly detection module is configured to determine the expected value of the metric using information from the state machine model. 13. The system of claim 12, wherein the plurality of states into which the extracted features are mapped are predetermined based on state patterns in the state machine model. 14. The system of claim 13, wherein the state sequence module comprises a new-state detection module configured to detect a potential new state exhibited by a portion of the extracted features, wherein the potential new state corresponds to a pattern that does not exist in the state machine model. 15. The system of claim 8, wherein the system is configured to identify anomalous behavior in a plurality of monitored entities. 16. The system of claim 15, wherein the data reported by the sensors comprises measured parameters from each of the monitored entities, the state sequence module is configured to generate a state sequence for each of the monitored entities, and the anomaly detection module is configured to detect anomalous behavior in any one of or combination of the monitored entities. 17. The system of claim 15, wherein the plurality of monitored entities is an HVAC system. 18. 
A non-transitory computer-readable storage medium storing instructions for execution by a computer to identify anomalous behavior of a monitored entity, the instructions when executed causing the computer to: extract features from data characterizing operation of an entity during a time period; map the extracted features to states to generate a state sequence; determine an expected value of a metric based on the state sequence and a state machine model for the entity; compare the determined expected value of the metric to an observed value of the metric; and identify anomalous behavior if the expected value of the metric differs from the observed value of the metric. 19. The computer-readable storage medium of claim 18, the instructions when executed causing the computer to receive the data from a plurality of sensors monitoring performance parameters of the entity.
Please help me write a proper abstract based on the patent claims.
Described herein are techniques for identifying anomalous behavior of a monitored entity. Features can be extracted from data related to operation of an entity. The features can be mapped to a plurality of states to generate a state sequence. An observed value of a metric can be compared to an expected value of the metric based on the state sequence.
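The claimed pipeline can be sketched in a few lines. This is a hypothetical toy illustration, not the patented implementation: the state names, centroids, per-state expected metric values, and the anomaly threshold are all assumed, standing in for a trained state machine model.

```python
# Toy sketch: map extracted features to discrete states, derive the
# expected metric from the state sequence, and flag an anomaly when
# the observed metric diverges from the expectation by a threshold.
STATE_CENTROIDS = {"idle": 0.1, "normal": 0.5, "peak": 0.9}    # assumed states
EXPECTED_METRIC = {"idle": 10.0, "normal": 50.0, "peak": 90.0}  # assumed per-state metric

def map_to_state(feature: float) -> str:
    """Map one extracted feature to the nearest predefined state."""
    return min(STATE_CENTROIDS, key=lambda s: abs(STATE_CENTROIDS[s] - feature))

def expected_metric(features):
    """Expected metric = mean of per-state expectations over the sequence."""
    sequence = [map_to_state(f) for f in features]
    return sequence, sum(EXPECTED_METRIC[s] for s in sequence) / len(sequence)

def is_anomalous(features, observed, threshold=15.0):
    """Compare observed vs. expected metric (claims 1-2)."""
    _, expected = expected_metric(features)
    return abs(observed - expected) > threshold

seq, exp = expected_metric([0.12, 0.48, 0.95])
# seq is ["idle", "normal", "peak"]; exp is (10 + 50 + 90) / 3 = 50.0
```

In a real system the centroids and per-state expectations would come from the state machine model trained on other entities of the same type (claim 6).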
1. A method of manifold-aware ranking kernel learning programmed in a memory of a device comprising: a. performing combined supervised kernel learning and unsupervised manifold kernel learning; and b. generating a non-linear kernel model. 2. The method of claim 1 wherein Bregman projection is utilized when performing the supervised kernel learning. 3. The method of claim 1 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 4. The method of claim 1 wherein the result comprises a non-linear metric defined by a kernel model. 5. The method of claim 1 wherein the supervised kernel learning employs a relative comparison constraint. 6. The method of claim 1 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart phone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a portable music player, a tablet computer, a video player, a DVD writer/player, a high definition video writer/player, a television and a home entertainment system. 7. A method of information retrieval programmed in a memory of a device comprising: a. receiving a search query input; b. performing a search based on the search query input and using a metric kernel learned by manifold-aware ranking kernel learning; and c. presenting a search result of the search. 8. The method of claim 7 wherein manifold-aware ranking kernel learning comprises: i. performing combined supervised kernel learning and unsupervised manifold kernel learning; and ii. generating a non-linear kernel model. 9. The method of claim 8 wherein Bregman projection is utilized when performing the supervised kernel learning. 10. The method of claim 8 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 11. 
The method of claim 8 wherein the result comprises a non-linear metric defined by a kernel model. 12. The method of claim 8 wherein the supervised kernel learning employs a relative comparison constraint. 13. The method of claim 7 wherein the search result comprises a set of entities from a database that are similar to the search query input. 14. The method of claim 7 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart phone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a portable music player, a tablet computer, a video player, a DVD writer/player, a high definition video writer/player, a television and a home entertainment system. 15. An apparatus comprising: a. a non-transitory memory for storing an application, the application for: i. performing combined supervised kernel learning and unsupervised manifold kernel learning; and ii. generating a non-linear kernel model; and b. a processing component coupled to the memory, the processing component configured for processing the application. 16. The apparatus of claim 15 wherein Bregman projection is utilized when performing the supervised kernel learning. 17. The apparatus of claim 15 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 18. The apparatus of claim 15 wherein the result comprises a non-linear metric defined by a kernel model. 19. The apparatus of claim 15 wherein the supervised kernel learning employs a relative comparison constraint. 20. An apparatus comprising: a. a non-transitory memory for storing an application, the application for: i. receiving a search query input; ii. performing a search based on the search query input and using a metric kernel learned by manifold-aware ranking kernel learning; and iii. 
presenting a search result of the search; and b. a processing component coupled to the memory, the processing component configured for processing the application. 21. The apparatus of claim 20 wherein manifold-aware ranking kernel learning comprises: i. performing combined supervised kernel learning and unsupervised manifold kernel learning; and ii. generating a non-linear kernel model. 22. The apparatus of claim 21 wherein Bregman projection is utilized when performing the supervised kernel learning. 23. The apparatus of claim 21 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 24. The apparatus of claim 21 wherein the result comprises a non-linear metric defined by a kernel model. 25. The apparatus of claim 21 wherein the supervised kernel learning employs a relative comparison constraint. 26. The apparatus of claim 20 wherein the search result comprises a set of entities from a database that are similar to the search query input.
Please help me write a proper abstract based on the patent claims.
A manifold-aware ranking kernel (MARK) for information retrieval is described herein. The MARK is implemented using both supervised and unsupervised learning. MARK is ranking-oriented: its relative-comparison formulation directly targets the ranking problem, making the approach well suited to information retrieval. MARK is also manifold-aware: the algorithm can exploit information from ample unlabeled data, which helps improve generalization performance, particularly when there is a limited number of labeled constraints. MARK is non-linear: as a kernel-based approach, it yields a highly non-linear metric that can model complicated data distributions.
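The two ingredients of the claims — supervised relative-comparison (ranking) constraints and unsupervised manifold regularization over a kernel — can be illustrated with a toy learner. This is a sketch under assumptions: the projection-like additive updates, step sizes, margin, and the Laplacian smoothing term are simplified stand-ins, not the Bregman projection procedure of the claims.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gaussian base kernel over rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kdist(K, i, j):
    """Squared distance induced by kernel K."""
    return K[i, i] + K[j, j] - 2.0 * K[i, j]

def learn_mark_kernel(X, triplets, laplacian=None, iters=50, eta=0.05,
                      alpha=0.01, margin=0.1):
    """Toy MARK-style learner: each triplet (i, j, k) asks that i be
    closer to j than to k; unlabeled structure can be injected via a
    graph Laplacian smoothing step (assumed form, for illustration)."""
    K = rbf_kernel(X)
    for _ in range(iters):
        for i, j, k in triplets:  # ranking constraint: d(i,j) + margin <= d(i,k)
            if kdist(K, i, j) + margin > kdist(K, i, k):
                K[i, j] += eta; K[j, i] += eta  # pull (i, j) together
                K[i, k] -= eta; K[k, i] -= eta  # push (i, k) apart
        if laplacian is not None:  # unsupervised manifold smoothing
            K = K - alpha * (laplacian @ K + K @ laplacian) / 2.0
    return K
```

With points 0, 1, 2 at coordinates 0, 2, 1, the triplet (0, 1, 2) is initially violated (point 0 is nearer to 2 than to 1); after learning, the kernel-induced distances satisfy the ranking constraint.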
1. A method for aggregation of traffic impact metrics, the method comprising: associating, by one or more processors, each of a plurality of holidays with a holiday category of a plurality of holiday categories, wherein the plurality of holiday categories includes a first holiday category and a second holiday category; identifying, by one or more processors, a plurality of points of interest along a link of a transportation network; associating, by one or more processors, at least one of the plurality of points of interest with the first holiday category and at least one of the plurality of points of interest with the second holiday category; determining, by one or more processors, a mean category impact for each of the plurality of holiday categories; and determining, by one or more processors, an aggregated traffic impact metric based, at least in part, on the mean category impact of each of the plurality of holiday categories. 2. The method of claim 1, wherein the aggregated traffic impact metric corresponds to a date on which a holiday of the first holiday category occurs and on which a holiday of the second holiday category occurs. 3. The method of claim 2, further comprising: determining, by one or more processors, a mean weekday volume for each day of a week, wherein the mean weekday volume for each day of the week is based, at least in part, on historic traffic data for one or more previous days corresponding to the day of the week. 4. The method of claim 3, wherein the aggregated traffic impact metric is a function of: (i) the mean category impact for each of the plurality of holiday categories; (ii) a number of points of interest located along the link and associated with each of the plurality of holiday categories; and (iii) a total number of points of interest along the link. 5. 
The method of claim 4, wherein the mean category impact for each of the plurality of holiday categories is weighted based on a ratio of the number of points of interest along a link that are associated with the holiday category to the total number of points of interest along the link. 6. The method of claim 2, wherein the aggregated traffic impact metric is a sum of: (i) a mean weekday volume of the date; and (ii) the mean category impact for each of the plurality of holiday categories, wherein each mean category impact is weighted based, at least in part, on the mean category impact of each other holiday category. 7. The method of claim 3, wherein the mean category impact for each of the plurality of holiday categories is based on a traffic impact for each holiday of the holiday category. 8. The method of claim 7, wherein the traffic impact for each holiday is based on historical traffic data of one or more previous occurrences of the holiday and a mean weekday volume for a day of a week of each of the one or more previous occurrences. 9. 
A computer program product for aggregation of traffic impact metrics, the computer program product comprising: a computer readable storage medium and program instructions stored on the computer readable storage medium, the program instructions comprising: program instructions to associate each of a plurality of holidays with a holiday category of a plurality of holiday categories, wherein the plurality of holiday categories includes a first holiday category and a second holiday category; program instructions to identify a plurality of points of interest along a link of a transportation network; program instructions to associate at least one of the plurality of points of interest with the first holiday category and at least one of the plurality of points of interest with the second holiday category; program instructions to determine a mean category impact for each of the plurality of holiday categories; and program instructions to determine an aggregated traffic impact metric based, at least in part, on the mean category impact of each of the plurality of holiday categories. 10. The computer program product of claim 9, wherein the aggregated traffic impact metric corresponds to a date on which a holiday of the first holiday category occurs and on which a holiday of the second holiday category occurs. 11. The computer program product of claim 10, wherein the program instructions further comprise: program instructions to determine a mean weekday volume for each day of a week, wherein the mean weekday volume for each day of the week is based, at least in part, on historic traffic data for one or more previous days corresponding to the day of the week. 12. 
The computer program product of claim 11, wherein the aggregated traffic impact metric is a function of: (i) the mean category impact for each of the plurality of holiday categories; (ii) a number of points of interest located along the link and associated with each of the plurality of holiday categories; and (iii) a total number of points of interest along the link. 13. The computer program product of claim 12, wherein the mean category impact for each of the plurality of holiday categories is weighted based on a ratio of the number of points of interest along a link that are associated with the holiday category to the total number of points of interest along the link. 14. The computer program product of claim 10, wherein the aggregated traffic impact metric is a sum of: (i) a mean weekday volume of the date; and (ii) the mean category impact for each of the plurality of holiday categories, wherein each mean category impact is weighted based, at least in part, on the mean category impact of each other holiday category. 15. 
A computer system for aggregation of traffic impact metrics, the computer system comprising: one or more computer processors; one or more computer readable storage media; program instructions stored on the computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising: program instructions to associate each of a plurality of holidays with a holiday category of a plurality of holiday categories, wherein the plurality of holiday categories includes a first holiday category and a second holiday category; program instructions to identify a plurality of points of interest along a link of a transportation network; program instructions to associate at least one of the plurality of points of interest with the first holiday category and at least one of the plurality of points of interest with the second holiday category; program instructions to determine a mean category impact for each of the plurality of holiday categories; and program instructions to determine an aggregated traffic impact metric based, at least in part, on the mean category impact of each of the plurality of holiday categories. 16. The computer system of claim 15, wherein the aggregated traffic impact metric corresponds to a date on which a holiday of the first holiday category occurs and on which a holiday of the second holiday category occurs. 17. The computer system of claim 16, wherein the program instructions further comprise: program instructions to determine a mean weekday volume for each day of a week, wherein the mean weekday volume for each day of the week is based, at least in part, on historic traffic data for one or more previous days corresponding to the day of the week. 18. 
The computer system of claim 17, wherein the aggregated traffic impact metric is a function of: (i) the mean category impact for each of the plurality of holiday categories; (ii) a number of points of interest located along the link and associated with each of the plurality of holiday categories; and (iii) a total number of points of interest along the link. 19. The computer system of claim 18, wherein the mean category impact for each of the plurality of holiday categories is weighted based on a ratio of the number of points of interest along a link that are associated with the holiday category to the total number of points of interest along the link. 20. The computer system of claim 16, wherein the aggregated traffic impact metric is a sum of: (i) a mean weekday volume of the date; and (ii) the mean category impact for each of the plurality of holiday categories, wherein each mean category impact is weighted based, at least in part, on the mean category impact of each other holiday category.
Please help me write a proper abstract based on the patent claims.
Aggregation of traffic impact metrics is provided. Each of a plurality of holidays is associated with a holiday category of a plurality of holiday categories. The plurality of holiday categories includes a first holiday category and a second holiday category. A plurality of points of interest along a link of a transportation network is identified. At least one of the plurality of points of interest is associated with the first holiday category and at least one of the plurality of points of interest is associated with the second holiday category. A mean category impact for each of the plurality of holiday categories is determined. An aggregated traffic impact metric is determined based, at least in part, on the mean category impact of each of the plurality of holiday categories.
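The weighting described in claims 4-5 — each category's mean impact weighted by the share of the link's points of interest in that category — reduces to a short computation. The category names and impact numbers below are illustrative, not from the patent.

```python
# Sketch of claims 4-5: the aggregated traffic impact for a link is the
# POI-share-weighted sum of the mean impact of each holiday category.
def aggregated_impact(category_impacts, poi_counts):
    """category_impacts: {category: mean impact};
    poi_counts: {category: number of POIs on the link in that category}."""
    total_pois = sum(poi_counts.values())
    return sum(category_impacts[c] * poi_counts[c] / total_pois
               for c in category_impacts)

impacts = {"retail": 120.0, "religious": 40.0}  # assumed mean category impacts
pois = {"retail": 3, "religious": 1}            # assumed POIs along the link
# weighted result: 120 * 3/4 + 40 * 1/4 = 100.0
```

Per claim 6, this weighted term would then be summed with the mean weekday volume for the date to obtain the final metric.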
1. A method for annotating natural language text based on an emotive content of the natural language text, the method comprising the steps of: receiving, by one or more computer processors, a natural language text from a user, wherein the natural language text includes typing characteristics metadata, and wherein the typing characteristics metadata includes all of the following: a key press duration; a duration between key presses in the natural language text; a capitalization of the natural language text; a frequency of the capitalization of the natural language text; a frequency of spelling errors in the natural language text; an average word length in the natural language text; and previously deleted natural language text; determining, by one or more computer processors, an emotive content of the natural language text using a machine learning model; and determining, by one or more computer processors, an annotation to the natural language text based on the emotive content, wherein the annotation includes all of the following: an emoticon; a picture; an audio; a video; and a text that describes the emotive content. 2. The method of claim 1, further comprising: receiving, by one or more computer processors, an indication from the user, wherein the indication is to an accuracy of the modification to the natural language text; and updating, by one or more computer processors, the machine learning model based on the indication. 3. (canceled) 4. The method of claim 1, wherein the step of determining, by one or more computer processors, an emotive content of the natural language text using a machine learning model comprises: determining, by one or more computer processors, an emotive content of the natural language text using a machine learning model and the typing characteristics metadata. 5. (canceled) 6. (canceled) 7. The method of claim 1, wherein the annotation is modifying a font of the natural language text. 8. 
A computer program product for annotating natural language text based on an emotive content of the natural language text, the computer program product comprising: one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to receive a natural language text from a user, wherein the natural language text includes typing characteristics metadata, and wherein the typing characteristics metadata includes all of the following: a key press duration; a duration between key presses in the natural language text; a capitalization of the natural language text; a frequency of the capitalization of the natural language text; a frequency of spelling errors in the natural language text; an average word length in the natural language text; and previously deleted natural language text; program instructions to determine an emotive content of the natural language text using a machine learning model; and program instructions to determine an annotation to the natural language text based on the emotive content, wherein the annotation includes all of the following: an emoticon; a picture; an audio; a video; and a text that describes the emotive content. 9. The computer program product of claim 8, further comprising program instructions, stored on the one or more computer readable storage media, to: receive an indication from the user, wherein the indication is to an accuracy of the modification to the natural language text; and update the machine learning model based on the indication. 10. (canceled) 11. The computer program product of claim 8, wherein the program instructions to determine an emotive content of the natural language text using a machine learning model comprise: program instructions to determine an emotive content of the natural language text using a machine learning model and the typing characteristics metadata. 12. (canceled) 13. (canceled) 14. 
The computer program product of claim 8, wherein the annotation is modifying a font of the natural language text. 15. A computer system for annotating natural language text based on an emotive content of the natural language text, the computer system comprising: one or more computer processors; one or more computer readable storage media; and program instructions, stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising: program instructions to receive a natural language text from a user, wherein the natural language text includes typing characteristics metadata, and wherein the typing characteristics metadata includes all of the following: a key press duration; a duration between key presses in the natural language text; a capitalization of the natural language text; a frequency of the capitalization of the natural language text; a frequency of spelling errors in the natural language text; an average word length in the natural language text; and previously deleted natural language text; program instructions to determine an emotive content of the natural language text using a machine learning model; and program instructions to determine an annotation to the natural language text based on the emotive content, wherein the annotation includes all of the following: an emoticon; a picture; an audio; a video; and a text that describes the emotive content. 16. The computer system of claim 15, further comprising program instructions, stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, to: receive an indication from the user, wherein the indication is to an accuracy of the modification to the natural language text; and update the machine learning model based on the indication. 17. (canceled) 18. 
The computer system of claim 15, wherein the program instructions to determine an emotive content of the natural language text using a machine learning model comprise: program instructions to determine an emotive content of the natural language text using a machine learning model and the typing characteristics metadata. 19. (canceled) 20. (canceled)
Please help me write a proper abstract based on the patent claims.
A natural language text is received from a user. The natural language text includes typing characteristics metadata. An emotive content of the natural language text is determined using a machine learning model. An annotation to the natural language text is determined based on the emotive content.
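A trivial stand-in for the claimed machine learning model makes the flow concrete: score emotive content from typing-characteristics metadata, then pick an annotation. The feature weights, thresholds, and emoticons are assumptions for illustration only.

```python
# Hypothetical sketch: score typing-characteristics metadata and map
# the score to an annotation (emoticon + descriptive text, claim 1).
def emotive_score(meta):
    """Higher score = more agitated typing. Field names follow the claims."""
    score = 0.0
    score += 2.0 * meta.get("caps_frequency", 0.0)        # frequent capitals
    score += 1.5 * meta.get("spelling_error_rate", 0.0)   # frequent typos
    score += 100.0 / max(meta.get("inter_key_ms", 200), 1)  # fast key cadence
    return score

def annotate(text, meta):
    """Return an annotation for the text based on its emotive score."""
    score = emotive_score(meta)
    emoticon = ":-|" if score < 1.0 else ":-(" if score < 2.0 else ">:-("
    return {"text": text, "emoticon": emoticon,
            "description": "calm" if score < 1.0 else "agitated"}
```

The claims' feedback loop (claims 2, 9, 16) would then use the user's accuracy indication to update the model; here that would amount to adjusting the weights above.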
1. A cognitive state prediction system comprising: a receiving circuit configured to receive an electronic message sent by a first user; a labeling circuit configured to query a second user to associate a label with the electronic message based on a cognitive state of the first user; and a correlating circuit configured to correlate the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database. 2. The system of claim 1, further comprising an analyzing and label predicting circuit configured to predict a label of a current cognitive state of the first user based on current user data and a plurality of labels stored with the user data in the database. 3. The system of claim 1, further comprising an analyzing and label predicting circuit configured to: analyze the user data stored in the database; analyze a current state of the first user based on current user data being detected by at least one of the wearable and the external sensor; and predict a predicted label of a current cognitive state of the first user to send to associate with a current electronic message being sent by the first user. 4. The system of claim 3, wherein the predicted label includes a plurality of cognitive states of the first user, and wherein each of the plurality of cognitive states of the first user associated with the predicted label includes a confidence level for each of the plurality of cognitive states. 5. The system of claim 1, wherein the labeling circuit further queries the first user to confirm that the label input by the second user is correct for the electronic message. 6. The system of claim 1, wherein the database comprises an external database. 7. The system of claim 1, wherein the database includes pre-configured labels associated with user data of a cohort. 8. 
The system of claim 1, wherein the receiving circuit further receives electronic calendar data for the first user, and wherein the labeling circuit sends the electronic calendar data with the query to the second user. 9. The system of claim 1, wherein the labeling circuit sends electronic calendar data with the query to the second user such that the second user determines the label based on the cognitive state of the first user and the calendar data. 10. The system of claim 1, wherein the user data detected by at least one of the external sensor and the wearable includes at least one of: a glucose level; blood pressure; electrocardiogram (ECG); a breathing status; a heart rate; a stress level; a perspiration level; a facial expression; a measurement of a body movement; an eye movement; and a voice characteristic. 11. The system of claim 1, wherein the cognitive state of the first user that the second user associates the label with the electronic message comprises an interpreted cognitive state of the first user by the second user. 12. The system of claim 1, wherein the interpreted cognitive state of the first user by the second user is based on prior knowledge of the first user by the second user. 13. The system of claim 1, wherein the user data further includes data corresponding to a mode of the sending of the electronic message by the first user. 14. A cognitive state prediction method comprising: receiving an electronic message sent by a first user; querying a second user to associate a label with the electronic message based on a cognitive state of the first user; and correlating the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database. 15. 
The method of claim 14, further comprising analyzing and label predicting in which a label of a current cognitive state of the first user based on current user data and a plurality of labels stored with the user data in the database is predicted. 16. The method of claim 14, further comprising: analyzing the user data stored in the database; analyzing a current state of the first user based on current user data being detected by at least one of the wearable and the external sensor; and predicting a label of a current cognitive state of the first user to send to associate with a current electronic message being sent by the first user. 17. The method of claim 16, wherein the predicted label includes a plurality of cognitive states of the first user, and wherein each of the plurality of cognitive states of the first user associated with the predicted label includes a confidence level. 18. The method of claim 14, wherein the querying further queries the first user to confirm that the label input by the second user is correct for the electronic message. 19. The method of claim 14, wherein the database includes pre-configured labels associated with user data of a cohort. 20. A non-transitory computer-readable recording medium recording a cognitive state prediction program, the program causing a computer to perform: receiving an electronic message sent by a first user; querying a second user to associate a label with the electronic message based on a cognitive state of the first user; and correlating the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database.
Please help me write a proper abstract based on the patent claims.
A cognitive state prediction method, system, and non-transitory computer readable medium, include a receiving circuit configured to receive an electronic message sent by a first user, a labeling circuit configured to query a second user to associate a label with the electronic message based on a cognitive state of the first user, and a correlating circuit configured to correlate the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database.
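The correlate-then-predict loop of the claims can be sketched minimally: second-user labels are stored alongside the sender's sensor readings at send time, and the label for a new message is predicted from the nearest stored reading with a naive confidence (claims 3-4). The sensor fields and the nearest-neighbour rule are assumptions for illustration.

```python
# Toy sketch of the claimed correlating and label-predicting circuits.
class CognitiveStatePredictor:
    def __init__(self):
        self.db = []  # (sensor_vector, label) pairs — the claimed database

    def correlate(self, sensor_data, label):
        """Store a second-user label with wearable/sensor data at send time."""
        self.db.append((sensor_data, label))

    def predict(self, sensor_data):
        """Predict a label for current sensor data, with a naive confidence."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        stored, label = min(self.db, key=lambda rec: dist(rec[0], sensor_data))
        confidence = 1.0 / (1.0 + dist(stored, sensor_data))
        return label, confidence

p = CognitiveStatePredictor()
p.correlate((70, 0.2), "calm")        # assumed fields: heart rate, stress level
p.correlate((110, 0.9), "stressed")
label, conf = p.predict((108, 0.85))  # nearest stored reading is "stressed"
```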
1. A system for indicating a probability of errors in multimedia content, the system comprising: a memory; a processor communicatively coupled to the memory, where the processor is configured to perform monitoring work being performed on multimedia content; identifying distractions during the monitoring of the work being performed; calculating a probability of errors in at least one location of the multimedia content based on the distractions that have been identified; and annotating the location with an indication of the probability of errors. 2. The system of claim 1, wherein the multimedia content is text, sound, a 2-D picture, a 3-D picture, a 2-D video, a 3-D video, or a combination thereof. 3. The system of claim 1, wherein the monitoring work being performed includes contemporaneous monitoring of pop-ups on a graphical user interface, instant messaging, e-mail, operation of telephone, detection of other people within an area, switching of windows on a graphical user interface, amount of elapsed time on a given task, ambient noise, user eye-tracking, user typing speed, user heart rate, user breathing rate, user blink frequency, user skin conductance, or a combination thereof. 4. The system of claim 1, wherein the calculating the probability of errors includes using a function F(U,S,P) based on a determination of user state (U); a determination of sensitivity (S) of user input; and a determination of user characteristics stored in a profile (P). 5. The system of claim 4, wherein the user state (U) includes an output of the work being performed on the multimedia content, a day of week, a time of day, a location, or a combination thereof. 6. The system of claim 4, wherein the sensitivity (S) includes a location in the multimedia content, a category of the multimedia content, a complexity of the multimedia content, regulatory requirements, legal requirements, or a combination thereof. 7. 
The system of claim 4, wherein the profile (P) includes sensitivity according to times of day, history of creating errors, crowd-sourcing of a team, presence of specific individuals within a given area, vocational profile of user, or a combination thereof. 8. The system of claim 4, wherein the function F(U,S,P) uses machine learning. 9. The system of claim 1, wherein the annotating the location with an indication of the probability of errors includes annotating with color of text, color of background area, blinking font, font size, textures, insertion of tracking bubbles, or a combination thereof. 10. The system of claim 1, further comprising displaying the distractions that have been identified in conjunction with the annotating. 11. The system of claim 1, in which the multimedia content is an audio signal and the annotating includes graphical markers in a graphic representation of the audio signal, additional audio, or a combination thereof. 12. The system of claim 1, in which the multimedia content is a video signal and the annotating is graphical markers applied to sections of the video signal. 13. The system of claim 1, in which the multimedia content is a video game or virtual universe and the annotating indicates areas of user game play with probabilities of distractions. 14. A non-transitory computer program product for indicating a probability of errors in multimedia content, the computer program product comprising a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code configured to perform: monitoring work being performed on multimedia content; identifying distractions during the monitoring of the work being performed; calculating a probability of errors in at least one location of the multimedia content based on the distractions that have been identified; and annotating the location with an indication of the probability of errors. 15. 
The non-transitory computer program product of claim 14, wherein the multimedia content is text, sound, a 2-D picture, a 3-D picture, a 2-D video, a 3-D video, or a combination thereof. 16. The non-transitory computer program product of claim 14, wherein the monitoring work being performed includes contemporaneous monitoring of pop-ups on a graphical user interface, instant messaging, e-mail, operation of a telephone, detection of other people within an area, switching of windows on a graphical user interface, amount of elapsed time on a given task, ambient noise, user eye-tracking, user typing speed, user heart rate, user breathing rate, user blink frequency, user skin conductance, or a combination thereof. 17. The non-transitory computer program product of claim 14, wherein the calculating the probability of errors includes using a function F(U,S,P) based on a determination of user state (U); a determination of sensitivity (S) of user input; and a determination of user characteristics stored in a profile (P). 18. The non-transitory computer program product of claim 17, wherein the user state (U) includes an output of the work being performed on the multimedia content, a day of week, a time of day, a location, or a combination thereof. 19. The non-transitory computer program product of claim 17, wherein the sensitivity (S) includes a location in the multimedia content, a category of the multimedia content, a complexity of the multimedia content, regulatory requirements, legal requirements, or a combination thereof. 20. The non-transitory computer program product of claim 17, wherein the profile (P) includes sensitivity according to times of day, history of creating errors, crowd-sourcing of a team, presence of specific individuals within a given area, vocational profile of user, or a combination thereof.
Please help me write a proper abstract based on the patent claims.
Disclosed is a novel system and method for indicating a probability of errors in multimedia content. The system determines a user state or possible user distraction level. The user distraction level is indicated in the multimedia content. In one example, work being performed on the multimedia content is monitored. Distractions are identified while the work is being monitored. A probability of errors in at least one location of the multimedia content is calculated based on the distractions that have been identified. Annotations are used to indicate the probability of errors. In another example, the calculating of probability includes using a function F(U,S,P) based on a combination of: i) a determination of user state (U), ii) a determination of sensitivity (S) of user input, and iii) a determination of user characteristics stored in a profile (P).
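The F(U,S,P) combination described in claim 4 could be sketched roughly as below. The feature names, weights, and threshold are illustrative assumptions invented for this sketch, not values from the claims.

```python
def error_probability(user_state, sensitivity, profile):
    """Toy F(U, S, P): combine user state (U), content sensitivity (S),
    and a user profile (P) into a probability of errors.
    All weights below are invented for illustration."""
    # U: distraction signals from contemporaneous monitoring (claim 3).
    distraction = min(1.0, 0.1 * user_state.get("popups", 0)
                           + 0.2 * user_state.get("window_switches", 0))
    # S: sensitive locations (e.g. regulatory text) amplify the estimate (claim 6).
    weight = 1.5 if sensitivity.get("regulatory") else 1.0
    # P: the profile contributes a historical error rate (claim 7).
    base = profile.get("history_error_rate", 0.05)
    return min(1.0, base + 0.5 * distraction * weight)

def annotate(locations, user_state, profile, threshold=0.3):
    """Annotate each content location whose error probability exceeds a threshold."""
    return {loc: p for loc, sens in locations.items()
            if (p := error_probability(user_state, sens, profile)) > threshold}
```

A caller would map annotated locations to visual markers (colored text, blinking font, tracking bubbles) per claim 9.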
1. One or more non-transitory computer-readable storage media storing computer-executable instructions for causing a computing system to perform processing to execute a process model, the processing comprising: receiving a specification of a process model, the process model comprising a plurality of process components; determining a relationship between a first process component and another process component of the plurality of process components using a predictive model, the predictive model associated with at least a portion of the process model; defining a process rule for the first process component, the process rule specifying a second process component to be executed, the process rule comprising the relationship or a heuristic rule; and executing the second process component according to the process rule. 2. The one or more non-transitory computer-readable storage media of claim 1, wherein determining a relationship between a first and another process component of the plurality of process components using a predictive model comprises: issuing a request for execution of a stored procedure in a database system. 3. The one or more non-transitory computer-readable storage media of claim 2, wherein the request for execution of a stored procedure in a database system accesses a predictive model of at least a portion of the process model components. 4. The one or more non-transitory computer-readable storage media of claim 1, wherein determining a relationship between a first process component and another process component of the plurality of process components comprises requesting the analysis of the predictive model. 5. The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises receiving user input selecting the relationship or the heuristic rule. 6. 
The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises evaluating a confidence level associated with a factor of the predictive model. 7. The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises determining whether a current date is later than a threshold date and, if the current date is later than the threshold date, selecting the relationship as the process rule. 8. The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises determining whether a number of data points analyzed by the predictive model exceeds a threshold, and, if the threshold is exceeded, selecting the relationship as the process rule. 9. The one or more non-transitory computer-readable storage media of claim 1, the processing further comprising generating a data aggregation comprising data associated with the process model and data associated with the predictive model. 10. The one or more non-transitory computer-readable storage media of claim 1, wherein the defining selects the heuristic rule, the processing further comprising: selecting the relationship at a later time. 11. The one or more non-transitory computer-readable storage media of claim 1, wherein the defining selects the heuristic rule, and wherein the process model is received with heuristic rules forming process rules specifying an order in which the process components should be executed. 12. The one or more non-transitory computer-readable storage media of claim 1, wherein the predictive model is not directly associated with the process model. 13. 
The one or more non-transitory computer-readable storage media of claim 1, wherein determining a relationship between a first process component and another process component of the plurality of process components comprises determining a relationship between the first process component and each of a plurality of other process components of the plurality of process components. 14. The one or more non-transitory computer-readable storage media of claim 13, the processing further comprising: displaying a plurality of the relationships to a user. 15. The one or more non-transitory computer-readable storage media of claim 14, wherein defining a process rule for the first process component comprises receiving user input selecting one of the plurality of displayed relationships. 16. The one or more non-transitory computer-readable storage media of claim 1, wherein the determining and the defining is carried out on a component-by-component basis for the process model specification as the process model specification is executed. 17. 
A computing system that implements a process control engine, the computing system comprising: one or more memories; one or more processing units coupled to the one or more memories; and one or more non-transitory computer readable storage media storing instructions that, when loaded into the memories, cause the one or more processing units to perform operations for: implementing a computing platform comprising: a process control engine, the process control engine executing a process specification comprising a plurality of process components; and a rules framework in communication with the process control engine; implementing a database comprising: a data store; and a predictive modeling engine; generating a predictive model of at least a portion of the process components using the predictive modeling engine; determining a rule of the rule framework using the predictive model; and executing with the process control engine at least one of the plurality of process components according to the rule. 18. The computing system of claim 17, the operations further comprising: receiving user input selecting the rule for execution by the process control engine. 19. In a computing system comprising a memory and one or more processors, a method of executing a process specification according to a ruleset, the method comprising: defining a ruleset for the process using a plurality of heuristic rules, the process comprising a plurality of process components; defining a predictive model for at least a portion of the plurality of process components; executing the process according to the ruleset; training the predictive model using data obtained during the executing; determining a rule for the process specification using the predictive model; revising the ruleset to include the determined rule; and executing the process according to the revised ruleset. 20. The method of claim 19, further comprising: receiving user input revising the ruleset to include the rule.
Please help me write a proper abstract based on the patent claims.
A specification of a process model is received. The process model includes a plurality of process components. A relationship between a first process component and another process component of the plurality of process components is determined using a predictive model. A process rule for the first process component is determined. The process rule specifies a second process component to be executed. The process rule includes the relationship determined using the predictive model or a heuristic rule. The second process component is executed according to the process rule.
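As a rough sketch of the claimed rule definition (claims 7-8), a heuristic rule can govern routing until the predictive model has analyzed enough data points, after which the model-derived relationship is selected. The threshold, component names, and order identifiers below are illustrative assumptions.

```python
def define_process_rule(heuristic_next, predicted_next, n_data_points, min_points=100):
    """Select the process rule for the first component: the predictive
    model's relationship once enough data points back it (claim 8),
    otherwise the heuristic rule shipped with the process model (claim 11)."""
    return predicted_next if n_data_points >= min_points else heuristic_next

# Hypothetical process components keyed by name.
components = {"manual_review": lambda order: f"review {order}",
              "auto_approve": lambda order: f"approved {order}"}

# Early on, the heuristic rule routes to manual review ...
rule = define_process_rule("manual_review", "auto_approve", n_data_points=12)
early = components[rule]("order-1")
# ... later, the predictive model's relationship takes over.
rule = define_process_rule("manual_review", "auto_approve", n_data_points=250)
later = components[rule]("order-2")
```

Revising the ruleset at a later time (claims 10 and 19) is then just re-running `define_process_rule` as the model accumulates data.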
1. A method of operating a quantum computing device, comprising: causing a quantum computing device to evolve from a first state to a second state according to a schedule, the first state corresponding to a first Hamiltonian, the second state corresponding to a second Hamiltonian, wherein the schedule includes an X schedule for Hamiltonian terms in the X basis, and a Z schedule for Hamiltonian terms in the Z basis, and wherein the schedule is nonlinear or piecewise linear in the X schedule, the Z schedule, or both the X schedule and the Z schedule. 2. The method of claim 1, wherein the schedule includes one or more sequences where the X schedule and the Z schedule converge toward one another and one or more sequences where the X schedule and the Z schedule diverge from one another. 3. The method of claim 1, wherein the X schedule and the Z schedule intersect only in a latter half of the respective schedules. 4. The method of claim 1, wherein one or both of the X schedule or the Z schedule has terms that vary and wherein the variation in terms is greater in a latter half of the respective schedule than in a front half of the respective schedule. 5. 
The method of claim 1, further comprising generating the schedule by performing a schedule-training process beginning from an initial schedule, wherein the initial schedule includes one or more of: (a) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein the initial X schedule and the initial Z schedule are both constant; (b) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one of the initial X schedule or the initial Z schedule is constant, and the other one of the initial X schedule or the initial Z schedule is nonconstant; (c) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one of the initial X schedule or the initial Z schedule is linear, and the other one of the initial X schedule or the initial Z schedule is nonlinear and nonconstant; (d) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the initial X schedule or the initial Z schedule have terms that vary with greater degree in a latter half of the respective schedule; (e) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the initial X schedule or the initial Z schedule have terms that vary with greater degree in a latter half of the respective schedule; or (f) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the initial X schedule or the initial Z schedule have terms that are constant in a first half of the respective schedule and that vary in a second half of the respective schedule. 6. 
The method of claim 1, wherein the second Hamiltonian is a solution to an optimization problem, and wherein the schedule-training process uses one or more training problems having a size that is smaller than a size of the optimization problem. 7. The method of claim 1, further comprising generating the schedule by: modifying an initial schedule from its initial state to create a plurality of modified schedules; testing the modified schedules relative to one or more problem instances; and selecting one of the modified schedules based on an observed improvement in solving one or more of the problem instances. 8. The method of claim 7, wherein the generating further comprises iterating the acts of modifying, testing, and selecting until no further improvement is observed in the selected modified schedule. 9. The method of claim 1, wherein, for at least one step of the Z schedule or X schedule, the sign of the Z schedule or X schedule step is opposite of the sign of the respective final step of the Z schedule or X schedule. 10. The method of claim 1, wherein, for at least one step of the Z schedule or X schedule, the sign of the Z schedule or X schedule step switches from positive to negative or vice versa. 11. The method of claim 1, wherein one or more terms of the first Hamiltonian are noncommuting with corresponding terms of the second Hamiltonian. 12. A method, comprising: generating a learned schedule for controlling a quantum computing device by performing a schedule-training process beginning from an initial schedule, wherein the initial schedule includes an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis. 13. The method of claim 12, wherein at least one of the initial X schedule or the initial Z schedule is nonlinear. 14. The method of claim 12, wherein the initial X schedule and the initial Z schedule are both constant. 15. 
The method of claim 12, wherein one of the initial X schedule or the initial Z schedule is constant, and the other one of the initial X schedule or the initial Z schedule is nonconstant. 16. The method of claim 12, wherein one of the initial X schedule or the initial Z schedule is linear, and the other one of the initial X schedule or the initial Z schedule is nonlinear and nonconstant. 17. The method of claim 12, wherein one or both of the initial X schedule or the initial Z schedule have terms that vary with greater degree in a latter half of the respective schedule. 18. The method of claim 12, wherein one or both of the initial X schedule or the initial Z schedule have terms that are constant in a first half of the respective schedule and that vary in a second half of the respective schedule. 19. The method of claim 12, wherein the learned schedule includes one of: (a) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, wherein the learned schedule includes one or more sequences where the learned X schedule and the learned Z schedule converge toward one another and one or more sequences where the learned X schedule and the learned Z schedule diverge from one another; (b) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, wherein the learned X schedule and the learned Z schedule intersect only in a latter half of the respective schedules; 
(c) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the learned X schedule or the learned Z schedule have terms that vary and wherein the variation in terms is greater in a latter half of the respective schedule than in a front half of the respective schedule; (d) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, and wherein, for at least one step of the learned Z schedule or learned X schedule, the sign of the learned Z schedule or learned X schedule step is opposite of the sign of the respective final step of the learned Z schedule or learned X schedule; or (e) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, and wherein, for at least one step of the learned Z schedule or learned X schedule, the sign of the learned Z schedule or learned X schedule step switches from positive to negative or vice versa. 20. A system, comprising: a processor; and at least one memory coupled to the processor and having stored thereon processor-executable instructions for: generating a schedule for controlling a quantum computing device by performing a schedule-training process beginning from an initial schedule; and causing a quantum computing device to evolve from a first state to a second state according to the schedule, the first state corresponding to a first Hamiltonian, the second state corresponding to a second Hamiltonian, wherein the schedule includes an X schedule for Hamiltonian terms in the X basis, and a Z schedule for Hamiltonian terms in the Z basis, and wherein the schedule is nonlinear or piecewise linear in one or both of the X schedule or the Z schedule.
Please help me write a proper abstract based on the patent claims.
Among the embodiments disclosed herein are variants of the quantum approximate optimization algorithm with different parametrization. In particular embodiments, a different objective is used: rather than looking for a state which approximately solves an optimization problem, embodiments of the disclosed technology find a quantum algorithm that will produce a state with high overlap with the optimal state (given an instance, for example, of MAX-2-SAT). In certain embodiments, a machine learning approach is used in which a “training set” of problems is selected and the parameters optimized to produce large overlap for this training set. The approach was then tested on a larger problem set. When tested on the full set, the parameters that were found produced significantly larger overlap than optimized annealing times. Testing on other random instances (e.g., from 20 to 28 bits) continued to show improvement over annealing, with the improvement being most notable on the hardest problems. Embodiments of the disclosed technology can be used, for example, for near-term quantum computers with limited coherence times.
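The modify/test/select loop of claims 7-8 amounts to local search over the schedule's steps. The sketch below uses a deterministic coordinate search and a purely synthetic surrogate objective in place of overlap measured on real training instances; the step size, target shape, and objective are all illustrative assumptions.

```python
def train_schedule(initial, evaluate, step=0.1):
    """Greedy schedule training: perturb one schedule step at a time,
    keep a candidate only if it improves the objective, and iterate
    until no modification helps (claims 7-8)."""
    best, best_score = list(initial), evaluate(initial)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for delta in (-step, step):
                cand = list(best)
                cand[i] += delta          # modify the schedule
                score = evaluate(cand)    # test on training problems
                if score > best_score:    # select on observed improvement
                    best, best_score, improved = cand, score, True
    return best, best_score

# Surrogate objective: reward schedules whose steps ramp up in the latter
# half (a stand-in for overlap with the optimal state on training instances).
target = [0.0, 0.2, 0.5, 0.9]
objective = lambda s: -sum((a - b) ** 2 for a, b in zip(s, target))
learned, score = train_schedule([0.5, 0.5, 0.5, 0.5], objective)
```

Training on small instances and deploying the learned schedule on larger problems mirrors claim 6, where the training problems are smaller than the target optimization problem.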
1. A computer implemented method for developing a system to fabricate test data into a database, the method comprising: receiving, using a processor system, a file format layout of the database, wherein the database includes variables; defining rules independently of the file format layout of the database; receiving, using the processor system, the rules that are defined independently of the file format layout of the database; wherein the rules impose constraints on the variables; wherein the rules being defined independently of the file format layout prevents the rules from imposing any limit on a first manner in which the rules are defined; wherein the rules being defined independently of the file format layout prevents the rules from imposing any limit on a second manner in which relationships between and among the variables are defined; defining a constraint problem based on the variables and the constraints; and solving the constraint problem. 2. The computer implemented method of claim 1, wherein solving the constraint problem generates an assignment of fabricated test data to each one of the variables. 3. The computer implemented method of claim 2 further comprising generating an output comprising the file format layout having the fabricated test data, wherein the fabricated test data conforms to the rules. 4. The computer implemented method of claim 3, wherein the file format layout comprises a template. 5. The computer implemented method of claim 1, wherein the constraint problem is solved using a constraint satisfaction problem (CSP) solver. 6. The computer implemented method of claim 1, wherein the rules include an individual rule that imposes a constraint on more than one of the variables. 7. The computer implemented method of claim 1, wherein the file format layout is selected from the group consisting of: a database; a flat file; a message; a data stream; and a web service call. 8-20. (canceled)
Please help me write a proper abstract based on the patent claims.
Embodiments are directed to a computer implemented method for fabricating test data. The method includes receiving, using a processor system, a file format layout having variables. The method further includes receiving, using the processor system, rules that are defined independently of the file format layout, wherein the rules impose constraints on the variables. The method further includes defining a constraint problem based on the variables and the constraints, and solving the constraint problem.
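A minimal sketch of the claimed flow, assuming a brute-force enumerator in place of a production CSP solver: variables and domains come from the file format layout, while the rules are plain predicates defined independently of it (including one spanning two variables, as in claim 6). The layout variables and rules below are hypothetical.

```python
from itertools import product

def fabricate(domains, rules):
    """Return the first assignment of fabricated test data that satisfies
    every rule, or None if the constraint problem is unsatisfiable."""
    names = list(domains)
    for values in product(*domains.values()):
        assignment = dict(zip(names, values))
        if all(rule(assignment) for rule in rules):
            return assignment
    return None

# Hypothetical layout variables and rules for a customer record.
domains = {"age": range(18, 66), "discount": (0, 5, 10)}
rules = [lambda a: a["age"] >= 21,                        # single-variable rule
         lambda a: a["age"] < 60 or a["discount"] == 10]  # multi-variable rule
row = fabricate(domains, rules)
```

The returned assignment can then be written back into the file format layout as the fabricated output of claim 3.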
1. A non-transitory machine-readable medium storing instructions for relationship extraction executable by a machine to cause the machine to: apply unsupervised relationship learning to a logic knowledge base and a plurality of entity groups recognized from a document to provide a probabilistic model; perform joint inference on the probabilistic model to make simultaneous statistical judgments about a respective relationship between at least two entities in one of the plurality of entity groups; and extract a relationship between at least two entities in one of the plurality of entity groups based on the joint inference. 2. The medium of claim 1, wherein the respective relationship is an unidentified relationship between at least two entities in one of the plurality of entity groups and wherein the relationship between at least two entities is a most likely relationship between at least two entities in one of the plurality of entity groups. 3. The medium of claim 1, wherein the logic knowledge base includes a plurality of first-order logic formulas. 4. The medium of claim 3, wherein the probabilistic model includes the plurality of first-order logic formulas and a plurality of weights. 5. The medium of claim 4, wherein the instructions to apply unsupervised relationship learning include instructions to associate the plurality of weights with the plurality of first-order logic formulas and wherein each of the plurality of weights is associated with one of the plurality of first-order logic formulas. 6. The medium of claim 5, wherein the plurality of associated weights collectively provide a plurality of probabilities that are associated with the plurality of first-order logic formulas. 7. The medium of claim 6, wherein the probabilities of the plurality of first-order logic formulas are provided via a log-linear model. 8. 
The medium of claim 3, including instructions to extract the relationship between at least two entities in one of the plurality of entity groups using the plurality of first-order logic formulas. 9. A system for relationship extraction comprising a processing resource in communication with a non-transitory machine readable medium having instructions executed by the processing resource to implement: an unsupervised relationship learning engine to apply unsupervised relationship learning to a first-order logic knowledge base and a plurality of entity pairs recognized from a textual document to provide a probabilistic graphical model; a joint inference engine to perform joint inference on the probabilistic graphical model to make simultaneous statistical judgments about a respective relationship between each of the plurality of recognized entity pairs; and an extraction engine to extract a relationship between an entity pair based on the joint inference. 10. The system of claim 9, including instructions to extract a relationship between a recognized entity pair. 11. The system of claim 10, including instructions to make a plurality of probabilistic determinations in parallel for a plurality of recognized entity pairs to make a statistical judgment about the respective relationships between the plurality of recognized entity pairs. 12. The system of claim 9, wherein the instructions executable to extract the implicit relationship between the entity pair include instructions to relationally auto-correlate a variable pertaining to a first recognized entity pair with a variable pertaining to a second recognized entity pair to extract an implicit relationship between the entity pair based on the joint inference. 13. 
A method for relationship extraction comprising: applying unsupervised relationship learning to a first-order logic knowledge base and a plurality of entity pairs recognized from a textual document to provide a probabilistic graphical model, wherein a plurality of relationships between the plurality of recognized entity pairs are not labeled; performing joint inference on the probabilistic graphical model to make simultaneous statistical judgments about a respective relationship between each of the plurality of recognized entity pairs; and extracting an implicit relationship between an entity pair based on the joint inference. 14. The method of claim 13, wherein the textual document does not provide explicit support for the implicit relationship. 15. The method of claim 13, wherein a portion of the plurality of first-order logic formulas represent implicit relationships.
Please help me write a proper abstract based on the patent claims.
Relationship extraction can include applying unsupervised relationship learning to a logic knowledge base and a plurality of entity groups recognized from a document to provide a probabilistic model. Relationship extraction can include performing joint inference on the probabilistic model to make simultaneous statistical judgments about a respective relationship between at least two entities in one of the plurality of entity groups. Relationship extraction can include extracting a relationship between at least two entities in one of the plurality of entity groups based on the joint inference.
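In a Markov-logic-style reading of these claims, weighted first-order formulas define a log-linear model over joint relation assignments, and MAP inference selects the most likely joint assignment. The sketch below fakes this with two entity pairs, three relation labels, and invented weights; a real system would ground the formulas over a full document.

```python
from itertools import product
import math

pairs = [("Alice", "AcmeCo"), ("AcmeCo", "Paris")]
relations = ("works_for", "located_in", "none")

# Weighted formulas over a joint assignment (weights are invented).
formulas = [
    (1.5, lambda a: a[("Alice", "AcmeCo")] == "works_for"),
    (1.2, lambda a: a[("AcmeCo", "Paris")] == "located_in"),
    # Implicit relation: employment and company location reinforce each other.
    (0.8, lambda a: a[("Alice", "AcmeCo")] == "works_for"
                    and a[("AcmeCo", "Paris")] == "located_in"),
]

def joint_map(pairs, relations, formulas):
    """Exact MAP inference by enumeration: judge all pairs simultaneously,
    scoring each joint assignment by its total satisfied-formula weight."""
    best, best_score = None, -math.inf
    for combo in product(relations, repeat=len(pairs)):
        assignment = dict(zip(pairs, combo))
        score = sum(w for w, f in formulas if f(assignment))
        if score > best_score:
            best, best_score = assignment, score
    return best

extracted = joint_map(pairs, relations, formulas)
```

Because the third formula couples the two pairs, the judgments are made jointly rather than pair by pair, which is what lets an implicit relationship with no explicit textual support win out.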
1. A semantic entity relation detection classifier training system, comprising: one or more computing devices, said computing devices being in communication with each other via a computer network whenever there is a plurality of computing devices; and a computer program having program modules executable by the one or more computing devices, the one or more computing devices being directed by the program modules of the computer program to, receive a query click log and a knowledge graph, find queries included in the query click log that are associated with entities found in the knowledge graph, said entities being associated with a knowledge graph domain of interest, infer explicit relations from the found queries and generate an explicit relations data set comprising queries associated with the inferred explicit relations, infer implicit relations from the found queries and generate an implicit relations data set comprising queries associated with the inferred implicit relations, and train a semantic entity relation detection classifier using the explicit and implicit data sets to find an explicit or implicit relation, or both, in a query. 2. 
The system of claim 1, wherein the program module for finding queries included in the query click log that are associated with entities found in the knowledge graph, comprises sub-modules for: identifying one or more central entity types in the knowledge graph which correspond to a domain of interest; for each identified central entity type, finding central type entities in the knowledge graph that correspond to the central entity type under consideration, establishing a central entity type property list for each of the found central type entities that comprises the found central type entity and other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, as well as the type of relation existing between the found central type entity and each of the other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, combining the central entity type property list established for the identified central entity types to produce a combined entity property list, and finding queries associated with entities listed in the combined entity property list in the query click log. 3. The system of claim 2, wherein the sub-module for finding queries associated with entities listed in the combined entity property list in the query click log, comprises sub-modules for: creating a seed query from an entity in the combined entity property list; finding query click log queries that include the seed query; identifying uniform resource locators (URLs) from the query click log that are associated with at least one of the found queries; and finding other queries in the query click log that are associated with at least one of the identified URLs. 4. 
The system of claim 3, wherein the sub-module for finding queries associated with entities listed in the combined entity property list in the query click log, further comprises a sub-module for eliminating from consideration, prior to identifying URLs, those query click log queries found to include the seed query that do not meet a prescribed length criteria, or quantity criteria, or both. 5. The system of claim 2, wherein the sub-module for finding queries associated with entities listed in the combined entity property list in the query click log, comprises sub-modules for: identifying one or more relations of an entity in the combined entity property list each of which points to at least one URL in the knowledge graph; generating a list of the pointed to URLs; and finding queries in the query click log that are associated with at least one of the listed URLs. 6. The system of claim 2, further comprising a sub-module for, after queries associated with entities listed in the combined entity property list in the query click log are found, eliminating from consideration those found queries that are non-natural spoken language queries. 7. The system of claim 1, wherein an entity in the knowledge graph has said prescribed degree of relation to a central type entity whenever the entity is associated with an incoming relation from the central type entity, or is reachable in the knowledge graph from the central type entity within a prescribed number of relations. 8.
The system of claim 1, wherein the program module for inferring explicit relations from the found queries and generating an explicit relations data set comprising queries associated with the inferred explicit relations, comprises sub-modules for: scanning the found queries to find those queries exhibiting an inferred explicit relation between entities wherein an inferred explicit relation between entities is defined as the presence of an entity and a closely related entity in the same query, and wherein an entity is closely related to another entity whenever the entity is connected to the another entity in the knowledge graph by no more than a prescribed number of intermediate entities; determining the types of relation exhibited by a pair of entities in each query exhibiting an inferred explicit relation; and generating an explicit relations data set comprising the text of queries associated with the inferred explicit relations as well as the type of relation assigned to each of the entities in the pair. 9. The system of claim 8, wherein said prescribed number of intermediate entities is one, such that entities that were directly connected to each other are considered closely related, as well as entities that are connected to another entity by no more than one intermediate entity. 10. 
The system of claim 8, wherein the sub-module for scanning the found queries to find those queries exhibiting an inferred explicit relation between entities, comprises sub-modules for: determining if an entity associated with a found query is connected in the knowledge graph to another entity by a directed connector or path of connectors originating at the entity associated with a found query by no more than the prescribed number of intermediate entities; whenever the entity associated with the found query is connected in the knowledge graph to another entity by a directed connector or path of connectors originating at the entity associated with a found query by no more than the prescribed number of intermediate entities, determining if said other entity is also contained in the found query; and whenever said other entity is also contained in the found query, designating the found query as exhibiting an inferred explicit relation between the entities. 11. The system of claim 10, wherein the sub-module for scanning the found queries to find those queries exhibiting an inferred explicit relation between entities, further comprises sub-modules for: for a query designated as exhibiting an inferred explicit relation between a pair of entities contained therein, identifying the relation label assigned to each connector connecting the pair of entities in the knowledge graph; determining the relation of said other entity of the entity pair based on the identified relation label or labels and assigning the determined relation to said other entity of the entity pair; and assigning the relation of the entity associated with a found query, if known, to that entity of the entity pair. 12.
The system of claim 8, wherein the sub-module for scanning the found queries to find those queries exhibiting an inferred explicit relation between entities, comprises sub-modules for: identifying an entity pair in the knowledge graph having a first entity of the pair that is connected to another entity of the pair by a directed connector or path of connectors originating at the first entity by no more than the prescribed number of intermediate entities, and whose connector or connectors connecting the pair of entities have relation label or labels that correspond to a semantic entity relation type associated with a domain of interest; determining if a found query contains the identified entity pair; and whenever the found query contains the identified entity pair, designating the found query as exhibiting an inferred explicit relation between the entities, assigning the semantic entity relation type associated with the domain of interest to said other entity of the entity pair, and assigning the relation of the first entity of the pair, if known, to that entity. 13. 
A semantic entity relation detection classifier training system, comprising: one or more computing devices, said computing devices being in communication with each other via a computer network whenever there is a plurality of computing devices; and a computer program having program modules executable by the one or more computing devices, the one or more computing devices being directed by the program modules of the computer program to, receive a query click log and a knowledge graph, find queries included in the query click log that are associated with entities found in the knowledge graph, said entities being associated with a knowledge graph domain of interest, infer implicit relations from the found queries and generate an implicit relations data set comprising queries associated with the inferred implicit relations, and train a semantic entity relation detection classifier using at least the implicit data set to find a relation in a query. 14. The system of claim 13, wherein the program module for finding queries included in the query click log that are associated with entities found in the knowledge graph, comprises sub-modules for: identifying one or more central entity types in the knowledge graph which correspond to a domain of interest; for each identified central entity type, finding central type entities in the knowledge graph that correspond to the central entity type under consideration, establishing a central entity type property list for each of the found central type entities that comprises the found central type entity and other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, as well as the type of relation existing between the found central type entity and each of the other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, combining the central entity type property list established for the identified central 
entity types to produce a combined entity property list, and finding queries associated with entities listed in the combined entity property list in the query click log. 15. The system of claim 14, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: for each of one or more of the found queries, using the query click log to identify from a found query the URL associated with a result presented from a search of the query that was selected by a user, determining if an entity associated with the identified URL is found in the query, wherein an entity is associated with a URL if the entity points to that URL in the knowledge graph, whenever the entity associated with the identified URL is not found in the query, using said combined entity property list to identify a central entity type related to the entity associated with the identified URL and what type of relation exists between that central entity type and the entity associated with the identified URL, and inferring the existence of an implicit relation from the found query and assigning the identified relation type to the entity associated with the identified URL; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the entity associated with the URL identified from that query. 16.
The system of claim 14, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: for each of one or more of the found queries, using the query click log to identify from a found query the URL associated with a result presented from a search of the query that was selected by a user, determining if an entity associated with the identified URL is found in the query, wherein an entity is associated with a URL if the entity points to that URL in the knowledge graph, whenever the entity associated with the identified URL is not found in the query, using said combined entity property list to identify a central entity type related to the entity associated with the identified URL and determining if the identified central entity type is found in the query, whenever the identified central entity type is found in the query, identifying what type of relation exists between that central entity type and the entity associated with the identified URL, and inferring the existence of an implicit relation from the found query and assigning the identified relation type to the entity associated with the identified URL; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the entity associated with the URL identified from that query. 17.
The system of claim 13, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: identifying, for one or more semantic entity relation types associated with a domain of interest, at least one entity pair in the knowledge graph having a first entity of the pair that is connected to another entity of the pair by a directed connector or path of connectors originating at the first entity by no more than the prescribed number of intermediate entities, and whose connector or connectors connecting the pair of entities have a relation label or labels that correspond to the semantic entity relation type associated with a domain of interest; determining, for each entity pair identified, if a found query contains the first entity of the pair, but not the other entity of the pair, whenever the found query contains the first entity of the pair, but not the other entity of the pair, using the query click log to identify from the found query the URL associated with a result presented from a search based on the query that was selected by a user, and determining if the other entity of the pair is associated with the identified URL, wherein an entity is associated with a URL if the entity points to that URL in the knowledge graph, whenever the other entity of the pair is associated with the identified URL, designating the found query as inferring an implicit relation, and assigning the semantic entity relation type associated with the domain of interest to said other entity of the entity pair, and assigning the relation of the first entity of the pair, if known, to that entity; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the first entity of an entity pair associated with the
query and the type of relation assigned to said other entity of the entity pair. 18. The system of claim 13, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: identifying, for one or more semantic entity relation types associated with a domain of interest, those found queries having the name of the relation type or a variation thereof contained therein, and at least one entity pair in the knowledge graph having a first entity of the pair that is connected to another entity of the pair by a directed connector or path of connectors originating at the first entity by no more than the prescribed number of intermediate entities, and whose connector or connectors connecting the pair of entities have a relation label or labels that correspond to the semantic entity relation type; determining, for each entity pair identified and each found query identified, if the query contains the first entity of the pair, but not the other entity of the pair, whenever the found query contains the first entity of the pair, but not the other entity of the pair, designating the found query as inferring an implicit relation, and assigning the semantic entity relation type associated with the domain of interest to said other entity of the entity pair, and assigning the relation of the first entity of the pair, if known, to that entity; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the first entity of an entity pair associated with the query and the type of relation assigned to said other entity of the entity pair. 19.
The system of claim 13, further comprising a program module for inferring explicit relations from the found queries and generating an explicit relations data set comprising queries associated with the inferred explicit relations, and wherein the program module for training the semantic entity relation detection classifier comprises training the semantic entity relation detection classifier using the explicit and implicit data sets to find an explicit or implicit relation, or both, in a query. 20. A semantic entity relation detection classifier training system, comprising: one or more computing devices, said computing devices being in communication with each other via a computer network whenever there is a plurality of computing devices; and a computer program having program modules executable by the one or more computing devices, the one or more computing devices being directed by the program modules of the computer program to, receive a query click log and a knowledge graph, find queries included in the query click log that are associated with entities found in the knowledge graph, said entities being associated with a knowledge graph domain of interest, infer explicit relations from the found queries and generate an explicit relations data set comprising queries associated with the inferred explicit relations, infer implicit relations from the found queries and generate an implicit relations data set comprising queries associated with the inferred implicit relations, train a first classifier using the implicit relations data set to produce an implicit relations classifier that can find implicit relations in a query, apply the implicit relations classifier to each of the queries in the explicit relations data set to find queries predicted to have an implicit relation or implicit relations, augment the explicit relations data set, said augmenting comprising, for each query in the explicit relations data set predicted to have an implicit relation or implicit 
relations, adding the implicit relation or implicit relations predicted for that query to the explicit relations data set entry associated with the query to produce an augmented explicit relations data set, and train a second classifier using the augmented explicit relations data set to produce a combined relations classifier that can find explicit, or implicit relations, or both, in a query.
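The two-stage pipeline of claim 20 (training an implicit relations classifier, applying it to the explicit relations data set, and augmenting that set before training a combined classifier) can be sketched as follows. This is a minimal illustration with hypothetical toy data, using a trivial keyword lookup in place of a real statistical classifier.

```python
def train_implicit_classifier(implicit_data):
    """Learn a keyword -> relation lookup from the implicit relations data set."""
    lookup = {}
    for query, relation in implicit_data:
        for word in query.split():
            lookup.setdefault(word, relation)

    def classify(query):
        # Predict the set of implicit relations suggested by the query's words.
        return sorted({lookup[w] for w in query.split() if w in lookup})

    return classify

def augment_explicit_set(explicit_data, implicit_classifier):
    """Add predicted implicit relations to each explicit relations data set entry."""
    return [(query, relations + implicit_classifier(query))
            for query, relations in explicit_data]

# Hypothetical toy data sets: (query text, relation annotation(s)).
implicit_data = [("movies directed by nolan", "directed_by")]
explicit_data = [("movies directed by nolan starring bale", ["stars"])]

clf = train_implicit_classifier(implicit_data)
combined_training_set = augment_explicit_set(explicit_data, clf)
# Each augmented entry now carries both its explicit and predicted
# implicit relations, ready for training the combined relations classifier.
```

The augmented entries then serve as labeled examples for the second (combined) classifier, so a single model can detect both kinds of relation in a query.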
Please help me write a proper abstract based on the patent claims.
Semantic entity relation detection classifier training implementations are presented that are generally used to train a semantic entity relation detection classifier to identify relations expressed in a natural language query. In one general implementation, queries are found in a search query click log that exhibit relations and entity types found in a semantic knowledge graph. Explicit relations are inferred from the found queries and an explicit relations data set is generated that includes queries associated with the inferred explicit relations. In addition, implicit relations are inferred from the found queries and an implicit relations data set is generated that includes queries associated with the inferred implicit relations. A semantic entity relation detection classifier is then trained using the explicit and implicit data sets.
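The explicit-relation inference described above can be illustrated concretely: a query exhibits an explicit relation when it contains an entity together with a closely related entity from the knowledge graph. The following minimal sketch uses a hypothetical two-edge graph and assumes entities are single lowercase tokens.

```python
# Hypothetical knowledge graph: entity -> {related entity: relation label}.
knowledge_graph = {
    "nolan": {"inception": "directed"},
    "inception": {"2010": "release_year"},
}

def infer_explicit_relations(query, graph):
    """Return (entity, related entity, relation) triples co-occurring in the query."""
    words = set(query.lower().split())
    found = []
    for entity, neighbours in graph.items():
        if entity not in words:
            continue
        for other, relation in neighbours.items():
            if other in words:  # both entities appear in the same query
                found.append((entity, other, relation))
    return found

triples = infer_explicit_relations("movies nolan inception", knowledge_graph)
```

A query matching a triple would be added to the explicit relations data set along with the relation label recovered from the graph.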
1. A computerized system for evaluating the likelihood of technology change incidents, comprising: a computer apparatus including a processor, a memory, and a network communication device; and a technology change evaluation module stored in the memory, executable by the processor, and configured for: retrieving a plurality of encoded records regarding a plurality of historic information technology operational activities from an activity record database; decoding each of the plurality of encoded records into a plurality of decoded records, each of the decoded records comprising a binary value in each of a plurality of data fields, the plurality of data fields including a first data field defining whether one of the historic information technology operational activities is associated with a prior technology incident; processing the decoded records using a technology incident predictive model to produce an incident predictive algorithm for predicting whether a technology change event will cause a technology incident, the incident predictive algorithm defining a subset of the data fields and a weight factor for each data field in the subset of the data fields; retrieving a change record related to a future technology change event, the change record comprising change information related to one or more of the plurality of data fields; and evaluating the change information in the change record using the incident predictive algorithm to determine a likelihood that the future technology change event will cause a future technology incident. 2. 
The computerized system according to claim 1, wherein: the plurality of decoded records are associated with a first time period; the technology change evaluation module is configured for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm. 3. The computerized system according to claim 1, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field. 4. The computerized system according to claim 1, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field. 5. The computerized system according to claim 1, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 6. 
The computerized system according to claim 1, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; the plurality of decoded records are associated with a first time period; the technology change evaluation module is configured for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises: processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm; performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field; performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field; and performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 7. The computerized system according to claim 1, wherein the technology change evaluation module is configured for periodically updating the incident predictive algorithm. 8. 
A computer program product for evaluating the likelihood of technology change incidents, comprising a non-transitory computer-readable storage medium having computer-executable instructions for: retrieving a plurality of encoded records regarding a plurality of historic information technology operational activities from an activity record database; decoding each of the plurality of encoded records into a plurality of decoded records, each of the decoded records comprising a binary value in each of a plurality of data fields, the plurality of data fields including a first data field defining whether one of the historic information technology operational activities is associated with a prior technology incident; processing the decoded records using a technology incident predictive model to produce an incident predictive algorithm for predicting whether a technology change event will cause a technology incident, the incident predictive algorithm defining a subset of the data fields and a weight factor for each data field in the subset of the data fields; retrieving a change record related to a future technology change event, the change record comprising change information related to one or more of the plurality of data fields; and evaluating the change information in the change record using the incident predictive algorithm to determine a likelihood that the future technology change event will cause a future technology incident. 9. 
The computer program product according to claim 8, wherein: the plurality of decoded records are associated with a first time period; the non-transitory computer-readable storage medium has computer-executable instructions for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm. 10. The computer program product according to claim 8, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field. 11. The computer program product according to claim 8, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field. 12. The computer program product according to claim 8, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 13. 
The computer program product according to claim 8, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; the plurality of decoded records are associated with a first time period; the non-transitory computer-readable storage medium has computer-executable instructions for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises: processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm; performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field; performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field; and performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 14. The computer program product according to claim 8, wherein the non-transitory computer-readable storage medium has computer-executable instructions for periodically updating the incident predictive algorithm. 15. 
A computerized method for evaluating the likelihood of technology change incidents, comprising: retrieving, via a computer processor, a plurality of encoded records regarding a plurality of historic information technology operational activities from an activity record database; decoding, via a computer processor, each of the plurality of encoded records into a plurality of decoded records, each of the decoded records comprising a binary value in each of a plurality of data fields, the plurality of data fields including a first data field defining whether one of the historic information technology operational activities is associated with a prior technology incident; processing, via a computer processor, the decoded records using a technology incident predictive model to produce an incident predictive algorithm for predicting whether a technology change event will cause a technology incident, the incident predictive algorithm defining a subset of the data fields and a weight factor for each data field in the subset of the data fields; retrieving, via a computer processor, a change record related to a future technology change event, the change record comprising change information related to one or more of the plurality of data fields; and evaluating, via a computer processor, the change information in the change record using the incident predictive algorithm to determine a likelihood that the future technology change event will cause a future technology incident. 16. 
The computerized method according to claim 15, wherein: the plurality of decoded records are associated with a first time period; the method comprises incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm. 17. The computerized method according to claim 15, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field. 18. The computerized method according to claim 15, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field. 19. The computerized method according to claim 15, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 20. 
The computerized method according to claim 15, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; the plurality of decoded records are associated with a first time period; the computerized method comprises incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises: processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm; performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field; performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field; and performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field.
Please help me write a proper abstract based on the patent claims.
Embodiments of the present invention relate to apparatuses, systems, methods and computer program products for a technology configuration system. Specifically, the system typically provides operational data processing of a plurality of records associated with information technology operational activities, for dynamic transformation of data and evaluation of interdependencies of technology resources. In other aspects, the system typically provides technical language processing of the plurality of records for transforming technical and descriptive data, and constructing categorical activity records. The system may be configured to achieve significant reduction in memory storage and processing requirements by performing categorical data encoding of the plurality of records. The system may employ a dynamic categorical data decoding process, which delivers a reduction in processing time when the encoded records are decoded for evaluating the exposure of technology change events to technology incidents and modifying such technology change events.
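The categorical encoding and dynamic decoding described in the abstract above can be sketched as follows. This is a minimal illustration only, assuming a simple code-table scheme; the function names (`encode_records`, `decode_records`) and the `"activity"` field are hypothetical and do not come from the claims.

```python
# Hypothetical sketch: encode categorical record fields as small integer
# codes (reducing storage), and decode them on demand.

def encode_records(records, field):
    """Map each distinct value of `field` to a small integer code."""
    codebook = {}
    encoded = []
    for rec in records:
        value = rec[field]
        # setdefault assigns the next free code the first time a value is seen
        code = codebook.setdefault(value, len(codebook))
        encoded.append(code)
    return encoded, codebook

def decode_records(encoded, codebook):
    """Dynamically decode integer codes back to their original values."""
    inverse = {code: value for value, code in codebook.items()}
    return [inverse[code] for code in encoded]

records = [{"activity": "patch"}, {"activity": "reboot"}, {"activity": "patch"}]
codes, book = encode_records(records, "activity")
print(codes)                         # [0, 1, 0]
print(decode_records(codes, book))   # ['patch', 'reboot', 'patch']
```

Storing repeated string categories as integers is one plausible way a system could achieve the memory-reduction effect the abstract claims, with decoding deferred until the records are actually evaluated.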
1. A non-transitory recording medium having recorded thereon a machine learning result editing program that is a processing program configured to generate a group of relevant words on a basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on a basis of input data, the machine learning result editing program that causes a computer to execute a process comprising: causing a display unit to display the generated group of relevant words; and exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated. 2. The non-transitory recording medium according to claim 1 having recorded thereon the machine learning result editing program, wherein the machine learning result editing program causes the computer to execute the process further comprising: when learning a new piece of input data in a machine learning process, learning the new piece of input data in the machine learning process while using, as an initial value, a parameter used for expressions of words included in the group other than the word for which the elimination designation has been received. 3. The non-transitory recording medium according to claim 1 having recorded thereon the machine learning result editing program, wherein the group of relevant words is a group containing a relatively large number of words that are, as individual words, used in predetermined expressions close to each other in a result of learning the expressions of the words. 4. 
A method for editing a machine learning result that is a processing method by which a group of relevant words is generated on a basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on a basis of input data, wherein a computer is caused to execute a process comprising: causing a display unit to display the generated group of relevant words, using a processor; and exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated, using the processor. 5. The method for editing the machine learning result according to claim 4, wherein the computer is caused to execute the process further comprising: when learning a new piece of input data in a machine learning process, learning the new piece of input data in the machine learning process while using, as an initial value, a parameter used for expressions of words, included in the group other than the word for which the elimination designation has been received, using the processor. 6. The method for editing the machine learning result according to claim 4, wherein the group of relevant words is a group containing a relatively large number of words that are, as individual words, used in predetermined expressions close to each other in a result of learning the expressions of the words. 7. 
An information processing apparatus that generates a group of relevant words on a basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on a basis of input data, the information processing apparatus comprising: a memory; and a processor coupled to the memory, wherein the processor executes a process comprising: causing a display unit to display the generated group of relevant words; and exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated. 8. The information processing apparatus according to claim 7, wherein, when learning a new piece of input data in a machine learning process, the exercising includes learning the new piece of input data in the machine learning process while using, as an initial value, a parameter used for expressions of words included in the group other than the word for which the elimination designation has been received. 9. The information processing apparatus according to claim 7, wherein the group of relevant words is a group containing a relatively large number of words that are, as individual words, used in predetermined expressions close to each other in a result of learning the expressions of the words.
Please help me write a proper abstract based on the patent claims.
A machine learning result editing program recorded on a recording medium causes a computer to execute a process of generating a group of relevant words on the basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on the basis of input data. The machine learning result editing program causes the computer to execute: a process of causing a display unit to display the generated group of relevant words; and a process of exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated.
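One plausible reading of the claims above is that the "expressions of the words" are learned word vectors, a "group of relevant words" is a neighborhood in that vector space, and elimination simply removes a designated word before downstream use. The sketch below illustrates that reading; the functions, vectors, and the 0.9 threshold are all illustrative assumptions, not taken from the patent.

```python
import math

# Hypothetical sketch: form a group of relevant words from learned word
# vectors, then honor a user's designation to eliminate one word from the
# group before it is used.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relevant_group(vectors, seed, threshold=0.9):
    """Words whose learned vectors are close to the seed word's vector."""
    base = vectors[seed]
    return {w for w, v in vectors.items() if cosine(base, v) >= threshold}

def eliminate(group, word):
    """Apply the elimination designation to the generated group."""
    return group - {word}

vectors = {
    "car": [1.0, 0.1],
    "truck": [0.9, 0.2],
    "banana": [0.0, 1.0],
}
group = relevant_group(vectors, "car")          # {'car', 'truck'}
print(sorted(eliminate(group, "truck")))        # ['car']
```

Any later process that consumes the group would then see only the edited membership, matching the control the claims describe.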
1. A method comprising: receiving or accessing data encapsulating a sample of at least a portion of one or more files; feeding at least a portion of the received or accessed data as a time-based sequence into a recurrent neural network (RNN) trained using historical data; extracting, by the RNN, a final hidden state hi in a hidden layer of the RNN in which i is a number of elements of the sample; and determining, using the RNN and the final hidden state, whether at least a portion of the sample is likely to comprise malicious code. 2. The method of claim 1, wherein the received or accessed data forms at least part of a data stream. 3. The method of claim 1, wherein the at least a portion of the received or accessed data comprises a series of fixed-length encoded words. 4. The method of claim 1, wherein the elements comprise a series of instructions. 5. The method of claim 1, wherein the hidden state is defined by: ht=f(x, ht-1), wherein hidden state ht is a time-dependent function of input x as well as a previous hidden state ht-1. 6. The method of claim 1, wherein the RNN is an Elman network. 7. The method of claim 6, wherein the Elman network has deep transition or decoding functions. 8. The method of claim 6, wherein the Elman network parameterizes f(x, ht-1) as ht=g(W1x+Rht-1); where hidden state ht is a time-dependent function of input x as well as previous hidden state ht-1, W1 is a matrix defining input-to-hidden connections, R is a matrix defining the recurrent connections, and g(·) is a differentiable nonlinearity. 9. The method of claim 8 further comprising: adding an output layer on top of the hidden layer, such that ot=σ(W2ht) where ot is output, W2 defines a linear transformation of hidden activations, and σ(·) is a logistic function. 10. 
The method of claim 9 further comprising: applying backpropagation through time by which parameters of network W2, W1, and R are iteratively refined to drive the output ot to a desired value as portions of the received or accessed data are passed through the RNN. 11. The method of claim 1, wherein the RNN is a long short term memory network. 12. The method of claim 1, wherein the RNN is a clockwork RNN. 13. The method of claim 1, wherein the RNN is a deep transition function. 14. The method of claim 1, wherein the RNN is an echo-state network. 15. The method of claim 1 further comprising: providing data characterizing the determination. 16. The method of claim 15, wherein providing data comprises at least one of: transmitting the data to a remote computing system, loading the data into memory, or storing the data. 17. The method of claim 1, wherein the files are binary files. 18. The method of claim 1, wherein the files are executable files. 19. A system comprising: at least one programmable data processor; and memory storing instructions which, when executed by the at least one programmable data processor, result in operations comprising: receiving or accessing data encapsulating a sample of at least a portion of one or more files; feeding at least a portion of the received or accessed data as a time-based sequence into a recurrent neural network (RNN) trained using historical data; extracting, by the RNN, a final hidden state hi in a hidden layer of the RNN in which i is a number of elements of the sample; and determining, using the RNN and the final hidden state, whether at least a portion of the sample is likely to comprise malicious code. 20. 
A non-transitory computer program product storing instructions which, when executed by at least one programmable data processor forming part of at least one computing device, result in operations comprising: receiving or accessing data encapsulating a sample of at least a portion of one or more files; feeding at least a portion of the received or accessed data as a time-based sequence into a recurrent neural network (RNN) trained using historical data; extracting, by the RNN, a final hidden state hi in a hidden layer of the RNN in which i is a number of elements of the sample; and determining, using the RNN and the final hidden state, whether at least a portion of the sample is likely to comprise malicious code.
Please help me write a proper abstract based on the patent claims.
Using a recurrent neural network (RNN) that has been trained to a satisfactory level of performance, highly discriminative features can be extracted by running a sample through the RNN, and then extracting a final hidden state hi, where i is the number of instructions of the sample. This resulting feature vector may then be concatenated with the other hand-engineered features, and a larger classifier may then be trained on hand-engineered as well as automatically determined features. Related apparatus, systems, techniques and articles are also described.
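The Elman recurrence named in the claims, ht=g(W1x+Rht-1), and the extraction of the final hidden state hi can be sketched as below. This is a minimal stand-in, not the patented system: the weights are random placeholders rather than a trained model, the dimensions are arbitrary, and g(·) is taken to be tanh as one example of a differentiable nonlinearity.

```python
import math
import random

# Minimal Elman-style sketch (random placeholder weights, not a trained
# model): h_t = g(W1 x_t + R h_{t-1}); the final hidden state after the
# last element serves as the feature vector.

random.seed(0)
INPUT_DIM, HIDDEN_DIM = 8, 4
W1 = [[random.gauss(0, 1) for _ in range(INPUT_DIM)] for _ in range(HIDDEN_DIM)]
R = [[random.gauss(0, 1) for _ in range(HIDDEN_DIM)] for _ in range(HIDDEN_DIM)]

def final_hidden_state(sample):
    """Feed the sample through the recurrence and return the final state."""
    h = [0.0] * HIDDEN_DIM
    for x in sample:  # the sample is fed as a time-based sequence
        h = [math.tanh(sum(w * xi for w, xi in zip(W1[j], x)) +
                       sum(r * hj for r, hj in zip(R[j], h)))
             for j in range(HIDDEN_DIM)]
    return h

# A sample of 10 fixed-length encoded elements (e.g. instructions)
sample = [[random.gauss(0, 1) for _ in range(INPUT_DIM)] for _ in range(10)]
features = final_hidden_state(sample)
print(len(features))  # 4
```

In the approach the abstract describes, this feature vector would then be concatenated with hand-engineered features and passed to a larger classifier.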
1. A digital conversation generating processor-implemented method, comprising: instantiating a conversational artificial-intelligence agent; identifying an individual target for conversation; initiating a conversation with the individual target by the artificial-intelligence agent by providing a first portion of a conversational dialogue to the individual target; recording a response from the individual target to the first portion of the conversational dialogue; and responding to the response from the individual target with a next contextual portion of the conversational dialogue. 2-20. (canceled)
Please help me write a proper abstract based on the patent claims.
The APPARATUSES, METHODS AND SYSTEMS FOR A DIGITAL CONVERSATION MANAGEMENT PLATFORM (“DCM-Platform”) transforms digital dialogue from consumers, client demands and, Internet search inputs via DCM-Platform components into tradable digital assets, and client needs based artificial intelligence campaign plan outputs. In one implementation, The DCM-Platform may capture and examine conversations between individuals and artificial intelligence conversation agents. These agents may be viewed as assets. One can measure the value and performance of these agents by assessing their performance and ability to generate revenue from prolonging conversations and/or ability to effect sales through conversations with individuals.
1.-20. (canceled) 21. A system to associate computing devices with each other based on computer network activity, comprising: a data processing system having a matching engine and a connector executed by one or more processors, the data processing system to: identify a first linking factor based on a connection between a first computing device and the computer network at a first location during a first time period, and based on a connection between a second computing device and the computer network at the first location during the first time period; monitor for a second linking factor based on input activity at the first computing device in a second time period, and based on input activity at the second computing device in the second time period; monitor for a third linking factor based on activity at the first computing device at the first location during a third time period, and based on activity at the second computing device at a second location during the third time period; determine a negative match probability based on the second linking factor and based on the third linking factor; and indicate, in a data structure, a non-link between the first computing device and the second computing device based on the negative match probability. 22. The system of claim 21, comprising the data processing system to: remove, from the data structure, a previous link to indicate the non-link between the first computing device and the second computing device. 23. The system of claim 21, comprising the data processing system to: determine a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generate a positive match probability based on the first linking factor and based on the number of computing devices; and determine to generate the non-link based on the positive match probability, the negative match probability, and a threshold. 24. 
The system of claim 21, comprising the data processing system to: determine a link between the first computing device and the second computing device in a time period prior to the second time period and the third time period; and determine to remove the link based on the negative match probability. 25. The system of claim 21, wherein the data processing system comprises a geographic location module, the data processing system to: receive geo-location data points from the first computing device to determine the first computing device is at the first location; and receive geo-location data points from the second computing device to determine the second computing device is at the first location. 26. The system of claim 21, wherein the data processing system comprises a geographic location module, the data processing system to: receive geo-location data points from the first computing device, the geo-location data points comprising at least one of Global Positioning System information, Wi-Fi information, an IP address, Bluetooth information or cell tower triangulation information. 27. The system of claim 21, comprising the data processing system to: determine the first location based on a first IP address; determine the second location based on a second IP address. 28. 
The system of claim 21, comprising the data processing system to: determine a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generate a positive match probability based on the first linking factor and based on the number of computing devices; increase the positive match probability based on an identification, by the data processing system, of activity from the first computing device corresponding to a cessation of activity at the second computing device at a fourth time period; determine to link the first computing device with the second computing device based on the positive match probability and the negative match probability at or subsequent to the fourth time period; and generate a link between the first computing device and the second computing device at or subsequent to the fourth time period. 29. The system of claim 28, comprising the data processing system to: select, at or subsequent to the fourth time period, a content item for placement with an online document on the second computing device based on the link and based on computer network activity of the first computing device. 30. The system of claim 28, comprising the data processing system to: link the first computing device with the second computing device based on an overall match probability based on a weight for positive match probability, a weight for negative match probability, the positive match probability, and the negative match probability. 31. The system of claim 28, comprising the data processing system to: calibrate the weight for positive match probability and the weight for negative match probability based on known links and known non-links. 32. 
The system of claim 21, comprising the data processing system to: monitor for a fourth linking factor of a third computing device and a fourth computing device based on input activity at the third computing device during a fourth time period, and based on input activity at the fourth computing device during the fourth time period; monitor for a fifth linking factor of the third computing device and the fourth computing device based on activity at the third computing device at a third location during a fifth time period, and based on activity at the fourth computing device at a fourth location during the fifth time period; generate a positive match probability based on the fourth linking factor of the third computing device and the fourth computing device and based on the fifth linking factor of the third computing device and the fourth computing device; determine a link between the third computing device and the fourth computing device based on the positive match probability; and indicate, in a data structure, a link between the third computing device and the fourth computing device. 33. 
A method of associating computing devices with each other based on computer network activity, comprising: identifying, by a data processing system comprising a matching engine and a connector executed by at least one processor, a first linking factor based on a connection between a first computing device and the computer network at a first location during a first time period, and based on a connection between a second computing device and the computer network at the first location during the first time period; monitoring, by the data processing system, for a second linking factor based on input activity at the first computing device in a second time period, and based on input activity at the second computing device in the second time period; monitoring, by the data processing system, for a third linking factor based on activity at the first computing device at the first location during a third time period, and based on activity at the second computing device at a second location during the third time period; determining, by the data processing system, a negative match probability based on the second linking factor and based on the third linking factor; and generating, by the data processing system, a non-link between the first computing device and the second computing device based on the negative match probability. 34. The method of claim 33, comprising: creating, by the data processing system, a data structure to indicate the non-link between the first computing device and the second computing device. 35. The method of claim 33, comprising: determining a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generating a positive match probability based on the first linking factor and based on the number of computing devices; and determining to generate the non-link based on the positive match probability, the negative match probability, and a threshold. 36. 
The method of claim 33, comprising: determining a link between the first computing device and the second computing device in a time period prior to the second time period and the third time period; and determining to remove the link based on the negative match probability. 37. The method of claim 33, comprising: receiving geo-location data points from the first computing device, the geo-location data points comprising at least one of Global Positioning System information, Wi-Fi information, an IP address, Bluetooth information or cell tower triangulation information. 38. The method of claim 33, comprising: determining a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generating a positive match probability based on the first linking factor and based on the number of computing devices; increasing the positive match probability based on an identification, by the data processing system, of activity from the first computing device corresponding to a cessation of activity at the second computing device at a fourth time period; linking the first computing device with the second computing device based on the positive match probability and the negative match probability at or subsequent to the fourth time period; and generating a link between the first computing device and the second computing device at or subsequent to the fourth time period. 39. The method of claim 38, comprising: selecting, at or subsequent to the fourth time period, a content item for placement with an online document on the second computing device based on the link and based on computer network activity of the first computing device. 40. 
The method of claim 33, comprising: monitoring for a fourth linking factor of a third computing device and a fourth computing device based on input activity at the third computing device during a fourth time period, and based on input activity at the fourth computing device during the fourth time period; monitoring for a fifth linking factor of the third computing device and the fourth computing device based on activity at the third computing device at a third location during a fifth time period, and based on activity at the fourth computing device at a fourth location during the fifth time period; generating a positive match probability based on the fourth linking factor of the third computing device and the fourth computing device and based on the fifth linking factor of the third computing device and the fourth computing device; determining a link between the third computing device and the fourth computing device based on the positive match probability; and indicating, in a data structure, a link between the third computing device and the fourth computing device.
Please help me write a proper abstract based on the patent claims.
The present disclosure is directed to associating computing devices with each other based on computer network activity for selection of content items as part of an online content item placement campaign. A first linking factor is identified based on a connection between a first device and the computer network via a first IP address during a first time period, and based on a connection between a second device and the computer network via the first IP address during the first time period. A number of devices that connect with the computer network via the first IP address is determined. A positive match probability is generated. A second and third linking factors are monitored. A negative match probability is determined based on the second and third linking factors. The first device is linked with the second device based on the positive and negative match probabilities.
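Claim 30 above combines the positive and negative match probabilities through calibrated weights into an overall match probability. The sketch below illustrates one simple way such a weighted combination and link decision could work; the specific formula, weight values, and threshold are illustrative assumptions, not taken from the disclosure.

```python
# Hedged sketch of a weighted link decision: combine evidence for and
# against linking two devices. Weights and threshold are illustrative.

def overall_match_probability(pos, neg, w_pos, w_neg):
    """Weighted combination of positive and negative match probabilities."""
    score = w_pos * pos - w_neg * neg
    return min(max(score, 0.0), 1.0)   # clamp to [0, 1]

def decide_link(pos, neg, w_pos=0.7, w_neg=0.3, threshold=0.5):
    """Link the devices only if the overall probability clears the threshold."""
    return overall_match_probability(pos, neg, w_pos, w_neg) >= threshold

print(decide_link(0.9, 0.1))  # strong positive evidence  -> True
print(decide_link(0.4, 0.9))  # strong negative evidence -> False
```

Per claim 31, the weights themselves would be calibrated against known links and known non-links, e.g. by fitting them to minimize decision errors on that labeled set.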
1. A quantum processor comprising: N successive groups of a plurality of qubits (1, 2, . . . , N), wherein N is greater than or equal to three; wherein each group of qubits of the N successive groups of a plurality of qubits comprises a plurality of substantially parallel qubits; wherein each qubit of a first group of the N successive groups of a plurality of qubits is sized and shaped so that it crosses substantially perpendicularly a portion of each qubit of only a second group of qubits of the N successive groups of a plurality of qubits; wherein each qubit of a last group of the N successive groups of a plurality of qubits is sized and shaped so that it crosses substantially perpendicularly a portion of each qubit of only a second to last group of the N successive groups of a plurality of qubits; wherein each qubit of any given group of the N-2 successive groups of a plurality of qubits, not including the first group and the last group, is sized and shaped so that it crosses substantially perpendicularly a portion of each qubit of only a corresponding successive group and a corresponding preceding group of the N successive groups of a plurality of qubits; and a plurality of couplers, each coupler for providing a communicative coupling at a crossing of two qubits. 2. The quantum processor as claimed in claim 1, wherein the quantum processor is used for implementing a neural network comprising a plurality of neurons and a plurality of synapses; wherein each neuron of the plurality of neurons is associated to a qubit and each synapse of the plurality of synapses is associated to a coupler of the quantum processor. 3. 
A method for training the neural network implemented in the quantum processor claimed in claim 2, the method comprising: providing initialization data for initializing the plurality of couplers and the plurality of qubits of the quantum processor; until a criterion is met: performing a quantum sampling of the quantum processor to provide first empirical means; obtaining at least one training data instance for training the neural network; performing a quantum sampling of the quantum processor; wherein no bias is assigned to the qubits of the first group of the N successive groups of a plurality of qubits of the quantum processor; wherein the couplings of the first group of qubits of the N successive groups of a plurality of qubits and the second group of the N successive groups of a plurality of qubits are switched off; further wherein the biases of the second group of qubits of the N successive groups of a plurality of qubits are altered using the biases on a first group of neurons associated with the first group of qubits, the weights of the switched off couplings, and the at least one training data instance, to determine second empirical means; updating corresponding weights and biases of the couplers and the qubits of the quantum processor using the first and second empirical means; and providing final weights and biases of the couplers and the qubits of the quantum processor indicative of data representative of a trained neural network. 4. The method as claimed in claim 3, wherein the initialization data comprise a plurality of biases, each for a qubit of the plurality of qubits; a plurality of weights, each weight for a coupler of the plurality of couplers; and a learning rate schedule. 5. The method as claimed in claim 4, wherein the providing of the initialization data is performed using an analog computer comprising the quantum processor and a digital computer operatively connected to the analog computer. 6. 
The method as claimed in claim 3, wherein the at least one training data instance is obtained from a previously generated data set. 7. The method as claimed in claim 3, wherein the at least one training data instance is obtained from a real-time source. 8. The method as claimed in claim 6, wherein the generated data set is stored in a digital computer operatively connected to an analog computer comprising the quantum processor. 9. The method as claimed in claim 7, wherein the real-time source originates from a digital computer operatively connected to an analog computer comprising the quantum processor. 10. The method as claimed in claim 3, wherein the criterion comprises a stopping condition; wherein the stopping condition comprises determining if there is no further training data instance available. 11. The method as claimed in claim 5, wherein the digital computer comprises a memory; further wherein the providing of the final weights and biases of the couplers and the qubits of the quantum processor comprises storing the final weights and biases of the couplers and the qubits of the quantum processor in the memory of the digital computer. 12. The method as claimed in claim 5, wherein the providing of the final weights and biases of the couplers and the qubits of the quantum processor comprises providing the final weights and biases of the couplers and the qubits of the quantum processor to another processing unit operatively connected to the digital computer. 13. 
A digital computer comprising: a central processing unit; a display device; a communication port for operatively connecting the digital computer to an analog computer comprising a quantum processor used for implementing a neural network as claimed in claim 2; a memory unit comprising an application for training the neural network, the application comprising: instructions for providing initialization data for initializing the plurality of couplers and the plurality of biases of the qubits of the quantum processor; instructions for, until a criterion is met: performing a quantum sampling of the quantum processor to provide first empirical means; obtaining at least one training data instance for training the neural network; performing a quantum sampling of the quantum processor; wherein no bias is assigned to the qubits of the first group of the N successive groups of a plurality of qubits of the quantum processor; wherein the couplings of the first group of qubits of the N successive groups of a plurality of qubits and the second group of qubits of the N successive groups of a plurality of qubits are switched off; further wherein the biases of the second group of qubits of the N successive groups of a plurality of qubits are altered using the biases on a first group of neurons associated with the first group of qubits, the weights of the switched off couplings, and the at least one training data instance, to determine second empirical means; and updating corresponding weights and biases of the couplers and the qubits of the quantum processor using the first and the second empirical means; and instructions for providing final weights and biases of the couplers and the qubits of the quantum processor as data representative of a trained neural network. 14. 
A non-transitory computer readable storage medium for storing computer-executable instructions which, when executed, cause a digital computer to perform a method for training the neural network implemented in the quantum processor claimed in claim 2, the method comprising: providing initialization data for initializing the plurality of couplers and the plurality of qubits of the quantum processor; until a criterion is met: performing a quantum sampling of the quantum processor to provide first empirical means; obtaining at least one training data instance for training the neural network; performing a quantum sampling of the quantum processor; wherein no bias is assigned to the qubits of the first group of the N successive groups of a plurality of qubits of the quantum processor; wherein the couplings of the first group of qubits of the N successive groups of a plurality of qubits and the second group of qubits of the N successive groups of a plurality of qubits are switched off; further wherein the biases of the second group of qubits of the N successive groups of a plurality of qubits are altered using the biases on a first group of neurons associated with the first group of qubits, the weights of the switched off couplings, and the at least one training data instance, to determine second empirical means; and updating corresponding weights and biases of the couplers and the qubits of the quantum processor using the first and second empirical means; and providing final weights and biases of the couplers and the qubits of the quantum processor as data representative of a trained neural network. 15. The method as claimed in claim 2, wherein the neural network is a Deep Boltzmann Machine.
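The training loop in the claims (unclamped quantum sampling yielding first empirical means, a clamped phase with data-altered biases yielding second empirical means, then a weight/bias update from the difference) follows the classical Boltzmann machine learning rule. A minimal classical sketch, with Gibbs sampling of a restricted Boltzmann machine standing in for the quantum sampler; all names and sizes here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, b_h):
    # Stand-in for "quantum sampling": sample hidden units given visible units.
    p = sigmoid(v @ W + b_h)
    return (rng.random(p.shape) < p).astype(float), p

def train_step(v_data, W, b_v, b_h, lr=0.05):
    # Clamped phase: visible units fixed to the training instance
    # (analogous to altering qubit biases with the training data).
    h_data, ph_data = sample_hidden(v_data, W, b_h)
    second_means = v_data.T @ ph_data / len(v_data)

    # Free phase: one Gibbs step from the data (a CD-1 approximation of
    # the unclamped sampling that yields the first empirical means).
    v_model = (rng.random(v_data.shape) < sigmoid(h_data @ W.T + b_v)).astype(float)
    _, ph_model = sample_hidden(v_model, W, b_h)
    first_means = v_model.T @ ph_model / len(v_model)

    # Update weights and biases from the two empirical means.
    W += lr * (second_means - first_means)
    b_v += lr * (v_data - v_model).mean(axis=0)
    b_h += lr * (ph_data - ph_model).mean(axis=0)
    return W, b_v, b_h

# Usage: a tiny synthetic data set, trained "until a criterion is met"
# (here, a fixed number of iterations).
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
data = (rng.random((16, n_visible)) < 0.5).astype(float)
for _ in range(10):
    W, b_v, b_h = train_step(data, W, b_v, b_h)
```

The final `W`, `b_v`, `b_h` play the role of the "final weights and biases of the couplers and the qubits" that the method stores or hands to another processing unit.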
Please help me write a proper abstract based on the patent claims.
A quantum processor comprises a first set of qubits comprising a first plurality of substantially parallel qubits; a second set of qubits comprising N successive groups of a plurality of qubits (1, 2, . . . , N), wherein N is greater than or equal to two; wherein each group of qubits comprises a plurality of substantially parallel qubits; wherein each qubit of the first plurality of substantially parallel qubits of the first set of qubits crosses substantially perpendicularly a portion of the plurality of substantially parallel qubits of a first group of the second set of qubits; wherein each qubit of any given group of the second set of qubits crosses substantially perpendicularly a portion of the plurality of substantially parallel qubits of a successive group of the second set of qubits and a plurality of couplers, each coupler for providing a communicative coupling at a crossing of two qubits.
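The abstract's topology (a first set of parallel qubits crossing group 1 of the second set perpendicularly, each successive group crossing the next, with a coupler at every crossing) can be enumerated directly. A hypothetical sketch; the function name and the qubit-labelling scheme are assumptions for illustration:

```python
def couplers(first_set_size, group_sizes):
    """Enumerate (qubit_a, qubit_b) pairs coupled at perpendicular crossings."""
    pairs = []
    # Each qubit of the first set crosses each qubit of the first group
    # of the second set.
    for i in range(first_set_size):
        for j in range(group_sizes[0]):
            pairs.append((("set1", i), ("group", 0, j)))
    # Each qubit of a given group crosses each qubit of its successive group.
    for g in range(len(group_sizes) - 1):
        for j in range(group_sizes[g]):
            for k in range(group_sizes[g + 1]):
                pairs.append((("group", g, j), ("group", g + 1, k)))
    return pairs

# Usage: 3 qubits in the first set, N = 2 groups of 4 and 2 qubits.
c = couplers(3, [4, 2])
print(len(c))  # 3*4 + 4*2 = 20 couplers
```

Because couplers sit only at crossings between adjacent layers, this layout realizes the bipartite layer-to-layer connectivity of a Deep Boltzmann Machine in hardware.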