patent_number,patent_title,patent_abstract,patent_date,text 4094307,method and apparatus for aiding in the anatomical localization of dysfunction in a brain,a method and apparatus for synthesizing a set of optimal sensory stimuli designed to elicit an optimal response for each particular brain electrode location in a subject whose brain is being examined to anatomically localize brain dysfunction. a pseudorandom input signal having the general characteristics of gaussian white noise is generated and converted into a color video visual stimulus which can be observed by the subject and summed on his retina and associated neural network. a plurality of electrodes are positioned with respect to various different and distinct areas of the brain of the subject to be examined. the subject is shown the color video visual stimulus and the electrical analog response from the electrodes is amplified and stored. the stored analog response signals are cross-correlated with the resynthesized input signal to compute a wiener kernel representation of the response for each electrode. portions of the pseudorandom input signal which resulted in insignificant analog responses are masked out so that the subsequent generation of pseudorandom input signals will be bandwidth-limited. the analog responses to the bandwidth-limited visual stimulus are cross-correlated with the resynthesized masked input signal and new wiener kernel representations are recomputed for each electrode. the recomputed wiener kernel representations of the response from each electrode are then multiplied in an array processor with the resynthesized bandwidth-limited input signal to compute an optimum visual stimulus for each of the electrodes. 
these optimum visual stimuli may be subsequently displayed to the subject alone or in conjunction with psychophysical tests to aid in anatomically localizing dysfunction in a brain under examination.,1978-06-13,The title of the patent is method and apparatus for aiding in the anatomical localization of dysfunction in a brain and its abstract is a method and apparatus for synthesizing a set of optimal sensory stimuli designed to elicit an optimal response for each particular brain electrode location in a subject whose brain is being examined to anatomically localize brain dysfunction. a pseudorandom input signal having the general characteristics of gaussian white noise is generated and converted into a color video visual stimulus which can be observed by the subject and summed on his retina and associated neural network. a plurality of electrodes are positioned with respect to various different and distinct areas of the brain of the subject to be examined. the subject is shown the color video visual stimulus and the electrical analog response from the electrodes is amplified and stored. the stored analog response signals are cross-correlated with the resynthesized input signal to compute a wiener kernel representation of the response for each electrode. portions of the pseudorandom input signal which resulted in insignificant analog responses are masked out so that the subsequent generation of pseudorandom input signals will be bandwidth-limited. the analog responses to the bandwidth-limited visual stimulus are cross-correlated with the resynthesized masked input signal and new wiener kernel representations are recomputed for each electrode. the recomputed wiener kernel representations of the response from each electrode are then multiplied in an array processor with the resynthesized bandwidth-limited input signal to compute an optimum visual stimulus for each of the electrodes. 
these optimum visual stimuli may be subsequently displayed to the subject alone or in conjunction with psychophysical tests to aid in anatomically localizing dysfunction in a brain under examination. dated 1978-06-13 4536844,method and apparatus for simulating aural response information,"speech and like signals are analyzed based on a model of the function of the human hearing system. the model of the inner ear is expressed as signal processing operations which map acoustic signals into neural representations. specifically, a high order transfer function is modeled as a cascade/parallel filterbank network of simple linear, time-invariant second-order filter sections. signal transduction and compression are based on a half-wave rectification with a non-linearly coupled, variable time constant automatic gain control network. the result is a simple device which simulates the complex signal transfer function associated with the human ear. the invention lends itself to implementation in digital circuitry for real-time or near real-time processing of speech and other sounds.",1985-08-20,"The title of the patent is method and apparatus for simulating aural response information and its abstract is speech and like signals are analyzed based on a model of the function of the human hearing system. the model of the inner ear is expressed as signal processing operations which map acoustic signals into neural representations. specifically, a high order transfer function is modeled as a cascade/parallel filterbank network of simple linear, time-invariant second-order filter sections. signal transduction and compression are based on a half-wave rectification with a non-linearly coupled, variable time constant automatic gain control network. the result is a simple device which simulates the complex signal transfer function associated with the human ear. 
the invention lends itself to implementation in digital circuitry for real-time or near real-time processing of speech and other sounds. dated 1985-08-20" 4592359,multi-channel implantable neural stimulator,"a combination of a transmitter and implantable receiver are disclosed wherein data is conveyed from transmitter to receiver utilizing a data format in which each channel to be stimulated is adapted to convey information in monopolar, bipolar or analog form. the data format includes two types of code words: transition words in which one bit is assigned to each channel and can be used to create monopolar pulsatile or bipolar pulsatile waveforms; and amplitude words which can create analog waveforms one channel at a time. an essential element of the output system is a current source digital to analog converter which responds to the code words to form the appropriate output on each channel. each output is composed of a set of eight current sources, four with one polarity of current and the other four with the opposite polarity of current. in each group of four, the current sources are binarily related, i, 2i, 4i and 8i. in this arrangement each channel can supply 16 amplitudes times two polarities of current; meaning 32 current levels. this channel is simply a 5-bit digital to analog converter. the output circuitry contains charge balance switches. these switches are designed to recover residual charge when the current sources are off. they are also designed to current limit during charge recovery if the excess charge is too great so that they do not cause neural damage. each channel charge balances (will not pass dc current or charge) and charge limits to prevent electrode damage and bone growth. the charge balancing is performed by the charge balancing switches and by the blocking capacitor. 
the charge level on each channel is defined using a switch network ladder which combines a plurality of parallel connected switches; closure of each switch doubles the current level handed off from the previous switch.",1986-06-03,"The title of the patent is multi-channel implantable neural stimulator and its abstract is a combination of a transmitter and implantable receiver are disclosed wherein data is conveyed from transmitter to receiver utilizing a data format in which each channel to be stimulated is adapted to convey information in monopolar, bipolar or analog form. the data format includes two types of code words: transition words in which one bit is assigned to each channel and can be used to create monopolar pulsatile or bipolar pulsatile waveforms; and amplitude words which can create analog waveforms one channel at a time. an essential element of the output system is a current source digital to analog converter which responds to the code words to form the appropriate output on each channel. each output is composed of a set of eight current sources, four with one polarity of current and the other four with the opposite polarity of current. in each group of four, the current sources are binarily related, i, 2i, 4i and 8i. in this arrangement each channel can supply 16 amplitudes times two polarities of current; meaning 32 current levels. this channel is simply a 5-bit digital to analog converter. the output circuitry contains charge balance switches. these switches are designed to recover residual charge when the current sources are off. they are also designed to current limit during charge recovery if the excess charge is too great so that they do not cause neural damage. each channel charge balances (will not pass dc current or charge) and charge limits to prevent electrode damage and bone growth. the charge balancing is performed by the charge balancing switches and by the blocking capacitor. 
the charge limiting is performed by the blocking capacitor only. the charge level on each channel is defined using a switch network ladder which combines a plurality of parallel connected switches; closure of each switch doubles the current level handed off from the previous switch. dated 1986-06-03" 4699875,diagnosis of amyotrophic lateral sclerosis by neurotrophic factors,"the present invention is based on the discovery that amyotrophic lateral sclerosis (als), parkinson disease and alzheimer disease are due to lack of a disorder-specific neurotrophic hormone. diagnosis is accomplished by assaying hormones specific for a particular neuronal network or system: the motor neurotrophic hormones from muscle in the motor neural system are used to diagnose and treat als, dopamine neurotrophic hormones from striatum in the nigrostriatal neural system are used to diagnose and treat parkinsonism, and cholinergic neurotrophic hormones released from the cortex and hippocampus which are specific for cholinergic neurons of the nucleus basalis and septal nucleus are used to diagnose and treat alzheimer's disease. with tissue culture, the presence or absence of specific neurotrophic hormones can be assessed in als, parkinsonism, and alzheimer disease. if there is a deficiency, extracted and purified neurotrophic hormones specific to the particular neuronal network or system can be injected in als and alzheimer disease and in parkinsonism.",1987-10-13,"The title of the patent is diagnosis of amyotrophic lateral sclerosis by neurotrophic factors and its abstract is the present invention is based on the discovery that amyotrophic lateral sclerosis (als), parkinson disease and alzheimer disease are due to lack of a disorder-specific neurotrophic hormone. 
diagnosis is accomplished by assaying hormones specific for a particular neuronal network or system: the motor neurotrophic hormones from muscle in the motor neural system are used to diagnose and treat als, dopamine neurotrophic hormones from striatum in the nigrostriatal neural system are used to diagnose and treat parkinsonism, and cholinergic neurotrophic hormones released from the cortex and hippocampus which are specific for cholinergic neurons of the nucleus basalis and septal nucleus are used to diagnose and treat alzheimer's disease. with tissue culture, the presence or absence of specific neurotrophic hormones can be assessed in als, parkinsonism, and alzheimer disease. if there is a deficiency, extracted and purified neurotrophic hormones specific to the particular neuronal network or system can be injected in als and alzheimer disease and in parkinsonism. dated 1987-10-13" 4701407,diagnosis of alzheimer disease,"the present invention is based on the discovery that amyotrophic lateral sclerosis (als), parkinson disease and alzheimer disease are due to lack of a disorder-specific neurotrophic hormone. diagnosis is accomplished by assaying hormones specific for a particular neuronal network or system: the motor neurotrophic hormones from muscle in the motor neural system are used to diagnose and treat als, dopamine neurotrophic hormones from striatum in the nigrostriatal neural system are used to diagnose and treat parkinsonism, and cholinergic neurotrophic hormones released from the cortex and hippocampus which are specific for cholinergic neurons of the nucleus basalis and septal nucleus are used to diagnose and treat alzheimer's disease. with tissue culture, the presence or absence of specific neurotrophic hormones can be assessed in als, parkinsonism, and alzheimer disease. 
if there is a deficiency, extracted and purified neurotrophic hormones specific to the particular neuronal network or system can be injected in als and alzheimer disease and in parkinsonism.",1987-10-20,"The title of the patent is diagnosis of alzheimer disease and its abstract is the present invention is based on the discovery that amyotrophic lateral sclerosis (als), parkinson disease and alzheimer disease are due to lack of a disorder-specific neurotrophic hormone. diagnosis is accomplished by assaying hormones specific for a particular neuronal network or system: the motor neurotrophic hormones from muscle in the motor neural system are used to diagnose and treat als, dopamine neurotrophic hormones from striatum in the nigrostriatal neural system are used to diagnose and treat parkinsonism, and cholinergic neurotrophic hormones released from the cortex and hippocampus which are specific for cholinergic neurons of the nucleus basalis and septal nucleus are used to diagnose and treat alzheimer's disease. with tissue culture, the presence or absence of specific neurotrophic hormones can be assessed in als, parkinsonism, and alzheimer disease. if there is a deficiency, extracted and purified neurotrophic hormones specific to the particular neuronal network or system can be injected in als and alzheimer disease and in parkinsonism. dated 1987-10-20" 4737929,highly parallel computation network employing a binary-valued t matrix and single output amplifiers,"advantageous neural network realizations are achieved by employing only negative gain amplifiers and a clipped t matrix having conductances t.sub.ij which have only two values. preferably, one of these values is a preselected value set by the value of a fixed resistor, and the other value is zero, created simply with an open circuit. 
values for the t.sub.ij terms of the clipped t matrix are obtained through an iterative process which operates on the clipped and nonclipped matrices and minimizes the error resulting from the use of the clipped t matrix.",1988-04-12,"The title of the patent is highly parallel computation network employing a binary-valued t matrix and single output amplifiers and its abstract is advantageous neural network realizations are achieved by employing only negative gain amplifiers and a clipped t matrix having conductances t.sub.ij which have only two values. preferably, one of these values is a preselected value set by the value of a fixed resistor, and the other value is zero, created simply with an open circuit. values for the t.sub.ij terms of the clipped t matrix are obtained through an iterative process which operates on the clipped and nonclipped matrices and minimizes the error resulting from the use of the clipped t matrix. dated 1988-04-12" 4752906,temporal sequences with neural networks,"a sequence generator employing a neural network having its output coupled to at least one plurality of delay elements. the delayed outputs are fed back to an input interconnection network, wherein they contribute to the next state transition through an appropriate combination of interconnections.",1988-06-21,"The title of the patent is temporal sequences with neural networks and its abstract is a sequence generator employing a neural network having its output coupled to at least one plurality of delay elements. the delayed outputs are fed back to an input interconnection network, wherein they contribute to the next state transition through an appropriate combination of interconnections. dated 1988-06-21" 4760437,neural networks,"neural network type information processing devices have been proposed. in these devices, a matrix structure is utilized with impedance at the matrix intersection points. 
it has been found that excellent versatility in design is achieved by utilizing photoconductors at these intersection points and thus affording the possibility of controlling impedance by, in turn, controlling the level of incident light.",1988-07-26,"The title of the patent is neural networks and its abstract is neural network type information processing devices have been proposed. in these devices, a matrix structure is utilized with impedance at the matrix intersection points. it has been found that excellent versatility in design is achieved by utilizing photoconductors at these intersection points and thus affording the possibility of controlling impedance by, in turn, controlling the level of incident light. dated 1988-07-26" 4782460,computing apparatus comprising a programmable resistor,"computing apparatus (e.g., a neural network) advantageously comprises a programmable resistor body comprising typically a multiplicity of resistors r.sub.ij. the resistance of any given r.sub.ij is changeable from a relatively high resistance to a lower resistance by application of an appropriate electrical signal, and can be reset to a higher resistance by application of an appropriate signal of reverse polarity. exemplarily, a programmable resistor body comprises a thin layer of bismuth oxide or strontium barium niobate.",1988-11-01,"The title of the patent is computing apparatus comprising a programmable resistor and its abstract is computing apparatus (e.g., a neural network) advantageously comprises a programmable resistor body comprising typically a multiplicity of resistors r.sub.ij. the resistance of any given r.sub.ij is changeable from a relatively high resistance to a lower resistance by application of an appropriate electrical signal, and can be reset to a higher resistance by application of an appropriate signal of reverse polarity. exemplarily, a programmable resistor body comprises a thin layer of bismuth oxide or strontium barium niobate. 
dated 1988-11-01" 4807168,hybrid analog-digital associative neural network,"random access memory is used to store synaptic information in the form of a matrix of rows and columns of binary digits. n rows read in sequence are processed through switches and resistors, and a summing amplifier to n neural amplifiers in sequence, one row for each amplifier, using a first array of sample-and-hold devices s/h1 for commutation. the outputs of the neural amplifiers are stored in a second array of sample-and-hold devices s/h2 so that after n rows are processed, all of said second array of sample-and-hold devices are updated. a second memory may be added for binary values of 0 and -1, and processed simultaneously with the first to provide for values of 1, 0, and -1, the results of which are combined in a difference amplifier.",1989-02-21,"The title of the patent is hybrid analog-digital associative neural network and its abstract is random access memory is used to store synaptic information in the form of a matrix of rows and columns of binary digits. n rows read in sequence are processed through switches and resistors, and a summing amplifier to n neural amplifiers in sequence, one row for each amplifier, using a first array of sample-and-hold devices s/h1 for commutation. the outputs of the neural amplifiers are stored in a second array of sample-and-hold devices s/h2 so that after n rows are processed, all of said second array of sample-and-hold devices are updated. a second memory may be added for binary values of 0 and -1, and processed simultaneously with the first to provide for values of 1, 0, and -1, the results of which are combined in a difference amplifier. dated 1989-02-21" 4866645,neural network with dynamic refresh capability,"an analog neural network composed of an array of capacitors for storing weighted electric charges. 
electric charges, or voltages, on the capacitors control the impedance (resistance) values of a corresponding plurality of mosfets which selectively couple input signals to one input of a summing amplifier. a plurality of semiconductor gating elements (e.g. mosfets) selectively couple to the capacitor's weighted analog voltage values received serially over an input line. the weighted voltages on the input line are periodically applied to the proper capacitors in the neural network via the gating elements so as to refresh the weighted electric charges on the capacitors, and at a multiplex rate that maintains the voltages on the capacitors within acceptable tolerance levels.",1989-09-12,"The title of the patent is neural network with dynamic refresh capability and its abstract is an analog neural network composed of an array of capacitors for storing weighted electric charges. electric charges, or voltages, on the capacitors control the impedance (resistance) values of a corresponding plurality of mosfets which selectively couple input signals to one input of a summing amplifier. a plurality of semiconductor gating elements (e.g. mosfets) selectively couple to the capacitor's weighted analog voltage values received serially over an input line. the weighted voltages on the input line are periodically applied to the proper capacitors in the neural network via the gating elements so as to refresh the weighted electric charges on the capacitors, and at a multiplex rate that maintains the voltages on the capacitors within acceptable tolerance levels. dated 1989-09-12" 4873455,programmable ferroelectric polymer neural network,"the network comprises several memory elements made of ferroelectric polymer, arranged in a matrix organization at the intersections of row and column electrodes. each memory element (mij) memorizes a synaptic coefficient a.sub.ij of the network which may be restored by pyroelectric effect on the corresponding column of the network. 
amplifier circuits, respectively connected to the columns, give a voltage which is equal to the sum, to which a sign is assigned, of the products of the synaptic coefficients by the voltage components applied to each of the lines of the network.",1989-10-10,"The title of the patent is programmable ferroelectric polymer neural network and its abstract is the network comprises several memory elements made of ferroelectric polymer, arranged in a matrix organization at the intersections of row and column electrodes. each memory element (mij) memorizes a synaptic coefficient a.sub.ij of the network which may be restored by pyroelectric effect on the corresponding column of the network. amplifier circuits, respectively connected to the columns, give a voltage which is equal to the sum, to which a sign is assigned, of the products of the synaptic coefficients by the voltage components applied to each of the lines of the network. dated 1989-10-10" 4876731,neural network model in pattern recognition using probabilistic contextual information,"a pattern recognition system for recognizing an unknown pattern comprised of symbols which are part of a pattern system which is devoid of inherent context such as numbers. artificial contextual information based on other than symbol features and the pattern system and in the form of probability weighted expected interpretations are stored and used in the processing phase of recognition. in the system disclosed, the system comprises a neural network whose forward and feedback paths are controlled by the output cells of the network based, in part, on the contextual information.",1989-10-24,"The title of the patent is neural network model in pattern recognition using probabilistic contextual information and its abstract is a pattern recognition system for recognizing an unknown pattern comprised of symbols which are part of a pattern system which is devoid of inherent context such as numbers. 
artificial contextual information based on other than symbol features and the pattern system and in the form of probability weighted expected interpretations are stored and used in the processing phase of recognition. in the system disclosed, the system comprises a neural network whose forward and feedback paths are controlled by the output cells of the network based, in part, on the contextual information. dated 1989-10-24" 4884216,neural network system for adaptive sensory-motor coordination of multijoint robots for single postures,"a neural-like network system that adaptively controls a visually guided, two-jointed robot arm to reach spot targets in three dimensions. the system learns and maintains visual-motor calibrations by itself, starting with only loosely defined relationships. the geometry of the system is composed of distributed, interleaved combinations of actuator inputs. it is fault tolerant and uses analog processing. learning is achieved by modifying the distributions of input weights in the system after each arm positioning. modifications of the weights are made incrementally according to errors of consistency between the actuator signals used to orient the cameras and those used to move the arm.",1989-11-28,"The title of the patent is neural network system for adaptive sensory-motor coordination of multijoint robots for single postures and its abstract is a neural-like network system that adaptively controls a visually guided, two-jointed robot arm to reach spot targets in three dimensions. the system learns and maintains visual-motor calibrations by itself, starting with only loosely defined relationships. the geometry of the system is composed of distributed, interleaved combinations of actuator inputs. it is fault tolerant and uses analog processing. learning is achieved by modifying the distributions of input weights in the system after each arm positioning. 
modifications of the weights are made incrementally according to errors of consistency between the actuator signals used to orient the cameras and those used to move the arm. dated 1989-11-28" 4885757,digital adaptive receiver employing maximum-likelihood sequence estimation with neural networks,"a maximum-likelihood sequence estimator receiver includes a matched filter connected to a digital transmission channel and a sampler for providing sampled signals output by the matched filter. the sampled signals are input to an analog neural network to provide high-speed outputs representative of the transmission channel signals. the neural network outputs are also provided as inputs to a coefficient estimator which produces coefficients for feedback to the matched filter. for time-varying transmission channel characteristics, the coefficient estimator provides a second coefficient output which is utilized for changing the interconnection strengths of the neural network connection matrix to offset the varying transmission channel characteristics.",1989-12-05,"The title of the patent is digital adaptive receiver employing maximum-likelihood sequence estimation with neural networks and its abstract is a maximum-likelihood sequence estimator receiver includes a matched filter connected to a digital transmission channel and a sampler for providing sampled signals output by the matched filter. the sampled signals are input to an analog neural network to provide high-speed outputs representative of the transmission channel signals. the neural network outputs are also provided as inputs to a coefficient estimator which produces coefficients for feedback to the matched filter. for time-varying transmission channel characteristics, the coefficient estimator provides a second coefficient output which is utilized for changing the interconnection strengths of the neural network connection matrix to offset the varying transmission channel characteristics. 
dated 1989-12-05" 4891782,parallel neural network for a full binary adder,a method for performing the addition of two n-bit binary numbers using parallel neural networks. the value of a first register is converted and transferred into a second register in a mathematical fashion so as to add the numbers of the first register into the second register. when the first register contains all zeros then the desired sum is found in the second register.,1990-01-02,The title of the patent is parallel neural network for a full binary adder and its abstract is a method for performing the addition of two n-bit binary numbers using parallel neural networks. the value of a first register is converted and transferred into a second register in a mathematical fashion so as to add the numbers of the first register into the second register. when the first register contains all zeros then the desired sum is found in the second register. dated 1990-01-02 4893255,spike transmission for neural networks,"pulse trains are utilized for the transmission of information in a neural network. a squash function is achieved by logically or'ing together pulsed outputs, giving f(x) approximately 1-e.sup.-x. for back propagation, as derived by rumelhart, the derivative of the squash function is available by examining the time when no or'ed together pulses are present, being 1-f(x), or e.sup.-x. logically and'ing of the two signals. multiplication of input frequencies by weights is accomplished by modulating the width of the output pulses, while keeping the frequency the same.",1990-01-09,"The title of the patent is spike transmission for neural networks and its abstract is pulse trains are utilized for the transmission of information in a neural network. a squash function is achieved by logically or'ing together pulsed outputs, giving f(x) approximately 1-e.sup.-x. 
for back propagation, as derived by rumelhart, the derivative of the squash function is available by examining the time when no or'ed together pulses are present, being 1-f(x), or e.sup.-x. logically and'ing of the two signals. multiplication of input frequencies by weights is accomplished by modulating the width of the output pulses, while keeping the frequency the same. dated 1990-01-09" 4896053,solitary wave circuit for neural network emulation,""" a circuit for emulating a nerve cell is used to generate one or more simple neural networks. in the preferred embodiment, the circuit comprises an lc ladder circuit including one or more modules, each of the modules comprising an """"l"""" two-port circuit comprising a first shunt branch having a variable capacitor, a second shunt branch having a series-connected conductance and a variable d.c. bias source, and a third branch connected in series with the first and second branches, the third branch comprising an active inductor. the inductor is formed by one or more operational amplifiers interconnected in a feedback configuration. each of the variable capacitances and the inductances cooperate to emulate a portion of a neuron by receiving a stimulus and generating or propagating a unidirectional solitary wave output representing an action potential. """,1990-01-23,"The title of the patent is solitary wave circuit for neural network emulation and its abstract is "" a circuit for emulating a nerve cell is used to generate one or more simple neural networks. in the preferred embodiment, the circuit comprises an lc ladder circuit including one or more modules, each of the modules comprising an """"l"""" two-port circuit comprising a first shunt branch having a variable capacitor, a second shunt branch having a series-connected conductance and a variable d.c. bias source, and a third branch connected in series with the first and second branches, the third branch comprising an active inductor. 
the inductor is formed by one or more operational amplifiers interconnected in a feedback configuration. each of the variable capacitances and the inductances cooperate to emulate a portion of a neuron by receiving a stimulus and generating or propagating a unidirectional solitary wave output representing an action potential. "" dated 1990-01-23" 4897811,n-dimensional coulomb neural network which provides for cumulative learning of internal representations,""" a learning algorithm for the n-dimensional coulomb network is disclosed which is applicable to multi-layer networks. the central concept is to define a potential energy of a collection of memory sites. then each memory site is an attractor of other memory sites. with the proper definition of attractive and repulsive potentials between various memory sites, it is possible to minimize the energy of the collection of memories. by this method, internal representations may be """"built-up"""" one layer at a time. following the method of bachmann et al. a system is considered in which memories of events have already been recorded in a layer of cells. a method is found for the consolidation of the number of memories required to correctly represent the pattern environment. this method is shown to be applicable to a supervised or unsupervised learning paradigm in which pairs of input and output patterns are presented sequentially to the network. the resulting learning procedure develops internal representations in an incremental or cumulative fashion, from the layer closest to the input, to the output layer. """,1990-01-30,"The title of the patent is n-dimensional coulomb neural network which provides for cumulative learning of internal representations and its abstract is "" a learning algorithm for the n-dimensional coulomb network is disclosed which is applicable to multi-layer networks. the central concept is to define a potential energy of a collection of memory sites. 
then each memory site is an attractor of other memory sites. with the proper definition of attractive and repulsive potentials between various memory sites, it is possible to minimize the energy of the collection of memories. by this method, internal representations may be """"built-up"""" one layer at a time. following the method of bachmann et al. a system is considered in which memories of events have already been recorded in a layer of cells. a method is found for the consolidation of the number of memories required to correctly represent the pattern environment. this method is shown to be applicable to a supervised or unsupervised learning paradigm in which pairs of input and output patterns are presented sequentially to the network. the resulting learning procedure develops internal representations in an incremental or cumulative fashion, from the layer closest to the input, to the output layer. "" dated 1990-01-30" 4904881,exclusive-or cell for neural network and the like,a semiconductor cell for producing an output current that is related to the match between an input vector pattern and a weighting pattern is described. the cell is particularly useful as a synapse cell within a neural network to perform pattern recognition tasks. the cell includes a pair of input lines for receiving a differential input vector element value and a pair of output lines for providing a difference current to a current summing neural amplifier. a plurality of floating gate devices each having a floating gate member are employed in the synapse cell to store charge in accordance with a predetermined weight pattern. 
each of the floating gate devices is uniquely coupled to a combination of an output current line and an input voltage line such that the difference current provided to the neural amplifier is related to the match between the input vector and the stored weight.,1990-02-27,The title of the patent is exclusive-or cell for neural network and the like and its abstract is a semiconductor cell for producing an output current that is related to the match between an input vector pattern and a weighting pattern is described. the cell is particularly useful as a synapse cell within a neural network to perform pattern recognition tasks. the cell includes a pair of input lines for receiving a differential input vector element value and a pair of output lines for providing a difference current to a current summing neural amplifier. a plurality of floating gate devices each having a floating gate member are employed in the synapse cell to store charge in accordance with a predetermined weight pattern. each of the floating gate devices is uniquely coupled to a combination of an output current line and an input voltage line such that the difference current provided to the neural amplifier is related to the match between the input vector and the stored weight. dated 1990-02-27 4904882,superconducting optical switch,""" a combination of optical interconnect technology with superconducting metal to form a superconducting neural network array. superconducting material in a matrix has the superconducting current decreased in one filament of the matrix by interaction of the cooper pairs with radiation controlled by a spatial light modulator. this decrease in current results in a switch of current, in a relative sense, to another filament in the matrix. this """"switching"""" mechanism can be used in a digital or analog fashion in a superconducting computer application.
""",1990-02-27,"The title of the patent is superconducting optical switch and its abstract is "" a combination of optical interconnect technology with superconducting metal to form a superconducting neural network array. superconducting material in a matrix has the superconducting current decreased in one filament of the matrix by interaction of the cooper pairs with radiation controlled by a spatial light modulator. this decrease in current results in a switch of current, in a relative sense, to another filament in the matrix. this """"switching"""" mechanism can be used in a digital or analog fashion in a superconducting computer application. "" dated 1990-02-27" 4906865,sample and hold circuit for temporal associations in a neural network,"a sample and hold circuit for introducing delayed feedback into an associative memory is described. the circuit continuously samples an output sequence derived from a neural network; then, in response to a clock signal, it holds that output sequence until the next clock signal. the held sequence is coupled back to the input of the network so that the present output sequence becomes some function of the past output sequence. this delayed feedback enables the associative recall of a memorized sequence from the neural network.",1990-03-06,"The title of the patent is sample and hold circuit for temporal associations in a neural network and its abstract is a sample and hold circuit for introducing delayed feedback into an associative memory is described. the circuit continuously samples an output sequence derived from a neural network; then, in response to a clock signal, it holds that output sequence until the next clock signal. the held sequence is coupled back to the input of the network so that the present output sequence becomes some function of the past output sequence. this delayed feedback enables the associative recall of a memorized sequence from the neural network.
dated 1990-03-06" 4912647,neural network training tool,"a method of training an artificial neural network uses a first computer configured as a plurality of interconnected neural units arranged in a network. a neural unit has a first subunit and a second subunit. the first subunit has first inputs and a corresponding first set of variables for operating upon the first inputs to provide a first output during a forward pass. the first set of variables can change in response to feedback representing differences between desired network outputs and actual network outputs. the second subunit has a plurality of second inputs, and a corresponding second set of variables for operating upon the second inputs to provide a second output. the second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs. the computer provides an activating variable representing the difference between current second output and previous second outputs. the activating variable is added to the feedback to accelerate the change of said first set of variables. a second computer is configured as a plurality of interconnected neural units arranged in a network. the network is functionally equivalent to the network of the first computer in a forward pass when provided with sets of values corresponding to each converged first set of variables of the first computer.",1990-03-27,"The title of the patent is neural network training tool and its abstract is a method of training an artificial neural network uses a first computer configured as a plurality of interconnected neural units arranged in a network. a neural unit has a first subunit and a second subunit. the first subunit has first inputs and a corresponding first set of variables for operating upon the first inputs to provide a first output during a forward pass. 
the first set of variables can change in response to feedback representing differences between desired network outputs and actual network outputs. the second subunit has a plurality of second inputs, and a corresponding second set of variables for operating upon the second inputs to provide a second output. the second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs. the computer provides an activating variable representing the difference between current second output and previous second outputs. the activating variable is added to the feedback to accelerate the change of said first set of variables. a second computer is configured as a plurality of interconnected neural units arranged in a network. the network is functionally equivalent to the network of the first computer in a forward pass when provided with sets of values corresponding to each converged first set of variables of the first computer. dated 1990-03-27" 4912649,accelerating learning in neural networks,"a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. a plurality of examples are serially provided to the network input and the network output is observed. the computer is programmed with a back propagation algorithm for adjusting each set of variables in response to feedback representing differences between the network output for each example and the desired output. the examples are iterated while those values which change are identified. 
the examples are reiterated and the algorithm is applied to only those values which changed in a previous iteration.",1990-03-27,"The title of the patent is accelerating learning in neural networks and its abstract is a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. a plurality of examples are serially provided to the network input and the network output is observed. the computer is programmed with a back propagation algorithm for adjusting each set of variables in response to feedback representing differences between the network output for each example and the desired output. the examples are iterated while those values which change are identified. the examples are reiterated and the algorithm is applied to only those values which changed in a previous iteration. dated 1990-03-27" 4912651,speeding learning in neural networks,"a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. a plurality of examples are serially provided to the network input and the network output is observed. the computer is programmed with a back propagation algorithm for adjusting each set of variables in response to feedback representing differences between the network output for each example and the desired output.
the examples are iterated until the signs of the outputs of the units of the output layer converge. then each set of variables is multiplied by a multiplier. the examples are reiterated until the magnitude of the outputs of the units of the output layer converge.",1990-03-27,"The title of the patent is speeding learning in neural networks and its abstract is a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. a plurality of examples are serially provided to the network input and the network output is observed. the computer is programmed with a back propagation algorithm for adjusting each set of variables in response to feedback representing differences between the network output for each example and the desired output. the examples are iterated until the signs of the outputs of the units of the output layer converge. then each set of variables is multiplied by a multiplier. the examples are reiterated until the magnitude of the outputs of the units of the output layer converge. dated 1990-03-27" 4912652,fast neural network training,a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output in the range between binary 1 and binary 0. a plurality of training examples is serially provided to the network input and the network output is observed.
the computer is programmed with a back propagation algorithm for changing each set of variables in response to feedback representing differences between the network output for each example and the desired output. the examples are iterated while the output of a unit is observed. the feedback to a unit is adjusted so that a larger feedback is obtained when the output of the unit is near binary 1 or binary 0 than when the output is midrange between binary 1 and binary 0.,1990-03-27,The title of the patent is fast neural network training and its abstract is a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output in the range between binary 1 and binary 0. a plurality of training examples is serially provided to the network input and the network output is observed. the computer is programmed with a back propagation algorithm for changing each set of variables in response to feedback representing differences between the network output for each example and the desired output. the examples are iterated while the output of a unit is observed. the feedback to a unit is adjusted so that a larger feedback is obtained when the output of the unit is near binary 1 or binary 0 than when the output is midrange between binary 1 and binary 0. dated 1990-03-27 4912653,trainable neural network,"a trainable artificial neural network includes a computer configured as a plurality of interconnected neural units arranged in a layered network. an input layer has a network input and an output layer has a network output.
a neural unit has a first subunit and a second subunit, with the first subunit having one or more first inputs and a corresponding first set of variables for operating upon the said first inputs to provide a first output. the first set of variables can change in response to feedback representing differences between desired network outputs and actual network outputs. the second subunit has a plurality of second inputs, and a corresponding second set of variables for operating upon said second inputs to provide a second output. the second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs. the computer provides an activating variable representing the difference between current second output and previous second outputs, and adds the activating variable to said feedback to accelerate the change of the first set of variables.",1990-03-27,"The title of the patent is trainable neural network and its abstract is a trainable artificial neural network includes a computer configured as a plurality of interconnected neural units arranged in a layered network. an input layer has a network input and an output layer has a network output. a neural unit has a first subunit and a second subunit, with the first subunit having one or more first inputs and a corresponding first set of variables for operating upon the said first inputs to provide a first output. the first set of variables can change in response to feedback representing differences between desired network outputs and actual network outputs. the second subunit has a plurality of second inputs, and a corresponding second set of variables for operating upon said second inputs to provide a second output. the second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs.
the computer provides an activating variable representing the difference between current second output and previous second outputs, and adds the activating variable to said feedback to accelerate the change of the first set of variables. dated 1990-03-27" 4912654,neural networks learning method,"a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output in the range between positive 1 and negative 1. a plurality of examples are serially provided to the network input and the network output is observed. the computer is programmed with a back propagation algorithm for calculating changes to the sets of variables in response to feedback representing differences between the network output for each example and the desired output. the absolute magnitude of the product of an input and the corresponding output of a unit is calculated. the feedback to that unit is adjusted in response to absolute magnitude so that said feedback is larger with a larger absolute magnitude than with a smaller absolute magnitude.",1990-03-27,"The title of the patent is neural networks learning method and its abstract is a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output in the range between positive 1 and negative 1.
a plurality of examples are serially provided to the network input and the network output is observed. the computer is programmed with a back propagation algorithm for calculating changes to the sets of variables in response to feedback representing differences between the network output for each example and the desired output. the absolute magnitude of the product of an input and the corresponding output of a unit is calculated. the feedback to that unit is adjusted in response to absolute magnitude so that said feedback is larger with a larger absolute magnitude than with a smaller absolute magnitude. dated 1990-03-27" 4912655,adjusting neural networks,"a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer. each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. the computer is programmed with a back propagation algorithm. a plurality of examples are serially provided to the network input and the network output is observed. the examples are iterated and proposed changes to each set of variables are calculated in response to feedback representing differences between the network output for each example and the desired output. the proposed changes are accumulated for a predetermined number of iterations, whereupon the accumulated proposed changes are added to the set of variables.",1990-03-27,"The title of the patent is adjusting neural networks and its abstract is a method of accelerating the training of an artificial neural network uses a computer configured as an artificial neural network with a network input and a network output, and having a plurality of interconnected units arranged in layers including an input layer and an output layer.
each unit has a multiplicity of unit inputs and a set of variables for operating upon the unit inputs to provide a unit output. the computer is programmed with a back propagation algorithm. a plurality of examples are serially provided to the network input and the network output is observed. the examples are iterated and proposed changes to each set of variables are calculated in response to feedback representing differences between the network output for each example and the desired output. the proposed changes are accumulated for a predetermined number of iterations, whereupon the accumulated proposed changes are added to the set of variables. dated 1990-03-27" 4914603,training neural networks,"a method of training an artificial neural network uses a computer configured as a plurality of interconnected neural units arranged in a layered network including an input layer having a network input, and an output layer having a network output. a neural unit has a first subunit and a second subunit. the first subunit having one or more first inputs, and a corresponding first set of variables for operating upon the first inputs to provide a first output. the first set of variables can change in response to feedback representing differences between desired network outputs for selected network inputs and actual network outputs. the second subunit has a plurality of second inputs, and a corresponding second set of variables for operating upon said second inputs to provide a second output. the second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs. the computer provides an activating variable representing the difference between current second output and previous second outputs. a series of examples of data is provided as network input to said network. the activating variable is added to the feedback to accelerate the change of said first set of variables.
the actual resulting network outputs are compared to desired outputs corresponding to the examples. the examples are iterated until the network outputs converge to a solution.",1990-04-03,"The title of the patent is training neural networks and its abstract is a method of training an artificial neural network uses a computer configured as a plurality of interconnected neural units arranged in a layered network including an input layer having a network input, and an output layer having a network output. a neural unit has a first subunit and a second subunit. the first subunit having one or more first inputs, and a corresponding first set of variables for operating upon the first inputs to provide a first output. the first set of variables can change in response to feedback representing differences between desired network outputs for selected network inputs and actual network outputs. the second subunit has a plurality of second inputs, and a corresponding second set of variables for operating upon said second inputs to provide a second output. the second set of variables can change in response to differences between desired network outputs for selected network inputs and actual network outputs. the computer provides an activating variable representing the difference between current second output and previous second outputs. a series of examples of data is provided as network input to said network. the activating variable is added to the feedback to accelerate the change of said first set of variables. the actual resulting network outputs are compared to desired outputs corresponding to the examples. the examples are iterated until the network outputs converge to a solution. dated 1990-04-03" 4914708,system for self-organization of stable category recognition codes for analog input patterns,"a neural network includes a feature representation field which receives input patterns. 
signals from the feature representation field select a category from a category representation field through a first adaptive filter. based on the selected category, a template pattern is applied to the feature representation field, and a match between the template and the input is determined. if the angle between the template vector and a vector within the representation field is too great, the selected category is reset. otherwise the category selection and template pattern are adapted to the input pattern as well as the previously stored template. a complex representation field includes signals normalized relative to signals across the field and feedback for pattern contrast enhancement.",1990-04-03,"The title of the patent is system for self-organization of stable category recognition codes for analog input patterns and its abstract is a neural network includes a feature representation field which receives input patterns. signals from the feature representation field select a category from a category representation field through a first adaptive filter. based on the selected category, a template pattern is applied to the feature representation field, and a match between the template and the input is determined. if the angle between the template vector and a vector within the representation field is too great, the selected category is reset. otherwise the category selection and template pattern are adapted to the input pattern as well as the previously stored template. a complex representation field includes signals normalized relative to signals across the field and feedback for pattern contrast enhancement. dated 1990-04-03" 4918618,discrete weight neural network,"a neural network using interconnecting weights each with two values, one of which is selected for use, can be taught to map a set of input vectors to a set of output vectors. a set of input vectors is applied to the network and in response, a set of output vectors is produced by the network. 
the error is the difference between desired outputs and actual outputs. the network is trained in the following manner. a set of input vectors is presented to the network, each vector being propagated forward through the network to produce an output vector. a set of error vectors is then presented to the network and propagated backwards. each tensor weight element includes a selective change means which accumulates particular information about the error. after all the input vectors are presented, an update phase is initiated. during the update phase, in accordance with the results of the derived algorithm, the selective change means selects the other weight value if selecting the other weight value will decrease the total error. only one such change is made per set. after the update phase, if a selected value was changed, the entire process is repeated. when no values are switched, the network has adapted as well as it can, and the training is completed.",1990-04-17,"The title of the patent is discrete weight neural network and its abstract is a neural network using interconnecting weights each with two values, one of which is selected for use, can be taught to map a set of input vectors to a set of output vectors. a set of input vectors is applied to the network and in response, a set of output vectors is produced by the network. the error is the difference between desired outputs and actual outputs. the network is trained in the following manner. a set of input vectors is presented to the network, each vector being propagated forward through the network to produce an output vector. a set of error vectors is then presented to the network and propagated backwards. each tensor weight element includes a selective change means which accumulates particular information about the error. after all the input vectors are presented, an update phase is initiated.
during the update phase, in accordance with the results of the derived algorithm, the selective change means selects the other weight value if selecting the other weight value will decrease the total error. only one such change is made per set. after the update phase, if a selected value was changed, the entire process is repeated. when no values are switched, the network has adapted as well as it can, and the training is completed. dated 1990-04-17" 4926064,sleep refreshed memory for neural network,"a method and apparatus are disclosed for implementing a neural network having a sleep mode during which capacitively stored synaptic connectivity weights are refreshed. each neuron outputs an analog activity level, represented in a preferred embodiment by the frequency of digital pulses. feed-forward synaptic connection circuits couple the activity level outputs of first level neurons to inputs of second level neurons, and feed-back synaptic connection circuits couple outputs of second level neurons to inputs of first level neurons, the coupling being weighted according to connectivity weights stored on respective storage capacitors in each synaptic connection circuit. the network learns according to a learning algorithm under which the connections in both directions between a particular first level neuron and a particular second level neuron are strengthened to the extent of concurrence of high activity levels in both the first and second level neurons, and weakened to the extent of concurrence of a high activity level in the second level neuron and a low activity level in the first level neuron. the network is put to sleep by disconnecting all environmental inputs and providing a non-specific low activity level signal to each of the first level neurons. 
this causes the network to randomly traverse its state space with low intensity resonant firings, each state being visited with a probability responsive to the initial connectivity weights of the connections which abut the second level neuron representing such state. refresh is accomplished since the learning algorithm remains active during sleep. thus, the sleep refresh mechanism enhances the contrast in the connectivity terrain and strengthens connections that would otherwise wash out due to lack of visitation while the system is awake. a deep sleep mechanism is also provided for preventing runaway strengthening of favored states, and also to encourage weber law compliance.",1990-05-15,"The title of the patent is sleep refreshed memory for neural network and its abstract is a method and apparatus are disclosed for implementing a neural network having a sleep mode during which capacitively stored synaptic connectivity weights are refreshed. each neuron outputs an analog activity level, represented in a preferred embodiment by the frequency of digital pulses. feed-forward synaptic connection circuits couple the activity level outputs of first level neurons to inputs of second level neurons, and feed-back synaptic connection circuits couple outputs of second level neurons to inputs of first level neurons, the coupling being weighted according to connectivity weights stored on respective storage capacitors in each synaptic connection circuit. the network learns according to a learning algorithm under which the connections in both directions between a particular first level neuron and a particular second level neuron are strengthened to the extent of concurrence of high activity levels in both the first and second level neurons, and weakened to the extent of concurrence of a high activity level in the second level neuron and a low activity level in the first level neuron. 
the network is put to sleep by disconnecting all environmental inputs and providing a non-specific low activity level signal to each of the first level neurons. this causes the network to randomly traverse its state space with low intensity resonant firings, each state being visited with a probability responsive to the initial connectivity weights of the connections which abut the second level neuron representing such state. refresh is accomplished since the learning algorithm remains active during sleep. thus, the sleep refresh mechanism enhances the contrast in the connectivity terrain and strengthens connections that would otherwise wash out due to lack of visitation while the system is awake. a deep sleep mechanism is also provided for preventing runaway strengthening of favored states, and also to encourage weber law compliance. dated 1990-05-15" 4926180,analog to digital conversion using correlated quantization and collective optimization,"a 1-bit nonstandard a/d converter for converting a block u of n samples of a continuous time analog signal u(t) into n corresponding 1-bit binary values x, such that a distortion measure of the form d(u,x)=(au-bx).sup.t (au-bx) is minimized, is implemented with an n-input parallel sample-and-hold circuit and a neural network having n nonlinear amplifiers, where u and x are n-dimensional vectors, and a and b are n.times.n matrices. minimization of the above distortion measure is equivalent to minimizing the quantity equ 1/2x.sup.t b.sup.t bx-u.sup.t a.sup.t bx, which is achieved to at least a good approximation by the n-amplifier neural network. accordingly, the conductances of the feedback connections among the amplifiers are defined by respective off-diagonal elements of the matrix -b.sup.t b. additionally, each amplifier of the neural network is connected to receive the analog signal samples through respective conductances defined by the matrix b.sup.t. 
furthermore, each amplifier receives a respective constant signal defined by the diagonal elements of the matrix -b.sup.t b. the stabilized outputs of the n amplifiers are the binary values of the digital signal x. a multiple-bit nonstandard a/d converter based on the foregoing 1-bit a/d converter is also disclosed.",1990-05-15,"The title of the patent is analog to digital conversion using correlated quantization and collective optimization and its abstract is a 1-bit nonstandard a/d converter for converting a block u of n samples of a continuous time analog signal u(t) into n corresponding 1-bit binary values x, such that a distortion measure of the form d(u,x)=(au-bx).sup.t (au-bx) is minimized, is implemented with an n-input parallel sample-and-hold circuit and a neural network having n nonlinear amplifiers, where u and x are n-dimensional vectors, and a and b are n.times.n matrices. minimization of the above distortion measure is equivalent to minimizing the quantity equ 1/2x.sup.t b.sup.t bx-u.sup.t a.sup.t bx, which is achieved to at least a good approximation by the n-amplifier neural network. accordingly, the conductances of the feedback connections among the amplifiers are defined by respective off-diagonal elements of the matrix -b.sup.t b. additionally, each amplifier of the neural network is connected to receive the analog signal samples through respective conductances defined by the matrix b.sup.t. furthermore, each amplifier receives a respective constant signal defined by the diagonal elements of the matrix -b.sup.t b. the stabilized outputs of the n amplifiers are the binary values of the digital signal x. a multiple-bit nonstandard a/d converter based on the foregoing 1-bit a/d converter is also disclosed. dated 1990-05-15" 4931674,programmable analog voltage multiplier circuit means,"an improved programmable analog voltage multiplier circuit means (pavmcm) including various embodiments thereof that are operable in linear/nonlinear fashion. 
the pavmcm is generally made up of multiplier circuit means, at least one switch means and at least one capacitor means. the switch means is connected to a programmable analog voltage (pav) input and the capacitor means. the circuit means is composed of a high impedance analog voltage (hiav) programming input, an analog voltage input and current source output means. the capacitor means is connected to the switch means and the hiav programming input. the capacitor means receives and dynamically stores a pav input when the switch is closed and then applies the dynamically stored pav input to the hiav programming input of the circuit means when the switch is opened. the product of the pav input and the analog voltage input for a circuit means provides the multiplied current output of the output means thereof. because of the high impedance of a fet gate means, it may be used where its gate means is the programming input of the pavmcm means. pavmcm means can be formed using fet multiplier and differential amplifier multiplier circuit means. the pavmcm can be arranged to form embodiments of analog vector-vector and analog vector-matrix multiplier circuit means. one of the advantages of the pavmcm when configured as a vector-matrix multiplier circuit means is that it is useful in an artificial neural network as well as for pattern recognition.",1990-06-05,"The title of the patent is programmable analog voltage multiplier circuit means and its abstract is an improved programmable analog voltage multiplier circuit means (pavmcm) including various embodiments thereof that are operable in linear/nonlinear fashion. the pavmcm is generally made up of multiplier circuit means, at least one switch means and at least one capacitor means. the switch means is connected to a programmable analog voltage (pav) input and the capacitor means. the circuit means is composed of a high impedance analog voltage (hiav) programming input, an analog voltage input and current source output means. 
the capacitor means is connected to the switch means and the hiav programming input. the capacitor means receives and dynamically stores a pav input when the switch is closed and then applies the dynamically stored pav input to the hiav programming input of the circuit means when the switch is opened. the product of the pav input and the analog voltage input for a circuit means provides the multiplied current output of the output means thereof. because of the high impedance of a fet gate means, it may be used where its gate means is the programming input of the pavmcm means. pavmcm means can be formed using fet multiplier and differential amplifier multiplier circuit means. the pavmcm can be arranged to form embodiments of analog vector-vector and analog vector-matrix multiplier circuit means. one of the advantages of the pavmcm when configured as a vector-matrix multiplier circuit means is that it is useful in an artificial neural network as well as for pattern recognition. dated 1990-06-05" 4931763,memory switches based on metal oxide thin films,""" mno.sub.2-x thin films (12) exhibit irreversible memory switching (28) with an """"off/on"""" resistance ratio of at least about 10.sup.3 and the tailorability of """"on"""" state (20) resistance. such films are potentially extremely useful as a """"connection"""" element in a variety of microelectronic circuits and arrays (24). such films provide a pre-tailored, finite, non-volatile resistive element at a desired place in an electric circuit, which can be electrically turned off (22) or """"disconnected"""" as desired, by application of an electrical pulse. microswitch structures (10) constitute the thin film element, contacted by a pair of separate electrodes (16a, 16b) and have a finite, pre-selected on resistance which is ideally suited, for example, as a programmable binary synaptic connection for electronic implementation of neural network architectures. 
the mno.sub.2-x microswitch is non-volatile, patternable, insensitive to ultraviolet light, and adherent to a variety of insulating substrates (14), such as glass and silicon dioxide-coated silicon substrates. """,1990-06-05,"The title of the patent is memory switches based on metal oxide thin films and its abstract is "" mno.sub.2-x thin films (12) exhibit irreversible memory switching (28) with an """"off/on"""" resistance ratio of at least about 10.sup.3 and the tailorability of """"on"""" state (20) resistance. such films are potentially extremely useful as a """"connection"""" element in a variety of microelectronic circuits and arrays (24). such films provide a pre-tailored, finite, non-volatile resistive element at a desired place in an electric circuit, which can be electrically turned off (22) or """"disconnected"""" as desired, by application of an electrical pulse. microswitch structures (10) constitute the thin film element, contacted by a pair of separate electrodes (16a, 16b) and have a finite, pre-selected on resistance which is ideally suited, for example, as a programmable binary synaptic connection for electronic implementation of neural network architectures. the mno.sub.2-x microswitch is non-volatile, patternable, insensitive to ultraviolet light, and adherent to a variety of insulating substrates (14), such as glass and silicon dioxide-coated silicon substrates. "" dated 1990-06-05" 4937872,neural computation by time concentration,"apparatus that solves the problem of pattern recognition in a temporal signal that is subject to distortions and time warp. the arrangement embodying the invention comprises a neural network, an input interconnection network, and a plurality of signal modification circuits. 
a plurality of input leads delivers a preselected characteristic stimulus to associated signal modification units, and in response to an applied stimulus, each signal modification unit develops a plurality of output signals that begins at the time of stimulus application, rises to a peak, and decays thereafter. the mean time delay of each output (time to reach the peak) is different for each of the modification unit output signals. the outputs of the signal modification units are applied to the input interconnection unit wherein connections are made in accordance with the sequences that are to be recognized.",1990-06-26,"The title of the patent is neural computation by time concentration and its abstract is apparatus that solves the problem of pattern recognition in a temporal signal that is subject to distortions and time warp. the arrangement embodying the invention comprises a neural network, an input interconnection network, and a plurality of signal modification circuits. a plurality of input leads delivers a preselected characteristic stimulus to associated signal modification units, and in response to an applied stimulus, each signal modification unit develops a plurality of output signals that begins at the time of stimulus application, rises to a peak, and decays thereafter. the mean time delay of each output (time to reach the peak) is different for each of the modification unit output signals. the outputs of the signal modification units are applied to the input interconnection unit wherein connections are made in accordance with the sequences that are to be recognized. dated 1990-06-26" 4941122,neural network image processing system,"a neural-simulating system for an image processing system includes a plurality of networks arranged in a plurality of layers, the output signals of ones of the layers provide input signals to the others of the layers. 
each of the plurality of layers includes a plurality of neurons operating in parallel on the input signals to the layers. the plurality of neurons within a layer are arranged in groups. each of the neurons within a group operates in parallel on the input signals. each neuron within a group of neurons operates to extract a specific feature of an area of the image being processed. each of the neurons derives output signals from the input signals representing the relative weight of the input signal applied thereto based upon a continuously differential transfer function for each function.",1990-07-10,"The title of the patent is neural network image processing system and its abstract is a neural-simulating system for an image processing system includes a plurality of networks arranged in a plurality of layers, the output signals of ones of the layers provide input signals to the others of the layers. each of the plurality of layers includes a plurality of neurons operating in parallel on the input signals to the layers. the plurality of neurons within a layer are arranged in groups. each of the neurons within a group operates in parallel on the input signals. each neuron within a group of neurons operates to extract a specific feature of an area of the image being processed. each of the neurons derives output signals from the input signals representing the relative weight of the input signal applied thereto based upon a continuously differential transfer function for each function. dated 1990-07-10" 4943556,superconducting neural network computer and sensor array,""" a combination of optical interconnect technology with superconducting metal to form a superconducting neural network array. superconducting material in a matrix has the superconducting current decreased in one filament of the matrix by interaction of the cooper pairs with radiation controlled by a spatial light modulator. 
this decrease in current results in a switch of current, in a relative sense, to another filament in the matrix. this """"switching"""" mechanism can be used in a digital or analog fashion in a superconducting computer application. """,1990-07-24,"The title of the patent is superconducting neural network computer and sensor array and its abstract is "" a combination of optical interconnect technology with superconducting metal to form a superconducting neural network array. superconducting material in a matrix has the superconducting current decreased in one filament of the matrix by interaction of the cooper pairs with radiation controlled by a spatial light modulator. this decrease in current results in a switch of current, in a relative sense, to another filament in the matrix. this """"switching"""" mechanism can be used in a digital or analog fashion in a superconducting computer application. "" dated 1990-07-24" 4945494,neural network and system,"neural network systems (100) with learning and recall are applied to clustered multiple-featured data (122, 124, 126) and analog data.",1990-07-31,"The title of the patent is neural network and system and its abstract is neural network systems (100) with learning and recall are applied to clustered multiple-featured data (122, 124, 126) and analog data. dated 1990-07-31" 4947482,state analog neural network and method of implementing same,"a neural network is implemented by discrete-time, continuous voltage state analog device in which neuron, synapse and synaptic strength signals are generated in highly parallel analog circuits in successive states from stored values of the interdependent signals calculated in a previous state. the neuron and synapse signals are refined in a relaxation loop while the synaptic strength signals are held constant. in learning modes, the synaptic strength signals are modified in successive states from stable values of the analog neuron signals. 
the analog signals are stored for as long as required in master/slave sample and hold circuits as digitized signals which are periodically refreshed to maintain the stored voltage within a voltage window bracketing the original analog signal.",1990-08-07,"The title of the patent is state analog neural network and method of implementing same and its abstract is a neural network is implemented by discrete-time, continuous voltage state analog device in which neuron, synapse and synaptic strength signals are generated in highly parallel analog circuits in successive states from stored values of the interdependent signals calculated in a previous state. the neuron and synapse signals are refined in a relaxation loop while the synaptic strength signals are held constant. in learning modes, the synaptic strength signals are modified in successive states from stable values of the analog neuron signals. the analog signals are stored for as long as required in master/slave sample and hold circuits as digitized signals which are periodically refreshed to maintain the stored voltage within a voltage window bracketing the original analog signal. dated 1990-08-07" 4950917,semiconductor cell for neural network employing a four-quadrant multiplier,"a synapse cell for use in providing a weighted connection strength is disclosed. the cell employs a four-quadrant multiplier and a pair of floating gate devices. various charge levels are programmed onto the floating gate devices, establishing weight and reference levels. these levels affect the current flowing through the multiplier. the output of the cell thus becomes a multiple of the input and the programmed charge difference.",1990-08-21,"The title of the patent is semiconductor cell for neural network employing a four-quadrant multiplier and its abstract is a synapse cell for use in providing a weighted connection strength is disclosed. the cell employs a four-quadrant multiplier and a pair of floating gate devices. 
various charge levels are programmed onto the floating gate devices, establishing weight and reference levels. these levels affect the current flowing through the multiplier. the output of the cell thus becomes a multiple of the input and the programmed charge difference. dated 1990-08-21" 4951239,artificial neural network implementation,"an artificial neural network having analog circuits for simultaneous parallel processing using individually variable synaptic input weights. the processing is implemented with a circuit adapted to vary the weight, which may be stored in a metal oxide field effect transistor, for teaching the network by addressing from outside the network or for hebbian or delta rule learning by the network itself.",1990-08-21,"The title of the patent is artificial neural network implementation and its abstract is an artificial neural network having analog circuits for simultaneous parallel processing using individually variable synaptic input weights. the processing is implemented with a circuit adapted to vary the weight, which may be stored in a metal oxide field effect transistor, for teaching the network by addressing from outside the network or for hebbian or delta rule learning by the network itself. dated 1990-08-21" 4954963,neural network and system,"neural network systems (100) with learning and recall are applied to clustered multiple-featured data (122,124,126) and analog data.",1990-09-04,"The title of the patent is neural network and system and its abstract is neural network systems (100) with learning and recall are applied to clustered multiple-featured data (122,124,126) and analog data. dated 1990-09-04" 4956564,adaptive synapse cell providing both excitatory and inhibitory connections in an associative network,"the present invention covers a synapse cell for providing a weighted connection between an input voltage line and an output summing line having an associated capacitance. 
connection between input and output lines in the associative network is made using one or more floating-gate transistors which provide both excitatory as well as inhibitory connections. as configured, each transistor's control gate is coupled to an input line and its drain is coupled to an output summing line. the floating-gate of the transistor is used for storing a charge which corresponds to the strength or weight of the neural connection. when a binary voltage pulse having a certain duration is applied to the control gate of the floating-gate transistor, a current is generated which acts to discharge the capacitance associated with the output summing line. the current, and therefore the resulting discharge, is directly proportional to the charge stored on the floating-gate member and the duration of the input pulse.",1990-09-11,"The title of the patent is adaptive synapse cell providing both excitatory and inhibitory connections in an associative network and its abstract is the present invention covers a synapse cell for providing a weighted connection between an input voltage line and an output summing line having an associated capacitance. connection between input and output lines in the associative network is made using one or more floating-gate transistors which provide both excitatory as well as inhibitory connections. as configured, each transistor's control gate is coupled to an input line and its drain is coupled to an output summing line. the floating-gate of the transistor is used for storing a charge which corresponds to the strength or weight of the neural connection. when a binary voltage pulse having a certain duration is applied to the control gate of the floating-gate transistor, a current is generated which acts to discharge the capacitance associated with the output summing line. the current, and therefore the resulting discharge, is directly proportional to the charge stored on the floating-gate member and the duration of the input pulse. 
dated 1990-09-11" 4958939,centering scheme for pattern recognition,the invention relates to a neural network centering scheme for translation-invariant pattern recognition. the scheme involves the centering of a pattern about its centroid to prepare it for subsequent subjugation to an associative match. the scheme is utilized in a camera assembly of the type used for image acquisition. movement of the camera assembly is controlled in accordance with the scheme to effect the centering of a pattern in the field of view window of the camera assembly.,1990-09-25,The title of the patent is centering scheme for pattern recognition and its abstract is the invention relates to a neural network centering scheme for translation-invariant pattern recognition. the scheme involves the centering of a pattern about its centroid to prepare it for subsequent subjugation to an associative match. the scheme is utilized in a camera assembly of the type used for image acquisition. movement of the camera assembly is controlled in accordance with the scheme to effect the centering of a pattern in the field of view window of the camera assembly. dated 1990-09-25 4959532,optical neural network and method,"an optical neural network stores optical transmission weightings as angularly and spatially distributed gratings within a phase conjugate mirror (pcm), the pcm using a stimulated process to generate a phase conjugated return beam without separate external pump mechanisms. an error signal is generated in response to differences between the actual and a desired output optical pattern, and is used to adjust the pcm gratings toward the desired output. one or more intermediate image planes may be employed along with the input and output planes. the input and intermediate planes, as well as the error signal, are preferably displayed on the surface of a spatial light modulator. 
the output optical signal is transduced into an electrical format for training the neural network; with the error signal also generated electrically. a significant increase in neuron and interconnection capacity is realized, without cross-talk between neurons, compared to prior optical neural networks.",1990-09-25,"The title of the patent is optical neural network and method and its abstract is an optical neural network stores optical transmission weightings as angularly and spatially distributed gratings within a phase conjugate mirror (pcm), the pcm using a stimulated process to generate a phase conjugated return beam without separate external pump mechanisms. an error signal is generated in response to differences between the actual and a desired output optical pattern, and is used to adjust the pcm gratings toward the desired output. one or more intermediate image planes may be employed along with the input and output planes. the input and intermediate planes, as well as the error signal, are preferably displayed on the surface of a spatial light modulator. the output optical signal is transduced into an electrical format for training the neural network; with the error signal also generated electrically. a significant increase in neuron and interconnection capacity is realized, without cross-talk between neurons, compared to prior optical neural networks. dated 1990-09-25" 4961002,synapse cell employing dual gate transistor structure,"a synapse cell for providing a weighted connection between an input voltage line and an output summing line having an associated capacitance. connection between input and output lines in the associative network is made using a dual-gate transistor. the transistor has a floating gate member for storing electrical charge, a pair of control gates coupled to a pair of input lines, and a drain coupled to an output summing line. 
the floating gate of the transistor is used for storing a charge which corresponds to the strength or weight of the neural connection. when a binary voltage pulse having a certain duration is applied to either one or both of the control gates of the transistor, a current is generated. this current acts to discharge the capacitance associated with the output summing line. furthermore, by employing a dual-gate structure, programming disturbance of neighboring devices in the network is practically eliminated.",1990-10-02,"The title of the patent is synapse cell employing dual gate transistor structure and its abstract is a synapse cell for providing a weighted connection between an input voltage line and an output summing line having an associated capacitance. connection between input and output lines in the associative network is made using a dual-gate transistor. the transistor has a floating gate member for storing electrical charge, a pair of control gates coupled to a pair of input lines, and a drain coupled to an output summing line. the floating gate of the transistor is used for storing a charge which corresponds to the strength or weight of the neural connection. when a binary voltage pulse having a certain duration is applied to either one or both of the control gates of the transistor, a current is generated. this current acts to discharge the capacitance associated with the output summing line. furthermore, by employing a dual-gate structure, programming disturbance of neighboring devices in the network is practically eliminated. dated 1990-10-02" 4961005,programmable neural circuit implementable in cmos very large scale integration,"the present invention is a neural network circuit including a plurality of neuron circuits. each neuron circuit has an input node for receiving an input signal, an output node for generating an output signal and a self-feedback control node for receiving a self-feedback signal. 
an interconnection device having an electrically controllable conductance is connected between the input nodes of each pair of neuron circuits. the neural network circuit is consequently programmable via the voltages applied to the self-feedback control nodes and the interconnection devices. such programmability permits the neural network circuit to store certain sets of desirable steady states. in the preferred embodiment the individual neuron circuits and the interconnection devices are constructed in very large scale integration cmos. thus this neural network circuit can be easily constructed with large numbers of neurons.",1990-10-02,"The title of the patent is programmable neural circuit implementable in cmos very large scale integration and its abstract is the present invention is a neural network circuit including a plurality of neuron circuits. each neuron circuit has an input node for receiving an input signal, an output node for generating an output signal and a self-feedback control node for receiving a self-feedback signal. an interconnection device having an electrically controllable conductance is connected between the input nodes of each pair of neuron circuits. the neural network circuit is consequently programmable via the voltages applied to the self-feedback control nodes and the interconnection devices. such programmability permits the neural network circuit to store certain sets of desirable steady states. in the preferred embodiment the individual neuron circuits and the interconnection devices are constructed in very large scale integration cmos. thus this neural network circuit can be easily constructed with large numbers of neurons. dated 1990-10-02" 4962342,dynamic synapse for neural network,an electronic circuit is disclosed having a sample/hold amplifier connected to an adaptive amplifier. a plurality of such electronic circuits may be configured in an array of rows and columns. 
an input voltage vector may be compared with an analog voltage vector stored in a row or column of the array and the stored vector closest to the applied input vector may be identified and further processed. the stored analog value may be read out of the synapse by applying a voltage to a read line. an array of the readable synapses may be provided and used in conjunction with a dummy synapse to compensate for an error offset introduced by the operating characteristics of the synapses.,1990-10-09,The title of the patent is dynamic synapse for neural network and its abstract is an electronic circuit is disclosed having a sample/hold amplifier connected to an adaptive amplifier. a plurality of such electronic circuits may be configured in an array of rows and columns. an input voltage vector may be compared with an analog voltage vector stored in a row or column of the array and the stored vector closest to the applied input vector may be identified and further processed. the stored analog value may be read out of the synapse by applying a voltage to a read line. an array of the readable synapses may be provided and used in conjunction with a dummy synapse to compensate for an error offset introduced by the operating characteristics of the synapses. dated 1990-10-09 4963725,adaptive optical neural network,"an adaptive optical network is provided for the implementation of learning algorithms. the network comprises a double mach-zehnder interferometer in conjunction with a photorefractive crystal that functions as a holographic medium. light from selectable sources on opposite sides of a beamsplitter is passed through the interferometer, at least one arm of which includes a spatial light modulator for imprinting a data pattern on the light. the light is directed into the holographic medium to develop a refractive index grating corresponding to the data pattern. light from the hologram is sensed by a photodetector that provides a signal to a threshold device. 
the output of the threshold device is compared with a reference signal to produce an error signal that can be used to select the source of light directed through the network. the interconnections of the optical devices function to compute the inner product between the elements of the data pattern and their weight factors. selecting the light source and changing the data pattern provide additive and subtractive weight change capability for implementing various learning algorithms.",1990-10-16,"The title of the patent is adaptive optical neural network and its abstract is an adaptive optical network is provided for the implementation of learning algorithms. the network comprises a double mach-zehnder interferometer in conjunction with a photorefractive crystal that functions as a holographic medium. light from selectable sources on opposite sides of a beamsplitter is passed through the interferometer, at least one arm of which includes a spatial light modulator for imprinting a data pattern on the light. the light is directed into the holographic medium to develop a refractive index grating corresponding to the data pattern. light from the hologram is sensed by a photodetector that provides a signal to a threshold device. the output of the threshold device is compared with a reference signal to produce an error signal that can be used to select the source of light directed through the network. the interconnections of the optical devices function to compute the inner product between the elements of the data pattern and their weight factors. selecting the light source and changing the data pattern provide additive and subtractive weight change capability for implementing various learning algorithms. dated 1990-10-16" 4965443,focus detection apparatus using neural network means,"an optical image transmitted through a photographing lens is incident on a light-receiving unit of a two-dimensional matrix. 
an output from the light-receiving unit is input to a first arithmetic logic unit, and the first arithmetic logic unit calculates actual object brightness values in consideration of an aperture value of an aperture. an output from the first arithmetic logic unit is supplied to a multiplexer and a neural network. the neural network determines a main part of the object from a pattern of brightness values of the respective photoelectric transducer elements and outputs a position signal of the main part. the multiplexer selectively passes the brightness value of the photoelectric transducer element corresponding to the main part of the object from the outputs generated by the first arithmetic logic unit. an output from the multiplexer is supplied to a second arithmetic logic unit. the second arithmetic logic unit performs a focus detection calculation based on only the brightness of the main part. the photographing lens is moved along the optical axis, thereby performing a focusing operation.",1990-10-23,"The title of the patent is focus detection apparatus using neural network means and its abstract is an optical image transmitted through a photographing lens is incident on a light-receiving unit of a two-dimensional matrix. an output from the light-receiving unit is input to a first arithmetic logic unit, and the first arithmetic logic unit calculates actual object brightness values in consideration of an aperture value of an aperture. an output from the first arithmetic logic unit is supplied to a multiplexer and a neural network. the neural network determines a main part of the object from a pattern of brightness values of the respective photoelectric transducer elements and outputs a position signal of the main part. the multiplexer selectively passes the brightness value of the photoelectric transducer element corresponding to the main part of the object from the outputs generated by the first arithmetic logic unit. 
an output from the multiplexer is supplied to a second arithmetic logic unit. the second arithmetic logic unit performs a focus detection calculation based on only the brightness of the main part. the photographing lens is moved along the optical axis, thereby performing a focusing operation. dated 1990-10-23" 4965725,neural network based automated cytological specimen classification system and method,an automated screening system and method for cytological specimen classification in which a neural network is utilized in performance of the classification function. also included is an automated microscope and associated image processing circuitry.,1990-10-23,The title of the patent is neural network based automated cytological specimen classification system and method and its abstract is an automated screening system and method for cytological specimen classification in which a neural network is utilized in performance of the classification function. also included is an automated microscope and associated image processing circuitry. dated 1990-10-23 4970819,firearm safety system and method,actuation of the firing mechanism of a firearm is prevented until grip pattern sensing means on the handgrip of the firearm supply to a microprocessor signals corresponding to a grip pattern stored in a programmed simulated neural network memory. all of these components are contained within the firearm. 
programming of the neural network memory is accomplished by using a host computer with a simulated neural network to train that network to recognize a particular grip pattern using grip pattern signals generated by the grip pattern sensing means as the sensing means is repeatedly gripped for the person for whom the firearm is to be programmed.,1990-11-20,The title of the patent is firearm safety system and method and its abstract is actuation of the firing mechanism of a firearm is prevented until grip pattern sensing means on the handgrip of the firearm supply to a microprocessor signals corresponding to a grip pattern stored in a programmed simulated neural network memory. all of these components are contained within the firearm. programming of the neural network memory is accomplished by using a host computer with a simulated neural network to train that network to recognize a particular grip pattern using grip pattern signals generated by the grip pattern sensing means as the sensing means is repeatedly gripped for the person for whom the firearm is to be programmed. dated 1990-11-20 4972187,numeric encoding method and apparatus for neural networks,"a numeric encoding method and apparatus for neural networks, encodes numeric input data into a form applicable to an input of a neural network by partitioning a binary input into n-bit input segments, each of which is replaced with a code having m adjacent logic ones and 2.sup.n -1 logic zeros, the bit position of the least significant of the m logic ones corresponding to the binary value of the input segment it replaces. the codes are concatenated to form an encoded input. a decoding method decodes an output from the neural network into a binary form by partitioning the output into output segments having 2.sup.n +m-1 bits each, each of which is replaced with an n-bit binary segment being a bracketed weighted average of the significances of logic ones present in the output segment. 
the binary segments are concatenated to form a decoded output.",1990-11-20,"The title of the patent is numeric encoding method and apparatus for neural networks and its abstract is a numeric encoding method and apparatus for neural networks, encodes numeric input data into a form applicable to an input of a neural network by partitioning a binary input into n-bit input segments, each of which is replaced with a code having m adjacent logic ones and 2.sup.n -1 logic zeros, the bit position of the least significant of the m logic ones corresponding to the binary value of the input segment it replaces. the codes are concatenated to form an encoded input. a decoding method decodes an output from the neural network into a binary form by partitioning the output into output segments having 2.sup.n +m-1 bits each, each of which is replaced with an n-bit binary segment being a bracketed weighted average of the significances of logic ones present in the output segment. the binary segments are concatenated to form a decoded output. dated 1990-11-20" 4972363,neural network using stochastic processing,"an apparatus and method for implementing a neural network having n nodes coupled to one another by interconnections having interconnect weights t.sub.ij that quantify the influence of node j on node i. the apparatus comprises a node circuit for each node and a data processor. the data processor receives one or more library members, and transmits the interconnect weights to the node circuits. the data processor also stores a current state vector, and receives input data representing a library member to be retrieved. the data processor then performs an iteration in which the current state vector is sent to the node circuits, and an updated state vector is received from the node circuits, the iteration being commenced by setting the current state vector equal to the input data. 
each node circuit comprises one or more stochastic processors for multiplying the state vector elements by the corresponding interconnect weights, to determine the updated state vector. each stochastic processor preferably includes means for generating a pseudorandom sequence of numbers, and using such sequence to encode the interconnect weights and state vector elements into stochastic input signals that are then multiplied by a stochastic multiplier comprising delay means and an and gate.",1990-11-20,"The title of the patent is neural network using stochastic processing and its abstract is an apparatus and method for implementing a neural network having n nodes coupled to one another by interconnections having interconnect weights t.sub.ij that quantify the influence of node j on node i. the apparatus comprises a node circuit for each node and a data processor. the data processor receives one or more library members, and transmits the interconnect weights to the node circuits. the data processor also stores a current state vector, and receives input data representing a library member to be retrieved. the data processor then performs an iteration in which the current state vector is sent to the node circuits, and an updated state vector is received from the node circuits, the iteration being commenced by setting the current state vector equal to the input data. each node circuit comprises one or more stochastic processors for multiplying the state vector elements by the corresponding interconnect weights, to determine the updated state vector. each stochastic processor preferably includes means for generating a pseudorandom sequence of numbers, and using such sequence to encode the interconnect weights and state vector elements into stochastic input signals that are then multiplied by a stochastic multiplier comprising delay means and an and gate. 
dated 1990-11-20" 4972473,data communication method and apparatus using neural-networks,"a data communication apparatus comprises: means for dividing data to be transmitted into a plurality of blocks and extracting the data from each block; a first multi-layered neural network of three or more layers which has weighting coefficients to output the same data as the input data for the data extracted from each block and which can output data from an intermediate layer; the transmission data extracted from each block being inputted to the first neural network and outputted from the intermediate layer; means for encoding the transmission data which is outputted from the intermediate layer of the first neural network and, thereafter, transmitting; means for receiving and decoding the transmitted data; a second multi-layered neural network of three or more layers which has the same weight coefficients as those of the first neural network and can input data from an intermediate layer; the decoded data of each block being inputted to the second neural network and outputted from an output layer; and means for restoring the data on the basis of the output data from the output layer of the second neural network.",1990-11-20,"The title of the patent is data communication method and apparatus using neural-networks and its abstract is a data communication apparatus comprises: means for dividing data to be transmitted into a plurality of blocks and extracting the data from each block; a first multi-layered neural network of three or more layers which has weighting coefficients to output the same data as the input data for the data extracted from each block and which can output data from an intermediate layer; the transmission data extracted from each block being inputted to the first neural network and outputted from the intermediate layer; means for encoding the transmission data which is outputted from the intermediate layer of the first neural network and, thereafter, 
transmitting; means for receiving and decoding the transmitted data; a second multi-layered neural network of three or more layers which has the same weight coefficients as those of the first neural network and can input data from an intermediate layer; the decoded data of each block being inputted to the second neural network and outputted from an output layer; and means for restoring the data on the basis of the output data from the output layer of the second neural network. dated 1990-11-20" 4974169,neural network with memory cycling,"an information processing system and method to calculate output values for a group of neurons. the method comprises transmitting input values for the neurons to a memory unit of a processing section, and then calculating a multitude of series of neuron output values over a multitude of cycles. during a first period of each cycle, a first series of neuron output values are calculated from neuron input values stored in a first memory area of the memory unit; and during a second period of each cycle, a second series of neuron output values are calculated from neuron input values stored in a second memory area of the memory unit. the transmitting step includes the steps of storing in the first memory area of the memory unit, neuron input values transmitted to the memory unit during the period immediately preceding the first period of each cycle; and storing in the second memory area of the memory unit neuron input values transmitted to the memory unit, during the first period of each cycle.",1990-11-27,"The title of the patent is neural network with memory cycling and its abstract is an information processing system and method to calculate output values for a group of neurons. the method comprises transmitting input values for the neurons to a memory unit of a processing section, and then calculating a multitude of series of neuron output values over a multitude of cycles. 
during a first period of each cycle, a first series of neuron output values are calculated from neuron input values stored in a first memory area of the memory unit; and during a second period of each cycle, a second series of neuron output values are calculated from neuron input values stored in a second memory area of the memory unit. the transmitting step includes the steps of storing in the first memory area of the memory unit, neuron input values transmitted to the memory unit during the period immediately preceding the first period of each cycle; and storing in the second memory area of the memory unit neuron input values transmitted to the memory unit, during the first period of each cycle. dated 1990-11-27" 4975961,multi-layer neural network to which dynamic programming techniques are applicable,"in a neural network, input neuron units of an input layer are grouped into first through j-th input layer frames, where j represents a predetermined natural number. intermediate neuron units of an intermediate layer are grouped into first through j-th intermediate layer frames. an output layer comprises an output neuron unit. each intermediate neuron unit of a j-th intermediate layer frame is connected to the input neuron units of j'-th input layer frames, where j is variable between 1 and j and j' represents at least two consecutive integers, one of which is equal to j and at least one other of which is less than j. each output neuron unit is connected to the intermediate neuron units of the intermediate layer. for recognition of an input pattern represented by a time sequence of feature vectors, each consisting of k vector components, where k represents a predetermined positive integer, each input layer frame consists of k input neuron units. each intermediate layer frame consists of m intermediate neuron units, where m represents a positive integer which is less than k. 
the vector components of each feature vector are supplied to the respective input neuron units of one of the input layer frames that is preferably selected from three consecutively numbered input layer frames. the neural network is readily trained to make a predetermined one of the output neuron units produce an output signal indicative of the input pattern and can be implemented by a microprocessor.",1990-12-04,"The title of the patent is multi-layer neural network to which dynamic programming techniques are applicable and its abstract is in a neural network, input neuron units of an input layer are grouped into first through j-th input layer frames, where j represents a predetermined natural number. intermediate neuron units of an intermediate layer are grouped into first through j-th intermediate layer frames. an output layer comprises an output neuron unit. each intermediate neuron unit of a j-th intermediate layer frame is connected to the input neuron units of j'-th input layer frames, where j is variable between 1 and j and j' represents at least two consecutive integers, one of which is equal to j and at least one other of which is less than j. each output neuron unit is connected to the intermediate neuron units of the intermediate layer. for recognition of an input pattern represented by a time sequence of feature vectors, each consisting of k vector components, where k represents a predetermined positive integer, each input layer frame consists of k input neuron units. each intermediate layer frame consists of m intermediate neuron units, where m represents a positive integer which is less than k. the vector components of each feature vector are supplied to the respective input neuron units of one of the input layer frames that is preferably selected from three consecutively numbered input layer frames. 
the neural network is readily trained to make a predetermined one of the output neuron units produce an output signal indicative of the input pattern and can be implemented by a microprocessor. dated 1990-12-04" 4978990,exposure control apparatus for camera,"an optical image of an object is incident on a light-receiving unit of a two-dimensional matrix through a photographing lens. an output from the light-receiving unit is input to a first arithmetic logic unit to calculate an actual object brightness value in consideration of an aperture value of an aperture. an output from the first arithmetic logic unit is input to a multiplexer and a neural network. the neural network determines a main part of the object from brightness value pattern as a set of brightness values of the photoelectric transducer elements and outputs a position signal representing the main part. a multiplexer selectively passes only brightness values of the photoelectric transducer elements corresponding to the main part of the object from the outputs from the first arithmetic logic unit. an output from the multiplexer is supplied to a second arithmetic logic unit, and the second arithmetic logic unit calculates an apex calculation on the basis of a speed value, an aperture value, a time value, and a mode signal representing a shutter or aperture priority operation, thereby determining a shutter speed or an f-number.",1990-12-18,"The title of the patent is exposure control apparatus for camera and its abstract is an optical image of an object is incident on a light-receiving unit of a two-dimensional matrix through a photographing lens. an output from the light-receiving unit is input to a first arithmetic logic unit to calculate an actual object brightness value in consideration of an aperture value of an aperture. an output from the first arithmetic logic unit is input to a multiplexer and a neural network. 
the neural network determines a main part of the object from brightness value pattern as a set of brightness values of the photoelectric transducer elements and outputs a position signal representing the main part. a multiplexer selectively passes only brightness values of the photoelectric transducer elements corresponding to the main part of the object from the outputs from the first arithmetic logic unit. an output from the multiplexer is supplied to a second arithmetic logic unit, and the second arithmetic logic unit calculates an apex calculation on the basis of a speed value, an aperture value, a time value, and a mode signal representing a shutter or aperture priority operation, thereby determining a shutter speed or an f-number. dated 1990-12-18" 4979126,neural network with non-linear transformations,"a neural network system includes means for accomplishing artificial intelligence functions in three formerly divergent implementations. these functions include: supervised learning, unsupervised learning, and associative memory storage and retrieval. the subject neural network is created by addition of a non-linear layer to a more standard neural network architecture. the non-linear layer functions to expand a functional input space to a signal set including orthonormal elements, when the input signal is visualized as a vector representation. an input signal is selectively passed to a non-linear transform circuit, which outputs a transform signal therefrom. both the input signal and the transform signal are placed in communication with a first layer of a plurality of processing nodes. an improved hardware implementation of the subject system includes a highly parallel, hybrid analog/digital circuitry. 
included therein is a digitally addressed, random access memory means for storage and retrieval of an analog signal.",1990-12-18,"The title of the patent is neural network with non-linear transformations and its abstract is a neural network system includes means for accomplishing artificial intelligence functions in three formerly divergent implementations. these functions include: supervised learning, unsupervised learning, and associative memory storage and retrieval. the subject neural network is created by addition of a non-linear layer to a more standard neural network architecture. the non-linear layer functions to expand a functional input space to a signal set including orthonormal elements, when the input signal is visualized as a vector representation. an input signal is selectively passed to a non-linear transform circuit, which outputs a transform signal therefrom. both the input signal and the transform signal are placed in communication with a first layer of a plurality of processing nodes. an improved hardware implementation of the subject system includes a highly parallel, hybrid analog/digital circuitry. included therein is a digitally addressed, random access memory means for storage and retrieval of an analog signal. dated 1990-12-18" 4988891,semiconductor neural network including photosensitive coupling elements,"a semiconductor neural network constructed in accordance with models of vital nerve cells has photosensitive elements as coupling elements providing degrees of coupling between neurons which are modeled vital nerve cells. the conductance values of the photosensitive elements can be set by light. 
due to such structure, not only the degrees of coupling of all the coupling elements can be simultaneously programmed but signal lines for programming the degrees of coupling can be eliminated in the network, whereby a semiconductor neural network having a high degree of integration can be implemented without additional complicating fabrication steps.",1991-01-29,"The title of the patent is semiconductor neural network including photosensitive coupling elements and its abstract is a semiconductor neural network constructed in accordance with models of vital nerve cells has photosensitive elements as coupling elements providing degrees of coupling between neurons which are modeled vital nerve cells. the conductance values of the photosensitive elements can be set by light. due to such structure, not only the degrees of coupling of all the coupling elements can be simultaneously programmed but signal lines for programming the degrees of coupling can be eliminated in the network, whereby a semiconductor neural network having a high degree of integration can be implemented without additional complicating fabrication steps. dated 1991-01-29" 4990838,movement trajectory generating method of a dynamical system,"a movement trajectory generating system of a dynamical system uses neural network units (1, 2, 3) including cascade connection of a first layer (11, 21, 31), a second layer (12, 22, 32), a third layer (13, 23, 33) and a fourth layer (14, 24, 34), to learn a vector field of differential equations indicating forward dynamics of a controlled object (4). conditions concerning trajectories of a final point and a via-point of movement of the controlled object and locations of obstacles are given from a motor center (5). 
while smoothness of movement is ensured by couplings of electric synapses using errors with respect to those conditions as total energy, least dissipation of energy is attained, whereby trajectory formation and control input for realizing the trajectory are obtained simultaneously.",1991-02-05,"The title of the patent is movement trajectory generating method of a dynamical system and its abstract is a movement trajectory generating system of a dynamical system uses neural network units (1, 2, 3) including cascade connection of a first layer (11, 21, 31), a second layer (12, 22, 32), a third layer (13, 23, 33) and a fourth layer (14, 24, 34), to learn a vector field of differential equations indicating forward dynamics of a controlled object (4). conditions concerning trajectories of a final point and a via-point of movement of the controlled object and locations of obstacles are given from a motor center (5). while smoothness of movement is ensured by couplings of electric synapses using errors with respect to those conditions as total energy, least dissipation of energy is attained, whereby trajectory formation and control input for realizing the trajectory are obtained simultaneously. dated 1991-02-05" 4994982,neural network system and circuit for use therein,a neural network system comprises a memory for storing in binary code the synaptic coefficients indicative of the interconnections among the neurons. means are provided for simultaneously supplying all the synaptic coefficients associated with a given neuron. digital multipliers are provided for determining the product of the supplied synaptic coefficients and the relevant neuron states of the neurons connected to said given neuron. the multipliers deliver their results into an adder tree for determining the sum of the products. as a result of the parallel architecture of the system high operating speeds are attainable. 
the modular architecture enables extension of the system.,1991-02-19,The title of the patent is neural network system and circuit for use therein and its abstract is a neural network system comprises a memory for storing in binary code the synaptic coefficients indicative of the interconnections among the neurons. means are provided for simultaneously supplying all the synaptic coefficients associated with a given neuron. digital multipliers are provided for determining the product of the supplied synaptic coefficients and the relevant neuron states of the neurons connected to said given neuron. the multipliers deliver their results into an adder tree for determining the sum of the products. as a result of the parallel architecture of the system high operating speeds are attainable. the modular architecture enables extension of the system. dated 1991-02-19 4995088,super resolution,"data analysis systems are provided, especially target imaging and identification systems, which utilize a cam that associatively stores a plurality of known data sets such as target data sets in a synaptic interconnectivity matrix modeled upon the model of learning of neural networks. in accordance with preferred embodiments the systems are able to identify unknown objects when only a partial data set from the object is available. the system is robust and fast, utilizing parallel processing due to the massive interconnectivity of neural elements so that the image produced exhibits the properties of super-resolution. since the system is modeled after a neural network, it is fault tolerant and highly reliable.",1991-02-19,"The title of the patent is super resolution and its abstract is data analysis systems are provided, especially target imaging and identification systems, which utilize a cam that associatively stores a plurality of known data sets such as target data sets in a synaptic interconnectivity matrix modeled upon the model of learning of neural networks. 
in accordance with preferred embodiments the systems are able to identify unknown objects when only a partial data set from the object is available. the system is robust and fast, utilizing parallel processing due to the massive interconnectivity of neural elements so that the image produced exhibits the properties of super-resolution. since the system is modeled after a neural network, it is fault tolerant and highly reliable. dated 1991-02-19" 4996648,neural network using random binary code,"long and short term memory equations for neural networks are implemented by means of exchange of signals which carry information in the form of both binary and continuously modulated energy emissions. in one embodiment, array of parallel processors exhibits behavior of cooperative-competitive neural networks. parallel bus interconnections and digital and analog processing of analog information contained in the exchanged energy emissions are employed with generally local synchronization of the processors. energy emission and detection is modulated as a function of a random code.",1991-02-26,"The title of the patent is neural network using random binary code and its abstract is long and short term memory equations for neural networks are implemented by means of exchange of signals which carry information in the form of both binary and continuously modulated energy emissions. in one embodiment, array of parallel processors exhibits behavior of cooperative-competitive neural networks. parallel bus interconnections and digital and analog processing of analog information contained in the exchanged energy emissions are employed with generally local synchronization of the processors. energy emission and detection is modulated as a function of a random code. 
dated 1991-02-26" 4999525,exclusive-or cell for pattern matching employing floating gate devices,a semiconductor cell for producing an output current that is related to the match between an input vector pattern and a weighting pattern is described. the cell is particularly useful as a synapse cell within a neural network to perform pattern recognition tasks. the cell includes a pair of input lines for receiving a differential input vector element value and a pair of output lines for providing a difference current to a current summing neural amplifier. a plurality of floating gate devices each having a floating gate member are employed in the synapse cell to store charge in accordance with a predetermined weight pattern. each of the floating gate devices is uniquely coupled to a combination of an output current line and an input voltage line such that the difference current provided to the neural amplifier is related to the match between the input vector and the stored weight.,1991-03-12,The title of the patent is exclusive-or cell for pattern matching employing floating gate devices and its abstract is a semiconductor cell for producing an output current that is related to the match between an input vector pattern and a weighting pattern is described. the cell is particularly useful as a synapse cell within a neural network to perform pattern recognition tasks. the cell includes a pair of input lines for receiving a differential input vector element value and a pair of output lines for providing a difference current to a current summing neural amplifier. a plurality of floating gate devices each having a floating gate member are employed in the synapse cell to store charge in accordance with a predetermined weight pattern. 
each of the floating gate devices is uniquely coupled to a combination of an output current line and an input voltage line such that the difference current provided to the neural amplifier is related to the match between the input vector and the stored weight. dated 1991-03-12 5003490,neural network signal processor,""" a neural network signal processor (nsp) (20) that can accept, as input, unprocessed signals (32), such as those directly from a sensor. consecutive portions of the input waveform are directed simultaneously to input processing units, or """"neurons"""" (22). each portion of the input waveform (32) advances through the input neurons (22) until each neuron receives the entire waveform (32). during a training procedure, the nsp 20 receives a training waveform (30) and connective weights, or """"synapses"""" (28) between the neurons are adjusted until a desired output is produced. the nsp (20) is trained to produce a single response while each portion of the input waveform is received by the input neurons (22). once trained, when an unknown waveform (32) is received by the nsp (20), it will respond with the desired output when the unknown waveform (32) contains some form of the training waveform (30). """,1991-03-26,"The title of the patent is neural network signal processor and its abstract is "" a neural network signal processor (nsp) (20) that can accept, as input, unprocessed signals (32), such as those directly from a sensor. consecutive portions of the input waveform are directed simultaneously to input processing units, or """"neurons"""" (22). each portion of the input waveform (32) advances through the input neurons (22) until each neuron receives the entire waveform (32). during a training procedure, the nsp 20 receives a training waveform (30) and connective weights, or """"synapses"""" (28) between the neurons are adjusted until a desired output is produced. 
the nsp (20) is trained to produce a single response while each portion of the input waveform is received by the input neurons (22). once trained, when an unknown waveform (32) is received by the nsp (20), it will respond with the desired output when the unknown waveform (32) contains some form of the training waveform (30). "" dated 1991-03-26" 5004309,neural processor with holographic optical paths and nonlinear operating means,"an optical apparatus for simulating a highly interconnected neural network is disclosed as including a spatial light modulator (slm), an inputting device, a laser, a detecting device, and a page-oriented holographic component. the inputting device applies input signals to the slm. the holographic component optically interconnects n.sup.2 pixels defined on the spatial light modulator to n.sup.2 pixels defined on a detecting surface of the detecting device. the interconnections are made by n.sup.2 patterns of up to n.sup.2 interconnection weight encoded beams projected by n.sup.2 planar, or essentially two-dimensional, holograms arranged in a spatially localized array within the holographic component. the slm modulates the encoded beams and directs them onto the detecting surface wherein a parameter of the beams is evaluated at each pixel thereof. the evaluated parameter is transformed according to a nonlinear threshold function to provide transformed signals which can be fed back to the slm for further iterations.",1991-04-02,"The title of the patent is neural processor with holographic optical paths and nonlinear operating means and its abstract is an optical apparatus for simulating a highly interconnected neural network is disclosed as including a spatial light modulator (slm), an inputting device, a laser, a detecting device, and a page-oriented holographic component. the inputting device applies input signals to the slm.
the holographic component optically interconnects n.sup.2 pixels defined on the spatial light modulator to n.sup.2 pixels defined on a detecting surface of the detecting device. the interconnections are made by n.sup.2 patterns of up to n.sup.2 interconnection weight encoded beams projected by n.sup.2 planar, or essentially two-dimensional, holograms arranged in a spatially localized array within the holographic component. the slm modulates the encoded beams and directs them onto the detecting surface wherein a parameter of the beams is evaluated at each pixel thereof. the evaluated parameter is transformed according to a nonlinear threshold function to provide transformed signals which can be fed back to the slm for further iterations. dated 1991-04-02" 5004932,unit circuit for constructing a neural network and a semiconductor integrated circuit having the same,""" a semiconductor integrated circuit for constructing a neural network model, comprising a differential amplifier which includes one output terminal and two input terminals, an excitatory synapse circuit which is connected to the noninverting input terminal of said differential amplifier, and an inhibitory synapse circuit which is connected to the inverting input terminal of said differential amplifier, wherein each of said excitatory and inhibitory synapse circuits includes a plurality of current switches, regulated current source circuits which are equal in number to said current switches and which determine currents to flow through said current switches, and one load resistor which is connected to all of said current switches, input terminals of said each synapse circuit being constructed of terminals which turn """"on"""" and """"off"""" the respective current switches and to which external inputs or outputs of another neural circuit are connected, said each regulated current source circuit being constructed of a circuit whose current value can be increased or decreased by a voltage externally applied
separately and as to which a value of the voltage for increasing or decreasing the current value corresponds to a synaptic weight. """,1991-04-02,"The title of the patent is unit circuit for constructing a neural network and a semiconductor integrated circuit having the same and its abstract is "" a semiconductor integrated circuit for constructing a neural network model, comprising a differential amplifier which includes one output terminal and two input terminals, an excitatory synapse circuit which is connected to the noninverting input terminal of said differential amplifier, and an inhibitory synapse circuit which is connected to the inverting input terminal of said differential amplifier, wherein each of said excitatory and inhibitory synapse circuits includes a plurality of current switches, regulated current source circuits which are equal in number to said current switches and which determine currents to flow through said current switches, and one load resistor which is connected to all of said current switches, input terminals of said each synapse circuit being constructed of terminals which turn """"on"""" and """"off"""" the respective current switches and to which external inputs or outputs of another neural circuit are connected, said each regulated current source circuit being constructed of a circuit whose current value can be increased or decreased by a voltage externally applied separately and as to which a value of the voltage for increasing or decreasing the current value corresponds to a synaptic weight. "" dated 1991-04-02" 5005206,method of and arrangement for image data compression by means of a neural network,"method of and arrangement for image data compression by vector quantization in accordance with a precoding in blocks, thereafter comparing by means of a neural network precoded blocks with reference words stored in the form of a code book so as to transmit selected indices to a receiver.
in accordance with the method, the neural network effects a learning phase with prescribed prototypes, thereafter with the aid of test vectors originating from the image generates an adaptive code book which is transmitted to the receiver. this adaptation utilizes attractors, which may be induced metastable states, of the neural network, and which are submitted to an optimizing procedure. the arrangement can process images with a view to their storage. it is also possible to utilize two devices which operate alternately, one device for generating the adaptive code book and the other one to utilize it with the object of processing television pictures in real time.",1991-04-02,"The title of the patent is method of and arrangement for image data compression by means of a neural network and its abstract is method of and arrangement for image data compression by vector quantization in accordance with a precoding in blocks, thereafter comparing by means of a neural network precoded blocks with reference words stored in the form of a code book so as to transmit selected indices to a receiver. in accordance with the method, the neural network effects a learning phase with prescribed prototypes, thereafter with the aid of test vectors originating from the image generates an adaptive code book which is transmitted to the receiver. this adaptation utilizes attractors, which may be induced metastable states, of the neural network, and which are submitted to an optimizing procedure. the arrangement can process images with a view to their storage. it is also possible to utilize two devices which operate alternately, one device for generating the adaptive code book and the other one to utilize it with the object of processing television pictures in real time. 
dated 1991-04-02" 5008833,parallel optoelectronic neural network processors,"several embodiments of neural processors implemented on a vlsi circuit chip are disclosed, all of which are capable of entering a matrix t into an array of photosensitive devices which may be charge coupled or charge injection devices (ccd or cid). using ccd's to receive and store the synapses of the matrix t from a spatial light modulator, or other optical means of projecting an array of pixels, semiparallel synchronous operation is achieved. using cid's, full parallel synchronous operation is achieved. and using phototransistors to receive the array of pixels, full parallel and asynchronous operation is achieved. in the latter case, the source of the pixel matrix must provide the memory necessary for the matrix t. in the other cases, the source of the pixel matrix may be turned off after the matrix t has been entered and stored by the ccd's or cid's.",1991-04-16,"The title of the patent is parallel optoelectronic neural network processors and its abstract is several embodiments of neural processors implemented on a vlsi circuit chip are disclosed, all of which are capable of entering a matrix t into an array of photosensitive devices which may be charge coupled or charge injection devices (ccd or cid). using ccd's to receive and store the synapses of the matrix t from a spatial light modulator, or other optical means of projecting an array of pixels, semiparallel synchronous operation is achieved. using cid's, full parallel synchronous operation is achieved. and using phototransistors to receive the array of pixels, full parallel and asynchronous operation is achieved. in the latter case, the source of the pixel matrix must provide the memory necessary for the matrix t. in the other cases, the source of the pixel matrix may be turned off after the matrix t has been entered and stored by the ccd's or cid's. 
dated 1991-04-16" 5010512,neural network having an associative memory that learns by example,"a neural network utilizing the threshold characteristics of a semiconductor device as the various memory elements of the network. each memory element comprises a complementary pair of mosfets in which the threshold voltage is adjusted as a function of the input voltage to the element. the network is able to learn by example using a local learning algorithm. the network includes a series of output amplifiers in which the output is provided by the sum of the outputs of a series of learning elements coupled to the amplifier. the output of each learning element is the difference between the input signal to each learning element and an individual learning threshold at each input. the learning is accomplished by charge trapping in the insulator of each individual input mosfet pair. the thresholds of each transistor automatically adjust to both the input and output voltages to learn the desired state. after input patterns have been learned by the network, the learning function is set to zero so that the thresholds remain constant and the network will come to an equilibrium state under the influence of a test input pattern thereby providing, as an output, the learned pattern most closely resembling the test input pattern.",1991-04-23,"The title of the patent is neural network having an associative memory that learns by example and its abstract is a neural network utilizing the threshold characteristics of a semiconductor device as the various memory elements of the network. each memory element comprises a complementary pair of mosfets in which the threshold voltage is adjusted as a function of the input voltage to the element. the network is able to learn by example using a local learning algorithm. the network includes a series of output amplifiers in which the output is provided by the sum of the outputs of a series of learning elements coupled to the amplifier.
the output of each learning element is the difference between the input signal to each learning element and an individual learning threshold at each input. the learning is accomplished by charge trapping in the insulator of each individual input mosfet pair. the thresholds of each transistor automatically adjust to both the input and output voltages to learn the desired state. after input patterns have been learned by the network, the learning function is set to zero so that the thresholds remain constant and the network will come to an equilibrium state under the influence of a test input pattern thereby providing, as an output, the learned pattern most closely resembling the test input pattern. dated 1991-04-23" 5014096,optoelectronic integrated circuit with optical gate device and phototransistor,"an optoelectronic integrated circuit including an optical bistable circuit comprises: an optical gate device responsive to a current injected to an active layer thereof and to a first ray transmitted through the active layer for emitting first and second light rays and for controlling intensity of the first light ray in accordance with the current; and a first phototransistor serially connected with the optical gate device so arranged to receive the second light ray for causing the current to flow through the optical gate device in response to the second light ray and a set signal light ray, the first phototransistor holding flowing of the current when the second light ray is emitted. this circuit can control the first light ray incident to the optical gate device in response to a set signal light ray applied to the first phototransistor. a second phototransistor may be included for stopping emission of light by the optical gate device in response to a reset signal light ray. such a circuit can be used in an optical neural network as a light-switching device.
the first light ray is applied to an optical gate device perpendicularly or horizontally with respect to the plane of the substrate thereof. the second light ray may be emitted by a light-emitting device serially connected with the optical gate.",1991-05-07,"The title of the patent is optoelectronic integrated circuit with optical gate device and phototransistor and its abstract is an optoelectronic integrated circuit including an optical bistable circuit comprises: an optical gate device responsive to a current injected to an active layer thereof and to a first ray transmitted through the active layer for emitting first and second light rays and for controlling intensity of the first light ray in accordance with the current; and a first phototransistor serially connected with the optical gate device so arranged to receive the second light ray for causing the current to flow through the optical gate device in response to the second light ray and a set signal light ray, the first phototransistor holding flowing of the current when the second light ray is emitted. this circuit can control the first light ray incident to the optical gate device in response to a set signal light ray applied to the first phototransistor. a second phototransistor may be included for stopping emission of light by the optical gate device in response to a reset signal light ray. such a circuit can be used in an optical neural network as a light-switching device. the first light ray is applied to an optical gate device perpendicularly or horizontally with respect to the plane of the substrate thereof. the second light ray may be emitted by a light-emitting device serially connected with the optical gate. dated 1991-05-07" 5014219,mask controlled neural networks,"a mask neural network for processing that allows an external source of control to continuously direct state transition of the neural network toward selected states and away from other states.
the network, through externally controlled masking, can focus attention on selected attributes of observed data, solutions or results. the masking is applicable across three major categories of networks in that it facilitates augmented recall, directed learning and constrained optimization.",1991-05-07,"The title of the patent is mask controlled neural networks and its abstract is a mask neural network for processing that allows an external source of control to continuously direct state transition of the neural network toward selected states and away from other states. the network, through externally controlled masking, can focus attention on selected attributes of observed data, solutions or results. the masking is applicable across three major categories of networks in that it facilitates augmented recall, directed learning and constrained optimization. dated 1991-05-07" 5016188,discrete-time optimal control by neural network,"a neural network determines optimal control inputs for a linear quadratic discrete-time process at m sampling times, the process being characterized by a quadratic cost function, p state variables, and r control variables. the network includes n=(p+r)m neurons, a distinct neuron being assigned to represent the value of each state variable at each sampling time and a distinct neuron being assigned to represent the value of each control variable at each sampling time. an input bias connected to each neuron has a value determined by the quadratic cost function for the variable represented by the neuron.
selected connections are provided between the output of each neuron and the input of selected other neurons in the network, each such connection and the strength of each such connection being determined by the relationship in the cost function between the variable represented by the connected output neuron and the variable represented by the connected input neuron, such that running the neural network for a sufficient time to minimize the cost function will produce optimum values for each control variable at each sampling time.",1991-05-14,"The title of the patent is discrete-time optimal control by neural network and its abstract is a neural network determines optimal control inputs for a linear quadratic discrete-time process at m sampling times, the process being characterized by a quadratic cost function, p state variables, and r control variables. the network includes n=(p+r)m neurons, a distinct neuron being assigned to represent the value of each state variable at each sampling time and a distinct neuron being assigned to represent the value of each control variable at each sampling time. an input bias connected to each neuron has a value determined by the quadratic cost function for the variable represented by the neuron. selected connections are provided between the output of each neuron and the input of selected other neurons in the network, each such connection and the strength of each such connection being determined by the relationship in the cost function between the variable represented by the connected output neuron and the variable represented by the connected input neuron, such that running the neural network for a sufficient time to minimize the cost function will produce optimum values for each control variable at each sampling time. dated 1991-05-14" 5016211,neural network implementation of a binary adder,"a binary adder is provided for adding-processing in a high speed parallel manner two n bit binary digits.
the binary adder is implemented using neural network techniques and includes a number of amplifiers corresponding to the n bit output sum and a carry generation from the result of the adding process; an augend input-synapse group, an addend input-synapse group, a carry input-synapse group, a first bias-synapse group, a second bias-synapse group, an output feedback-synapse group and inverters. the binary adder is efficient and fast compared to conventional techniques.",1991-05-14,"The title of the patent is neural network implementation of a binary adder and its abstract is a binary adder is provided for adding-processing in a high speed parallel manner two n bit binary digits. the binary adder is implemented using neural network techniques and includes a number of amplifiers corresponding to the n bit output sum and a carry generation from the result of the adding process; an augend input-synapse group, an addend input-synapse group, a carry input-synapse group, a first bias-synapse group, a second bias-synapse group, an output feedback-synapse group and inverters. the binary adder is efficient and fast compared to conventional techniques. dated 1991-05-14" 5017375,method to prepare a neurotrophic composition,"the present invention is based on the discovery that amyotrophic lateral sclerosis (als), parkinson disease and alzheimer disease are due to lack of a disorder-specific neurotrophic hormone or factor. diagnosis is accomplished by assaying factors specific for a particular neuronal network or system; for example, dopamine neurotrophic hormones from striatum or caudate-putamen in the nigrostriatal dopaminergic neural system are used to diagnose and treat parkinsonism. with tissue culture, the presence or absence of specific neurotrophic factors can be assessed in als, parkinsonism, and alzheimer disease.
if there is a deficiency, extracted and purified neurotrophic factors specific to the particular neuronal network or system can be injected into a patient having als, alzheimer disease or parkinsonism for treatment of the disease.",1991-05-21,"The title of the patent is method to prepare a neurotrophic composition and its abstract is the present invention is based on the discovery that amyotrophic lateral sclerosis (als), parkinson disease and alzheimer disease are due to lack of a disorder-specific neurotrophic hormone or factor. diagnosis is accomplished by assaying factors specific for a particular neuronal network or system; for example, dopamine neurotrophic hormones from striatum or caudate-putamen in the nigrostriatal dopaminergic neural system are used to diagnose and treat parkinsonism. with tissue culture, the presence or absence of specific neurotrophic factors can be assessed in als, parkinsonism, and alzheimer disease. if there is a deficiency, extracted and purified neurotrophic factors specific to the particular neuronal network or system can be injected into a patient having als, alzheimer disease or parkinsonism for treatment of the disease. dated 1991-05-21" 5021988,semiconductor neural network and method of driving the same,"a semiconductor neural network includes a plurality of data input line pairs to which complementary input data pairs are transmitted respectively, data output line pairs respectively deriving complementary output data pairs and a plurality of coupling elements arranged at respective crosspoints of the data input lines and the data output lines. the coupling elements are programmable in states, and couple corresponding data output lines and corresponding data input lines in accordance with the programmed states thereof. differential amplifiers formed by cross-coupled inverting amplifiers are provided in order to detect potentials on the data output lines.
the differential amplifiers are provided for respective ones of the data output line pairs.",1991-06-04,"The title of the patent is semiconductor neural network and method of driving the same and its abstract is a semiconductor neural network includes a plurality of data input line pairs to which complementary input data pairs are transmitted respectively, data output line pairs respectively deriving complementary output data pairs and a plurality of coupling elements arranged at respective crosspoints of the data input lines and the data output lines. the coupling elements are programmable in states, and couple corresponding data output lines and corresponding data input lines in accordance with the programmed states thereof. differential amplifiers formed by cross-coupled inverting amplifiers are provided in order to detect potentials on the data output lines. the differential amplifiers are provided for respective ones of the data output line pairs. dated 1991-06-04" 5023045,plant malfunction diagnostic method,"a plant malfunction diagnostic method is characterized by determining by simulation a change in a plant state variable, forming a pattern among plant state variables obtained by autoregressive analysis of the change in plant state variable, inserting the formed pattern among the plant state variables in a neural network, performing learning until a preset precision is obtained, and identifying the cause of the malfunction by inserting, in the neural network, a pattern which indicates the pattern among plant state variables formed by data gathered from the plant. this makes possible early identification of the cause of a malfunction. 
plant rate of operation and safety are improved by allowing the operator to perform the appropriate recovery operation with a sufficient time margin.",1991-06-11,"The title of the patent is plant malfunction diagnostic method and its abstract is a plant malfunction diagnostic method is characterized by determining by simulation a change in a plant state variable, forming a pattern among plant state variables obtained by autoregressive analysis of the change in plant state variable, inserting the formed pattern among the plant state variables in a neural network, performing learning until a preset precision is obtained, and identifying the cause of the malfunction by inserting, in the neural network, a pattern which indicates the pattern among plant state variables formed by data gathered from the plant. this makes possible early identification of the cause of a malfunction. plant rate of operation and safety are improved by allowing the operator to perform the appropriate recovery operation with a sufficient time margin. dated 1991-06-11" 5023833,feed forward neural network for unary associative memory,"feed forward neural network models for associative content addressable memory utilize a first level matrix of resistor connections to store words and compare addressing cues with the stored words represented by connections of unit resistive value, and a winner-take-all circuit for producing a unary output signal corresponding to the word most closely matched in the first matrix. the unary output signal is converted to a binary output code, such as by a suitable matrix. cues are coded for the address input as binary 1=+v, binary 0=-v, and unknown =0v. two input amplifiers are employed with two input conductors for each input bit position, one noninverting and the other inverting, so that the winner-take-all circuit at the output of the first matrix may be organized to select the highest number of matches with stored words as the unary output signal. 
by inverting the cues at the input to the first matrix, and inverting the output of the first level matrix, the effect of resistor value imprecision in the first matrix is virtually obviated. by space coding, the first and second matrices may be expanded into multiple sets of matrices, each with its own winner-take-all circuit for producing unary output signals applied from the first set to the second set of matrices. the output conductors of the second set of matrices are grouped to provide a sparse output code that is then converted to a binary code corresponding to the word recalled.",1991-06-11,"The title of the patent is feed forward neural network for unary associative memory and its abstract is feed forward neural network models for associative content addressable memory utilize a first level matrix of resistor connections to store words and compare addressing cues with the stored words represented by connections of unit resistive value, and a winner-take-all circuit for producing a unary output signal corresponding to the word most closely matched in the first matrix. the unary output signal is converted to a binary output code, such as by a suitable matrix. cues are coded for the address input as binary 1=+v, binary 0=-v, and unknown =0v. two input amplifiers are employed with two input conductors for each input bit position, one noninverting and the other inverting, so that the winner-take-all circuit at the output of the first matrix may be organized to select the highest number of matches with stored words as the unary output signal. by inverting the cues at the input to the first matrix, and inverting the output of the first level matrix, the effect of resistor value imprecision in the first matrix is virtually obviated. by space coding, the first and second matrices may be expanded into multiple sets of matrices, each with its own winner-take-all circuit for producing unary output signals applied from the first set to the second set of matrices. 
the output conductors of the second set of matrices are grouped to provide a sparse output code that is then converted to a binary code corresponding to the word recalled. dated 1991-06-11" 5025282,color image forming apparatus,"the improved color image forming apparatus is so designed that the image forming condition computing means having a learning capability, such as a neural network having a back propagation learning algorithm, is caused to learn preliminarily those image forming conditions which are appropriate for the specific type of a documents (e.g. a reflection-type original or a transmission-type original such as a negative film or a reversal film) or the original image carried on the documents such as sea, mountains or a snow scene, examples of such image forming conditions being exposing conditions (e.g. the balance of three primary colors and their densities) and the conditions of developing, fixing and otherwise processing light-sensitive materials, and image is formed on a particular light-sensitive material under the image forming conditions computed by the computing means which has learned said appropriate conditions. the visible image reproduced with this apparatus always has a good color balance, is free from deterioration of image quality, has none of the unwanted color shades and is optimum for the particular document or original image. as a further advantage, even unskilled users can easily operate this apparatus to reproduce an image that meets the specific preference of the laboratory or the user.",1991-06-18,"The title of the patent is color image forming apparatus and its abstract is the improved color image forming apparatus is so designed that the image forming condition computing means having a learning capability, such as a neural network having a back propagation learning algorithm, is caused to learn preliminarily those image forming conditions which are appropriate for the specific type of a documents (e.g. 
a reflection-type original or a transmission-type original such as a negative film or a reversal film) or the original image carried on the documents such as sea, mountains or a snow scene, examples of such image forming conditions being exposing conditions (e.g. the balance of three primary colors and their densities) and the conditions of developing, fixing and otherwise processing light-sensitive materials, and an image is formed on a particular light-sensitive material under the image forming conditions computed by the computing means which has learned said appropriate conditions. the visible image reproduced with this apparatus always has a good color balance, is free from deterioration of image quality, has none of the unwanted color shades and is optimum for the particular document or original image. as a further advantage, even unskilled users can easily operate this apparatus to reproduce an image that meets the specific preference of the laboratory or the user. dated 1991-06-18" 5027182,high-gain algaas/gaas double heterojunction darlington phototransistors for optical neural networks,"high-gain mocvd-grown (metal-organic chemical vapor deposition) algaas/gaas/algaas n-p-n double heterojunction bipolar transistors (dhbts) (14) and darlington phototransistor pairs (14, 16) are provided for use in optical neural networks and other optoelectronic integrated circuit applications. the reduced base (22) doping level used herein results in effective blockage of zn out-diffusion, enabling a current gain of 500, higher than most previously reported values for zn-diffused-base dhbts. 
darlington phototransistor pairs of this material can achieve a current gain of over 6,000, which satisfies the gain requirement for optical neural network designs, which advantageously may employ novel neurons (10) comprising the darlington phototransistor pair in series with a light source (12).",1991-06-25,"The title of the patent is high-gain algaas/gaas double heterojunction darlington phototransistors for optical neural networks and its abstract is high-gain mocvd-grown (metal-organic chemical vapor deposition) algaas/gaas/algaas n-p-n double heterojunction bipolar transistors (dhbts) (14) and darlington phototransistor pairs (14, 16) are provided for use in optical neural networks and other optoelectronic integrated circuit applications. the reduced base (22) doping level used herein results in effective blockage of zn out-diffusion, enabling a current gain of 500, higher than most previously reported values for zn-diffused-base dhbts. darlington phototransistor pairs of this material can achieve a current gain of over 6,000, which satisfies the gain requirement for optical neural network designs, which advantageously may employ novel neurons (10) comprising the darlington phototransistor pair in series with a light source (12). dated 1991-06-25" 5033006,self-extending neural-network,a self-extending shape neural-network is capable of a self-extending operation in accordance with the studying results. the self-extending shape neural-network initially has a minimum number of intermediate layers and of nodes (units) within each layer, and self-extends the network construction so as to shorten the studying time and the discriminating time. 
this studying may be effected efficiently by the studying being directed towards the focus when the studying is not focused.,1991-07-16,The title of the patent is self-extending neural-network and its abstract is a self-extending shape neural-network is capable of a self-extending operation in accordance with the studying results. the self-extending shape neural-network initially has a minimum number of intermediate layers and of nodes (units) within each layer, and self-extends the network construction so as to shorten the studying time and the discriminating time. this studying may be effected efficiently by the studying being directed towards the focus when the studying is not focused. dated 1991-07-16 5040134,neural network employing leveled summing scheme with blocked array,"a novel associative network architecture is described in which a neural network is subdivided into a plurality of smaller blocks. each block comprises an array of pattern matching cells which is used for calculating the relative match, or hamming distance, between an input pattern and a stored weight pattern. the cells are arranged in columns along one or more local summing lines. the total current flowing along the local summing lines for a given block corresponds to the match for that block. the blocks are coupled together using a plurality of global summing lines. the global summing lines sum the individual current contributions from the local summing lines of each associated block. coupling between the local column lines and the global summing lines is achieved by using a specialized coupling device which permits control of the coupling ratio between the lines. by selectively turning on or off various blocks a measure of the match for individual blocks or for groups of blocks representing a subset of the network may be calculated. 
control over the coupling ratio within the blocks also prevents destructive levels of current from building up on the global summing lines.",1991-08-13,"The title of the patent is neural network employing leveled summing scheme with blocked array and its abstract is a novel associative network architecture is described in which a neural network is subdivided into a plurality of smaller blocks. each block comprises an array of pattern matching cells which is used for calculating the relative match, or hamming distance, between an input pattern and a stored weight pattern. the cells are arranged in columns along one or more local summing lines. the total current flowing along the local summing lines for a given block corresponds to the match for that block. the blocks are coupled together using a plurality of global summing lines. the global summing lines sum the individual current contributions from the local summing lines of each associated block. coupling between the local column lines and the global summing lines is achieved by using a specialized coupling device which permits control of the coupling ratio between the lines. by selectively turning on or off various blocks a measure of the match for individual blocks or for groups of blocks representing a subset of the network may be calculated. control over the coupling ratio within the blocks also prevents destructive levels of current from building up on the global summing lines. 
dated 1991-08-13" 5040215,speech recognition apparatus using neural network and fuzzy logic,"a speech recognition apparatus has a speech input unit for inputting a speech; a speech analysis unit for analyzing the inputted speech to output the time series of a feature vector; a candidates selection unit for inputting the time series of a feature vector from the speech analysis unit to select a plurality of candidates of recognition result from the speech categories; and a discrimination processing unit for discriminating the selected candidates to obtain a final recognition result. the discrimination processing unit includes three components in the form of a pair generation unit for generating all of the two combinations of the n-number of candidates selected by said candidate selection unit, a pair discrimination unit for discriminating which of the candidates of the combinations is more certain for each of all .sub.n c.sub.2 -number of combinations (or pairs) on the basis of the extracted result of the acoustic feature intrinsic to each of said candidate speeches, and a final decision unit for collecting all the pair discrimination results obtained from the pair discrimination unit for each of all the .sub.n c.sub.2 -number of combinations (or pairs) to decide the final result. 
the pair discrimination unit handles the extracted result of the acoustic feature intrinsic to each of the candidate speeches as fuzzy information and accomplishes the discrimination processing on the basis of fuzzy logic algorithms, and the final decision unit accomplishes its collections on the basis of the fuzzy logic algorithms.",1991-08-13,"The title of the patent is speech recognition apparatus using neural network and fuzzy logic and its abstract is a speech recognition apparatus has a speech input unit for inputting a speech; a speech analysis unit for analyzing the inputted speech to output the time series of a feature vector; a candidates selection unit for inputting the time series of a feature vector from the speech analysis unit to select a plurality of candidates of recognition result from the speech categories; and a discrimination processing unit for discriminating the selected candidates to obtain a final recognition result. the discrimination processing unit includes three components in the form of a pair generation unit for generating all of the two combinations of the n-number of candidates selected by said candidate selection unit, a pair discrimination unit for discriminating which of the candidates of the combinations is more certain for each of all .sub.n c.sub.2 -number of combinations (or pairs) on the basis of the extracted result of the acoustic feature intrinsic to each of said candidate speeches, and a final decision unit for collecting all the pair discrimination results obtained from the pair discrimination unit for each of all the .sub.n c.sub.2 -number of combinations (or pairs) to decide the final result. 
the pair discrimination unit handles the extracted result of the acoustic feature intrinsic to each of the candidate speeches as fuzzy information and accomplishes the discrimination processing on the basis of fuzzy logic algorithms, and the final decision unit accomplishes its collections on the basis of the fuzzy logic algorithms. dated 1991-08-13" 5040230,associative pattern conversion system and adaptation method thereof,"an associative pattern conversion system is disclosed which may be used for image recognition. the system includes an image input portion, an image processing portion and a recognition portion. the image processing portion includes a process unit for extracting characteristics and a frame memory for holding image data. the recognition portion, which includes a component for the learning of data to be associated, obtains the extracted characteristics from the image processing portion and performs associative pattern conversion from the image input portion. the system of the present invention may be applied to any neural network, preferably a matrix calculation type neural network.",1991-08-13,"The title of the patent is associative pattern conversion system and adaptation method thereof and its abstract is an associative pattern conversion system is disclosed which may be used for image recognition. the system includes an image input portion, an image processing portion and a recognition portion. the image processing portion includes a process unit for extracting characteristics and a frame memory for holding image data. the recognition portion, which includes a component for the learning of data to be associated, obtains the extracted characteristics from the image processing portion and performs associative pattern conversion from the image input portion. the system of the present invention may be applied to any neural network, preferably a matrix calculation type neural network. 
dated 1991-08-13" 5041916,color image data compression and recovery apparatus based on neural networks,"a data compression and recovery apparatus compresses picture element data of a color image by expressing two primary color values of each picture element as a set of parameter values of a neural network in conjunction with reference color data values of a corresponding block of picture elements. data recovery is achieved by inputting each block of reference color values to a neural network while establishing the corresponding set of parameter values in the network, to thereby obtain the original pair of encoded primary color values for each of successive picture elements. the third primary color can be used as the reference color.",1991-08-20,"The title of the patent is color image data compression and recovery apparatus based on neural networks and its abstract is a data compression and recovery apparatus compresses picture element data of a color image by expressing two primary color values of each picture element as a set of parameter values of a neural network in conjunction with reference color data values of a corresponding block of picture elements. data recovery is achieved by inputting each block of reference color values to a neural network while establishing the corresponding set of parameter values in the network, to thereby obtain the original pair of encoded primary color values for each of successive picture elements. the third primary color can be used as the reference color. dated 1991-08-20" 5041976,diagnostic system using pattern recognition for electronic automotive control systems,"a system is disclosed for diagnosing faults in electronic control systems wherein a large volume of information is exchanged between the electronic control processor and a mechanical system under its control. the data is acquired such that parameter vectors describing the system operation are formed. 
the vectors are provided to a pattern recognition system such as a neural network for classification according to the operating condition of the electronically controlled system. for diagnosis of electronically controlled engine operation, the parameters included in the vectors correspond to individual firing events occurring in the engine operating under a predetermined condition. the diagnostic system can be implemented as a service tool in an automotive service bay or can be implemented within the on-board electronic control system itself.",1991-08-20,"The title of the patent is diagnostic system using pattern recognition for electronic automotive control systems and its abstract is a system is disclosed for diagnosing faults in electronic control systems wherein a large volume of information is exchanged between the electronic control processor and a mechanical system under its control. the data is acquired such that parameter vectors describing the system operation are formed. the vectors are provided to a pattern recognition system such as a neural network for classification according to the operating condition of the electronically controlled system. for diagnosis of electronically controlled engine operation, the parameters included in the vectors correspond to individual firing events occurring in the engine operating under a predetermined condition. the diagnostic system can be implemented as a service tool in an automotive service bay or can be implemented within the on-board electronic control system itself. dated 1991-08-20" 5043913,neural network,"input signals inputted in respective unit circuits forming a synapse array pass through variable connector elements to be integrated into one analog signal, which in turn is converted into a binary associated corresponding signal by an amplifier. two control signals are produced on the basis of the associated corresponding signal and an educator signal. 
the two control signals are fed back to the respective unit circuits, to control degrees of electrical coupling of the variable connector elements in the respective unit circuits. thus, learning of the respective unit circuits is performed.",1991-08-27,"The title of the patent is neural network and its abstract is input signals inputted in respective unit circuits forming a synapse array pass through variable connector elements to be integrated into one analog signal, which in turn is converted into a binary associated corresponding signal by an amplifier. two control signals are produced on the basis of the associated corresponding signal and an educator signal. the two control signals are fed back to the respective unit circuits, to control degrees of electrical coupling of the variable connector elements in the respective unit circuits. thus, learning of the respective unit circuits is performed. dated 1991-08-27" 5045713,multi-feedback circuit apparatus,a multi-feedback circuit apparatus is provided which can prevent undesired oscillation or chaos phenomena that inevitably arise when the hopfield model is realized by electronic circuits. the apparatus can also reduce the number of synapse nodes in the neural network model.,1991-09-03,The title of the patent is multi-feedback circuit apparatus and its abstract is a multi-feedback circuit apparatus is provided which can prevent undesired oscillation or chaos phenomena that inevitably arise when the hopfield model is realized by electronic circuits. the apparatus can also reduce the number of synapse nodes in the neural network model. dated 1991-09-03 5046019,fuzzy data comparator with neural network postprocessor,"a fuzzy data comparator receives a fuzzy data digital data bit stream and compares each frame thereof with multiple sets of differing known data stored in a plurality of pattern memories, using a selected comparison metric. the results of the comparisons are accumulated as error values. 
a first neural postprocessing network ranks error values less than a preselected threshold. a second neural network receives the first neural network solutions and provides an expansion bus for interconnecting to additional comparators.",1991-09-03,"The title of the patent is fuzzy data comparator with neural network postprocessor and its abstract is a fuzzy data comparator receives a fuzzy data digital data bit stream and compares each frame thereof with multiple sets of differing known data stored in a plurality of pattern memories, using a selected comparison metric. the results of the comparisons are accumulated as error values. a first neural postprocessing network ranks error values less than a preselected threshold. a second neural network receives the first neural network solutions and provides an expansion bus for interconnecting to additional comparators. dated 1991-09-03" 5047655,programmable analog neural network,"the neural network of the invention, of the type with a cartesian matrix, has a first column of addition of the input signals, and at each intersection of the m lines and n columns it comprises a synapse constituted of a simple logic gate.",1991-09-10,"The title of the patent is programmable analog neural network and its abstract is the neural network of the invention, of the type with a cartesian matrix, has a first column of addition of the input signals, and at each intersection of the m lines and n columns it comprises a synapse constituted of a simple logic gate. dated 1991-09-10" 5048097,optical character recognition neural network system for machine-printed characters,"character images which are to be sent to a neural network trained to recognize a predetermined set of symbols are first processed by an optical character recognition pre-processor which normalizes the character images. the output of the neural network is processed by an optical character recognition post-processor. 
the post-processor corrects erroneous symbol identifications made by the neural network. the post-processor identifies special symbols and symbol cases not identifiable by the neural network following character normalization. for characters identified by the neural network with low scores, the post-processor attempts to find and separate adjacent characters which are kerned and characters which are touching. the touching characters are separated in one of nine successively initiated processes depending upon the geometric parameters of the image. when all else fails, the post-processor selects either the second or third highest scoring symbol identified by the neural network based upon the likelihood of the second or third highest scoring symbol being confused with the highest scoring symbol.",1991-09-10,"The title of the patent is optical character recognition neural network system for machine-printed characters and its abstract is character images which are to be sent to a neural network trained to recognize a predetermined set of symbols are first processed by an optical character recognition pre-processor which normalizes the character images. the output of the neural network is processed by an optical character recognition post-processor. the post-processor corrects erroneous symbol identifications made by the neural network. the post-processor identifies special symbols and symbol cases not identifiable by the neural network following character normalization. for characters identified by the neural network with low scores, the post-processor attempts to find and separate adjacent characters which are kerned and characters which are touching. the touching characters are separated in one of nine successively initiated processes depending upon the geometric parameters of the image. 
when all else fails, the post-processor selects either the second or third highest scoring symbol identified by the neural network based upon the likelihood of the second or third highest scoring symbol being confused with the highest scoring symbol. dated 1991-09-10" 5048100,self organizing neural network method and system for general classification of patterns,a neural network system and method that can adaptively recognize each of many pattern configurations from a set. the system learns and maintains accurate associations between signal pattern configurations and pattern classes with training from a teaching mechanism. the classifying system consists of a distributed input processor and an adaptive association processor. the input processor decomposes an input pattern into modules of localized contextual elements. these elements in turn are mapped onto pattern classes using a self-organizing associative neural scheme. the associative mapping determines which pattern class best represents the input pattern. the computation is done through gating elements that correspond to the contextual elements. learning is achieved by modifying the gating elements from a true/false response to the computed probabilities for all classes in the set. the system is a parallel and fault tolerant process. it can easily be extended to accommodate an arbitrary number of patterns at an arbitrary degree of precision. the classifier can be applied to automated recognition and inspection of many different types of signals and patterns.,1991-09-10,The title of the patent is self organizing neural network method and system for general classification of patterns and its abstract is a neural network system and method that can adaptively recognize each of many pattern configurations from a set. the system learns and maintains accurate associations between signal pattern configurations and pattern classes with training from a teaching mechanism. 
the classifying system consists of a distributed input processor and an adaptive association processor. the input processor decomposes an input pattern into modules of localized contextual elements. these elements in turn are mapped onto pattern classes using a self-organizing associative neural scheme. the associative mapping determines which pattern class best represents the input pattern. the computation is done through gating elements that correspond to the contextual elements. learning is achieved by modifying the gating elements from a true/false response to the computed probabilities for all classes in the set. the system is a parallel and fault tolerant process. it can easily be extended to accommodate an arbitrary number of patterns at an arbitrary degree of precision. the classifier can be applied to automated recognition and inspection of many different types of signals and patterns. dated 1991-09-10 5050095,neural network auto-associative memory with two rules for varying the weights,"a neural network associative memory which has a single layer of primitives and which utilizes a variant of the generalized delta rule for calculating the connection weights between the primitives. the delta rule is characterized by its utilization of predetermined values for the primitive and an error index which compares, during iterations, the predetermined primitive values with actual primitive values until the delta factor becomes a predetermined minimum value.",1991-09-17,"The title of the patent is neural network auto-associative memory with two rules for varying the weights and its abstract is a neural network associative memory which has a single layer of primitives and which utilizes a variant of the generalized delta rule for calculating the connection weights between the primitives. 
the delta rule is characterized by its utilization of predetermined values for the primitive and an error index which compares, during iterations, the predetermined primitive values with actual primitive values until the delta factor becomes a predetermined minimum value. dated 1991-09-17" 5050096,path cost computing neural network,""" the operation of an electronic neural computer is described. this electronic neural computer solves for the optimal path in a space of """"cost functions"""" which are represented as delays at the nodes of a grid (in two, three, four, or more dimensions). time gating by delays lets the optimal solution thread the maze of the network first. the neural computer starts to compute all possible paths through the cost function field and shuts down after the first (optimal solution) emerges at the target node. the cost function delays are set from outside the neural computer architecture. """,1991-09-17,"The title of the patent is path cost computing neural network and its abstract is "" the operation of an electronic neural computer is described. this electronic neural computer solves for the optimal path in a space of """"cost functions"""" which are represented as delays at the nodes of a grid (in two, three, four, or more dimensions). time gating by delays lets the optimal solution thread the maze of the network first. the neural computer starts to compute all possible paths through the cost function field and shuts down after the first (optimal solution) emerges at the target node. the cost function delays are set from outside the neural computer architecture. 
"" dated 1991-09-17" 5052043,neural network with back propagation controlled through an output confidence measure,"apparatus, and an accompanying method, for a neural network, particularly one suited for use in optical character recognition (ocr) systems, which through controlling back propagation and adjustment of neural weight and bias values through an output confidence measure, smoothly, rapidly and accurately adapts its response to actual changing input data (characters). specifically, the results of appropriate actual unknown input characters, which have been recognized with an output confidence measure that lies within a pre-defined range, are used to adaptively re-train the network during pattern recognition. by limiting the maximum value of the output confidence measure at which this re-training will occur, the network re-trains itself only when the input characters have changed by a sufficient margin from initial training data such that this re-training is likely to produce a subsequent noticeable increase in the recognition accuracy provided by the network. output confidence is measured as a ratio between the highest and next highest values produced by output neurons in the network. by broadening the entire base of training data to include actual dynamically changing input characters, the inventive neural network provides more robust performance than that which heretofore occurs in neural networks known in the art.",1991-09-24,"The title of the patent is neural network with back propagation controlled through an output confidence measure and its abstract is apparatus, and an accompanying method, for a neural network, particularly one suited for use in optical character recognition (ocr) systems, which through controlling back propagation and adjustment of neural weight and bias values through an output confidence measure, smoothly, rapidly and accurately adapts its response to actual changing input data (characters). 
specifically, the results of appropriate actual unknown input characters, which have been recognized with an output confidence measure that lies within a pre-defined range, are used to adaptively re-train the network during pattern recognition. by limiting the maximum value of the output confidence measure at which this re-training will occur, the network re-trains itself only when the input characters have changed by a sufficient margin from initial training data such that this re-training is likely to produce a subsequent noticeable increase in the recognition accuracy provided by the network. output confidence is measured as a ratio between the highest and next highest values produced by output neurons in the network. by broadening the entire base of training data to include actual dynamically changing input characters, the inventive neural network provides more robust performance than that which heretofore occurs in neural networks known in the art. dated 1991-09-24" 5054094,rotationally impervious feature extraction for optical character recognition,"a feature-based optical character recognition system, employing a feature-based recognition device such as a neural network or an absolute distance measure device, extracts a set of features from segmented character images in a document, at least some of the extracted features being at least nearly impervious to rotation or skew of the document image, so as to enhance the reliability of the system. 
one rotationally invariant feature extracted by the system is the number of intercepts between boundary transitions in the image with at least a selected one of a plurality of radii centered at the centroid of the character in the image.",1991-10-01,"The title of the patent is rotationally impervious feature extraction for optical character recognition and its abstract is a feature-based optical character recognition system, employing a feature-based recognition device such as a neural network or an absolute distance measure device, extracts a set of features from segmented character images in a document, at least some of the extracted features being at least nearly impervious to rotation or skew of the document image, so as to enhance the reliability of the system. one rotationally invariant feature extracted by the system is the number of intercepts between boundary transitions in the image with at least a selected one of a plurality of radii centered at the centroid of the character in the image. dated 1991-10-01" 5055897,semiconductor cell for neural network and the like,"a cell employing floating gate storage device particularly suited for neural networks. the floating gate from the floating gate device extends to and becomes part of a second, field effect device. current through the second device is affected by the charge on the floating gate. the weighting factor for the cell is determined by the amount of charge on the floating gate. by charging the floating gate to various levels, a continuum of weighting factors is obtained. multiplication is obtained since the current through the second device is a function of the weighting factor.",1991-10-08,"The title of the patent is semiconductor cell for neural network and the like and its abstract is a cell employing floating gate storage device particularly suited for neural networks. the floating gate from the floating gate device extends to and becomes part of a second, field effect device. 
current through the second device is affected by the charge on the floating gate. the weighting factor for the cell is determined by the amount of charge on the floating gate. by charging the floating gate to various levels, a continuum of weighting factors is obtained. multiplication is obtained since the current through the second device is a function of the weighting factor. dated 1991-10-08" 5056037,analog hardware for learning neural networks,"this is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. that connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.",1991-10-08,"The title of the patent is analog hardware for learning neural networks and its abstract is this is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. that connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection. dated 1991-10-08" 5056897,spatial light modulating element and neural network circuit,"a spatial light modulator and a neural network circuit are disclosed. the modulator is used in pattern recognition and has an arrangement in which a photoconductive layer held between conductive electrodes is connected in series to a liquid crystal cell including a liquid crystal layer held between two opposite electrodes. 
setting the ratio between the area of the photoconductive layer and the area of at least one of the opposite electrodes between which the liquid crystal layer is disposed, provides a highly efficient reflective and transmissive spatial light modulator of a simple structure. both reflective and transmissive spatial light modulating elements are applied to a neurocomputer or the like.",1991-10-15,"The title of the patent is spatial light modulating element and neural network circuit and its abstract is a spatial light modulator and a neural network circuit are disclosed. the modulator is used in pattern recognition and has an arrangement in which a photoconductive layer held between conductive electrodes is connected in series to a liquid crystal cell including a liquid crystal layer held between two opposite electrodes. setting the ratio between the area of the photoconductive layer and the area of at least one of the opposite electrodes between which the liquid crystal layer is disposed, provides a highly efficient reflective and transmissive spatial light modulator of a simple structure. both reflective and transmissive spatial light modulating elements are applied to a neurocomputer or the like. dated 1991-10-15" 5058034,digital neural network with discrete point rule space,this application discloses a system that optimizes a neural network by generating all of the discrete weights for a given neural node by creating a normalized weight vector for each possible weight combination. the normalized vectors for each node define the weight space for that node. this complete set of weight vectors for each node is searched using a direct search method during the learning phase to optimize the network. the search evaluates a node cost function to determine a base point from which a pattern move within the weight space is made. around the pattern move point exploratory moves are made which are cost function evaluated.
the pattern move is performed by eliminating from the search vectors with lower commonality.,1991-10-15,The title of the patent is digital neural network with discrete point rule space and its abstract is this application discloses a system that optimizes a neural network by generating all of the discrete weights for a given neural node by creating a normalized weight vector for each possible weight combination. the normalized vectors for each node define the weight space for that node. this complete set of weight vectors for each node is searched using a direct search method during the learning phase to optimize the network. the search evaluates a node cost function to determine a base point from which a pattern move within the weight space is made. around the pattern move point exploratory moves are made which are cost function evaluated. the pattern move is performed by eliminating from the search vectors with lower commonality. dated 1991-10-15 5058180,neural network apparatus and method for pattern recognition,"a self-organizing neural network having input and output neurons mutually coupled via bottom-up and top-down adaptive weight matrices performs pattern recognition while using substantially fewer neurons and being substantially immune from pattern distortion or rotation. the network is first trained in accordance with the adaptive resonance theory by inputting reference pattern data into the input neurons for clustering within the output neurons. the input neurons then receive subject pattern data which are transferred via a bottom-up adaptive weight matrix to a set of output neurons. vigilance testing is performed and multiple computed vigilance parameters are generated.
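The "discrete point rule space" patent above evaluates a node cost function over a complete set of discrete weight vectors. The patent uses a base-point pattern search with exploratory moves; the exhaustive enumeration below is a deliberately simplified stand-in that makes the discrete weight-space idea concrete (function and parameter names are illustrative).

```python
from itertools import product

def best_discrete_weights(levels, n_inputs, cost):
    """Enumerate every weight vector drawn from a fixed set of discrete
    levels and keep the one with the lowest node cost -- a brute-force
    stand-in for the patent's direct pattern search."""
    best, best_cost = None, float("inf")
    for w in product(levels, repeat=n_inputs):
        c = cost(w)
        if c < best_cost:
            best, best_cost = w, c
    return best, best_cost
```

A real pattern search would visit only a fraction of these vectors, moving from a base point toward lower cost, but the search space itself — all combinations of the allowed discrete levels — is the same.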
a predetermined, but selectively variable, reference vigilance parameter is compared individually against each computed vigilance parameter and adjusted with each comparison until each computed vigilance parameter equals or exceeds the adjusted reference vigilance parameter, thereby producing an adjusted reference vigilance parameter for each output neuron. the input pattern is classified according to the output neuron corresponding to the maximum adjusted reference vigilance parameter. alternatively, the original computed vigilance parameters can be used by classifying the input pattern according to the output neuron corresponding to the maximum computed vigilance parameter.",1991-10-15,"The title of the patent is neural network apparatus and method for pattern recognition and its abstract is a self-organizing neural network having input and output neurons mutually coupled via bottom-up and top-down adaptive weight matrices performs pattern recognition while using substantially fewer neurons and being substantially immune from pattern distortion or rotation. the network is first trained in accordance with the adaptive resonance theory by inputting reference pattern data into the input neurons for clustering within the output neurons. the input neurons then receive subject pattern data which are transferred via a bottom-up adaptive weight matrix to a set of output neurons. vigilance testing is performed and multiple computed vigilance parameters are generated. a predetermined, but selectively variable, reference vigilance parameter is compared individually against each computed vigilance parameter and adjusted with each comparison until each computed vigilance parameter equals or exceeds the adjusted reference vigilance parameter, thereby producing an adjusted reference vigilance parameter for each output neuron. the input pattern is classified according to the output neuron corresponding to the maximum adjusted reference vigilance parameter.
alternatively, the original computed vigilance parameters can be used by classifying the input pattern according to the output neuron corresponding to the maximum computed vigilance parameter. dated 1991-10-15" 5058184,hierarchical information processing system,"plural efferent signal paths paired with plural conventional afferent signal paths respectively are provided between lower order cell-layers and higher order cell-layers of a neural network model. once an output response has been derived from the higher order cell-layer, an efferent signal is transmitted through the efferent signal path paired with the afferent signal path concerned in the output response. under the control of which efferent signal, the afferent signal path contributing to the output response of the higher order cell-layer is affected by an excitatory effect, while the afferent signal path not contributing to the same is affected by an inhibitory effect. hence the information processing consisting of both the associative memory and the pattern recognition provided with the faculty of segmentation can be attained despite deformation and positional error of the input pattern.",1991-10-15,"The title of the patent is hierarchical information processing system and its abstract is plural efferent signal paths paired with plural conventional afferent signal paths respectively are provided between lower order cell-layers and higher order cell-layers of a neural network model. once an output response has been derived from the higher order cell-layer, an efferent signal is transmitted through the efferent signal path paired with the afferent signal path concerned in the output response. under the control of which efferent signal, the afferent signal path contributing to the output response of the higher order cell-layer is affected by an excitatory effect, while the afferent signal path not contributing to the same is affected by an inhibitory effect.
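The vigilance-test search described in the adaptive-resonance patent above (5058180) can be caricatured in a few lines: categories are tried in order of activation, each is tested against a vigilance threshold, and the search continues until a category matches well enough or none does. The fuzzy-min match rule used here is a common textbook choice for ART-style networks, not necessarily the patent's exact formula, and all names are illustrative.

```python
def art_classify(pattern, prototypes, vigilance=0.7):
    """Toy ART-style match/vigilance cycle: rank categories by
    bottom-up activation, then accept the first whose match ratio
    passes the vigilance test."""
    order = sorted(range(len(prototypes)),
                   key=lambda j: sum(min(p, w) for p, w in zip(pattern, prototypes[j])),
                   reverse=True)
    norm = sum(pattern) or 1.0
    for j in order:                      # search in activation order
        match = sum(min(p, w) for p, w in zip(pattern, prototypes[j])) / norm
        if match >= vigilance:           # vigilance test passed
            return j
    return None                          # no category is close enough
```

Lowering the vigilance parameter makes the network accept coarser matches, which mirrors the patent's use of a selectively variable reference vigilance.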
hence the information processing consisting of both the associative memory and the pattern recognition provided with the faculty of segmentation can be attained despite deformation and positional error of the input pattern. dated 1991-10-15" 5060276,technique for object orientation detection using a feed-forward neural network,"" the present invention relates to a technique in the form of an exemplary computer vision system for detecting the orientation of text or features on an object of manufacture. in the present system, an image of the features or text is used to extract lines using horizontal bitmap sums, and then individual symbols using vertical bitmap sums, using thresholds with each of the sums. the separated symbols are then appropriately trimmed and scaled to provide individual normalized symbols. a decision module comprising a feed-forward neural network and a sequential decision arrangement determines the """"up"""", """"down"""" or """"indeterminate"""" orientation of the text after a variable number of symbols have been processed. the system can then compare the determined orientation with a database to further determine if the object is in the """"right-side up"""" """"upside down"""" or """"indeterminate"""" orientation. """,1991-10-22,"The title of the patent is technique for object orientation detection using a feed-forward neural network and its abstract is "" the present invention relates to a technique in the form of an exemplary computer vision system for detecting the orientation of text or features on an object of manufacture. in the present system, an image of the features or text is used to extract lines using horizontal bitmap sums, and then individual symbols using vertical bitmap sums, using thresholds with each of the sums. the separated symbols are then appropriately trimmed and scaled to provide individual normalized symbols.
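The segmentation step of the orientation-detection patent above — horizontal bitmap sums with a threshold to find text lines, then vertical sums within each line to isolate individual symbols — can be sketched directly. A simplified version assuming a binary bitmap; names and the shared threshold are illustrative.

```python
def segment(bitmap, threshold=0):
    """Extract text lines via horizontal bitmap sums, then symbols
    within each line via vertical bitmap sums, thresholding each sum."""
    def runs(sums):
        spans, start = [], None
        for i, s in enumerate(sums + [0]):     # sentinel closes a trailing run
            if s > threshold and start is None:
                start = i
            elif s <= threshold and start is not None:
                spans.append((start, i))
                start = None
        return spans

    h_sums = [sum(row) for row in bitmap]
    lines = runs(h_sums)
    symbols = []
    for top, bottom in lines:
        v_sums = [sum(bitmap[r][c] for r in range(top, bottom))
                  for c in range(len(bitmap[0]))]
        symbols.append([(top, bottom, left, right) for left, right in runs(v_sums)])
    return lines, symbols
```

Each returned symbol box would then be trimmed and scaled to a normalized size before being fed to the feed-forward decision network.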
a decision module comprising a feed-forward neural network and a sequential decision arrangement determines the """"up"""", """"down"""" or """"indeterminate"""" orientation of the text after a variable number of symbols have been processed. the system can then compare the determined orientation with a database to further determine if the object is in the """"right-side up"""" """"upside down"""" or """"indeterminate"""" orientation. "" dated 1991-10-22" 5060278,pattern recognition apparatus using a neural network system,"a pattern recognition apparatus includes a pattern input unit inputting pattern data and learning data, and a neural network system including a plurality of neural networks, each of the plurality of neural networks being assigned a corresponding one of a plurality of identification classes and having only two output units of a first unit (uo1) and a second unit (uo2). learning for each of the plurality of neural networks is performed by using the learning data. the image recognition apparatus also includes a judgment unit judging which one of the identification classes the pattern data input from the image reading unit belongs to on the basis of output values a and b from the two output units (uo1) and (uo2) of all neural networks.",1991-10-22,"The title of the patent is pattern recognition apparatus using a neural network system and its abstract is a pattern recognition apparatus includes a pattern input unit inputting pattern data and learning data, and a neural network system including a plurality of neural networks, each of the plurality of neural networks being assigned a corresponding one of a plurality of identification classes and having only two output units of a first unit (uo1) and a second unit (uo2). learning for each of the plurality of neural networks is performed by using the learning data.
the image recognition apparatus also includes a judgment unit judging which one of the identification classes the pattern data input from the image reading unit belongs to on the basis of output values a and b from the two output units (uo1) and (uo2) of all neural networks. dated 1991-10-22" 5061866,"analog, continuous time vector scalar multiplier circuits and programmable feedback neural network using them","a four quadrant, analog multiplier circuit useful for mos implementation of feedback/feedforward neural networks. the multiplier circuit uses only one op-amp and one pair of input mos fets. it becomes a multiplier/summer by the addition of only one additional pair of input fets for each additional product to be summed and achieves the vector scalar product of 2 n-tuple vector inputs using only 2(n+1) mos transistors.",1991-10-29,"The title of the patent is analog, continuous time vector scalar multiplier circuits and programmable feedback neural network using them and its abstract is a four quadrant, analog multiplier circuit useful for mos implementation of feedback/feedforward neural networks. the multiplier circuit uses only one op-amp and one pair of input mos fets. it becomes a multiplier/summer by the addition of only one additional pair of input fets for each additional product to be summed and achieves the vector scalar product of 2 n-tuple vector inputs using only 2(n+1) mos transistors. dated 1991-10-29" 5063521,neuram: neural network with ram,"a random access memory (ram) circuit is provided wherein an input signal matrix forming an identifiable original pattern is learned and stored such that a distorted facsimile thereof may be applied to generate an output signal matrix forming a replication of the original pattern having improved recognizable features over the distorted facsimile.
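The judgment step of the one-network-per-class apparatus above (5060278) assigns the input to a class based on the two output values of every class's network. A minimal sketch: each class network returns a pair (a, b), and the judgment unit picks the class whose first output most dominates its second. The a - b margin rule is an illustrative choice, not necessarily the patent's exact decision rule.

```python
def judge(pattern, class_nets):
    """Pick the class whose two-output network (uo1, uo2) gives the
    largest margin a - b for the given pattern."""
    scores = {}
    for label, net in class_nets.items():
        a, b = net(pattern)          # uo1 and uo2 of this class's network
        scores[label] = a - b
    return max(scores, key=scores.get)
```

Each `net` here stands in for a trained per-class network; in the patent each would be a small neural network trained only to accept or reject its own class.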
the input signal matrix is logically divided into a plurality of predetermined subsets comprising a unique element of the input signal matrix and the elements in the neighborhood thereof. each predetermined subset is quantized into a first digital address and applied at the address inputs of a memory circuit for retrieving data stored in the addressed memory location, while one signal of the predetermined subset is digitized and weighted and combined with the data retrieved from the addressed memory location for storage in the same addressed memory location. next, a plurality of second digital addresses is generated including predetermined combinations of the first digital address perturbed at least one bit and sequentially applied at the address inputs of the memory circuit whereby the steps of digitizing and weighting one signal of the predetermined subset of the input signal matrix, combining the digitized and weighted signal with the data retrieved from the addressed memory location, and storing the combination back into the addressed memory location are repeated for the second digital addresses.",1991-11-05,"The title of the patent is neuram: neural network with ram and its abstract is a random access memory (ram) circuit is provided wherein an input signal matrix forming an identifiable original pattern is learned and stored such that a distorted facsimile thereof may be applied to generate an output signal matrix forming a replication of the original pattern having improved recognizable features over the distorted facsimile. the input signal matrix is logically divided into a plurality of predetermined subsets comprising a unique element of the input signal matrix and the elements in the neighborhood thereof. 
each predetermined subset is quantized into a first digital address and applied at the address inputs of a memory circuit for retrieving data stored in the addressed memory location, while one signal of the predetermined subset is digitized and weighted and combined with the data retrieved from the addressed memory location for storage in the same addressed memory location. next, a plurality of second digital addresses is generated including predetermined combinations of the first digital address perturbed at least one bit and sequentially applied at the address inputs of the memory circuit whereby the steps of digitizing and weighting one signal of the predetermined subset of the input signal matrix, combining the digitized and weighted signal with the data retrieved from the addressed memory location, and storing the combination back into the addressed memory location are repeated for the second digital addresses. dated 1991-11-05" 5063531,optical neural net trainable in rapid time,"among light emitting and sensitive element pairs arranged along rows and columns of a matrix in each of first and second layers of an optical computer operable as a neural network with one-to-one correspondence kept between the pairs in the first layer and the pairs in the second layer, the light emitting elements and the light sensitive elements are connected along the rows in the first layer and along the columns in the second layer. optical intensity controlling elements of a panel are placed in optical paths defined by the pairs in the first layer and the pairs which correspond in the second layer to the pairs of the first layer, respectively. when the light emitting element rows are driven, optical beams are emitted by the light emitting elements of the first layer and controlled by the respective controlling elements to have first-layer controlled amounts of light, respectively. 
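The learning step of the RAM-based network above (NEURAM) can be sketched for one neighbourhood subset: the subset is quantized into a digital address, a digitized signal from the subset is accumulated at that memory location, and the same value is also accumulated at every address differing from it in one bit. Using the subset's centre element as the digitized signal is an illustrative choice, as are all names.

```python
def train_ram_cell(memory, subset, weight=1.0):
    """Accumulate a weighted, digitised subset value at the subset's
    address and at every one-bit perturbation of that address."""
    address = 0
    for bit in subset:                  # quantise the subset to an address
        address = (address << 1) | (1 if bit else 0)
    value = weight * (1 if subset[len(subset) // 2] else 0)
    memory[address] = memory.get(address, 0.0) + value
    for i in range(len(subset)):        # perturb one bit at a time
        memory[address ^ (1 << i)] = memory.get(address ^ (1 << i), 0.0) + value
    return address
```

Spreading the stored value over one-bit-perturbed addresses is what lets a distorted facsimile of the pattern, whose subsets land on nearby addresses, still retrieve meaningful data at recall time.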
in response to the controlled amounts of light, the light sensitive element columns of the second layer produce second-layer output signals. it is possible to use the second-layer output signals in controlling the controlling elements and thereby to train the optical computer. if desired, the light emitting element columns of the second layer are driven by the second-layer output signals to make the light sensitive element rows of the first layer produce first-layer output signals and to use the first-layer output signals in controlling the controlling elements.",1991-11-05,"The title of the patent is optical neural net trainable in rapid time and its abstract is among light emitting and sensitive element pairs arranged along rows and columns of a matrix in each of first and second layers of an optical computer operable as a neural network with one-to-one correspondence kept between the pairs in the first layer and the pairs in the second layer, the light emitting elements and the light sensitive elements are connected along the rows in the first layer and along the columns in the second layer. optical intensity controlling elements of a panel are placed in optical paths defined by the pairs in the first layer and the pairs which correspond in the second layer to the pairs of the first layer, respectively. when the light emitting element rows are driven, optical beams are emitted by the light emitting elements of the first layer and controlled by the respective controlling elements to have first-layer controlled amounts of light, respectively. in response to the controlled amounts of light, the light sensitive element columns of the second layer produce second-layer output signals. it is possible to use the second-layer output signals in controlling the controlling elements and thereby to train the optical computer. 
if desired, the light emitting element columns of the second layer are driven by the second-layer output signals to make the light sensitive element rows of the first layer produce first-layer output signals and to use the first-layer output signals in controlling the controlling elements. dated 1991-11-05" 5063601,fast-learning neural network system for adaptive pattern recognition apparatus,a neural network for an adaptive pattern recognition apparatus includes a plurality of comparators coupled to an input signal. each comparator compares the input to a different offset voltage. the comparator output is fed to scaling multipliers and then summed to generate an output. the scaling multipliers receive weighting factors generated by using a specific equation selected to ensure a fast-learning neural network.,1991-11-05,The title of the patent is fast-learning neural network system for adaptive pattern recognition apparatus and its abstract is a neural network for an adaptive pattern recognition apparatus includes a plurality of comparators coupled to an input signal. each comparator compares the input to a different offset voltage. the comparator output is fed to scaling multipliers and then summed to generate an output. the scaling multipliers receive weighting factors generated by using a specific equation selected to ensure a fast-learning neural network. dated 1991-11-05 5065040,reverse flow neuron,"a neural network is provided for performing bi-directional signal transformations through a matrix of synapses by alternately sending and receiving signal vectors therethrough via switchable driver circuits. in the forward direction, the input signal is transformed according to the weighting elements of the synapses for providing an output signal.
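The comparator-bank network of the fast-learning patent above — comparators against different offset voltages, scaled and summed — computes a staircase function of the input. One plausible one-pass weighting rule (the patent's specific equation is not reproduced here) sets each scaling weight to the successive difference of the desired outputs at the offsets, so the network reproduces the targets without iterative training.

```python
def build_comparator_net(offsets, targets):
    """Comparator bank: each comparator fires when the input exceeds
    its offset; weights are successive differences of the desired
    outputs, computed in a single pass (a plausible fast-learning
    rule, not the patent's exact equation)."""
    weights = [targets[0]] + [targets[i] - targets[i - 1] for i in range(1, len(targets))]

    def net(x):
        # sum the scaled outputs of every comparator that fires
        return sum(w for t, w in zip(offsets, weights) if x >= t)
    return net
```

Because the weights fall out of a closed-form equation rather than gradient descent, "learning" is a single pass over the target values — the sense in which such a network is fast-learning.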
the drive direction of the switchable driver circuits may be reversed allowing the output signal to flow back through the same synapses thereby performing a reverse transformation, which may actually be an improved estimate of the original input signal. sample and hold circuits are provided for latching the output signals of the switchable driver circuits back to the inputs thereof for repeated forward and reverse signal transformations until an acceptable transformation of the original input signal is realized, thereby achieving an improved estimate of the input signal and corresponding output transformation. more generally, a first input signal may be transformed in one direction through the synapses, while a second input signal, possibly independent and unrelated to the first input signal, may be reverse transformed in the opposite direction using the same synapses as the first direction.",1991-11-12,"The title of the patent is reverse flow neuron and its abstract is a neural network is provided for performing bi-directional signal transformations through a matrix of synapses by alternately sending and receiving signal vectors therethrough via switchable driver circuits. in the forward direction, the input signal is transformed according to the weighting elements of the synapses for providing an output signal. the drive direction of the switchable driver circuits may be reversed allowing the output signal to flow back through the same synapses thereby performing a reverse transformation, which may actually be an improved estimate of the original input signal. sample and hold circuits are provided for latching the output signals of the switchable driver circuits back to the inputs thereof for repeated forward and reverse signal transformations until an acceptable transformation of the original input signal is realized, thereby achieving an improved estimate of the input signal and corresponding output transformation. 
more generally, a first input signal may be transformed in one direction through the synapses, while a second input signal, possibly independent and unrelated to the first input signal, may be reverse transformed in the opposite direction using the same synapses as the first direction. dated 1991-11-12" 5065339,orthogonal row-column neural processor,"the neural computing paradigm is characterized as a dynamic and highly parallel computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. herein is described a neural network architecture called snap which uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. each neuron generates a neuron value from a selected set of input function elements and communicates said neuron value back to said set of input function elements. the total connectivity of each neuron to all neurons is accomplished by an orthogonal row-column relationship of neurons where a given multiplier element operates during a first cycle as a row element within an input function to a column neuron, and during a second cycle as a column element within an input function to a row neuron.",1991-11-12,"The title of the patent is orthogonal row-column neural processor and its abstract is the neural computing paradigm is characterized as a dynamic and highly parallel computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons.
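The reverse-flow idea described above — drive a signal forward through the synapse matrix, then drive the result backward through the same synapses to refine the estimate — can be modelled with a matrix and its transpose (the transpose stands in for the reversed driver direction; the renormalisation stands in for the sample-and-hold stage; all of this is illustrative, not the patented circuit).

```python
def reverse_flow(weights, signal, passes=3):
    """Alternate forward (W x) and reverse (W^T y) transformations
    through one synapse matrix, renormalising between passes."""
    rows, cols = len(weights), len(weights[0])
    x = list(signal)
    for _ in range(passes):
        y = [sum(weights[i][j] * x[j] for j in range(cols)) for i in range(rows)]
        x = [sum(weights[i][j] * y[i] for i in range(rows)) for j in range(cols)]
        n = max(abs(v) for v in x) or 1.0
        x = [v / n for v in x]          # sample-and-hold style renormalisation
    return x, y
```

Iterating forward and reverse passes like this is power iteration on the matrix, so the latched estimate settles toward the input component the synapses respond to most strongly.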
herein is described a neural network architecture called snap which uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. each neuron generates a neuron value from a selected set of input function elements and communicates said neuron value back to said set of input function elements. the total connectivity of each neuron to all neurons is accomplished by an orthogonal row-column relationship of neurons where a given multiplier element operates during a first cycle as a row element within an input function to a column neuron, and during a second cycle as a column element within an input function to a row neuron. dated 1991-11-12" 5067095,spann: sequence processing artificial neural network,"an artificial neural network is provided using a modular, self-organizing approach wherein a separate neural field is contained within each module for recognition and synthesis of particular characteristics of respective input and output signals thereby allowing several of these modules to be interconnected to perform a variety of operations. the first output and second input of one module is respectively coupled to the first input and second output of a second module allowing each module to perform a bi-directional transformation of the information content of the first and second input signals for creating first and second output signals having different levels of information content with respect thereto. in the upward direction, the first low-level input signal of each module is systematically delayed to create a temporal spatial vector from which a lower frequency, high-level first output signal is provided symbolic of the incoming information content.
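The orthogonal row-column scheme of the SNAP patent above can be caricatured in software: one array of weight-by-state products is summed once along rows and once along columns, so each multiplier element serves a row neuron in one cycle and a column neuron in the other, giving every neuron a path to every other through shared hardware. A behavioural sketch only; names are illustrative.

```python
def snap_cycle(weights, state):
    """Sum one product array two ways: column sums model cycle 1
    (each element feeds a column neuron), row sums model cycle 2
    (the same element feeds a row neuron)."""
    n = len(state)
    products = [[weights[i][j] * state[j] for j in range(n)] for i in range(n)]
    column_inputs = [sum(products[i][j] for i in range(n)) for j in range(n)]  # cycle 1
    row_inputs = [sum(products[i][j] for j in range(n)) for i in range(n)]     # cycle 2
    return column_inputs, row_inputs
```

The point of the two-cycle reuse is hardware economy: full N-to-N connectivity is obtained from a single N-by-N multiplier array rather than two.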
since the first output signal contains the same relevant information as the first input signal while operating at a lower frequency, the information content of the latter is said to be compressed into a first high-level output signal. in the downward direction, a second output signal having a low-level of information content is synthesized from a second input signal having a high-level of information content. the second input signal is the best prediction of the first output signal available from the knowledge base of the module, while similarly the second output signal is the prediction of the first input signal.",1991-11-19,"The title of the patent is spann: sequence processing artificial neural network and its abstract is an artificial neural network is provided using a modular, self-organizing approach wherein a separate neural field is contained within each module for recognition and synthesis of particular characteristics of respective input and output signals thereby allowing several of these modules to be interconnected to perform a variety of operations. the first output and second input of one module is respectively coupled to the first input and second output of a second module allowing each module to perform a bi-directional transformation of the information content of the first and second input signals for creating first and second output signals having different levels of information content with respect thereto. in the upward direction, the first low-level input signal of each module is systematically delayed to create a temporal spatial vector from which a lower frequency, high-level first output signal is provided symbolic of the incoming information content. since the first output signal contains the same relevant information as the first input signal while operating at a lower frequency, the information content of the latter is said to be compressed into a first high-level output signal. 
in the downward direction, a second output signal having a low-level of information content is synthesized from a second input signal having a high-level of information content. the second input signal is the best prediction of the first output signal available from the knowledge base of the module, while similarly the second output signal is the prediction of the first input signal. dated 1991-11-19" 5067164,hierarchical constrained automatic learning neural network for character recognition,"highly accurate, reliable optical character recognition is afforded by a layered network having several layers of constrained feature detection wherein each layer of constrained feature detection includes a plurality of constrained feature maps and a corresponding plurality of feature reduction maps. each feature reduction map is connected to only one constrained feature map in the same layer for undersampling that constrained feature map. units in each constrained feature map of the first constrained feature detection layer respond as a function of a corresponding kernel and of different portions of the pixel image of the character captured in a receptive field associated with the unit. units in each feature map of the second constrained feature detection layer respond as a function of a corresponding kernel and of different portions of an individual feature reduction map or a combination of several feature reduction maps in the first constrained feature detection layer as captured in a receptive field of the unit. the feature reduction maps of the second constrained feature detection layer are fully connected to each unit in the final character classification layer. 
kernels are automatically learned by constrained back propagation during network initialization or training.",1991-11-19,"The title of the patent is hierarchical constrained automatic learning neural network for character recognition and its abstract is highly accurate, reliable optical character recognition is afforded by a layered network having several layers of constrained feature detection wherein each layer of constrained feature detection includes a plurality of constrained feature maps and a corresponding plurality of feature reduction maps. each feature reduction map is connected to only one constrained feature map in the same layer for undersampling that constrained feature map. units in each constrained feature map of the first constrained feature detection layer respond as a function of a corresponding kernel and of different portions of the pixel image of the character captured in a receptive field associated with the unit. units in each feature map of the second constrained feature detection layer respond as a function of a corresponding kernel and of different portions of an individual feature reduction map or a combination of several feature reduction maps in the first constrained feature detection layer as captured in a receptive field of the unit. the feature reduction maps of the second constrained feature detection layer are fully connected to each unit in the final character classification layer. kernels are automatically learned by constrained back propagation during network initialization or training. dated 1991-11-19" 5068662,neural network analog-to-digital converter,"an asynchronous, rapid, neural network analog-to-digital converter. this converter requires only two different resistance values in r2r resistor ladders, and does not require both positive and negative biases. 
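The constrained feature maps of the character-recognition patent above apply one shared kernel across every receptive field, and each feature reduction map undersamples its constrained feature map. A one-dimensional toy version of that pair of operations (the patent's maps are two-dimensional; averaging pairs for the 2:1 reduction is an illustrative choice):

```python
def constrained_feature_map(signal, kernel):
    """Shared-kernel (weight-constrained) feature map over a 1-D
    signal, followed by a 2:1 undersampling reduction map."""
    k = len(kernel)
    feature = [sum(kernel[t] * signal[i + t] for t in range(k))
               for i in range(len(signal) - k + 1)]
    reduced = [(feature[i] + feature[i + 1]) / 2.0
               for i in range(0, len(feature) - 1, 2)]
    return feature, reduced
```

Constraining every unit in a map to the same kernel is what makes the learned detectors position-independent, and the reduction maps make the following layer cheaper and more tolerant of small shifts.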
an average of n/2 steps is required for an n-bit conversion.",1991-11-26,"The title of the patent is neural network analog-to-digital converter and its abstract is an asynchronous, rapid, neural network analog-to-digital converter. this converter requires only two different resistance values in r2r resistor ladders, and does not require both positive and negative biases. an average of n/2 steps is required for an n-bit conversion. dated 1991-11-26" 5068801,"optical interconnector and highly interconnected, learning neural network incorporating optical interconnector therein","a variable weight optical interconnector is disclosed to include a projecting device and an interconnection weighting device remote from the projecting device. the projecting device projects a distribution of interconnecting light beams when illuminated by a spatially-modulated light pattern. the weighting device includes a photosensitive screen provided in optical alignment with the projecting device to independently control the intensity of each projected interconnecting beam to thereby assign an interconnection weight to each such beam. further in accordance with the present invention, a highly-interconnected optical neural network having learning capability is disclosed as including a spatial light modulator, a detecting device, an interconnector according to the present invention, and a device responsive to detection signals generated by the detecting device to modify the interconnection weights assigned by the photosensitive screen of the interconnector.",1991-11-26,"The title of the patent is optical interconnector and highly interconnected, learning neural network incorporating optical interconnector therein and its abstract is a variable weight optical interconnector is disclosed to include a projecting device and an interconnection weighting device remote from the projecting device. 
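The asynchronous converter above settles each bit against the analog residual left by the others. A behavioural sketch of such a conversion loop, not the patented R2R circuit: each bit is repeatedly reconsidered until the whole code is stable, with bit i carrying weight 2^-(i+1) for an input in [0, 1).

```python
def neural_adc(vin, n_bits):
    """Asynchronous bit-settling conversion: flip any bit whose state
    disagrees with the residual left by the other bits, until stable."""
    bits = [0] * n_bits
    changed = True
    while changed:
        changed = False
        for i in range(n_bits):          # bit i has weight 2^-(i+1)
            rest = sum(b * 2.0 ** -(j + 1) for j, b in enumerate(bits) if j != i)
            want = 1 if vin - rest >= 2.0 ** -(i + 1) / 2 else 0
            if bits[i] != want:
                bits[i], changed = want, True
    return bits
```

Each flip strictly reduces the squared difference between the input and the code's analog value, so the loop always terminates at a stable code near the input.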
the projecting device projects a distribution of interconnecting light beams when illuminated by a spatially-modulated light pattern. the weighting device includes a photosensitive screen provided in optical alignment with the projecting device to independently control the intensity of each projected interconnecting beam to thereby assign an interconnection weight to each such beam. further in accordance with the present invention, a highly-interconnected optical neural network having learning capability is disclosed as including a spatial light modulator, a detecting device, an interconnector according to the present invention, and a device responsive to detection signals generated by the detecting device to modify the interconnection weights assigned by the photosensitive screen of the interconnector. dated 1991-11-26" 5071231,bidirectional spatial light modulator for neural network computers,"digital data processing unit includes two slms assembled back-to-back with a common photoreceptor, to form a bidirectional spatial light modulator (bslm) which facilitates the flow of data in the forward and reverse directions. an image can be written from the left side of the bslm and read from the left or right side of the unit. an image can also be written from the right side and read from the right or left or both sides of the unit. the photoreceptor sums the light image intensities when data is concurrently written from both sides into the photoreceptor.",1991-12-10,"The title of the patent is bidirectional spatial light modulator for neural network computers and its abstract is digital data processing unit includes two slms assembled back-to-back with a common photoreceptor, to form a bidirectional spatial light modulator (bslm) which facilitates the flow of data in the forward and reverse directions. an image can be written from the left side of the bslm and read from the left or right side of the unit. 
an image can also be written from the right side and read from the right or left or both sides of the unit. the photoreceptor sums the light image intensities when data is concurrently written from both sides into the photoreceptor. dated 1991-12-10" 5073867,digital neural network processing elements,"a preprocessing device is disclosed which performs a linear transformation or power series expansion transformation on the input signals to a neural network node. the outputs of the preprocessing device are combined as a product of these linear transformations and compared to a threshold. this processing element configuration, combining a transformation with a product and threshold comparison, performs non-linear transformations between input data and output results. as a result, this processing element will, by itself, produce both linearly and non-linearly separable boolean logic functions. when this processing element is configured in a network, a two layer neural network can be created which will solve any arbitrary decision making function. this element can be configured in a probability based binary tree neural network which is validatable and verifiable in which the threshold comparison operation can be eliminated. the element can also be implemented in binary logic for ultra high speed. if the linkage element performs the power series expansion, a universal or general purpose element is created.",1991-12-17,"The title of the patent is digital neural network processing elements and its abstract is a preprocessing device is disclosed which performs a linear transformation or power series expansion transformation on the input signals to a neural network node. the outputs of the preprocessing device are combined as a product of these linear transformations and compared to a threshold. this processing element configuration, combining a transformation with a product and threshold comparison, performs non-linear transformations between input data and output results. 
as a result, this processing element will, by itself, produce both linearly and non-linearly separable boolean logic functions. when this processing element is configured in a network, a two layer neural network can be created which will solve any arbitrary decision making function. this element can be configured in a probability based binary tree neural network which is validatable and verifiable in which the threshold comparison operation can be eliminated. the element can also be implemented in binary logic for ultra high speed. if the linkage element performs the power series expansion, a universal or general purpose element is created. dated 1991-12-17" 5075868,memory modification of artificial neural networks,"an artificial neural network, which has a plurality of neurons each receiving a plurality of inputs whose effect is determined by adjustable weights at synapses individually connecting the inputs to the neuron to provide a sum signal to a sigmoidal function generator determining the output of the neuron, undergoes memory modification by a steepest-descent method in which individual variations in the outputs of the neurons are successively generated by small perturbations imposed on the sum signals. as each variation is generated on the output of a neuron, an overall error of all the neuron outputs in relation to their desired values is measured and compared to this error prior to the perturbation.
the difference in these errors, with adjustments which may be changed as the neuron outputs converge toward their desired values, is used to modify each weight of the neuron presently subjected to the perturbation.",1991-12-24,"The title of the patent is memory modification of artificial neural networks and its abstract is an artificial neural network, which has a plurality of neurons each receiving a plurality of inputs whose effect is determined by adjustable weights at synapses individually connecting the inputs to the neuron to provide a sum signal to a sigmoidal function generator determining the output of the neuron, undergoes memory modification by a steepest-descent method in which individual variations in the outputs of the neurons are successively generated by small perturbations imposed on the sum signals. as each variation is generated on the output of a neuron, an overall error of all the neuron outputs in relation to their desired values is measured and compared to this error prior to the perturbation. the difference in these errors, with adjustments which may be changed as the neuron outputs converge toward their desired values, is used to modify each weight of the neuron presently subjected to the perturbation. dated 1991-12-24" 5075869,neural network exhibiting improved tolerance to temperature and power supply variations,"an analog neural network is described which provides a means for reducing the sensitivity of the network to temperature and power supply variations. a first circuit is utilized for generating a signal which exhibits a dependence on temperature corresponding to the variation normally experienced by the network in response to a change in temperature. a second circuit is employed to generate another signal which exhibits a similar dependence, except on power supply variations.
by coupling these signals as inputs to the neural network the sensitivity of the network to temperature and power supply fluctuations is essentially nullified.",1991-12-24,"The title of the patent is neural network exhibiting improved tolerance to temperature and power supply variations and its abstract is an analog neural network is described which provides a means for reducing the sensitivity of the network to temperature and power supply variations. a first circuit is utilized for generating a signal which exhibits a dependence on temperature corresponding to the variation normally experienced by the network in response to a change in temperature. a second circuit is employed to generate another signal which exhibits a similar dependence, except on power supply variations. by coupling these signals as inputs to the neural network the sensitivity of the network to temperature and power supply fluctuations is essentially nullified. dated 1991-12-24" 5075871,variable gain neural network image processing system,"a neural-simulating system for an image processing system includes a plurality of networks arranged in a plurality of layers, the output signals of ones of the layers provide input signals to the others of the layers. each of the plurality of layers includes a plurality of neurons operating in parallel on the input signals to the layers. the plurality of neurons within a layer are arranged in groups. each of the neurons within a group operates in parallel on the input signals. each neuron within a group of neurons operates to extract a specific feature of an area of the image being processed.
each of the neurons derives output signals from the input signals representing the relative weight of the input signal and a gain weight associated with each of the neurons applied thereto based upon a continuously differentiable transfer function for each function.",1991-12-24,"The title of the patent is variable gain neural network image processing system and its abstract is a neural-simulating system for an image processing system includes a plurality of networks arranged in a plurality of layers, the output signals of ones of the layers provide input signals to the others of the layers. each of the plurality of layers includes a plurality of neurons operating in parallel on the input signals to the layers. the plurality of neurons within a layer are arranged in groups. each of the neurons within a group operates in parallel on the input signals. each neuron within a group of neurons operates to extract a specific feature of an area of the image being processed. each of the neurons derives output signals from the input signals representing the relative weight of the input signal and a gain weight associated with each of the neurons applied thereto based upon a continuously differentiable transfer function for each function. dated 1991-12-24" 5075889,arrangement of data cells and neural network system utilizing such an arrangement,"an arrangement of data cells which stores at least one matrix of data words which are arranged in rows and columns, the matrix being distributed in the arrangement in order to deliver/receive, via a single bus, permuted data words which correspond either to a row or to a column of the matrix. each data cell is connected to the single bus via series-connected switches which are associated with a respective addressing mode, the switches which address a same word of a same mode being directly controlled by a same selection signal. circulation members enable the original order of the data on the bus to be restored.
an arrangement of this kind is used in a layered neural network system for executing the error backpropagation algorithm. application: calculator, microprocessors, processor, neural network system. reference: fig. 4.",1991-12-24,"The title of the patent is arrangement of data cells and neural network system utilizing such an arrangement and its abstract is an arrangement of data cells which stores at least one matrix of data words which are arranged in rows and columns, the matrix being distributed in the arrangement in order to deliver/receive, via a single bus, permuted data words which correspond either to a row or to a column of the matrix. each data cell is connected to the single bus via series-connected switches which are associated with a respective addressing mode, the switches which address a same word of a same mode being directly controlled by a same selection signal. circulation members enable the original order of the data on the bus to be restored. an arrangement of this kind is used in a layered neural network system for executing the error backpropagation algorithm. application: calculator, microprocessors, processor, neural network system. reference: fig. 4. dated 1991-12-24" 5077677,probabilistic inference gate,"the present system performs linear transformations on input probabilities and produces outputs which indicate the likelihood of one or more events. the transformation performed is a product of linear transforms such as p_o = [a_j p_j + b_j]·[a_k p_k + b_k] where p_j and p_k are input probabilities, p_o is an output event probability and a_j, b_j, a_k and b_k are transformation constants.
the system includes a basic processing unit or computational unit which performs a probabilistic gate operation to convert two input probability signals into one output probability signal where the output probability is equal to the product of linear transformations of the input probabilities. by appropriate selection of transformation constants logical and probabilistic gates performing the functions of and, nand, or, nor, xor, not, implies and not implies can be created. the basic unit can include three multipliers and two adders if a discrete component hardwired version is needed for speed or a single multiplier/adder, associated storage and multiplex circuits can be used to accomplish the functions of the hardwired version for economy. this basic unit can also be provided as a software implementation, can be implemented as a hardwired decision tree element implementation or implemented as a universal probabilistic processor and provided with a bus communication structure to create expert systems or neural networks suitable for specific tasks. the basic units can be combined to produce a virtual basic building block which has more virtual processors than physical processors to improve processor utilization. the building blocks can be combined into an array to produce either a high speed expert system or a high speed neural network.",1991-12-31,"The title of the patent is probabilistic inference gate and its abstract is the present system performs linear transformations on input probabilities and produces outputs which indicate the likelihood of one or more events. the transformation performed is a product of linear transforms such as p_o = [a_j p_j + b_j]·[a_k p_k + b_k] where p_j and p_k are input probabilities, p_o is an output event probability and a_j, b_j, a_k and b_k are transformation constants.
the system includes a basic processing unit or computational unit which performs a probabilistic gate operation to convert two input probability signals into one output probability signal where the output probability is equal to the product of linear transformations of the input probabilities. by appropriate selection of transformation constants logical and probabilistic gates performing the functions of and, nand, or, nor, xor, not, implies and not implies can be created. the basic unit can include three multipliers and two adders if a discrete component hardwired version is needed for speed or a single multiplier/adder, associated storage and multiplex circuits can be used to accomplish the functions of the hardwired version for economy. this basic unit can also be provided as a software implementation, can be implemented as a hardwired decision tree element implementation or implemented as a universal probabilistic processor and provided with a bus communication structure to create expert systems or neural networks suitable for specific tasks. the basic units can be combined to produce a virtual basic building block which has more virtual processors than physical processors to improve processor utilization. the building blocks can be combined into an array to produce either a high speed expert system or a high speed neural network. dated 1991-12-31" 5080464,optical neural network apparatus using primary processing,"for inputting a two-dimensional image into an optical neural network apparatus, a primary processing device is used to extract the characteristic feature of an object pattern. thereafter, compressed information as a result of the above processing is inputted into the input of the all-optical type optical neural network apparatus, that implements parallel processing adaptively through optical computing, at individual points on the input of the same.
therefore, the primary processing device that was capable of dealing with only logical input information until now can process even vague input information by the use of the optical neural network apparatus located on the later stage. on the other hand, the use of the primary processing device on the previous stage of the optical neural network apparatus enables a limited input range of the optical neural network apparatus to be expanded together with the assurance of higher degree processing by inputting into the optical neural network apparatus results of the characteristic feature extraction from an original image.",1992-01-14,"The title of the patent is optical neural network apparatus using primary processing and its abstract is for inputting a two-dimensional image into an optical neural network apparatus, a primary processing device is used to extract the characteristic feature of an object pattern. thereafter, compressed information as a result of the above processing is inputted into the input of the all-optical type optical neural network apparatus, that implements parallel processing adaptively through optical computing, at individual points on the input of the same. therefore, the primary processing device that was capable of dealing with only logical input information until now can process even vague input information by the use of the optical neural network apparatus located on the later stage. on the other hand, the use of the primary processing device on the previous stage of the optical neural network apparatus enables a limited input range of the optical neural network apparatus to be expanded together with the assurance of higher degree processing by inputting into the optical neural network apparatus results of the characteristic feature extraction from an original image.
dated 1992-01-14" 5083285,matrix-structured neural network with learning circuitry,"a multi-layer perceptron circuit device using integrated configuration which is capable of incorporating self-learning function and which is easily extendable. the device includes: at least one synapse blocks containing: a plurality of synapses for performing weight calculation on input signals to obtain output signals, which are arranged in planar array defined by a first and a second directions; input signal lines for transmitting the input signals to the synapses, arranged along the first direction; and output signal lines for transmitting the output signal from the synapses, arranged along the second direction not identical to the first direction; at least one input neuron blocks containing a plurality of neurons to be connected with the input signal lines; and at least one output neuron blocks containing a plurality of neurons to be connected with the output signal lines.",1992-01-21,"The title of the patent is matrix-structured neural network with learning circuitry and its abstract is a multi-layer perceptron circuit device using integrated configuration which is capable of incorporating self-learning function and which is easily extendable. the device includes: at least one synapse blocks containing: a plurality of synapses for performing weight calculation on input signals to obtain output signals, which are arranged in planar array defined by a first and a second directions; input signal lines for transmitting the input signals to the synapses, arranged along the first direction; and output signal lines for transmitting the output signal from the synapses, arranged along the second direction not identical to the first direction; at least one input neuron blocks containing a plurality of neurons to be connected with the input signal lines; and at least one output neuron blocks containing a plurality of neurons to be connected with the output signal lines. 
dated 1992-01-21" 5086405,floating point adder circuit using neural network,a floating point adder circuit using neural network concepts and having high speed operation is obtained by a controlling circuit using a comparator and an operating circuit using an adder and a subtractor.,1992-02-04,The title of the patent is floating point adder circuit using neural network and its abstract is a floating point adder circuit using neural network concepts and having high speed operation is obtained by a controlling circuit using a comparator and an operating circuit using an adder and a subtractor. dated 1992-02-04 5086479,information processing system using neural network learning function,"an information processing apparatus using a neural network learning function has, in one embodiment, a computer system and a pattern recognition apparatus associated with each other via a communication cable. the computer system includes a learning section having a first neural network and serves to adjust the weights of connection therein as a result of learning with a learning data signal supplied thereto from the pattern recognition apparatus via the communication cable. the pattern recognition apparatus includes an associative output section having a second neural network and receives data on the adjusted weights from the learning section via the communication cable to reconstruct the second neural network with the data on the adjusted weights. 
the pattern recognition apparatus with the associative output section having the reconstructed second neural network performs pattern recognition independently of the computer system with the communication cable being brought into an electrical isolation mode.",1992-02-04,"The title of the patent is information processing system using neural network learning function and its abstract is an information processing apparatus using a neural network learning function has, in one embodiment, a computer system and a pattern recognition apparatus associated with each other via a communication cable. the computer system includes a learning section having a first neural network and serves to adjust the weights of connection therein as a result of learning with a learning data signal supplied thereto from the pattern recognition apparatus via the communication cable. the pattern recognition apparatus includes an associative output section having a second neural network and receives data on the adjusted weights from the learning section via the communication cable to reconstruct the second neural network with the data on the adjusted weights. the pattern recognition apparatus with the associative output section having the reconstructed second neural network performs pattern recognition independently of the computer system with the communication cable being brought into an electrical isolation mode. dated 1992-02-04" 5087826,multi-layer neural network employing multiplexed output neurons,"a multi-layer electrically trainable analog neural network employing multiplexed output neurons having inputs organized into two groups, external and recurrent (i.e., feedback). each layer of the network comprises a matrix of synapse cells which implement a matrix multiplication between an input vector and a weight matrix. in normal operation, an external input vector coupled to the first synaptic array generates a sigmoid response at the output of a set of neurons. 
this output is then fed back to the next and subsequent layers of the network as a recurrent input vector. the output of second layer processing is generated by the same neurons used in first layer processing. thus, the neural network of the present invention can handle n-layer operation by using recurrent connections and a single set of multiplexed output neurons.",1992-02-11,"The title of the patent is multi-layer neural network employing multiplexed output neurons and its abstract is a multi-layer electrically trainable analog neural network employing multiplexed output neurons having inputs organized into two groups, external and recurrent (i.e., feedback). each layer of the network comprises a matrix of synapse cells which implement a matrix multiplication between an input vector and a weight matrix. in normal operation, an external input vector coupled to the first synaptic array generates a sigmoid response at the output of a set of neurons. this output is then fed back to the next and subsequent layers of the network as a recurrent input vector. the output of second layer processing is generated by the same neurons used in first layer processing. thus, the neural network of the present invention can handle n-layer operation by using recurrent connections and a single set of multiplexed output neurons. dated 1992-02-11" 5089862,monocrystalline three-dimensional integrated circuit,""" a monocrystalline monolith contains a 3-d array of interconnected lattice-matched devices (which may be of one kind exclusively, or that kind in combination with one or more other kinds) performing digital, analog, image-processing, or neural-network functions, singly or in combination. localized inclusions of lattice-matched metal and (or) insulator can exist in the monolith, but monolith-wide layers of insulator are avoided. 
the devices may be self-isolated, junction-isolated, or insulator-isolated, and may include but not be limited to mosfets, bjts, jfets, mfets, ccds, resistors, and capacitors. the monolith is fabricated in a single apparatus using a process such as mbe or sputter epitaxy executed in a continuous or quasicontinuous manner under automatic control, and supplanting hundreds of discrete steps with handling and storage steps interpolated. """"writing"""" on the growing crystal is done during crystal growth by methods that may include but not be limited to ion beams, laser beams, patterned light exposures, and physical masks. the interior volume of the fabrication apparatus is far cleaner and more highly controlled than that of a clean room. the apparatus is highly replicated and is amenable to mass production. the product has unprecedented volumetric function density, and high performance stems from short signal paths, low parasitic loading, and 3-d architecture. high reliability stems from contamination-free fabrication, small signal-arrival skew, and generous noise margins. economy stems from mass-produced factory apparatus, automatic ic manufacture, and high ic yield. among the ic products are fast and efficient memories with equally fast and efficient error-correction abilities, crosstalk-free operational amplifiers, and highly paralleled and copiously interconnected neural networks. """,1992-02-18,"The title of the patent is monocrystalline three-dimensional integrated circuit and its abstract is "" a monocrystalline monolith contains a 3-d array of interconnected lattice-matched devices (which may be of one kind exclusively, or that kind in combination with one or more other kinds) performing digital, analog, image-processing, or neural-network functions, singly or in combination. localized inclusions of lattice-matched metal and (or) insulator can exist in the monolith, but monolith-wide layers of insulator are avoided. 
the devices may be self-isolated, junction-isolated, or insulator-isolated, and may include but not be limited to mosfets, bjts, jfets, mfets, ccds, resistors, and capacitors. the monolith is fabricated in a single apparatus using a process such as mbe or sputter epitaxy executed in a continuous or quasicontinuous manner under automatic control, and supplanting hundreds of discrete steps with handling and storage steps interpolated. """"writing"""" on the growing crystal is done during crystal growth by methods that may include but not be limited to ion beams, laser beams, patterned light exposures, and physical masks. the interior volume of the fabrication apparatus is far cleaner and more highly controlled than that of a clean room. the apparatus is highly replicated and is amenable to mass production. the product has unprecedented volumetric function density, and high performance stems from short signal paths, low parasitic loading, and 3-d architecture. high reliability stems from contamination-free fabrication, small signal-arrival skew, and generous noise margins. economy stems from mass-produced factory apparatus, automatic ic manufacture, and high ic yield. among the ic products are fast and efficient memories with equally fast and efficient error-correction abilities, crosstalk-free operational amplifiers, and highly paralleled and copiously interconnected neural networks. "" dated 1992-02-18" 5091780,"a trainable security system method for the same","a security system comprised of a device for monitoring an area under surveillance. the monitoring device produces images of the area. the security system is also comprised of a device for processing the images to determine whether the area is in a desired state or an undesired state. the processing device is trainable to learn the difference between the desired state and the undesired state.
in a preferred embodiment, the monitoring device includes a video camera which produces video images of the area and the processing device includes a computer simulating a neural network. a method for determining whether an area under surveillance is in a desired state or an undesired state. the method comprises the steps of collecting data in a computer about the area which defines when the area is in the desired state or the undesired state. next, training the computer from the collected data to essentially correctly identify when the area is in the desired state or in the undesired state while the area is under surveillance. next, performing surveillance of the area with a computer such that the computer determines whether the area is in a desired state or the undesired state.",1992-02-25,"The title of the patent is a trainable security system method for the same and its abstract is a security system comprised of a device for monitoring an area under surveillance. the monitoring device produces images of the area. the security system is also comprised of a device for processing the images to determine whether the area is in a desired state or an undesired state. the processing device is trainable to learn the difference between the desired state and the undesired state. in a preferred embodiment, the monitoring device includes a video camera which produces video images of the area and the processing device includes a computer simulating a neural network. a method for determining whether an area under surveillance is in a desired state or an undesired state. the method comprises the steps of collecting data in a computer about the area which defines when the area is in the desired state or the undesired state. next, training the computer from the collected data to essentially correctly identify when the area is in the desired state or in the undesired state while the area is under surveillance.
next, performing surveillance of the area with a computer such that the computer determines whether the area is in a desired state or the undesired state. dated 1992-02-25" 5091864,systolic processor elements for a neural network,"a neural net signal processor provided with a single layer neural net constituted of n neuron circuits which sums the results of the multiplication of each of n input signals xj(j=1 to n) by a coefficient mij to produce a multiply-accumulate value ##equ1## thereof, in which input signals xj(j=1 to n) for input to the single layer neural net are input as serial input data, comprising: a multiplicity of systolic processor elements spe-1(i=1 to m), each comprised of a two-state input data delay latch; a coefficient memory; means for multiplying and summing for multiply-accumulate output operations; an accumulator; a multiplexor for selecting a preceding stage multiply-accumulate output sk(k=1 to i-1) and the multiply-accumulate product si computed by the said circuit; wherein the multiplicity of systolic processor elements are serially connected to form an element array and element multiply-accumulate output operations are executed sequentially to obtain the serial multiply-accumulate outputs si(i=1 to m) of one layer from the element array.",1992-02-25,"The title of the patent is systolic processor elements for a neural network and its abstract is a neural net signal processor provided with a single layer neural net constituted of n neuron circuits which sums the results of the multiplication of each of n input signals xj(j=1 to n) by a coefficient mij to produce a multiply-accumulate value ##equ1## thereof, in which input signals xj(j=1 to n) for input to the single layer neural net are input as serial input data, comprising: a multiplicity of systolic processor elements spe-1(i=1 to m), each comprised of a two-state input data delay latch; a coefficient memory; means for multiplying and summing for multiply-accumulate output operations; 
an accumulator; a multiplexor for selecting a preceding stage multiply-accumulate output sk(k=1 to i-1) and the multiply-accumulate product si computed by the said circuit; wherein the multiplicity of systolic processor elements are serially connected to form an element array and element multiply-accumulate output operations are executed sequentially to obtain the serial multiply-accumulate outputs si(i=1 to m) of one layer from the element array. dated 1992-02-25" 5091965,video image processing apparatus,"a video image processing apparatus in which an analog video image can be satisfactorily converted to a binary value by calculating threshold values of respective neurons and coupling coefficients of respective synapses of a neural network circuit on the basis of the input analog video image and a pre-determined function. by arranging so that a difference component e between the input analog video image and the binary value video image is defined as ##equ1## where .alpha. is the coefficient, u.sub.(i) is the value which results from converting the input analog video image into the binary value, and p.sub.(i,j) is the value obtained from the function and g.sub.(i) is the value which is obtained from the function and the input analog video image, it is possible to convert the video image into the binary value by the use of the neural network circuit. further, by setting the function to have a frequency characteristic of a human's eyes, it is possible to obtain a binary value video image which is excellent from a human's visual standpoint. 
furthermore, if the input analog video image is a computer hologram and the function is a window function which indicates a range of a desired video image in a reproduced image which results from fourier-transforming the computer hologram, the noise in the range of the desired video image in the reproduced video image can be reduced to provide an excellent reproduced image.",1992-02-25,"The title of the patent is video image processing apparatus and its abstract is a video image processing apparatus in which an analog video image can be satisfactorily converted to a binary value by calculating threshold values of respective neurons and coupling coefficients of respective synapses of a neural network circuit on the basis of the input analog video image and a pre-determined function. by arranging so that a difference component e between the input analog video image and the binary value video image is defined as ##equ1## where .alpha. is the coefficient, u.sub.(i) is the value which results from converting the input analog video image into the binary value, and p.sub.(i,j) is the value obtained from the function and g.sub.(i) is the value which is obtained from the function and the input analog video image, it is possible to convert the video image into the binary value by the use of the neural network circuit. further, by setting the function to have a frequency characteristic of a human's eyes, it is possible to obtain a binary value video image which is excellent from a human's visual standpoint. furthermore, if the input analog video image is a computer hologram and the function is a window function which indicates a range of a desired video image in a reproduced image which results from fourier-transforming the computer hologram, the noise in the range of the desired video image in the reproduced video image can be reduced to provide an excellent reproduced image. 
dated 1992-02-25" 5092343,waveform analysis apparatus and method using neural network techniques,"a waveform analysis assembly (10) includes a sensor (12) for detecting physiological electrical and mechanical signals produced by the body. an extraction neural network (22, 22') will learn a repetitive waveform of the electrical signal, store the waveform in memory (18), extract the waveform from the electrical signal, store the location times of occurrences of the waveform, and subtract the waveform from the electrical signal. each significantly different waveform in the electrical signal is learned and extracted. a single or multilayer neural network (22, 22') accomplishes the learning and extraction with either multiple passes over the electrical signal or accomplishes the learning and extraction of all waveforms in a single pass over the electrical signal. a reducer (20) receives the stored waveforms and times and reduces them into features characterizing the waveforms. a classifier neural network (36) analyzes the features by classifying them through nonlinear mapping techniques within the network representing diseased states and produces results of diseased states based on learned features of the normal and patient groups.",1992-03-03,"The title of the patent is waveform analysis apparatus and method using neural network techniques and its abstract is a waveform analysis assembly (10) includes a sensor (12) for detecting physiological electrical and mechanical signals produced by the body. an extraction neural network (22, 22') will learn a repetitive waveform of the electrical signal, store the waveform in memory (18), extract the waveform from the electrical signal, store the location times of occurrences of the waveform, and subtract the waveform from the electrical signal. each significantly different waveform in the electrical signal is learned and extracted. 
a single or multilayer neural network (22, 22') accomplishes the learning and extraction with either multiple passes over the electrical signal or accomplishes the learning and extraction of all waveforms in a single pass over the electrical signal. a reducer (20) receives the stored waveforms and times and reduces them into features characterizing the waveforms. a classifier neural network (36) analyzes the features by classifying them through nonlinear mapping techniques within the network representing diseased states and produces results of diseased states based on learned features of the normal and patient groups. dated 1992-03-03" 5093781,cellular network assignment processor using minimum/maximum convergence technique,"a cellular network assignment processor (10) for solving optimization problems utilizing a neural network architecture having a matrix of simple processing cells (12) that are highly interconnected in a regular structure. the cells (12) accept, as input, costs in an assignment problem. the position of each cell (12) corresponds to the position of the cost in the associated constraint space of the assignment problem. each cell (12) is capable of receiving, storing and transmitting cost values and is also capable of determining if it is the maximum or the minimum of cells (12) to which it is connected. operating on one row of cells (12) at a time, the processor (10) determines if a conflict exists between selected connected cells (12) until a cell (12) with no conflict is found in each row. 
the end result is a chosen cell (12) in each row, the chosen cells (12) together representing a valid solution to the assignment problem.",1992-03-03,"The title of the patent is cellular network assignment processor using minimum/maximum convergence technique and its abstract is a cellular network assignment processor (10) for solving optimization problems utilizing a neural network architecture having a matrix of simple processing cells (12) that are highly interconnected in a regular structure. the cells (12) accept, as input, costs in an assignment problem. the position of each cell (12) corresponds to the position of the cost in the associated constraint space of the assignment problem. each cell (12) is capable of receiving, storing and transmitting cost values and is also capable of determining if it is the maximum or the minimum of cells (12) to which it is connected. operating on one row of cells (12) at a time, the processor (10) determines if a conflict exists between selected connected cells (12) until a cell (12) with no conflict is found in each row. the end result is a chosen cell (12) in each row, the chosen cells (12) together representing a valid solution to the assignment problem. dated 1992-03-03" 5093792,combustion prediction and discrimination apparatus for an internal combustion engine and control apparatus therefor,"an apparatus for predicting and discriminating whether or not misfire, knocking and the like will occur from the cylinder pressure before the occurrence of the misfire, the knocking and the like by the use of a three layered neural network. the cylinder pressure signal detected by a cylinder pressure sensor is sampled and input to each of the elements of the input layer. the signal then is modulated corresponding to the strength (weight) of the connection between each of the elements, and transmitted to the hidden and output layers. 
the magnitude of the signal from the elements of the output layer represents the prediction and discrimination results. the weight is learned and determined by a back propagation method.",1992-03-03,"The title of the patent is combustion prediction and discrimination apparatus for an internal combustion engine and control apparatus therefor and its abstract is an apparatus for predicting and discriminating whether or not misfire, knocking and the like will occur from the cylinder pressure before the occurrence of the misfire, the knocking and the like by the use of a three layered neural network. the cylinder pressure signal detected by a cylinder pressure sensor is sampled and input to each of the elements of the input layer. the signal then is modulated corresponding to the strength (weight) of the connection between each of the elements, and transmitted to the hidden and output layers. the magnitude of the signal from the elements of the output layer represents the prediction and discrimination results. the weight is learned and determined by a back propagation method. dated 1992-03-03" 5093899,neural network with normalized learning constant for high-speed stable learning,"the present invention is concerned with a signal processing system having a learning function pursuant to the back-propagation learning rule by the neural network, in which the learning rate is dynamically changed as a function of input values to effect high-speed stable learning. the signal processing system of the present invention is so arranged that, by executing signal processing for the input signals by the recurrent network formed by units each corresponding to a neuron, the features of the sequential time series pattern such as voice signals fluctuating on the time axis can be extracted through learning the coupling state of the recurrent network. 
the present invention modifies the prior art weight change algorithm .delta.w.sub.ji (n+1)=.eta.(.delta..sub.pj o.sub.pi)+.alpha..delta.w.sub.ji (n) into .delta.w.sub.ji (n+1)=.eta..beta..sub.j (.delta..sub.pj o.sub.pi)+.alpha..delta.w.sub.ji (n), where .beta..sub.j =1/(.SIGMA..sub.i o.sub.pi.sup.2 +1) is used to normalize the learning constant.",1992-03-03,"The title of the patent is neural network with normalized learning constant for high-speed stable learning and its abstract is the present invention is concerned with a signal processing system having a learning function pursuant to the back-propagation learning rule by the neural network, in which the learning rate is dynamically changed as a function of input values to effect high-speed stable learning. the signal processing system of the present invention is so arranged that, by executing signal processing for the input signals by the recurrent network formed by units each corresponding to a neuron, the features of the sequential time series pattern such as voice signals fluctuating on the time axis can be extracted through learning the coupling state of the recurrent network. the present invention modifies the prior art weight change algorithm .delta.w.sub.ji (n+1)=.eta.(.delta..sub.pj o.sub.pi)+.alpha..delta.w.sub.ji (n) into .delta.w.sub.ji (n+1)=.eta..beta..sub.j (.delta..sub.pj o.sub.pi)+.alpha..delta.w.sub.ji (n), where .beta..sub.j =1/(.SIGMA..sub.i o.sub.pi.sup.2 +1) is used to normalize the learning constant. dated 1992-03-03" 5093900,reconfigurable neural network,"realization of a reconfigurable neuron for use in a neural network has been achieved using analog techniques. in the reconfigurable neuron, digital input data are multiplied by programmable digital weights in a novel connection structure whose output permits straightforward summation of the products thereby forming a sum signal. the sum signal is multiplied by a programmable scalar, in general, 1, when the input data and the digital weights are binary. 
when the digital input data and the digital weights are multilevel, the scalar in each reconfigurable neuron is programmed to be a fraction which corresponds to the bit position in the digital data representation, that is, a programmable scalar of 1/2, 1/4, 1/8, and so on. the signal formed by scalar multiplication is passed through a programmable build out circuit which permits neural network reconfiguration by interconnection of one neuron to one or more other neurons. following the build out circuit, the output signal therefrom is supplied to one input of a differential comparator for the reconfigurable neuron. the differential comparator receives its other input from a supplied reference potential. in general, the comparator and reference potential level are designed to generate the nonlinearity for the neuron. one common nonlinearity is a hard limiter function. the present neuron offers the capability of synthesizing other nonlinear transfer functions by utilizing several reference potential levels connected through a controllable switching circuit.",1992-03-03,"The title of the patent is reconfigurable neural network and its abstract is realization of a reconfigurable neuron for use in a neural network has been achieved using analog techniques. in the reconfigurable neuron, digital input data are multiplied by programmable digital weights in a novel connection structure whose output permits straightforward summation of the products thereby forming a sum signal. the sum signal is multiplied by a programmable scalar, in general, 1, when the input data and the digital weights are binary. when the digital input data and the digital weights are multilevel, the scalar in each reconfigurable neuron is programmed to be a fraction which corresponds to the bit position in the digital data representation, that is, a programmable scalar of 1/2, 1/4, 1/8, and so on. 
the signal formed by scalar multiplication is passed through a programmable build out circuit which permits neural network reconfiguration by interconnection of one neuron to one or more other neurons. following the build out circuit, the output signal therefrom is supplied to one input of a differential comparator for the reconfigurable neuron. the differential comparator receives its other input from a supplied reference potential. in general, the comparator and reference potential level are designed to generate the nonlinearity for the neuron. one common nonlinearity is a hard limiter function. the present neuron offers the capability of synthesizing other nonlinear transfer functions by utilizing several reference potential levels connected through a controllable switching circuit. dated 1992-03-03" 5095443,plural neural network system having a successive approximation learning method,"a neural network structure includes input units for receiving input data, and a plurality of neural networks connected in parallel and connected to the input units. the plurality of neural networks learn in turn correspondence between the input data and teacher data so that the difference between the input data and the teacher becomes small. the neural network structure further includes output units connected to the plurality of neural networks, for outputting a result of learning on the basis of the results of learning in the plurality of neural networks.",1992-03-10,"The title of the patent is plural neural network system having a successive approximation learning method and its abstract is a neural network structure includes input units for receiving input data, and a plurality of neural networks connected in parallel and connected to the input units. the plurality of neural networks learn in turn correspondence between the input data and teacher data so that the difference between the input data and the teacher becomes small. 
the neural network structure further includes output units connected to the plurality of neural networks, for outputting a result of learning on the basis of the results of learning in the plurality of neural networks. dated 1992-03-10" 5095459,optical neural network,"an optical neural network which imitates a biological neural network, to provide an associative and/or pattern recognition function, is made of light emitting elements to represent an input neuron state vector, a correlation matrix which modulates light according to stored vector information, light receiving elements, an accumulator and a comparator to perform a threshold function. a stored vector closest to an input vector can be found from a large amount of information without increasing the system size by dividing the correlation matrix and the input neuron state vector with time division techniques, frequency modulation or phase modulation techniques. positive and negative values can also be provided with similar techniques.",1992-03-10,"The title of the patent is optical neural network and its abstract is an optical neural network which imitates a biological neural network, to provide an associative and/or pattern recognition function, is made of light emitting elements to represent an input neuron state vector, a correlation matrix which modulates light according to stored vector information, light receiving elements, an accumulator and a comparator to perform a threshold function. a stored vector closest to an input vector can be found from a large amount of information without increasing the system size by dividing the correlation matrix and the input neuron state vector with time division techniques, frequency modulation or phase modulation techniques. positive and negative values can also be provided with similar techniques. 
dated 1992-03-10" 5099114,optical wavelength demultiplexer,"an optical wavelength demultiplexer including an optical conversion device which converts a difference in wavelengths of a plurality of input signals into a difference in spatial power distribution of the input light signals, and a pattern recognition element for recognizing patterns of the spatial power distribution and taking out output signals. at the output portion of the optical conversion device, spatial power distributions are formed which are different for different wavelengths. after converting the spatial power distributions by the pattern recognition element into electrical signals, pattern recognition of the signals is performed to regenerate the original input signals with their respective wavelengths. the optical conversion device uses a diffractive grating or a combination of an optical multimode circuit, an optical multimode fiber, and a plurality of optical wavelengths. the pattern recognition element is constructed by a combination of a photo-detector array and a neural network, or a combination of a hologram element, a photo-detector array and a neural network.",1992-03-24,"The title of the patent is optical wavelength demultiplexer and its abstract is an optical wavelength demultiplexer including an optical conversion device which converts a difference in wavelengths of a plurality of input signals into a difference in spatial power distribution of the input light signals, and a pattern recognition element for recognizing patterns of the spatial power distribution and taking out output signals. at the output portion of the optical conversion device, spatial power distributions are formed which are different for different wavelengths. after converting the spatial power distributions by the pattern recognition element into electrical signals, pattern recognition of the signals is performed to regenerate the original input signals with their respective wavelengths. 
the optical conversion device uses a diffractive grating or a combination of an optical multimode circuit, an optical multimode fiber, and a plurality of optical wavelengths. the pattern recognition element is constructed by a combination of a photo-detector array and a neural network, or a combination of a hologram element, a photo-detector array and a neural network. dated 1992-03-24" 5099434,continuous-time optical neural network,"an all-optical, continuous-time, recurrent neural network is disclosed which is capable of executing a broad class of energy-minimizing neural net algorithms. the network is a resonator which contains a saturable, two-beam amplifier; two volume holograms; and a linear, two-beam amplifier. the saturable amplifier permits, through the use of a spatially patterned signal beam, the realization of a two-dimensional optical neuron array; the two volume holograms provide adaptive, global network interconnectivity; and the linear amplifier supplies sufficient resonator gain to permit convergent operation of the network.",1992-03-24,"The title of the patent is continuous-time optical neural network and its abstract is an all-optical, continuous-time, recurrent neural network is disclosed which is capable of executing a broad class of energy-minimizing neural net algorithms. the network is a resonator which contains a saturable, two-beam amplifier; two volume holograms; and a linear, two-beam amplifier. the saturable amplifier permits, through the use of a spatially patterned signal beam, the realization of a two-dimensional optical neuron array; the two volume holograms provide adaptive, global network interconnectivity; and the linear amplifier supplies sufficient resonator gain to permit convergent operation of the network. 
dated 1992-03-24" 5103431,apparatus for detecting sonar signals embedded in noise,"apparatus for detecting sonar signals embedded in noise includes a neural network trained to detect signals in response to the slope of amplitude rank ordered noise corrected powers. a detector detects an analog waveform. means samples and digitizes the analog waveform to obtain digital samples which in turn are passed through a cosine window. the digital samples are fourier transformed into conjugate sets of complex numbers representing amplitude and phase. one conjugate set of the complex numbers is discarded, and the remaining complex numbers ranked according to frequency. the sum of the square of the real and imaginary component of each of the remaining complex numbers in a frequency band are provided to obtain a corresponding series of values representing estimated power ranked by frequency over the band. the noise contained in subbands of the band is estimated. each estimated power is then divided by the estimated noise of the subband containing the estimated power to obtain corresponding noise corrected powers, which are rank ordered according to amplitude. the amplitude rank ordered noise powers are provided to corresponding inputs of the neural network.",1992-04-07,"The title of the patent is apparatus for detecting sonar signals embedded in noise and its abstract is apparatus for detecting sonar signals embedded in noise includes a neural network trained to detect signals in response to the slope of amplitude rank ordered noise corrected powers. a detector detects an analog waveform. means samples and digitizes the analog waveform to obtain digital samples which in turn are passed through a cosine window. the digital samples are fourier transformed into conjugate sets of complex numbers representing amplitude and phase. one conjugate set of the complex numbers is discarded, and the remaining complex numbers ranked according to frequency. 
the sum of the square of the real and imaginary component of each of the remaining complex numbers in a frequency band are provided to obtain a corresponding series of values representing estimated power ranked by frequency over the band. the noise contained in subbands of the band is estimated. each estimated power is then divided by the estimated noise of the subband containing the estimated power to obtain corresponding noise corrected powers, which are rank ordered according to amplitude. the amplitude rank ordered noise powers are provided to corresponding inputs of the neural network. dated 1992-04-07" 5103488,method of and device for moving image contour recognition,the recognition method is applied to visual telephony image coding. matrices of digital samples relevant to the individual frames of the video transmission are submitted to a first processing whereby the foreground region containing the figure is identified. the information concerning the elements of such a region is then processed by edge recognition algorithms to detect a group of elements possibly belonging to the contour. the group of elements is analyzed to select a sequence of elements distributed on the average along a line. the sequence of elements is processed by a neural network to build up the continuous contour which is then coded.,1992-04-07,The title of the patent is method of and device for moving image contour recognition and its abstract is the recognition method is applied to visual telephony image coding. matrices of digital samples relevant to the individual frames of the video transmission are submitted to a first processing whereby the foreground region containing the figure is identified. the information concerning the elements of such a region is then processed by edge recognition algorithms to detect a group of elements possibly belonging to the contour. the group of elements is analyzed to select a sequence of elements distributed on the average along a line. 
the sequence of elements is processed by a neural network to build up the continuous contour which is then coded. dated 1992-04-07 5103496,artificial neural network system for memory modification,"an artificial neural network, which has a plurality of neurons each receiving a plurality of inputs whose effect is determined by adjustable weights at synapses individually connecting the inputs to the neuron to provide a sum signal to a sigmoidal function generator determining the output of the neuron, undergoes memory modification by a steepest-descent method in which individual variations in the outputs of the neurons are successively generated by small perturbations imposed on the sum signals. as each variation is generated on the output of a neuron, an overall error of all the neuron outputs in relation to their desired values is measured and compared to this error prior to the perturbation. the difference in these errors, with adjustments which may be changed as the neuron outputs converge toward their desired values, is used to modify each weight of the neuron presently subjected to the perturbation.",1992-04-07,"The title of the patent is artificial neural network system for memory modification and its abstract is an artificial neural network, which has a plurality of neurons each receiving a plurality of inputs whose effect is determined by adjustable weights at synapses individually connecting the inputs to the neuron to provide a sum signal to a sigmoidal function generator determining the output of the neuron, undergoes memory modification by a steepest-descent method in which individual variations in the outputs of the neurons are successively generated by small perturbations imposed on the sum signals. as each variation is generated on the output of a neuron, an overall error of all the neuron outputs in relation to their desired values is measured and compared to this error prior to the perturbation. 
the difference in these errors, with adjustments which may be changed as the neuron outputs converge toward their desired values, is used to modify each weight of the neuron presently subjected to the perturbation. dated 1992-04-07" 5105468,time delay neural network for printed and cursive handwritten character recognition,"a time delay neural network is defined having feature detection layers which are constrained for extracting features and subsampling a sequence of feature vectors input to the particular feature detection layer. output from the network for both digit and uppercase letters is provided by an output classification layer which is fully connected to the final feature detection layer. each feature vector relates to coordinate information about the original character preserved in a temporal order together with additional information related to the original character at the particular coordinate point. such additional information may include local geometric information, local pen information, and phantom stroke coordinate information relating to connecting segments between the end point of one stroke and the beginning point of another stroke. the network is also defined to increase the number of feature elements in each feature vector from one feature detection layer to the next. that is, as the network is reducing its dependence on temporally related features, it is increasing its dependence on more features and more complex features.",1992-04-14,"The title of the patent is time delay neural network for printed and cursive handwritten character recognition and its abstract is a time delay neural network is defined having feature detection layers which are constrained for extracting features and subsampling a sequence of feature vectors input to the particular feature detection layer. output from the network for both digit and uppercase letters is provided by an output classification layer which is fully connected to the final feature detection layer. 
each feature vector relates to coordinate information about the original character preserved in a temporal order together with additional information related to the original character at the particular coordinate point. such additional information may include local geometric information, local pen information, and phantom stroke coordinate information relating to connecting segments between the end point of one stroke and the beginning point of another stroke. the network is also defined to increase the number of feature elements in each feature vector from one feature detection layer to the next. that is, as the network is reducing its dependence on temporally related features, it is increasing its dependence on more features and more complex features. dated 1992-04-14" 5107442,adaptive neural network image processing system,"a neural-simulating system for processing input stimuli includes a plurality of layers, each layer receives layer input signals and generates layer output signals, the layer input signals include signals from the input stimuli and ones of the layer output signals from only previous layers within the plurality of layers. each of the plurality of layers includes a plurality of neurons operating in parallel on the layer input signals applied to the plurality of layers. each of the neurons derives neuron output signals from a continuously differentiable transfer function for each of the neurons based upon a combination of sets of weights associated with the neurons and the layer input signals. an adaptive network is associated with each neuron for generating weight correction signals based upon gradient estimate signals and convergence factors signals of each neuron and for processing the weight correction signals to thereby modify the weights associated with each neuron. 
an error measuring circuit generates relative powered error signals for use in generating the gradient estimate signals and the convergence factors signals.",1992-04-21,"The title of the patent is adaptive neural network image processing system and its abstract is a neural-simulating system for processing input stimuli includes a plurality of layers, each layer receives layer input signals and generates layer output signals, the layer input signals include signals from the input stimuli and ones of the layer output signals from only previous layers within the plurality of layers. each of the plurality of layers includes a plurality of neurons operating in parallel on the layer input signals applied to the plurality of layers. each of the neurons derives neuron output signals from a continuously differentiable transfer function for each of the neurons based upon a combination of sets of weights associated with the neurons and the layer input signals. an adaptive network is associated with each neuron for generating weight correction signals based upon gradient estimate signals and convergence factors signals of each neuron and for processing the weight correction signals to thereby modify the weights associated with each neuron. an error measuring circuit generates relative powered error signals for use in generating the gradient estimate signals and the convergence factors signals. dated 1992-04-21" 5107454,pattern associative memory system,"in a pattern associative memory system, an error correcting circuit is constructed in the form of a neural network. a memory condition of the error correcting circuit is established according to a back propagation method. 
if a memory pattern is recollected, an output from the error correcting circuit is again inputted to the error correcting circuit for feedback, thereby repeatedly performing error correction calculations of a pattern as the basis of recollection.",1992-04-21,"The title of the patent is pattern associative memory system and its abstract is in a pattern associative memory system, an error correcting circuit is constructed in the form of a neural network. a memory condition of the error correcting circuit is established according to a back propagation method. if a memory pattern is recollected, an output from the error correcting circuit is again inputted to the error correcting circuit for feedback, thereby repeatedly performing error correction calculations of a pattern as the basis of recollection. dated 1992-04-21" 5108170,perimetric instrument,"the present invention relates to a perimetric instrument for measuring a range of a visual field of an eye of a man, and more particularly to a perimetric instrument which includes an abnormal visual field pattern analogical inferring section using a multilayer neural network and can analogically infer an abnormal visual field pattern of a measurement object person. further, the present invention provides a perimetric instrument which can automatically make a determination of an additional target. in particular, according to the present invention, a multilayer neural network including an input layer, hidden layers and an output layer introduces a neural weight ratio which is determined based on responses when the visual field is normal and abnormal, and as a response from a responding section is inputted to the input layer while an output from the output layer is sent out to an analogical inferring section, the analogical inferring section can analogically infer an abnormal visual field pattern of the measurement object person. 
accordingly, it is possible to help judgment of an abnormal visual field pattern by a measurer, and since also labor, time and so forth of the measurer are reduced, burdens to the measurer and the measurement object person can be reduced remarkably.",1992-04-28,"The title of the patent is perimetric instrument and its abstract is the present invention relates to a perimetric instrument for measuring a range of a visual field of an eye of a man, and more particularly to a perimetric instrument which includes an abnormal visual field pattern analogical inferring section using a multilayer neural network and can analogically infer an abnormal visual field pattern of a measurement object person. further, the present invention provides a perimetric instrument which can automatically make a determination of an additional target. in particular, according to the present invention, a multilayer neural network including an input layer, hidden layers and an output layer introduces a neural weight ratio which is determined based on responses when the visual field is normal and abnormal, and as a response from a responding section is inputted to the input layer while an output from the output layer is sent out to an analogical inferring section, the analogical inferring section can analogically infer an abnormal visual field pattern of the measurement object person. accordingly, it is possible to help judgment of an abnormal visual field pattern by a measurer, and since also labor, time and so forth of the measurer are reduced, burdens to the measurer and the measurement object person can be reduced remarkably.
dated 1992-04-28" 5109275,printing signal correction and printer operation control apparatus utilizing neural network,"an apparatus for printing signal correction and printer operation control, for use in applications such as color copiers, utilizes a neural network to convert input image signals, derived for example by scanning and analyzing an original image, into printing density signals which are supplied to a printer. in addition, a detection signal expressing at least one internal environmental condition of the printer, such as temperature, is inputted to the neural network, so that the output printing density signals are automatically compensated for changes in internal environment of the printer.",1992-04-28,"The title of the patent is printing signal correction and printer operation control apparatus utilizing neural network and its abstract is an apparatus for printing signal correction and printer operation control, for use in applications such as color copiers, utilizes a neural network to convert input image signals, derived for example by scanning and analyzing an original image, into printing density signals which are supplied to a printer. in addition, a detection signal expressing at least one internal environmental condition of the printer, such as temperature, is inputted to the neural network, so that the output printing density signals are automatically compensated for changes in internal environment of the printer. dated 1992-04-28" 5111516,apparatus for visual recognition,"a basic image of objects is extracted from a two-dimensional image of objects. geometrical elements of the objects are extracted from the extracted basic image. 
the objects to be recognized are identified by searching a combination of the geometrical elements which match a geometrical model and then utilizing candidate position/orientation of the objects to be recognized, said candidate position/orientation being determined from a relationship in relative position between the combination of geometrical elements and the geometrical model. mesh cells fixed to the geometrical model are mapped on the basic image based on the candidate position/orientation. in addition, verification is made as to whether an image of the geometrical model mapped by the candidate position/orientation is accurately matched with an image of one of the objects to be recognized, through a neural network to which values got from the basic image included in the individual mesh cells are to be applied as input values. combination weight factors employed in the neural network are learned according to the verified results. it is also possible to recognize the multi-purpose objects according to how to learn the combination weight factors.",1992-05-05,"The title of the patent is apparatus for visual recognition and its abstract is a basic image of objects is extracted from a two-dimensional image of objects. geometrical elements of the objects are extracted from the extracted basic image. the objects to be recognized are identified by searching a combination of the geometrical elements which match a geometrical model and then utilizing candidate position/orientation of the objects to be recognized, said candidate position/orientation being determined from a relationship in relative position between the combination of geometrical elements and the geometrical model. mesh cells fixed to the geometrical model are mapped on the basic image based on the candidate position/orientation. 
in addition, verification is made as to whether an image of the geometrical model mapped by the candidate position/orientation is accurately matched with an image of one of the objects to be recognized, through a neural network to which values got from the basic image included in the individual mesh cells are to be applied as input values. combination weight factors employed in the neural network are learned according to the verified results. it is also possible to recognize the multi-purpose objects according to how to learn the combination weight factors. dated 1992-05-05" 5111531,process control using neural network,a control system and method for a continuous process in which a trained neural network predicts the value of an indirectly controlled process variable and the values of directly controlled process variables are changed to cause the predicted value to approach a desired value.,1992-05-05,The title of the patent is process control using neural network and its abstract is a control system and method for a continuous process in which a trained neural network predicts the value of an indirectly controlled process variable and the values of directly controlled process variables are changed to cause the predicted value to approach a desired value. dated 1992-05-05 5113482,neural network model for reaching a goal state,""" an object, such as a robot, is located at an initial state in a finite state space area and moves under the control of the unsupervised neural network model of the invention. the network instructs the object to move in one of several directions from the initial state. upon reaching another state, the model again instructs the object to move in one of several directions. these instructions continue until either: a) the object has completed a cycle by ending up back at a state it has been to previously during this cycle, or b) the object has completed a cycle by reaching the goal state. 
if the object ends up back at a state it has been to previously during this cycle, the neural network model ends the cycle and immediately begins a new cycle from the present location. when the object reaches the goal state, the neural network model learns that this path is productive towards reaching the goal state, and is given delayed reinforcement in the form of a """"reward"""". upon reaching a state, the neural network model calculates a level of satisfaction with its progress towards reaching the goal state. if the level of satisfaction is low, the neural network model is more likely to override what has been learned thus far and deviate from a path known to lead to the goal state to experiment with new and possibly better paths. if the level of satisfaction is high, the neural network model is much less likely to experiment with new paths. the object is guaranteed to eventually find the best path to the goal state from any starting location, assuming that the level of satisfaction does not exceed a threshold point where learning ceases. """,1992-05-12,"The title of the patent is neural network model for reaching a goal state and its abstract is "" an object, such as a robot, is located at an initial state in a finite state space area and moves under the control of the unsupervised neural network model of the invention. the network instructs the object to move in one of several directions from the initial state. upon reaching another state, the model again instructs the object to move in one of several directions. these instructions continue until either: a) the object has completed a cycle by ending up back at a state it has been to previously during this cycle, or b) the object has completed a cycle by reaching the goal state. if the object ends up back at a state it has been to previously during this cycle, the neural network model ends the cycle and immediately begins a new cycle from the present location. 
when the object reaches the goal state, the neural network model learns that this path is productive towards reaching the goal state, and is given delayed reinforcement in the form of a """"reward"""". upon reaching a state, the neural network model calculates a level of satisfaction with its progress towards reaching the goal state. if the level of satisfaction is low, the neural network model is more likely to override what has been learned thus far and deviate from a path known to lead to the goal state to experiment with new and possibly better paths. if the level of satisfaction is high, the neural network model is much less likely to experiment with new paths. the object is guaranteed to eventually find the best path to the goal state from any starting location, assuming that the level of satisfaction does not exceed a threshold point where learning ceases. "" dated 1992-05-12" 5113483,neural network with semi-localized non-linear mapping of the input space,a neural network includes an input layer comprising a plurality of input units (24) interconnected to a hidden layer with a plurality of hidden units (26) disposed therein through an interconnection matrix (28). each of the hidden units (26) is a single output that is connected to output units (32) in an output layer through an interconnection matrix (30). each of the interconnections between one of the hidden units (26) to one of the output units (32) has a weight associated therewith. 
each of the hidden units (26) has an activation in the i'th dimension and extending across all the other dimensions in a non-localized manner in accordance with the following equation: ##equ1## that the network learns by the back propagation method to vary the output weights and the parameters of the activation function .mu..sub.hi and .sigma..sub.hi.,1992-05-12,The title of the patent is neural network with semi-localized non-linear mapping of the input space and its abstract is a neural network includes an input layer comprising a plurality of input units (24) interconnected to a hidden layer with a plurality of hidden units (26) disposed therein through an interconnection matrix (28). each of the hidden units (26) is a single output that is connected to output units (32) in an output layer through an interconnection matrix (30). each of the interconnections between one of the hidden units (26) to one of the output units (32) has a weight associated therewith. each of the hidden units (26) has an activation in the i'th dimension and extending across all the other dimensions in a non-localized manner in accordance with the following equation: ##equ1## that the network learns by the back propagation method to vary the output weights and the parameters of the activation function .mu..sub.hi and .sigma..sub.hi. dated 1992-05-12 5113484,rank filter using neural network,"a rank filter is provided which can be used for improving an image signal degraded by noise, while at the same time maintaining edge information. the rank filter is implemented by using a neural network and obtains a high processing speed with a simple circuit arrangement, as compared to conventional rank filters, hpfs, lpfs and average filters.
the rank filter using the concept of a neural network includes decoder devices, a comparison device and a counter.",1992-05-12,"The title of the patent is rank filter using neural network and its abstract is a rank filter is provided which can be used for improving an image signal degraded by noise, while at the same time maintaining edge information. the rank filter is implemented by using a neural network and obtains a high processing speed with a simple circuit arrangement, as compared to conventional rank filters, hpfs, lpfs and average filters. the rank filter using the concept of a neural network includes decoder devices, a comparison device and a counter. dated 1992-05-12" 5113485,optical neural network system,"an optical system of an optical neural network model for parallel data processing is disclosed. taking advantage of the fact that an auto-correlation matrix is symmetric with respect to a main diagonal and the weights for modulating the values of diagonals of the auto-correlation matrix are equal to each other, the configuration of an optical modulation unit is simplified by a one-dimensional modulation array on the one hand, and both positive and negative weights are capable of being computed at the same time on the other hand. in particular, the optical system makes up a second-order neural network exhibiting invariant characteristics against the translation and scale.",1992-05-12,"The title of the patent is optical neural network system and its abstract is an optical system of an optical neural network model for parallel data processing is disclosed.
taking advantage of the fact that an auto-correlation matrix is symmetric with respect to a main diagonal and the weights for modulating the values of diagonals of the auto-correlation matrix are equal to each other, the configuration of an optical modulation unit is simplified by a one-dimensional modulation array on the one hand, and both positive and negative weights are capable of being computed at the same time on the other hand. in particular, the optical system makes up a second-order neural network exhibiting invariant characteristics against the translation and scale. dated 1992-05-12" 5115492,digital correlators incorporating analog neural network structures operated on a bit-sliced basis,"plural-bit digital input signals to be subjected to weighted summation are bit-sliced; and a number n of respective first through n.sup.th weighted summations of the bits of the digital input signals in each bit slice are performed, resulting in a respective set of first through n.sup.th partial weighted summation results. each weighted summation of a bit slice of the digital input signals is performed using a capacitive network that generates partial weighted summation results in the analog regime; and analog-to-digital conversion circuitry digitizes the partial weighted summation results. weighted summations of the digitized partial weighted summation results of similar ordinal number are then performed to generate first through n.sup.th final weighted summation results in digital form, which results are respective correlations of the pattern of the digital input signals with the patterns of weights established by the capacitive networks. 
a neural net layer can be formed by combining such weighted summation circuitry with digital circuits processing each final weighted summation result non-linearly, with a system function that is sigmoidal.",1992-05-19,"The title of the patent is digital correlators incorporating analog neural network structures operated on a bit-sliced basis and its abstract is plural-bit digital input signals to be subjected to weighted summation are bit-sliced; and a number n of respective first through n.sup.th weighted summations of the bits of the digital input signals in each bit slice are performed, resulting in a respective set of first through n.sup.th partial weighted summation results. each weighted summation of a bit slice of the digital input signals is performed using a capacitive network that generates partial weighted summation results in the analog regime; and analog-to-digital conversion circuitry digitizes the partial weighted summation results. weighted summations of the digitized partial weighted summation results of similar ordinal number are then performed to generate first through n.sup.th final weighted summation results in digital form, which results are respective correlations of the pattern of the digital input signals with the patterns of weights established by the capacitive networks. a neural net layer can be formed by combining such weighted summation circuitry with digital circuits processing each final weighted summation result non-linearly, with a system function that is sigmoidal. dated 1992-05-19" 5119438,recognizing apparatus,"a recognizing apparatus is provided for recognizing a class to which an inputted characteristic pattern belongs from among a plurality of classes to be discriminated using a neural network. the classes are classified into a plurality of categories. 
the apparatus includes a network selecting portion for selecting a category to which the inputted characteristic pattern belongs and for selecting a neural network for use in discriminating the class to which the inputted characteristic pattern belongs in the selected category. the apparatus further includes a network memory portion, a network setting portion and a details discriminating portion. the network memory portion stores structures of a plurality of neural networks which have finished learning for respective categories, weights of the neural networks set by the learning and a plurality of discriminating algorithms to be used when the classes are discriminated by the neural networks. the network setting portion sets the structure and weights of a neural network selected by the network selecting portion and a discriminating algorithm appropriate to the selected category. the details discriminating portion recognizes the class to which the inputted characteristic pattern belongs by performing the details discriminating operation using the neural network set by the neural network setting portion.",1992-06-02,"The title of the patent is recognizing apparatus and its abstract is a recognizing apparatus is provided for recognizing a class to which an inputted characteristic pattern belongs from among a plurality of classes to be discriminated using a neural network. the classes are classified into a plurality of categories. the apparatus includes a network selecting portion for selecting a category to which the inputted characteristic pattern belongs and for selecting a neural network for use in discriminating the class to which the inputted characteristic pattern belongs in the selected category. the apparatus further includes a network memory portion, a network setting portion and a details discriminating portion.
the network memory portion stores structures of a plurality of neural networks which have finished learning for respective categories, weights of the neural networks set by the learning and a plurality of discriminating algorithms to be used when the classes are discriminated by the neural networks. the network setting portion sets the structure and weights of a neural network selected by the network selecting portion and a discriminating algorithm appropriate to the selected category. the details discriminating portion recognizes the class to which the inputted characteristic pattern belongs by performing the details discriminating operation using the neural network set by the neural network setting portion. dated 1992-06-02" 5119469,neural network with weight adjustment based on prior history of input signals,a dynamically stable associative learning neural network system includes a plurality of synapses and a non-linear function circuit and includes an adaptive weight circuit for adjusting the weight of each synapse based upon the present signal and the prior history of signals applied to the input of the particular synapse and the present signal and the prior history of signals applied to the input of a predetermined set of other collateral synapses. a flow-through neuron circuit embodiment includes a flow-through synapse having a predetermined fixed weight. a neural network is formed employing neuron circuits of both the above types. a set of flow-through neuron circuits are connected by flow-through synapses to form separate paths between each input terminal and a corresponding output terminal. other neuron circuits having only adjustable weight synapses are included within the network. this neuron network is initialized by setting the adjustable synapses at some value near the minimum weight.
the neural network is taught by successive application of sets of input signals to the input terminals until a dynamic equilibrium is reached.,1992-06-02,The title of the patent is neural network with weight adjustment based on prior history of input signals and its abstract is a dynamically stable associative learning neural network system includes a plurality of synapses and a non-linear function circuit and includes an adaptive weight circuit for adjusting the weight of each synapse based upon the present signal and the prior history of signals applied to the input of the particular synapse and the present signal and the prior history of signals applied to the input of a predetermined set of other collateral synapses. a flow-through neuron circuit embodiment includes a flow-through synapse having a predetermined fixed weight. a neural network is formed employing neuron circuits of both the above types. a set of flow-through neuron circuits are connected by flow-through synapses to form separate paths between each input terminal and a corresponding output terminal. other neuron circuits having only adjustable weight synapses are included within the network. this neuron network is initialized by setting the adjustable synapses at some value near the minimum weight. the neural network is taught by successive application of sets of input signals to the input terminals until a dynamic equilibrium is reached. dated 1992-06-02 5121467,neural network/expert system process control system and method,"a neural network/expert system process control system and method combines the decision-making capabilities of expert systems with the predictive capabilities of neural networks for improved process control. neural networks provide predictions of measurements which are difficult to make, or supervisory or regulatory control changes which are difficult to implement using classical control techniques.
expert systems make decisions automatically based on knowledge which is well-known and can be expressed in rules or other knowledge representation forms. sensor and laboratory data is effectively used. in one approach, the output data from the neural network can be used by the controller in controlling the process, and the expert system can make a decision using sensor or lab data to control the controller(s). in another approach, the output data of the neural network can be used by the expert system in making its decision, and control of the process carried out using lab or sensor data. in another approach, the output data can be used both to control the process and to make decisions.",1992-06-09,"The title of the patent is neural network/expert system process control system and method and its abstract is a neural network/expert system process control system and method combines the decision-making capabilities of expert systems with the predictive capabilities of neural networks for improved process control. neural networks provide predictions of measurements which are difficult to make, or supervisory or regulatory control changes which are difficult to implement using classical control techniques. expert systems make decisions automatically based on knowledge which is well-known and can be expressed in rules or other knowledge representation forms. sensor and laboratory data is effectively used. in one approach, the output data from the neural network can be used by the controller in controlling the process, and the expert system can make a decision using sensor or lab data to control the controller(s). in another approach, the output data of the neural network can be used by the expert system in making its decision, and control of the process carried out using lab or sensor data. in another approach, the output data can be used both to control the process and to make decisions.
dated 1992-06-09" 5124918,neural-based autonomous robotic system,"a system for achieving ambulatory control of a multi-legged system employs stimulus and response-based modeling. an adapted neural network-based system is employed for dictating motion characteristics of a plurality of leg members. rhythmic movements necessary to accomplish motion are provided by a series of signal generators. a first signal generator functions as a pacemaker governing overall system characteristics. one or more axis control signals are provided to a plurality of leg controllers, which axis control signals work in concert with a system coordination signal from the pacemaker. a sensory mechanism is also employed to govern ambulatory system responses.",1992-06-23,"The title of the patent is neural-based autonomous robotic system and its abstract is a system for achieving ambulatory control of a multi-legged system employs stimulus and response-based modeling. an adapted neural network-based system is employed for dictating motion characteristics of a plurality of leg members. rhythmic movements necessary to accomplish motion are provided by a series of signal generators. a first signal generator functions as a pacemaker governing overall system characteristics. one or more axis control signals are provided to a plurality of leg controllers, which axis control signals work in concert with a system coordination signal from the pacemaker. a sensory mechanism is also employed to govern ambulatory system responses. dated 1992-06-23" 5129037,neural network for performing beta-token partitioning in a rete network,"a method and system for beta-token partitioning a target expert system program. the target expert system program is first compiled to form a rete network for execution on a single processor, the compilation including directives for collecting selected processing statistics.
the target expert system program is then executed on a single processor, generating during execution processing statistics in connection with each node of the rete network. the processing statistics are then applied to a programmed neural network to identify nodes in the rete network for beta-token partitioning, and the target expert system program is then recompiled to form a rete network for execution on multiple processors, the rete network being beta-token partitioned at nodes identified by the neural network.",1992-07-07,"The title of the patent is neural network for performing beta-token partitioning in a rete network and its abstract is a method and system for beta-token partitioning a target expert system program. the target expert system program is first compiled to form a rete network for execution on a single processor, the compilation including directives for collecting selected processing statistics. the target expert system program is then executed on a single processor, generating during execution processing statistics in connection with each node of the rete network. the processing statistics are then applied to a programmed neural network to identify nodes in the rete network for beta-token partitioning, and the target expert system program is then recompiled to form a rete network for execution on multiple processors, the rete network being beta-token partitioned at nodes identified by the neural network. 
dated 1992-07-07" 5129038,neural network with selective error reduction to increase learning speed,"an improved iterative learning machine having a plurality of multi-input/single-output signal processing units connected in a hierarchical structure includes a weight coefficient change control unit which controls weight change quantities for those multi-input/single-output signal processing units having iteratively reduced errors thereby increasing the learning speed, contrary to conventional learning machines which perform a learning operation in order to minimize a square error of multi-input/single-output signal processing units in the highest hierarchy of the hierarchical structure.",1992-07-07,"The title of the patent is neural network with selective error reduction to increase learning speed and its abstract is an improved iterative learning machine having a plurality of multi-input/single-output signal processing units connected in a hierarchical structure includes a weight coefficient change control unit which controls weight change quantities for those multi-input/single-output signal processing units having iteratively reduced errors thereby increasing the learning speed, contrary to conventional learning machines which perform a learning operation in order to minimize a square error of multi-input/single-output signal processing units in the highest hierarchy of the hierarchical structure. dated 1992-07-07" 5129039,recurrent neural network with variable size intermediate layer,"the present invention is concerned with a signal processing system having a learning function pursuant to the back-propagation learning rule by the neural network, in which the learning rate is dynamically changed as a function of input values to effect high-speed stable learning. 
the signal processing system of the present invention is so arranged that, by executing signal processing for the input signals by the recurrent network formed by units each corresponding to a neuron, the features of the sequential time series pattern such as voice signals fluctuating on the time axis can be extracted through learning the coupling state of the recurrent network. the present invention is also concerned with a learning processing system adapted to cause the signal processing section formed by a neural network to undergo signal processing pursuant to the back-propagation learning rule, wherein the local minimum state in the course of the learning processing may be avoided by learning the coefficient of coupling strength while simultaneously increasing the number of the unit of the intermediate layer.",1992-07-07,"The title of the patent is recurrent neural network with variable size intermediate layer and its abstract is the present invention is concerned with a signal processing system having a learning function pursuant to the back-propagation learning rule by the neural network, in which the learning rate is dynamically changed as a function of input values to effect high-speed stable learning. the signal processing system of the present invention is so arranged that, by executing signal processing for the input signals by the recurrent network formed by units each corresponding to a neuron, the features of the sequential time series pattern such as voice signals fluctuating on the time axis can be extracted through learning the coupling state of the recurrent network. 
the present invention is also concerned with a learning processing system adapted to cause the signal processing section formed by a neural network to undergo signal processing pursuant to the back-propagation learning rule, wherein the local minimum state in the course of the learning processing may be avoided by learning the coefficient of coupling strength while simultaneously increasing the number of the unit of the intermediate layer. dated 1992-07-07" 5129040,neural network system for image processing,"a visual information processing device has a pair of neural networks which respectively comprise an upper layer and a lower layer of the device. each of the pair of neural networks comprises a semiconductor integrated circuit having a plurality of neuron circuit regions which are disposed in a matrix form, each of the neuron circuit regions performing a neuron function; a molecule film having a photoelectric function and provided on the semiconductor integrated circuit, the molecule film having (i) a plurality of t.sub.ij signal input sections each performing a wiring function among the plurality of neuron circuit regions, in each of which a t.sub.ij signal representing the bonding strength among the plurality of neuron circuit regions is optically written, and (ii) a plurality of video input sections each performing a sensor function of sensing a visual image in which one pixel corresponds to one neuron circuit region; and a wiring for electrically connecting the semiconductor integrated circuit and the molecule film. 
each of the plurality of neuron circuit regions is bonded with the neighboring neuron circuit regions in each of the pair of neural networks comprising the upper and lower layers, and each of the plurality of neuron circuit regions is bonded with the corresponding one between the pair of neural networks.",1992-07-07,"The title of the patent is neural network system for image processing and its abstract is a visual information processing device has a pair of neural networks which respectively comprise an upper layer and a lower layer of the device. each of the pair of neural networks comprises a semiconductor integrated circuit having a plurality of neuron circuit regions which are disposed in a matrix form, each of the neuron circuit regions performing a neuron function; a molecule film having a photoelectric function and provided on the semiconductor integrated circuit, the molecule film having (i) a plurality of t.sub.ij signal input sections each performing a wiring function among the plurality of neuron circuit regions, in each of which a t.sub.ij signal representing the bonding strength among the plurality of neuron circuit regions is optically written, and (ii) a plurality of video input sections each performing a sensor function of sensing a visual image in which one pixel corresponds to one neuron circuit region; and a wiring for electrically connecting the semiconductor integrated circuit and the molecule film. each of the plurality of neuron circuit regions is bonded with the neighboring neuron circuit regions in each of the pair of neural networks comprising the upper and lower layers, and each of the plurality of neuron circuit regions is bonded with the corresponding one between the pair of neural networks. 
dated 1992-07-07" 5129041,optical neural network processing element with multiple holographic element interconnects,"a neural network processing element uses primarily optical components to model a biological neuron having both spatial and temporal dependence. the neural network processing element includes a switch-controlled laser source, a multiple holographic lens, a spatial/temporal light modulator, and a photodetector array. laser beam control may be optical, electrical or acoustical, or a combination of these.",1992-07-07,"The title of the patent is optical neural network processing element with multiple holographic element interconnects and its abstract is a neural network processing element uses primarily optical components to model a biological neuron having both spatial and temporal dependence. the neural network processing element includes a switch-controlled laser source, a multiple holographic lens, a spatial/temporal light modulator, and a photodetector array. laser beam control may be optical, electrical or acoustical, or a combination of these. dated 1992-07-07" 5129042,sorting circuit using neural network,"a sorting circuit for arranging data in sequence according to the magnitudes of the data values, uses the concept of a neural network. the sorting circuit is constructed of shift registers, magnitude comparators, binary counters, binary bit separators and registers.",1992-07-07,"The title of the patent is sorting circuit using neural network and its abstract is a sorting circuit for arranging data in sequence according to the magnitudes of the data values, uses the concept of a neural network. the sorting circuit is constructed of shift registers, magnitude comparators, binary counters, binary bit separators and registers. dated 1992-07-07" 5130563,optoelectronic sensory neural network,"a neural network for processing sensory information. the network comprises one or more layers including interconnecting cells having individual states.
each cell is connected to one or more neighboring cells. sensory signals and signals from interconnected neighboring cells control a current or a conductance within a cell to influence the cell's state. in some embodiments, the current or conductance of a cell can be controlled by a signal arising externally of the layer. each cell can comprise an electrical circuit which receives an input signal and causes a current corresponding to the signal to pass through a variable conductance. the conductance is a function of the states of the one or more interconnecting neighboring cells. proper interconnection of the cells on a layer can produce a neural network which is sensitive to predetermined patterns or the passage of such patterns across a sensor array whose signals are input into the network. the layers in the network can be made sensitive to distinct sensory parameters, so that networks which are sensitive to different wavelengths or polarizations of light energy can be produced.",1992-07-14,"The title of the patent is optoelectronic sensory neural network and its abstract is a neural network for processing sensory information. the network comprises one or more layers including interconnecting cells having individual states. each cell is connected to one or more neighboring cells. sensory signals and signals from interconnected neighboring cells control a current or a conductance within a cell to influence the cell's state. in some embodiments, the current or conductance of a cell can be controlled by a signal arising externally of the layer. each cell can comprise an electrical circuit which receives an input signal and causes a current corresponding to the signal to pass through a variable conductance. the conductance is a function of the states of the one or more interconnecting neighboring cells.
proper interconnection of the cells on a layer can produce a neural network which is sensitive to predetermined patterns or the passage of such patterns across a sensor array whose signals are input into the network. the layers in the network can be made sensitive to distinct sensory parameters, so that networks which are sensitive to different wavelengths or polarizations of light energy can be produced. dated 1992-07-14" 5130936,method and apparatus for diagnostic testing including a neural network for determining testing sufficiency,"a diagnostic tester evaluates at least one inputted test signal corresponding to test data relating to at least one predetermined parameter of a system being tested, to produce first and second candidate signals corresponding respectively to first and second possible diagnoses of the condition of the system respectively having the first and second highest levels of certainty of being valid, and first and second certainty signals corresponding respectively to values of the first and second highest levels of certainty. the diagnostic tester further determines the sufficiency of the testing that has taken place responsive to the first and second certainty signals, and produces an output signal indicative of whether sufficient test data has been evaluated to declare a diagnosis. 
preferably, an uncertainty signal corresponding to a measure of the uncertainty that the evaluated at least one test signal can be validly evaluated is also produced and used to produce the output signal.",1992-07-14,"The title of the patent is method and apparatus for diagnostic testing including a neural network for determining testing sufficiency and its abstract is a diagnostic tester evaluates at least one inputted test signal corresponding to test data relating to at least one predetermined parameter of a system being tested, to produce first and second candidate signals corresponding respectively to first and second possible diagnoses of the condition of the system respectively having the first and second highest levels of certainty of being valid, and first and second certainty signals corresponding respectively to values of the first and second highest levels of certainty. the diagnostic tester further determines the sufficiency of the testing that has taken place responsive to the first and second certainty signals, and produces an output signal indicative of whether sufficient test data has been evaluated to declare a diagnosis. preferably, an uncertainty signal corresponding to a measure of the uncertainty that the evaluated at least one test signal can be validly evaluated is also produced and used to produce the output signal. dated 1992-07-14" 5130944,divider circuit adopting a neural network architecture to increase division processing speed and reduce hardware components,"a divider circuit for efficiently and quickly performing a hardware implemented division by adopting a neural network architecture. the circuit includes a series of cascaded subtracter components that complement the divisor input and effectively perform an adder function. the subtracters include a synaptic configuration consisting of pmos transistors, nmos transistors, and cmos inverters. 
the components are arranged in accordance with the predetermined connection strength assigned to each of the transistors and its respective position in the neural type network arrangement.",1992-07-14,"The title of the patent is divider circuit adopting a neural network architecture to increase division processing speed and reduce hardware components and its abstract is a divider circuit for efficiently and quickly performing a hardware implemented division by adopting a neural network architecture. the circuit includes a series of cascaded subtracter components that complement the divisor input and effectively perform an adder function. the subtracters include a synaptic configuration consisting of pmos transistors, nmos transistors, and cmos inverters. the components are arranged in accordance with the predetermined connection strength assigned to each of the transistors and its respective position in the neural type network arrangement. dated 1992-07-14" 5131072,neurocomputer with analog signal bus,"an analogue neuron processor (anp) performs an operation of sum-of-products of a time divisional analog input signal sequentially input from an analog signal bus and weight data and output an analog signal to an analog signal bus through a nonlinear circuit. a layered type or a feedback type neural network is formed of anps. the neural network reads necessary control data from a control pattern memory under the control of micro sequencer and reads the necessary weight data from the weight memory thereby realizing a neuron computer. the neuron computer connects a plurality of anps by using a single analog bus, thereby greatly decreasing the number of the wires used for the neural network and also decreasing the size of the circuit. 
a plurality of anps in a single layer simultaneously receives analog signal from an analog bus and carries out a parallel operation in the same time period and anps in different layers perform a parallel operation in a pipeline manner, thereby increasing a speed of an operation. accordingly, the present invention can provide a neuron computer with a high practicality.",1992-07-14,"The title of the patent is neurocomputer with analog signal bus and its abstract is an analogue neuron processor (anp) performs an operation of sum-of-products of a time divisional analog input signal sequentially input from an analog signal bus and weight data and output an analog signal to an analog signal bus through a nonlinear circuit. a layered type or a feedback type neural network is formed of anps. the neural network reads necessary control data from a control pattern memory under the control of micro sequencer and reads the necessary weight data from the weight memory thereby realizing a neuron computer. the neuron computer connects a plurality of anps by using a single analog bus, thereby greatly decreasing the number of the wires used for the neural network and also decreasing the size of the circuit. a plurality of anps in a single layer simultaneously receives analog signal from an analog bus and carries out a parallel operation in the same time period and anps in different layers perform a parallel operation in a pipeline manner, thereby increasing a speed of an operation. accordingly, the present invention can provide a neuron computer with a high practicality. dated 1992-07-14" 5132813,neural processor with holographic optical paths and nonlinear operating means,"an optical apparatus for simulating a highly interconnected neural network is disclosed as including a spatial light modulator (slm), an inputting device, a laser, a detecting device, and a page-oriented holographic component. the inputting device applies input signals to the slm.
the holographic component optically interconnects n.sup.2 pixels defined on the spatial light modulator to n.sup.2 pixels defined on a detecting surface of the detecting device. the interconnections are made by n.sup.2 patterns of up to n.sup.2 interconnection weight encoded beams projected by n.sup.2 planar, or essentially two-dimensional, holograms arranged in a spatially localized array within the holographic component. the slm modulates the encoded beams and directs them onto the detecting surface wherein a parameter of the beams is evaluated at each pixel thereof. the evaluated parameter is transformed according to a nonlinear threshold function to provide transformed signals which can be fed back to the slm for further iterations.",1992-07-21,"The title of the patent is neural processor with holographic optical paths and nonlinear operating means and its abstract is an optical apparatus for simulating a highly interconnected neural network is disclosed as including a spatial light modulator (slm), an inputting device, a laser, a detecting device, and a page-oriented holographic component. the inputting device applies input signals to the slm. the holographic component optically interconnects n.sup.2 pixels defined on the spatial light modulator to n.sup.2 pixels defined on a detecting surface of the detecting device. the interconnections are made by n.sup.2 patterns of up to n.sup.2 interconnection weight encoded beams projected by n.sup.2 planar, or essentially two-dimensional, holograms arranged in a spatially localized array within the holographic component. the slm modulates the encoded beams and directs them onto the detecting surface wherein a parameter of the beams is evaluated at each pixel thereof. the evaluated parameter is transformed according to a nonlinear threshold function to provide transformed signals which can be fed back to the slm for further iterations. 
dated 1992-07-21" 5132835,continuous-time optical neural network process,"an all-optical, continuous-time, recurrent neural network is disclosed which is capable of executing a broad class of energy-minimizing neural net algorithms. the network is a resonator which contains a saturable, two-beam amplifier; two volume holograms; and a linear, two-beam amplifier. the saturable amplifier permits, through the use of a spatially patterned signal beam, the realization of a two-dimensional optical neuron array; the two volume holograms provide adaptive, global network interconnectivity; and the linear amplifier supplies sufficient resonator gain to permit convergent operation of the network.",1992-07-21,"The title of the patent is continuous-time optical neural network process and its abstract is an all-optical, continuous-time, recurrent neural network is disclosed which is capable of executing a broad class of energy-minimizing neural net algorithms. the network is a resonator which contains a saturable, two-beam amplifier; two volume holograms; and a linear, two-beam amplifier. the saturable amplifier permits, through the use of a spatially patterned signal beam, the realization of a two-dimensional optical neuron array; the two volume holograms provide adaptive, global network interconnectivity; and the linear amplifier supplies sufficient resonator gain to permit convergent operation of the network. dated 1992-07-21" 5133021,system for self-organization of stable category recognition codes for analog input patterns,"a neural network includes a feature representation field which receives input patterns. signals from the feature representative field select a category from a category representation field through a first adaptive filter. based on the selected category, a template pattern is applied to the feature representation field, and a match between the template and the input is determined. 
if the angle between the template vector and a vector within the representation field is too great, the selected category is reset. otherwise the category selection and template pattern are adapted to the input pattern as well as the previously stored template. a complex representation field includes signals normalized relative to signals across the field and feedback for pattern contrast enhancement.",1992-07-21,"The title of the patent is system for self-organization of stable category recognition codes for analog input patterns and its abstract is a neural network includes a feature representation field which receives input patterns. signals from the feature representative field select a category from a category representation field through a first adaptive filter. based on the selected category, a template pattern is applied to the feature representation field, and a match between the template and the input is determined. if the angle between the template vector and a vector within the representation field is too great, the selected category is reset. otherwise the category selection and template pattern are adapted to the input pattern as well as the previously stored template. a complex representation field includes signals normalized relative to signals across the field and feedback for pattern contrast enhancement. dated 1992-07-21" 5134396,method and apparatus for encoding and decoding data utilizing data compression and neural networks,"a method structure for the compression of data utilizes an encoder which effects a transform with the aid of a coding neural network, and a decoder which includes a matched decoding neural network which effects almost the inverse transform of the encoder.
the method puts in competition m coding neural networks (30.sub.1 to 30.sub.m) wherein m>1 positioned at the transmission end which effects a same type of transform and the encoded data of one of which are transmitted, after selection (32, 33) at a given instant, towards a matched decoding neural network which forms part of a set of several matched neural networks (60.sub.1 to 60.sub.q) provided at the receiver end. learning is effected on the basis of predetermined samples. the encoder may comprise, in addition to the coding neural network (30.sub.1 to 30.sub.m), a matched decoding neural network (35.sub.1 to 35.sub.m) so as to effect the selection (32, 33) of the best coding neural network in accordance with an error criterion.",1992-07-28,"The title of the patent is method and apparatus for encoding and decoding data utilizing data compression and neural networks and its abstract is a method structure for the compression of data utilizes an encoder which effects a transform with the aid of a coding neural network, and a decoder which includes a matched decoding neural network which effects almost the inverse transform of the encoder. the method puts in competition m coding neural networks (30.sub.1 to 30.sub.m) wherein m>1 positioned at the transmission end which effects a same type of transform and the encoded data of one of which are transmitted, after selection (32, 33) at a given instant, towards a matched decoding neural network which forms part of a set of several matched neural networks (60.sub.1 to 60.sub.q) provided at the receiver end. learning is effected on the basis of predetermined samples. the encoder may comprise, in addition to the coding neural network (30.sub.1 to 30.sub.m), a matched decoding neural network (35.sub.1 to 35.sub.m) so as to effect the selection (32, 33) of the best coding neural network in accordance with an error criterion.
dated 1992-07-28" 5134685,"neural node, a network and a chaotic annealing optimization method for the network","the present invention is a node for a network that combines a hopfield and tank type neuron, having a sigmoid type transfer function, with a nonmonotonic neuron, having a transfer function such as a parabolic transfer function, to produce a neural node with a deterministic chaotic response suitable for quickly and globally solving optimization problems and avoiding local minima. the node can be included in a completely connected single layer network. the hopfield neuron operates continuously while the nonmonotonic neuron operates periodically to prevent the network from getting stuck in a local optimum solution. the node can also be included in a local area architecture where local areas can be linked together in a hierarchy of nonmonotonic neurons.",1992-07-28,"The title of the patent is neural node, a network and a chaotic annealing optimization method for the network and its abstract is the present invention is a node for a network that combines a hopfield and tank type neuron, having a sigmoid type transfer function, with a nonmonotonic neuron, having a transfer function such as a parabolic transfer function, to produce a neural node with a deterministic chaotic response suitable for quickly and globally solving optimization problems and avoiding local minima. the node can be included in a completely connected single layer network. the hopfield neuron operates continuously while the nonmonotonic neuron operates periodically to prevent the network from getting stuck in a local optimum solution. the node can also be included in a local area architecture where local areas can be linked together in a hierarchy of nonmonotonic neurons.
dated 1992-07-28" 5138695,systolic array image processing system,"a systolic array of processing elements is connected to receive weight inputs and multiplexed data inputs for operation in feedforward, partially- or fully-connected neural network mode or in cooperative, competitive neural network mode. feature vector or two-dimensional image data is retrieved from external data memory and is transformed via input look-up table to input data for the systolic array that performs a convolution with kernel values as weight inputs. the convoluted image or neuron outputs from the systolic array are scaled and transformed via output look-up table for storage in the external data memory.",1992-08-11,"The title of the patent is systolic array image processing system and its abstract is a systolic array of processing elements is connected to receive weight inputs and multiplexed data inputs for operation in feedforward, partially- or fully-connected neural network mode or in cooperative, competitive neural network mode. feature vector or two-dimensional image data is retrieved from external data memory and is transformed via input look-up table to input data for the systolic array that performs a convolution with kernel values as weight inputs. the convoluted image or neuron outputs from the systolic array are scaled and transformed via output look-up table for storage in the external data memory. dated 1992-08-11" 5138924,electronic musical instrument utilizing a neural network,"a musical tone parameter generating method and a musical tone generating device of this invention feature that when data inputted by a player is inputted into a neural network as input pattern, the neural network infers the parameters necessary to specify a musical tone wave form to be formed.
this makes it possible to get parameters other than those stored in a memory by inferring, which increases variation of the musical tone to be generated.",1992-08-18,"The title of the patent is electronic musical instrument utilizing a neural network and its abstract is a musical tone parameter generating method and a musical tone generating device of this invention feature that when data inputted by a player is inputted into a neural network as input pattern, the neural network infers the parameters necessary to specify a musical tone wave form to be formed. this makes it possible to get parameters other than those stored in a memory by inferring, which increases variation of the musical tone to be generated. dated 1992-08-18" 5138928,rhythm pattern learning apparatus,"a rhythm pattern generating apparatus is provided having a layered neural network to perform learning with feedback to generate an output pattern signal indicative of a musical sound pattern. the output pattern signal is generated by the layered neural network with feedback in response to a performance operation of a player. the layered neural network generates the output pattern signal indicative of the musical sound pattern based on both an input pattern signal and a weight signal. the output pattern signal is fed back by the feedback circuit to the layered neural network to perform the learning process. a drum pad can be used to provide an input to the rhythm pattern generating apparatus or, specifically, to gate an input pattern selector for selecting input pattern signals. the layered neural network with the feedback can perform the learning process using a back propagation method. 
in the present invention, when a new rhythm pattern is input by a musician, an output pattern signal is generated through an analogy with the rhythm style of the musician.",1992-08-18,"The title of the patent is rhythm pattern learning apparatus and its abstract is a rhythm pattern generating apparatus is provided having a layered neural network to perform learning with feedback to generate an output pattern signal indicative of a musical sound pattern. the output pattern signal is generated by the layered neural network with feedback in response to a performance operation of a player. the layered neural network generates the output pattern signal indicative of the musical sound pattern based on both an input pattern signal and a weight signal. the output pattern signal is fed back by the feedback circuit to the layered neural network to perform the learning process. a drum pad can be used to provide an input to the rhythm pattern generating apparatus or, specifically, to gate an input pattern selector for selecting input pattern signals. the layered neural network with the feedback can perform the learning process using a back propagation method. in the present invention, when a new rhythm pattern is input by a musician, an output pattern signal is generated through an analogy with the rhythm style of the musician. dated 1992-08-18" 5140523,neural network for predicting lightning,"a system and method are provided for the automated prediction of lightning strikes in a set of different spatial regions for different times in the future. in a preferred embodiment, the system utilizes measurements of many weather phenomena. the types of measurements that can be utilized in approximately the same geographical region as that for which the strike predictions are made. 
this embodiment utilizes a correlation network to relate these weather measurements to future lightning strikes.",1992-08-18,"The title of the patent is neural network for predicting lightning and its abstract is a system and method are provided for the automated prediction of lightning strikes in a set of different spatial regions for different times in the future. in a preferred embodiment, the system utilizes measurements of many weather phenomena. the types of measurements that can be utilized in approximately the same geographical region as that for which the strike predictions are made. this embodiment utilizes a correlation network to relate these weather measurements to future lightning strikes. dated 1992-08-18" 5140530,genetic algorithm synthesis of neural networks,the disclosure relates to the use of genetic learning techniques to evolve neural network architectures for specific applications in which a general representation of neural network architecture is linked with a genetic learning strategy to create a very flexible environment for the construction of custom neural networks.,1992-08-18,The title of the patent is genetic algorithm synthesis of neural networks and its abstract is the disclosure relates to the use of genetic learning techniques to evolve neural network architectures for specific applications in which a general representation of neural network architecture is linked with a genetic learning strategy to create a very flexible environment for the construction of custom neural networks. dated 1992-08-18 5140531,analog neural nets supplied digital synapse signals on a bit-slice basis,"plural-bit digital input signals to be subjected to weighted summation in a neural net layer are bit-sliced; and a number n of respective first through n.sup.th weighted summations of the bits of the digital input signals in each bit slice are performed, resulting in a respective set of first through n.sup.th partial weighted summation results. 
weighted summations of the partial weighted summation results of similar ordinal number are then performed to generate first through n.sup.th final weighted summation results. each weighted summation of a bit slice of the digital input signals is performed using a capacitive network that generates partial weighted summation results in the analog regime. in this capacitive network each weight is determined by the difference in the capacitances of a respective pair of capacitive elements. the weighted summation to generate a final weighted summation result also is advantageously done in the analog regime, since this facilitates the analog final weighted summation result being non-linearly processed in an analog amplifier with sigmoidal response. this non-linear processing generates an analog axonal output response for a neural net layer, which analog axonal output response can then be digitized.",1992-08-18,"The title of the patent is analog neural nets supplied digital synapse signals on a bit-slice basis and its abstract is plural-bit digital input signals to be subjected to weighted summation in a neural net layer are bit-sliced; and a number n of respective first through n.sup.th weighted summations of the bits of the digital input signals in each bit slice are performed, resulting in a respective set of first through n.sup.th partial weighted summation results. weighted summations of the partial weighted summation results of similar ordinal number are then performed to generate first through n.sup.th final weighted summation results. each weighted summation of a bit slice of the digital input signals is performed using a capacitive network that generates partial weighted summation results in the analog regime. in this capacitive network each weight is determined by the difference in the capacitances of a respective pair of capacitive elements. 
the weighted summation to generate a final weighted summation result also is advantageously done in the analog regime, since this facilitates the analog final weighted summation result being non-linearly processed in an analog amplifier with sigmoidal response. this non-linear processing generates an analog axonal output response for a neural net layer, which analog axonal output response can then be digitized. dated 1992-08-18" 5140670,cellular neural network,"a novel class of information-processing systems called a cellular neural network is discussed. like a neural network, it is a large-scale nonlinear analog circuit which processes signals in real time. like cellular automata, it is made of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly only through its nearest neighbors. each cell is made of a linear capacitor, a nonlinear voltage-controlled current source, and a few resistive linear circuit elements. cellular neural networks share the best features of both worlds; its continuous time feature allows real-time signal processing found wanting in the digital domain and its local interconnection feature makes it tailor made for vlsi implementation. cellular neural networks are uniquely suited for high-speed parallel signal processing.",1992-08-18,"The title of the patent is cellular neural network and its abstract is a novel class of information-processing systems called a cellular neural network is discussed. like a neural network, it is a large-scale nonlinear analog circuit which processes signals in real time. like cellular automata, it is made of a massive aggregate of regularly spaced circuit clones, called cells, which communicate with each other directly only through its nearest neighbors. each cell is made of a linear capacitor, a nonlinear voltage-controlled current source, and a few resistive linear circuit elements. 
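The bit-slice weighted summation described for patent 5140531 above can be illustrated numerically. This is only a software caricature under assumed sizes (4-bit inputs, 8 synapses, 3 outputs); the patent performs the partial sums in analog capacitive networks, which this sketch does not model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 4                                   # plural-bit inputs: 4-bit unsigned values
x = rng.integers(0, 2**n_bits, size=8)       # digital input signals
W = rng.integers(-3, 4, size=(3, 8))         # synaptic weights, 3 outputs

# First through n-th weighted summations: one partial result per bit slice.
partials = [W @ ((x >> b) & 1) for b in range(n_bits)]

# Weighted summation of the partial results, weighted by bit significance,
# yields the final weighted summation result for each output.
final = sum((2**b) * p for b, p in enumerate(partials))

assert np.array_equal(final, W @ x)          # identical to the direct product
```

The final assertion confirms the identity the patent relies on: summing per-bit partial products scaled by powers of two reproduces the full weighted sum.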
cellular neural networks share the best features of both worlds; its continuous time feature allows real-time signal processing found wanting in the digital domain and its local interconnection feature makes it tailor made for vlsi implementation. cellular neural networks are uniquely suited for high-speed parallel signal processing. dated 1992-08-18" 5142612,computer neural network supervisory process control system and method,"a neural network for adjusting a setpoint in process control replaces a human operator. the neural network operates in three modes: training, operation, and retraining. in operation, the neural network is trained using training input data along with input data. the input data is from the sensor(s) monitoring the process. the input data is used by the neural network to develop output data. the training input data are the setpoint adjustments made by a human operator. the output data is compared with the training input data to produce error data, which is used to adjust the weights of the neural network so as to train it. after training has been completed, the neural network enters the operation mode. in this mode, the present invention uses the input data to predict output data used to adjust the setpoint supplied to the regulatory controller. thus, the operator is effectively replaced. the present invention in the retraining mode utilizes new training input data to retrain the neural network by adjusting the weight(s).",1992-08-25,"The title of the patent is computer neural network supervisory process control system and method and its abstract is a neural network for adjusting a setpoint in process control replaces a human operator. the neural network operates in three modes: training, operation, and retraining. in operation, the neural network is trained using training input data along with input data. the input data is from the sensor(s) monitoring the process. the input data is used by the neural network to develop output data. 
the training input data are the setpoint adjustments made by a human operator. the output data is compared with the training input data to produce error data, which is used to adjust the weights of the neural network so as to train it. after training has been completed, the neural network enters the operation mode. in this mode, the present invention uses the input data to predict output data used to adjust the setpoint supplied to the regulatory controller. thus, the operator is effectively replaced. the present invention in the retraining mode utilizes new training input data to retrain the neural network by adjusting the weight(s). dated 1992-08-25" 5142665,neural network shell for application programs,"a neural network shell has a defined interface to an application program. by interfacing with the neural network shell, any application program becomes a neural network application program. the neural network shell contains a set of utility programs that transfers data into and out of a neural network data structure. this set of utility programs allows an application program to define a new neural network model, create a neural network data structure, train a neural network, and run a neural network. once trained, the neural network data structure can be transported to other computer systems or to application programs written in different computing languages running on similar or different computer systems.",1992-08-25,"The title of the patent is neural network shell for application programs and its abstract is a neural network shell has a defined interface to an application program. by interfacing with the neural network shell, any application program becomes a neural network application program. the neural network shell contains a set of utility programs that transfers data into and out of a neural network data structure. 
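The supervisory setpoint scheme of patent 5142612 above (train on the operator's setpoint adjustments, then predict them from sensor data) can be sketched with a toy linear network. The data, the linear "policy", and the learning rate are all hypothetical stand-ins, not the patent's system:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))          # sensor input data (hypothetical process)
true_w = np.array([0.5, -1.2, 0.8])    # stands in for the operator's policy
y = X @ true_w                         # training input data: operator setpoint moves

# Training mode: compare output data with the operator's adjustments and use
# the error data to adjust the weights.
w = np.zeros(3)
for _ in range(1000):
    err = X @ w - y                    # output data minus training input data
    w -= 0.1 * X.T @ err / len(X)      # weight adjustment from the error

# Operation mode: the trained network now predicts the setpoint adjustment
# directly from fresh sensor data, standing in for the operator.
assert np.allclose(w, true_w, atol=1e-6)
```

The retraining mode described in the abstract would simply rerun this loop on newly collected operator adjustments.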
this set of utility programs allows an application program to define a new neural network model, create a neural network data structure, train a neural network, and run a neural network. once trained, the neural network data structure can be transported to other computer systems or to application programs written in different computing languages running on similar or different computer systems. dated 1992-08-25" 5142666,learning system in a neuron computer,"a learning system in a neuron computer includes a neural network for receiving an analog signal from a first analog bus through an analog input port in a time divisional manner and performing a sum-of-the-products operation, and outputting an analog output signal to a second analog bus. a control pattern memory stores a pattern of a signal for controlling the neural network. a sequencer produces an address of the control pattern memory and a weight memory. the weight memory stores weight data of the neural network. a digital control unit controls the neural network, control pattern memory, sequencer, and weight memory, and executes a learning algorithm. the learning system further includes an input control unit provided on the input side of the neural network for selecting an input signal for executing the learning algorithm input from the digital control unit or an analog input signal input from the analog input port.",1992-08-25,"The title of the patent is learning system in a neuron computer and its abstract is a learning system in a neuron computer includes a neural network for receiving an analog signal from a first analog bus through an analog input port in a time divisional manner and performing a sum-of-the-products operation, and outputting an analog output signal to a second analog bus. a control pattern memory stores a pattern of a signal for controlling the neural network. a sequencer produces an address of the control pattern memory and a weight memory. 
the weight memory stores weight data of the neural network. a digital control unit controls the neural network, control pattern memory, sequencer, and weight memory, and executes a learning algorithm. the learning system further includes an input control unit provided on the input side of the neural network for selecting an input signal for executing the learning algorithm input from the digital control unit or an analog input signal input from the analog input port. dated 1992-08-25" 5144642,interference detection and characterization method and apparatus,"the interference detection and characterization system of this invention supports modern communication systems which have to contend with a variety of intentional and unintentional interference sources. as an add-on to existing communications equipment, the invention employs novel signal processing techniques to automatically detect the presence of communications channel irregularities in near real-time and alert the attending operator. information provided to the operator through a user-friendly interface is used to characterize the type of interference and its degree of severity. once characterized, the information is used by the operator to take corrective actions including the activation of alternative communication plans or, in some instances, mitigation of the interference. since output from the system of this invention lends itself well to expert system and neural network environments, such systems could be employed to further aid the operator. the unique interference signal measurements provided by the system make it useful in applications well beyond those for which it was originally intended. 
other uses for which the invention has shown great potential include bit error rate estimators, communication channel scanners, and as laboratory test equipment to support receiver development and performance verification.",1992-09-01,"The title of the patent is interference detection and characterization method and apparatus and its abstract is the interference detection and characterization system of this invention supports modern communication systems which have to contend with a variety of intentional and unintentional interference sources. as an add-on to existing communications equipment, the invention employs novel signal processing techniques to automatically detect the presence of communications channel irregularities in near real-time and alert the attending operator. information provided to the operator through a user-friendly interface is used to characterize the type of interference and its degree of severity. once characterized, the information is used by the operator to take corrective actions including the activation of alternative communication plans or, in some instances, mitigation of the interference. since output from the system of this invention lends itself well to expert system and neural network environments, such systems could be employed to further aid the operator. the unique interference signal measurements provided by the system make it useful in applications well beyond those for which it was originally intended. other uses for which the invention has shown great potential include bit error rate estimators, communication channel scanners, and as laboratory test equipment to support receiver development and performance verification. 
dated 1992-09-01" 5146420,communicating adder tree system for neural array processor,"the neural computing paradigm is characterized as a dynamic and highly computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. herein is described a neural network architecture for a scalable neural array processor (snap) which uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. the array processor uses a special type of adder tree which computes in a first direction and communicates in a second direction. the adder tree is thus responsive to a compute state and a communication state. the adder tree has the ability to provide a first driver responsive to a compute state for communicating an adder output to a data path and a second driver responsive to the communication state for connecting the data path to the neuron inputs.",1992-09-08,"The title of the patent is communicating adder tree system for neural array processor and its abstract is the neural computing paradigm is characterized as a dynamic and highly computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. herein is described a neural network architecture for a scalable neural array processor (snap) which uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. 
the array processor uses a special type of adder tree which computes in a first direction and communicates in a second direction. the adder tree is thus responsive to a compute state and a communication state. the adder tree has the ability to provide a first driver responsive to a compute state for communicating an adder output to a data path and a second driver responsive to the communication state for connecting the data path to the neuron inputs. dated 1992-09-08" 5146541,signal phase pattern sensitive neural network system and method,"a signal phase pattern sensitive neural network system can discern persistent patterns of phase in a time varying or oscillatory signal. the system employs duplicate inputs from each of its sensors to the processing elements of a first layer of its neural network, with the exception that one input is phase shifted relative to the other. the system also employs a modification of a conventional kohonen competitive learning rule which is applied by the processing and learning elements of a second layer of its neural network.",1992-09-08,"The title of the patent is signal phase pattern sensitive neural network system and method and its abstract is a signal phase pattern sensitive neural network system can discern persistent patterns of phase in a time varying or oscillatory signal. the system employs duplicate inputs from each of its sensors to the processing elements of a first layer of its neural network, with the exception that one input is phase shifted relative to the other. the system also employs a modification of a conventional kohonen competitive learning rule which is applied by the processing and learning elements of a second layer of its neural network. 
dated 1992-09-08" 5146543,scalable neural array processor,"the neural computing paradigm is characterized as a dynamic and highly computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. herein is described a neural network architecture for a scalable neural array processor (snap) which uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. each neuron of the processor has an input function element, an activity function element, and a communicating adder. the neuron functions with two state modes, a compute state and a communications state. in response to a compute state, the input function element and said activity function generate a neuron value, and the communicating adder is placed in a compute mode and is responsive to the processor compute state. in a communications state a neuron is responsive to a communications state for operating the communicating adder for communicating a neuron value to an input function element.",1992-09-08,"The title of the patent is scalable neural array processor and its abstract is the neural computing paradigm is characterized as a dynamic and highly computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. herein is described a neural network architecture for a scalable neural array processor (snap) which uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. 
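The phase-sensitive arrangement of patent 5146541 above (duplicate inputs, one phase shifted, feeding a kohonen-style competitive second layer) can be caricatured in software. The signal, shift amount, unit count, and learning-rate schedule are all assumptions for illustration, not the patent's modified rule:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
patterns = [np.sin(t), np.sin(t + np.pi / 2)]   # same tone, two phases

def first_layer(signal, shift=5):
    # Duplicate inputs per sensor, one copy phase shifted relative to the other.
    return np.concatenate([signal, np.roll(signal, shift)])

# Second layer: winner-take-all competitive learning in the kohonen style.
W = rng.normal(scale=0.1, size=(2, 64))         # two competing units
for epoch in range(200):
    lr = 0.5 / (1 + epoch)                      # decaying learning rate
    for p in patterns:
        x = first_layer(p)
        winner = int(np.argmax(W @ x))          # best-matching unit
        W[winner] += lr * (x - W[winner])       # pull the winner toward the input

# After training, the winning unit responds strongly to its phase pattern.
assert (W @ first_layer(patterns[0])).max() > 1.0
```

The phase-shifted duplicate is what lets a plain inner-product unit distinguish patterns that differ only in phase: without it, sin(t) and sin(t + π/2) would look much more alike to the first layer.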
each neuron of the processor has an input function element, an activity function element, and a communicating adder. the neuron functions with two state modes, a compute state and a communications state. in response to a compute state, the input function element and said activity function generate a neuron value, and the communicating adder is placed in a compute mode and is responsive to the processor compute state. in a communications state a neuron is responsive to a communications state for operating the communicating adder for communicating a neuron value to an input function element. dated 1992-09-08" 5146602,method of increasing the accuracy of an analog neural network and the like,"a method for increasing the accuracy of an analog neural network which computes a sum-of-products between an input vector and a stored weight pattern is described. in one embodiment of the present invention, the method comprises initially training the network by programming the synapses with a certain weight pattern. the training may be carried out using any standard learning algorithm. preferably, a back-propagation learning algorithm is employed. next, the network is baked at an elevated temperature to effectuate a change in the weight pattern previously programmed during initial training. this change results from a charge redistribution which occurs within each of the synapses of the network. after baking, the network is then retrained to compensate for the change resulting from the charge redistribution. the baking and retraining steps may be successively repeated to increase the accuracy of the neural network to any desired level.",1992-09-08,"The title of the patent is method of increasing the accuracy of an analog neural network and the like and its abstract is a method for increasing the accuracy of an analog neural network which computes a sum-of-products between an input vector and a stored weight pattern is described. 
in one embodiment of the present invention, the method comprises initially training the network by programming the synapses with a certain weight pattern. the training may be carried out using any standard learning algorithm. preferably, a back-propagation learning algorithm is employed. next, the network is baked at an elevated temperature to effectuate a change in the weight pattern previously programmed during initial training. this change results from a charge redistribution which occurs within each of the synapses of the network. after baking, the network is then retrained to compensate for the change resulting from the charge redistribution. the baking and retraining steps may be successively repeated to increase the accuracy of the neural network to any desired level. dated 1992-09-08" 5148385,serial systolic processor,"a serial systolic processor for performing neural network functions. a serial processor (90) provides the digital processing circuits for processing an input serial data stream applied to a serial input (20). a memory (29) stores digital signals representative of interconnection strengths or coefficient data corresponding to autocorrelation matrix elements. 
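The bake-and-retrain cycle of patent 5146602 above can be mimicked with a toy drift model. The drift fraction and its halving per cycle are invented assumptions standing in for charge redistribution, purely to show why repeating the cycle increases accuracy:

```python
import numpy as np

rng = np.random.default_rng(3)
target = rng.normal(size=16)        # weight pattern produced by initial training

w = target.copy()                   # program the synapses with the trained weights
settle = 0.3                        # assumed initial drift from charge redistribution
errors = []
for cycle in range(5):
    w = (1 - settle) * w            # "bake": weights drift from the programmed values
    errors.append(np.linalg.norm(w - target))
    w = target.copy()               # retrain to compensate for the change
    settle *= 0.5                   # assumed: remaining redistribution shrinks each cycle

# Successive bake/retrain cycles leave the baked network progressively closer
# to the trained weight pattern.
assert all(b < a for a, b in zip(errors, errors[1:]))
```

The key modeling assumption, which the patent's repetition of the steps suggests, is that each bake releases less residual charge than the one before, so the post-bake error shrinks monotonically.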
plural outputs (a.sub.o -a.sub.n) of the memory (29) are connected respectively to each of the processor neurons (p.sub.o -p.sub.n) of the serial processor (90). the digital stream is output, unchanged, on processor output bus (22), while a processed data stream is output on bus (30). dated 1992-09-15" 5148514,neural network integrated circuit device having self-organizing function,"an extension directed integrated circuit device having a learning function on a boltzmann model, includes a plurality of synapse representing units arrayed in a matrix to form a rectangle including first and second triangles on a semiconductor chip, a plurality of neuron representing units and a plurality of educator signal control circuits which are arranged along first and second sides of the rectangle, and a plurality of buffer circuits arranged along third and fourth sides of the rectangle. the first side is opposite to the third side, and the second side is opposite to the fourth side. axon signal transfer lines and dendrite signal lines are so arranged that the neuron representing units are fully connected in each of the first right triangle and the second right triangle. alternatively, axon signal lines and dendrite signal lines are arranged in parallel with rows and columns of the synapse representing unit matrix, so that the neuron representing units are fully connected in the rectangle. 
each synapse representing unit is connected to a pair of axon signal transfer lines and a pair of dendrite signal transfer lines.",1992-09-15,"The title of the patent is neural network integrated circuit device having self-organizing function and its abstract is an extension directed integrated circuit device having a learning function on a boltzmann model, includes a plurality of synapse representing units arrayed in a matrix to form a rectangle including first and second triangles on a semiconductor chip, a plurality of neuron representing units and a plurality of educator signal control circuits which are arranged along first and second sides of the rectangle, and a plurality of buffer circuits arranged along third and fourth sides of the rectangle. the first side is opposite to the third side, and the second side is opposite to the fourth side. axon signal transfer lines and dendrite signal lines are so arranged that the neuron representing units are fully connected in each of the first right triangle and the second right triangle. alternatively, axon signal lines and dendrite signal lines are arranged in parallel with rows and columns of the synapse representing unit matrix, so that the neuron representing units are fully connected in the rectangle. each synapse representing unit is connected to a pair of axon signal transfer lines and a pair of dendrite signal transfer lines. dated 1992-09-15" 5148515,scalable neural array processor and method,"an array processor and method for a scalable array neural processor (snap) permits computing as a dynamic and highly parallel computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. the scalable neural array processor (snap) uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. 
snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. the array processor is scalable. it has an array of function elements and a plurality of orthogonal horizontal and vertical processing elements for communication, computation and reduction. this structure permits in a first computation state the generation of a set of output values and in the first communication state the processing elements produce, responsive to the output values, first reduction values. in a second computation state processing elements, responsive to the first reduction values, generate vertical output values, and in a second communication state the vertical output values are communicated back to the inputs of the function elements. in a third computation state, responsive to the vertical output values, a second set of output values is generated by said function elements, and in a third communication state the horizontal processing elements produce second reduction values. in a fourth computation state the horizontal processing elements generate horizontal output values, and responsive to a fourth communication state the horizontal processing elements communicate the horizontal output values back to the inputs of the function elements.",1992-09-15,"The title of the patent is scalable neural array processor and method and its abstract is an array processor and method for a scalable array neural processor (snap) permits computing as a dynamic and highly parallel computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. the scalable neural array processor (snap) uses a unique intercommunication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. 
snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. the array processor is scalable. it has an array of function elements and a plurality of orthogonal horizontal and vertical processing elements for communication, computation and reduction. this structure permits in a first computation state the generation of a set of output values and in the first communication state the processing elements produce, responsive to the output values, first reduction values. in a second computation state processing elements, responsive to the first reduction values, generate vertical output values, and in a second communication state the vertical output values are communicated back to the inputs of the function elements. in a third computation state, responsive to the vertical output values, a second set of output values is generated by said function elements, and in a third communication state the horizontal processing elements produce second reduction values. in a fourth computation state the horizontal processing elements generate horizontal output values, and responsive to a fourth communication state the horizontal processing elements communicate the horizontal output values back to the inputs of the function elements. dated 1992-09-15" 5150323,adaptive network for in-band signal separation,"an adaptive network for in-band signal separation (26) and method for providing in-band separation of a composite signal (32) into its constituent signals (28), (30). the input to the network (26) is a series of sampled portions of the composite signal (32). the network (26) is trained with at least one of said constituent signals (28), (30) using a neural network training paradigm by presenting one or more of the constituent signals (28), (30) to said network (26). 
the network (26) may be used to separate multiple speech signals from a composite signal from a single sensor such as a microphone.",1992-09-22,"The title of the patent is adaptive network for in-band signal separation and its abstract is an adaptive network for in-band signal separation (26) and method for providing in-band separation of a composite signal (32) into its constituent signals (28), (30). the input to the network (26) is a series of sampled portions of the composite signal (32). the network (26) is trained with at least one of said constituent signals (28), (30) using a neural network training paradigm by presenting one or more of the constituent signals (28), (30) to said network (26). the network (26) may be used to separate multiple speech signals from a composite signal from a single sensor such as a microphone. dated 1992-09-22" 5150449,speech recognition apparatus of speaker adaptation type,"a speech recognition apparatus of the speaker adaptation type operates to recognize an inputted speech pattern produced by a particular speaker by using a reference pattern produced by a voice of a standard speaker. the speech recognition apparatus is adapted to the speech of the particular speaker by converting the reference pattern into a normalized pattern by a neural network unit, internal parameters of which are modified through a learning operation using a normalized feature vector of the training pattern produced by the voice of the particular speaker and normalized on the basis of the reference pattern, so that the neural network unit provides an optimum output similar to the corresponding normalized feature vector of the training pattern. 
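The in-band separation idea of patent 5150323 above (feed sampled portions of a single-sensor composite signal to a network trained against a constituent signal) can be sketched with two synthetic tones and a single linear layer fitted in closed form. The tones, window length, and least-squares fit are illustrative assumptions, not the patent's training paradigm:

```python
import numpy as np

n, win = 4000, 16
t = np.arange(n)
s1 = np.sin(0.07 * t)               # constituent signal, e.g. one speaker
s2 = np.sin(0.23 * t + 1.0)         # second constituent signal
composite = s1 + s2                 # composite signal from a single sensor

# Inputs are sampled portions of the composite; the constituent signal is the
# training target. A single linear layer is fitted in closed form here.
X = np.array([composite[i:i + win] for i in range(n - win)])
y = s1[win // 2 : n - win + win // 2]          # target aligned to window centre
w, *_ = np.linalg.lstsq(X, y, rcond=None)

rmse = np.sqrt(np.mean((X @ w - y) ** 2))
assert rmse < 0.05                  # the two tones separate cleanly in-band
```

Two fixed sinusoids are linearly separable by a short filter, which is why this toy works; overlapping speech requires the nonlinear network the patent actually trains.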
in the alternative, the speech recognition apparatus operates to recognize an inputted speech pattern by converting the inputted speech pattern into a normalized speech pattern by the neural network unit, internal parameters of which are modified through a learning operation using a feature vector of the reference pattern normalized on the basis of the training pattern, so that the neural network unit provides an optimum output similar to the corresponding normalized feature vector of the reference pattern and recognizing the normalized speech pattern according to the reference pattern.",1992-09-22,"The title of the patent is speech recognition apparatus of speaker adaptation type and its abstract is a speech recognition apparatus of the speaker adaptation type operates to recognize an inputted speech pattern produced by a particular speaker by using a reference pattern produced by a voice of a standard speaker. the speech recognition apparatus is adapted to the speech of the particular speaker by converting the reference pattern into a normalized pattern by a neural network unit, internal parameters of which are modified through a learning operation using a normalized feature vector of the training pattern produced by the voice of the particular speaker and normalized on the basis of the reference pattern, so that the neural network unit provides an optimum output similar to the corresponding normalized feature vector of the training pattern. 
in the alternative, the speech recognition apparatus operates to recognize an inputted speech pattern by converting the inputted speech pattern into a normalized speech pattern by the neural network unit, internal parameters of which are modified through a learning operation using a feature vector of the reference pattern normalized on the basis of the training pattern, so that the neural network unit provides an optimum output similar to the corresponding normalized feature vector of the reference pattern and recognizing the normalized speech pattern according to the reference pattern. dated 1992-09-22" 5150450,method and circuits for neuron perturbation in artificial neural network memory modification,"an artificial neural network has a plurality of output circuits individually perturbable for memory modification or learning by the network. the network has a plurality of synapses individually connecting each of a plurality of inputs to each output circuit. each synapse has a weight determining the effect on the associated output circuit of a signal provided on the associated input, and the synapse is addressable for selective variation of the weight. a perturbation signal is provided to one input, while data signals are provided to others of the inputs, so that perturbation of each output circuit may be controlled by varying the weights of a set of the synapses connecting the perturbation signal to the output circuits. an output circuit may be selected for perturbation by loading an appropriate weight in the synapse connecting the perturbation signal to the output circuit while zeroing the weights of the synapses connecting the perturbation signal to other output circuits. where the weights are provided by devices incapable of repeated cycles of zeroing and reloading, each synapse connecting the perturbation input to an output circuit has an addressable switch which is closed for perturbation of this output circuit and which is open at other times. 
perturbations of different output circuits may be balanced by varying the weights of the set of synapses connected to the perturbation input or by varying the weights of another set of the synapses connected to one of inputs which receives a balancing signal.",1992-09-22,"The title of the patent is method and circuits for neuron perturbation in artificial neural network memory modification and its abstract is an artificial neural network has a plurality of output circuits individually perturbable for memory modification or learning by the network. the network has a plurality of synapses individually connecting each of a plurality of inputs to each output circuit. each synapse has a weight determining the effect on the associated output circuit of a signal provided on the associated input, and the synapse is addressable for selective variation of the weight. a perturbation signal is provided to one input, while data signals are provided to others of the inputs, so that perturbation of each output circuit may be controlled by varying the weights of a set of the synapses connecting the perturbation signal to the output circuits. an output circuit may be selected for perturbation by loading an appropriate weight in the synapse connecting the perturbation signal to the output circuit while zeroing the weights of the synapses connecting the perturbation signal to other output circuits. where the weights are provided by devices incapable of repeated cycles of zeroing and reloading, each synapse connecting the perturbation input to an output circuit has an addressable switch which is closed for perturbation of this output circuit and which is open at other times. perturbations of different output circuits may be balanced by varying the weights of the set of synapses connected to the perturbation input or by varying the weights of another set of the synapses connected to one of inputs which receives a balancing signal. 
dated 1992-09-22" 5151822,transform digital/optical processing system including wedge/ring accumulator,"a transform digital optical processing system generates a transform signal of an image. fourier or other well-known transforms may be employed. the transform signal may be generated in one of two ways: optically or electronically. in optical generation a two dimensional object is generated by modulating a beam of coherent light with an image of the object. a transform image of the modulated coherent light beam is formed, using an optical transform element. the optical transform is then stored in a two dimensional buffer. the transform signal may also be generated electronically by storing a digital video image of an object and generating a fourier or other transform of the digital video image using vector processing chips or other commercially available digital transform generating computers. this digitally generated information may be analyzed and classified through a neural network type processor. the two-dimensional transform data is then processed to obtain the inspection or other characteristics for comparison against predetermined characteristics. the two dimensional transform is divided into two types of zones, namely wedges and rings. the transform data is then mapped into a corresponding wedge and ring, and the data for each wedge and ring is accumulated or summed to obtain data values. it has been found that the summed wedge and ring data values can accurately characterize an image for inspection or other comparison purposes.",1992-09-29,"The title of the patent is transform digital/optical processing system including wedge/ring accumulator and its abstract is a transform digital optical processing system generates a transform signal of an image. fourier or other well-known transforms may be employed. the transform signal may be generated in one of two ways: optically or electronically. 
in optical generation a two dimensional object is generated by modulating a beam of coherent light with an image of the object. a transform image of the modulated coherent light beam is formed, using an optical transform element. the optical transform is then stored in a two dimensional buffer. the transform signal may also be generated electronically by storing a digital video image of an object and generating a fourier or other transform of the digital video image using vector processing chips or other commercially available digital transform generating computers. this digitally generated information may be analyzed and classified through a neural network type processor. the two-dimensional transform data is then processed to obtain the inspection or other characteristics for comparison against predetermined characteristics. the two dimensional transform is divided into two types of zones, namely wedges and rings. the transform data is then mapped into a corresponding wedge and ring, and the data for each wedge and ring is accumulated or summed to obtain data values. it has been found that the summed wedge and ring data values can accurately characterize an image for inspection or other comparison purposes. dated 1992-09-29" 5151874,integrated circuit for square root operation using neural network,"an integrated circuit for performing a square root operation uses adders made in accordance with neural network concepts. the integrated circuit includes an exponent part, a first mantissa part, a second mantissa part and a control part. the exponent part computes an exponent of the square root of an input operand; the first mantissa part preprocesses the mantissa of the input operand; the second mantissa part computes the square root of the output from the first mantissa part; and the control part controls interaction of input and output among various components of the integrated circuits. 
because the adders used in the integrated circuit are composed of neural network circuits having a short propagation time for carry bits, the integrated circuit can compute a square root fast and efficiently.",1992-09-29,"The title of the patent is integrated circuit for square root operation using neural network and its abstract is an integrated circuit for performing a square root operation uses adders made in accordance with neural network concepts. the integrated circuit includes an exponent part, a first mantissa part, a second mantissa part and a control part. the exponent part computes an exponent of the square root of an input operand; the first mantissa part preprocesses the mantissa of the input operand; the second mantissa part computes the square root of the output from the first mantissa part; and the control part controls interaction of input and output among various components of the integrated circuits. because the adders used in the integrated circuit are composed of neural network circuits having a short propagation time for carry bits, the integrated circuit can compute a square root fast and efficiently. dated 1992-09-29" 5151971,arrangement of data cells and neural network system utilizing such an arrangement,"an arrangement of data cells which stores at least one matrix of data words which are arranged in rows and columns, the matrix being distributed in the arrangement in order to deliver/receive, via a single bus, permuted data words which correspond either to a row or to a column of the matrix. each data cell is connected to the single bus via series-connected switches which are associated with a respective addressing mode, the switches which address a same word of a same mode being directly controlled by a same selection signal. circulation members enable the original order of the data on the bus to be restored. 
an arrangement of this kind is used in a layered neural network system for executing the error backpropagation algorithm.",1992-09-29,"The title of the patent is arrangement of data cells and neural network system utilizing such an arrangement and its abstract is an arrangement of data cells which stores at least one matrix of data words which are arranged in rows and columns, the matrix being distributed in the arrangement in order to deliver/receive, via a single bus, permuted data words which correspond either to a row or to a column of the matrix. each data cell is connected to the single bus via series-connected switches which are associated with a respective addressing mode, the switches which address a same word of a same mode being directly controlled by a same selection signal. circulation members enable the original order of the data on the bus to be restored. an arrangement of this kind is used in a layered neural network system for executing the error backpropagation algorithm. dated 1992-09-29" 5153923,high order information processing method by means of a neural network and minimum and maximum searching method therefor,"in order to improve the problematical points concerning the structure and the processing speed of a prior art neural network, the optimum structure of the neural network, in which a synapse structure constructed on the basis of living body physiological knowledge or presumed therefrom is determined to make it possible to realize high level information processing functions such as feature extraction, feature unification, memory, etc. applications to an image recognition, a movement control, etc. making the most of the robust recognizing power thereof, or application to an optimum problem, a large scale numerical analysis, etc. 
making the most of the parallel processing power thereof are made possible.",1992-10-06,"The title of the patent is high order information processing method by means of a neural network and minimum and maximum searching method therefor and its abstract is in order to improve the problematical points concerning the structure and the processing speed of a prior art neural network, the optimum structure of the neural network, in which a synapse structure constructed on the basis of living body physiological knowledge or presumed therefrom is determined to make it possible to realize high level information processing functions such as feature extraction, feature unification, memory, etc. applications to an image recognition, a movement control, etc. making the most of the robust recognizing power thereof, or application to an optimum problem, a large scale numerical analysis, etc. making the most of the parallel processing power thereof are made possible. dated 1992-10-06" 5155699,divider using neural network,"a divider using neural network configurations comprises a subtractor, a selecting means, a first latch means, a second latch means, a shift register and a control means. the subtractor of the divider comprises plural inverters and plural 3-bit full-adders which are composed of four output lines, an input synapse group, a first bias synapse group, a second bias synapse group, a feedback synapse group, a neuron group and an inverter group.",1992-10-13,"The title of the patent is divider using neural network and its abstract is a divider using neural network configurations comprises a subtractor, a selecting means, a first latch means, a second latch means, a shift register and a control means. the subtractor of the divider comprises plural inverters and plural 3-bit full-adders which are composed of four output lines, an input synapse group, a first bias synapse group, a second bias synapse group, a feedback synapse group, a neuron group and an inverter group. 
dated 1992-10-13" 5155763,look ahead method and apparatus for predictive dialing using a neural network,"a predictive dialing system having a computer connected to a telephone switch stores a group of call records in its internal storage. each call record contains a group of input parameters, including the date, the time, and one or more workload factors. workload factors can indicate the number of pending calls, the number of available operators, the average idle time, the connection delay, the completion rate, and the nuisance call rate, among other things. in the preferred embodiment, each call record also contains a dial action, which indicates whether a call was initiated or not. these call records are analyzed by a neural network to determine a relationship between the input parameters and the dial action stored in each call record. this analysis is done as part of the training process for the neural network. after this relationship is determined, the computer system sends a current group of input parameters to the neural network, and, based on the analysis of the previous call records, the neural network determines whether a call should be initiated or not. the neural network bases its decision on the complex relationship it has learned from its training data--perhaps several thousand call records spanning several days, months, or even years. the neural network is able to automatically adjust--in a look ahead, proactive manner--for slow and fast periods of the day, week, month, and year.",1992-10-13,"The title of the patent is look ahead method and apparatus for predictive dialing using a neural network and its abstract is a predictive dialing system having a computer connected to a telephone switch stores a group of call records in its internal storage. each call record contains a group of input parameters, including the date, the time, and one or more workload factors. 
workload factors can indicate the number of pending calls, the number of available operators, the average idle time, the connection delay, the completion rate, and the nuisance call rate, among other things. in the preferred embodiment, each call record also contains a dial action, which indicates whether a call was initiated or not. these call records are analyzed by a neural network to determine a relationship between the input parameters and the dial action stored in each call record. this analysis is done as part of the training process for the neural network. after this relationship is determined, the computer system sends a current group of input parameters to the neural network, and, based on the analysis of the previous call records, the neural network determines whether a call should be initiated or not. the neural network bases its decision on the complex relationship it has learned from its training data--perhaps several thousand call records spanning several days, months, or even years. the neural network is able to automatically adjust--in a look ahead, proactive manner--for slow and fast periods of the day, week, month, and year. dated 1992-10-13" 5155801,clustered neural networks,""" a plurality of neural networks are coupled to an output neural network, or judge network, to form a clustered neural network. each of the plurality of clustered networks comprises a supervised learning rule back-propagated neural network. each of the clustered neural networks are trained to perform substantially the same mapping function before they are clustered. following training, the clustered neural network computes its output by taking an """"average"""" of the outputs of the individual neural networks that make up the cluster. the judge network combines the outputs of the plurality of individual neural networks to provide the output from the entire clustered network. 
in addition, the output of the judge network may be fed back to each of the individual neural networks and used as a training input thereto, in order to provide for continuous training. the use of the clustered network increases the speed of learning and results in better generalization. in addition, clustering multiple back-propagation networks provides for increased performance and fault tolerance when compared to a single unclustered network having substantially the same computational complexity. the present invention may be used in applications that are amenable to neural network solutions, including control and image processing applications. clustering of the networks also permits the use of smaller networks and provides for improved performance. the clustering of multiple back-propagation networks provides for synergy that improves the properties of the clustered network over a comparably complex non-clustered network. """,1992-10-13,"The title of the patent is clustered neural networks and its abstract is "" a plurality of neural networks are coupled to an output neural network, or judge network, to form a clustered neural network. each of the plurality of clustered networks comprises a supervised learning rule back-propagated neural network. each of the clustered neural networks are trained to perform substantially the same mapping function before they are clustered. following training, the clustered neural network computes its output by taking an """"average"""" of the outputs of the individual neural networks that make up the cluster. the judge network combines the outputs of the plurality of individual neural networks to provide the output from the entire clustered network. in addition, the output of the judge network may be fed back to each of the individual neural networks and used as a training input thereto, in order to provide for continuous training. 
the use of the clustered network increases the speed of learning and results in better generalization. in addition, clustering multiple back-propagation networks provides for increased performance and fault tolerance when compared to a single unclustered network having substantially the same computational complexity. the present invention may be used in applications that are amenable to neural network solutions, including control and image processing applications. clustering of the networks also permits the use of smaller networks and provides for improved performance. the clustering of multiple back-propagation networks provides for synergy that improves the properties of the clustered network over a comparably complex non-clustered network. "" dated 1992-10-13" 5157399,neural network quantizer,"a neural network quantizer for quantizing input analog signals includes a plurality of multi-level neurons. the input analog signals are sampled and supplied to respective ones of the multi-level neurons. output values of the multi-level neurons are converted into analog values, weighted by weighting coefficients determined in accordance with a frequency band of at least one frequency component of the input analog signals and fed back to the respective one of the multi-level neurons and to the other multi-level neurons. the weighted analog values fed back are compared with the respective ones of the sampled input analog signals. the output values of the multi-level neurons are corrected in response to the compared results, and when the compared results are converged within a predetermined range, the output values of the multi-level neurons are produced to quantize the input analog signals.",1992-10-20,"The title of the patent is neural network quantizer and its abstract is a neural network quantizer for quantizing input analog signals includes a plurality of multi-level neurons. 
the input analog signals are sampled and supplied to respective ones of the multi-level neurons. output values of the multi-level neurons are converted into analog values, weighted by weighting coefficients determined in accordance with a frequency band of at least one frequency component of the input analog signals and fed back to the respective one of the multi-level neurons and to the other multi-level neurons. the weighted analog values fed back are compared with the respective ones of the sampled input analog signals. the output values of the multi-level neurons are corrected in response to the compared results, and when the compared results are converged within a predetermined range, the output values of the multi-level neurons are produced to quantize the input analog signals. dated 1992-10-20" 5157733,"radiation image processing apparatus, determination apparatus, and radiation image read-out apparatus","in a radiation image processing apparatus, signal processing for determining the shape and location of an irradiation field, adjusting read-out conditions for a final readout from a preliminary read-out image signal, adjusting image processing conditions, and/or detecting an abnormal pattern is carried out on an image signal representing a radiation image by using a neural network. after the neural network, the learning operations of which have been carried out, is incorporated into the radiation image processing apparatus, modifying information is entered from an input device into the neural network. 
the modifying information is used to modify the signal processing carried out by the neural network and thereby to carry out re-learning operations of the neural network.",1992-10-20,"The title of the patent is radiation image processing apparatus, determination apparatus, and radiation image read-out apparatus and its abstract is in a radiation image processing apparatus, signal processing for determining the shape and location of an irradiation field, adjusting read-out conditions for a final readout from a preliminary read-out image signal, adjusting image processing conditions, and/or detecting an abnormal pattern is carried out on an image signal representing a radiation image by using a neural network. after the neural network, the learning operations of which have been carried out, is incorporated into the radiation image processing apparatus, modifying information is entered from an input device into the neural network. the modifying information is used to modify the signal processing carried out by the neural network and thereby to carry out re-learning operations of the neural network. dated 1992-10-20" 5159590,multi-slot call relocation control method and system,"a multi-slot call relocation control method and system having a multi-slot call switching system and/or transmission equipment constituted by an address control memory, address controller and address location changing circuit whereby address write and read information for a channel memory is controlled. where unoccupied circuits are 2.sup.n (n: natural number) times the basic switching unit, incoming calls with a maximum of 2.sup.n basic switching units in capacity may not be switched or transmitted by the unoccupied circuits depending on their status involving the presence of other calls. in that case, calls are relocated within a frame using the fewest steps possible. 
this is achieved by a neural network in the address control memory of multi-slot call switching system a, the neural network learning to output a call allocation pattern such that the number of times calls are relocated becomes minimal. the information from the network makes it possible to relocate the least number of times the calls whose capacity is not more than 2.sup.n basic switching unit in the channel memory. the relocation information is transmitted from switching system a to another system b, connected oppositely to system a. using the relocation information received, system b relocates calls within a channel memory of its own.",1992-10-27,"The title of the patent is multi-slot call relocation control method and system and its abstract is a multi-slot call relocation control method and system having a multi-slot call switching system and/or transmission equipment constituted by an address control memory, address controller and address location changing circuit whereby address write and read information for a channel memory is controlled. where unoccupied circuits are 2.sup.n (n: natural number) times the basic switching unit, incoming calls with a maximum of 2.sup.n basic switching units in capacity may not be switched or transmitted by the unoccupied circuits depending on their status involving the presence of other calls. in that case, calls are relocated within a frame using the fewest steps possible. this is achieved by a neural network in the address control memory of multi-slot call switching system a, the neural network learning to output a call allocation pattern such that the number of times calls are relocated becomes minimal. the information from the network makes it possible to relocate the least number of times the calls whose capacity is not more than 2.sup.n basic switching unit in the channel memory. the relocation information is transmitted from switching system a to another system b, connected oppositely to system a. 
using the relocation information received, system b relocates calls within a channel memory of its own. dated 1992-10-27" 5159661,vertically interconnected parallel distributed processor,a parallel distributed processor comprises matrices of unit cells arranged in a stacked configuration. each unit cell includes a chalcogenide body which may be set and reset to a plurality of values of a given physical property. interconnections between the unit cells are established via the chalcogenide materials and the pattern and strength of the interconnections is determined by the set values of the chalcogenide. the processor is readily adapted to the construction of neural network computing systems.,1992-10-27,The title of the patent is vertically interconnected parallel distributed processor and its abstract is a parallel distributed processor comprises matrices of unit cells arranged in a stacked configuration. each unit cell includes a chalcogenide body which may be set and reset to a plurality of values of a given physical property. interconnections between the unit cells are established via the chalcogenide materials and the pattern and strength of the interconnections is determined by the set values of the chalcogenide. the processor is readily adapted to the construction of neural network computing systems. dated 1992-10-27 5161014,neural networks as for video signal processing,"a television signal processing apparatus includes at least one neural network for processing a signal representing an image. the neural network includes a plurality of perceptrons each of which includes circuitry for weighting a plurality of delayed representations of said signal, circuitry for providing sums of weighted signals provided by said weighting circuitry, and circuitry for processing said sums with a sigmoidal transfer function. 
the neural network also includes circuitry for combining output signals provided by ones of said perceptrons for providing a processed signal.",1992-11-03,"The title of the patent is neural networks as for video signal processing and its abstract is a television signal processing apparatus includes at least one neural network for processing a signal representing an image. the neural network includes a plurality of perceptrons each of which includes circuitry for weighting a plurality of delayed representations of said signal, circuitry for providing sums of weighted signals provided by said weighting circuitry, and circuitry for processing said sums with a sigmoidal transfer function. the neural network also includes circuitry for combining output signals provided by ones of said perceptrons for providing a processed signal. dated 1992-11-03" 5161204,apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices,"a method and apparatus under software control for pattern recognition utilizes a neural network implementation to recognize two dimensional input images which are sufficiently similar to a database of previously stored two dimensional images. images are first image processed and subjected to a fourier transform which yields a power spectrum. an in-class to out-of-class study is performed on a typical collection of images in order to determine the most discriminatory regions of the fourier transform. a feature vector consisting of the highest order (most discriminatory) magnitude information from the power spectrum of the fourier transform of the image is formed. feature vectors are input to a neural network having preferably two hidden layers, input dimensionality of the number of elements in the feature vector and output dimensionality of the number of data elements stored in the database. unique identifier numbers are preferably stored along with the feature vector. 
application of a query feature vector to the neural network will result in an output vector. the output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate that a successful identification has been made. where a successful identification has occurred, the unique identifier number may be displayed.",1992-11-03,"The title of the patent is apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices and its abstract is a method and apparatus under software control for pattern recognition utilizes a neural network implementation to recognize two dimensional input images which are sufficiently similar to a database of previously stored two dimensional images. images are first image processed and subjected to a fourier transform which yields a power spectrum. an in-class to out-of-class study is performed on a typical collection of images in order to determine the most discriminatory regions of the fourier transform. a feature vector consisting of the highest order (most discriminatory) magnitude information from the power spectrum of the fourier transform of the image is formed. feature vectors are input to a neural network having preferably two hidden layers, input dimensionality of the number of elements in the feature vector and output dimensionality of the number of data elements stored in the database. unique identifier numbers are preferably stored along with the feature vector. application of a query feature vector to the neural network will result in an output vector. the output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate that a successful identification has been made. where a successful identification has occurred, the unique identifier number may be displayed. 
dated 1992-11-03" 5162899,color data correction apparatus utilizing neural network,"a color correction apparatus for use in an apparatus such as a color copier for operating on data obtained by scanning and color analysis of a source image to obtain color density data for use in printing a copy of the source image, the correction apparatus containing a neural network. parameter values of the neural network are established by repetitive computations based on amounts of difference between color density data previously used to print a plurality of color samples and color density data produced by the neural network in response to color analysis data obtained by analyzing these color samples.",1992-11-10,"The title of the patent is color data correction apparatus utilizing neural network and its abstract is a color correction apparatus for use in an apparatus such as a color copier for operating on data obtained by scanning and color analysis of a source image to obtain color density data for use in printing a copy of the source image, the correction apparatus containing a neural network. parameter values of the neural network are established by repetitive computations based on amounts of difference between color density data previously used to print a plurality of color samples and color density data produced by the neural network in response to color analysis data obtained by analyzing these color samples. dated 1992-11-10" 5164837,method of correcting setup parameter decision characteristics and automatic setup apparatus using a neural network,"image data of an original read by an image reader is analyzed by analyzer means to be supplied to a neural network. an operator inputs scene information and desired finish information to the neural network with data input means. 
the neural network calculates setup parameter values in compliance with a conversion rule specified by predetermined weighting values and functional forms, and sets these values in an image data converter. then, the operator corrects the setup parameter values on the basis of finish condition of the produced color separation films. the corrected setup parameter values are inputted to learning means. the learning means computes proper weighting values with which the neural network calculates setup parameter values equal to or approximate to the corrected setup parameter values. such proper weighting values are supplied to the neural network as new weighting values.",1992-11-17,"The title of the patent is method of correcting setup parameter decision characteristics and automatic setup apparatus using a neural network and its abstract is image data of an original read by an image reader is analyzed by analyzer means to be supplied to a neural network. an operator inputs scene information and desired finish information to the neural network with data input means. the neural network calculates setup parameter values in compliance with a conversion rule specified by predetermined weighting values and functional forms, and sets these values in an image data converter. then, the operator corrects the setup parameter values on the basis of finish condition of the produced color separation films. the corrected setup parameter values are inputted to learning means. the learning means computes proper weighting values with which the neural network calculates setup parameter values equal to or approximate to the corrected setup parameter values. such proper weighting values are supplied to the neural network as new weighting values. 
dated 1992-11-17" 5165009,neural network processing system using semiconductor memories,"herein disclosed is a data processing system having a memory packaged therein for realizing a large-scale and high-speed parallel distributed processing and, especially, a data processing system for the neural network processing. the neural network processing system according to the present invention comprises: a memory circuit for storing neuron output values, connection weights, the desired values of outputs, and data necessary for learning; an input/output circuit for writing or reading data in or out of said memory circuit; a processing circuit for performing a processing for determining the neuron outputs such as the product, sum and nonlinear conversion of the data stored in said memory circuit, a comparison of the output value and its desired value, and a processing necessary for learning; and a control circuit for controlling the operations of said memory circuit, said input/output circuit and said processing circuit. the processing circuit is constructed to include at least one of an adder, a multiplier, a nonlinear transfer function circuit and a comparator so that at least a portion of the processing necessary for determining the neuron output values such as the product or sum may be accomplished in parallel. moreover, these circuits are shared among a plurality of neurons and are operated in a time sharing manner to determine the plural neuron output values. still moreover, the aforementioned comparator compares the neuron output value determined and the desired value of the output in parallel.",1992-11-17,"The title of the patent is neural network processing system using semiconductor memories and its abstract is herein disclosed is a data processing system having a memory packaged therein for realizing a large-scale and high-speed parallel distributed processing and, especially, a data processing system for the neural network processing.
the neural network processing system according to the present invention comprises: a memory circuit for storing neuron output values, connection weights, the desired values of outputs, and data necessary for learning; an input/output circuit for writing or reading data in or out of said memory circuit; a processing circuit for performing a processing for determining the neuron outputs such as the product, sum and nonlinear conversion of the data stored in said memory circuit, a comparison of the output value and its desired value, and a processing necessary for learning; and a control circuit for controlling the operations of said memory circuit, said input/output circuit and said processing circuit. the processing circuit is constructed to include at least one of an adder, a multiplier, a nonlinear transfer function circuit and a comparator so that at least a portion of the processing necessary for determining the neuron output values such as the product or sum may be accomplished in parallel. moreover, these circuits are shared among a plurality of neurons and are operated in a time sharing manner to determine the plural neuron output values. still moreover, the aforementioned comparator compares the neuron output value determined and the desired value of the output in parallel. dated 1992-11-17" 5165069,method and system for non-invasively identifying the operational status of a vcr,a method and apparatus are provided for identifying one of a plurality of operational modes of a monitored video cassette recorder (vcr). a sensor is positioned near the monitored vcr for detecting a radiated signal of the monitored vcr. the detected signal is applied to a filter for filtering the detected signal and for providing a plurality of predetermined band-pass filtered signals.
a neural network is used for processing the plurality of predetermined band-pass filtered signals to identify the operational mode of the vcr.,1992-11-17,The title of the patent is method and system for non-invasively identifying the operational status of a vcr and its abstract is a method and apparatus are provided for identifying one of a plurality of operational modes of a monitored video cassette recorder (vcr). a sensor is positioned near the monitored vcr for detecting a radiated signal of the monitored vcr. the detected signal is applied to a filter for filtering the detected signal and for providing a plurality of predetermined band-pass filtered signals. a neural network is used for processing the plurality of predetermined band-pass filtered signals to identify the operational mode of the vcr. dated 1992-11-17 5165270,non-destructive materials testing apparatus and technique for use in the field,"the present invention features an apparatus and method for impact-echo testing of structures in situ, in the field. the impact-echo testing method provides a non-invasive, non-destructive way of determining the defects in the structure. the method is both uniform and reliable, the test procedure being substantially identical every time. the apparatus of the invention comprises a portable, hand-held unit having a plurality of impactors disposed therein. the plurality of impactors comprise a number of differently weighted spheres that are each designed to impart a different impact energy into the structure to be tested. each sphere is disposed on a distal end of a spring-steel rod. a particular weighted sphere is chosen by a selector disposed on the testing unit. the sphere is withdrawn from the rest position by a pair of jaws to a given height above the structure. at a predetermined release point, the sphere is released, causing it to impact the structure with a specific duration and impart a given energy thereto.
the impact produces stress waves that are reflected from the internal flaws and external surfaces of the structure. the reflected waves are detected by a transducer that converts the stress waves into an electrical signal (displacement waveform). the waveform is then processed to provide an amplitude spectrum, and in the case of plates, a reflection spectrum. for plates, the reflection spectrum can be interpreted by a neural network that provides results that are indicative of either the thickness of the structure or of the defects disposed therein.",1992-11-24,"The title of the patent is non-destructive materials testing apparatus and technique for use in the field and its abstract is the present invention features an apparatus and method for impact-echo testing of structures in situ, in the field. the impact-echo testing method provides a non-invasive, non-destructive way of determining the defects in the structure. the method is both uniform and reliable, the test procedure being substantially identical every time. the apparatus of the invention comprises a portable, hand-held unit having a plurality of impactors disposed therein. the plurality of impactors comprise a number of differently weighted spheres that are each designed to impart a different impact energy into the structure to be tested. each sphere is disposed on a distal end of a spring-steel rod. a particular weighted sphere is chosen by a selector disposed on the testing unit. the sphere is withdrawn from the rest position by a pair of jaws to a given height above the structure. at a predetermined release point, the sphere is released, causing it to impact the structure with a specific duration and impart a given energy thereto. the impact produces stress waves that are reflected from the internal flaws and external surfaces of the structure. the reflected waves are detected by a transducer that converts the stress waves into an electrical signal (displacement waveform).
the waveform is then processed to provide an amplitude spectrum, and in the case of plates, a reflection spectrum. for plates, the reflection spectrum can be interpreted by a neural network that provides results that are indicative of either the thickness of the structure or of the defects disposed therein. dated 1992-11-24" 5166539,neural network circuit,"a neural network circuit, in which a number n of weight coefficients (wl-wn) corresponding to a number n of inputs are provided, subtraction circuits determine the difference between inputs and the weight coefficients in each input terminal, the result thereof is inputted into absolute value circuits, all calculation results of the absolute value circuits corresponding to the inputs and the weight coefficients are inputted into an addition circuit and accumulated, and this accumulation result determines the output value. the threshold value circuit, which determines the final output value, has characteristics of a step function pattern, a polygonal line pattern, or a sigmoid function pattern, depending on the object. in the case in which a neural network circuit is realized by means of digital circuits, the absolute value circuits can comprise simply ex-or logic (exclusive or) gates. furthermore, in the case in which the input terminals have two input paths and two weight coefficients corresponding to each input path, the neuron circuits form a recognition area having a flexible shape which is controlled by the weight coefficients.
neuron circuits are widely used in pattern recognition; neuron circuits react to a pattern inputted into the input layer and recognition is thereby conducted.",1992-11-24,"The title of the patent is neural network circuit and its abstract is a neural network circuit, in which a number n of weight coefficients (wl-wn) corresponding to a number n of inputs are provided, subtraction circuits determine the difference between inputs and the weight coefficients in each input terminal, the result thereof is inputted into absolute value circuits, all calculation results of the absolute value circuits corresponding to the inputs and the weight coefficients are inputted into an addition circuit and accumulated, and this accumulation result determines the output value. the threshold value circuit, which determines the final output value, has characteristics of a step function pattern, a polygonal line pattern, or a sigmoid function pattern, depending on the object. in the case in which a neural network circuit is realized by means of digital circuits, the absolute value circuits can comprise simply ex-or logic (exclusive or) gates. furthermore, in the case in which the input terminals have two input paths and two weight coefficients corresponding to each input path, the neuron circuits form a recognition area having a flexible shape which is controlled by the weight coefficients. neuron circuits are widely used in pattern recognition; neuron circuits react to a pattern inputted into the input layer and recognition is thereby conducted. dated 1992-11-24" 5166896,discrete cosine transform chip using neural network concepts for calculating values of a discrete cosine transform function,"a discrete cosine transform chip includes circuits using neural network concepts that have parallel processing capability as well as conventional digital logic circuits.
in particular, the discrete cosine transform chip includes a cosine term processing portion, a multiplier, an adder, a subtractor, and two groups of latches. the multiplier, the adder and the subtractor incorporated in the discrete cosine transform chip use unidirectional feedback neural network models.",1992-11-24,"The title of the patent is discrete cosine transform chip using neural network concepts for calculating values of a discrete cosine transform function and its abstract is a discrete cosine transform chip includes circuits using neural network concepts that have parallel processing capability as well as conventional digital logic circuits. in particular, the discrete cosine transform chip includes a cosine term processing portion, a multiplier, an adder, a subtractor, and two groups of latches. the multiplier, the adder and the subtractor incorporated in the discrete cosine transform chip use unidirectional feedback neural network models. dated 1992-11-24" 5166938,error correction circuit using a design based on a neural network model comprising an encoder portion and a decoder portion,an error correction circuit is provided which uses nmos and pmos synapses to form network type responses to a coded multi-bit input. use of mos technology logic in error correction circuits allows such devices to be easily interfaced with other like technology circuits without the need to use distinct interface logic as with conventional error correction circuitry.,1992-11-24,The title of the patent is error correction circuit using a design based on a neural network model comprising an encoder portion and a decoder portion and its abstract is an error correction circuit is provided which uses nmos and pmos synapses to form network type responses to a coded multi-bit input.
use of mos technology logic in error correction circuits allows such devices to be easily interfaced with other like technology circuits without the need to use distinct interface logic as with conventional error correction circuitry. dated 1992-11-24 5167006,"neuron unit, neural network and signal processing method","a neuron unit processes a plurality of input signals and outputs an output signal which is indicative of a result of the processing. the neuron unit includes input lines for receiving the input signals, a forward process part including a supplying part for supplying weight functions and an operation part for carrying out an operation on each of the input signals using one of the weight functions and for outputting the output signal, and a self-learning part including a generating part for generating new weight functions based on errors between the output signal of the forward process part and teaching signals and a varying part for varying the weight functions supplied by the supplying part of the forward process part to the new weight functions generated by the generating part.",1992-11-24,"The title of the patent is neuron unit, neural network and signal processing method and its abstract is a neuron unit processes a plurality of input signals and outputs an output signal which is indicative of a result of the processing. 
the neuron unit includes input lines for receiving the input signals, a forward process part including a supplying part for supplying weight functions and an operation part for carrying out an operation on each of the input signals using one of the weight functions and for outputting the output signal, and a self-learning part including a generating part for generating new weight functions based on errors between the output signal of the forward process part and teaching signals and a varying part for varying the weight functions supplied by the supplying part of the forward process part to the new weight functions generated by the generating part. dated 1992-11-24" 5167007,multilayered optical neural network system,"a multilayered optical neural network system comprises an input layer, an output layer, at least one hidden layer provided between the input layer and the output layer, a memory matrix holding device provided between the respective layers for holding weighted couplings between the layers, a correlation operating device for optically computing a correlation between an output optical pattern from the previous layer and the memory matrix pattern, an output function operating device for implementing optical computing of an output function corresponding to a result of the correlation operation, and a memory matrix correcting device provided between the respective layers for optically correcting a memory matrix held in the memory matrix holding device by a learning operation, whereby the system is capable of two-dimensional optical computing of all data transfers and operations and executing a great amount of computing without use of holograms.",1992-11-24,"The title of the patent is multilayered optical neural network system and its abstract is a multilayered optical neural network system comprises an input layer, an output layer, at least one hidden layer provided between the input layer and the output layer, a memory matrix holding device provided between
the respective layers for holding weighted couplings between the layers, a correlation operating device for optically computing a correlation between an output optical pattern from the previous layer and the memory matrix pattern, an output function operating device for implementing optical computing of an output function corresponding to a result of the correlation operation, and a memory matrix correcting device provided between the respective layers for optically correcting a memory matrix held in the memory matrix holding device by a learning operation, whereby the system is capable of two-dimensional optical computing of all data transfers and operations and executing a great amount of computing without use of holograms. dated 1992-11-24" 5167008,digital circuitry for approximating sigmoidal response in a neural network layer,"a plurality of neural circuits are connected in a neural network layer for generating their respective digital axonal responses to the same plurality of synapse input signals. each neural circuit includes digital circuitry for approximating a sigmoidal response connected after respective circuitry for performing a weighted summation of the synapse input signals to generate a weighted summation result in digital form. in this digital circuitry the absolute value of the digital weighted summation result is first determined. then, a window comparator determines into which of a plurality of amplitude ranges the absolute value of the weighted summation result falls. a digital intercept value and a digital slope value are selected in accordance with the range into which the absolute value of the weighted summation result falls. the absolute value of the digital weighted summation result is multiplied by the selected digital slope value to generate a digital product; and the digital intercept value is added to the digital product to generate an absolute value representation of a digital axonal response. 
the polarity of the weighted summation result is determined, and the same polarity is assigned to the absolute value representation of the digital axonal response, thereby to generate the digital axonal response.",1992-11-24,"The title of the patent is digital circuitry for approximating sigmoidal response in a neural network layer and its abstract is a plurality of neural circuits are connected in a neural network layer for generating their respective digital axonal responses to the same plurality of synapse input signals. each neural circuit includes digital circuitry for approximating a sigmoidal response connected after respective circuitry for performing a weighted summation of the synapse input signals to generate a weighted summation result in digital form. in this digital circuitry the absolute value of the digital weighted summation result is first determined. then, a window comparator determines into which of a plurality of amplitude ranges the absolute value of the weighted summation result falls. a digital intercept value and a digital slope value are selected in accordance with the range into which the absolute value of the weighted summation result falls. the absolute value of the digital weighted summation result is multiplied by the selected digital slope value to generate a digital product; and the digital intercept value is added to the digital product to generate an absolute value representation of a digital axonal response. the polarity of the weighted summation result is determined, and the same polarity is assigned to the absolute value representation of the digital axonal response, thereby to generate the digital axonal response. dated 1992-11-24" 5167009,on-line process control neural network using data pointers,"an on-line process control neural network using data pointers allows the neural network to be easily configured to use data in a process control environment. 
the inputs, outputs, training inputs and errors can be retrieved and/or stored from any available data source without programming. the user of the neural network specifies data pointers indicating the particular computer system in which the data resides or will be stored; the type of data to be retrieved and/or stored; and the specific data value or storage location to be used. the data pointers include maximum, minimum, and maximum change limits, which can also serve as scaling limits for the neural network. data pointers indicating time-dependent data, such as time averages, also include time boundary specifiers. the data pointers are entered by the user of the neural network using pop-up menus and by completing fields in a template. an historical database provides both a source of input data and a storage function for output and error data.",1992-11-24,"The title of the patent is on-line process control neural network using data pointers and its abstract is an on-line process control neural network using data pointers allows the neural network to be easily configured to use data in a process control environment. the inputs, outputs, training inputs and errors can be retrieved and/or stored from any available data source without programming. the user of the neural network specifies data pointers indicating the particular computer system in which the data resides or will be stored; the type of data to be retrieved and/or stored; and the specific data value or storage location to be used. the data pointers include maximum, minimum, and maximum change limits, which can also serve as scaling limits for the neural network. data pointers indicating time-dependent data, such as time averages, also include time boundary specifiers. the data pointers are entered by the user of the neural network using pop-up menus and by completing fields in a template. an historical database provides both a source of input data and a storage function for output and error data. 
dated 1992-11-24" 5168262,fire alarm system,a fire alarm system employs a neural network for obtaining one or more types of fire related information values. a plurality of detection information values are time-serially collected from plural fire phenomenon detectors. the detection information values are signal processed such that a weighting coefficient is assigned thereto in accordance with a relative significance of the detection information value to the desired fire related information value. the various weighting coefficients are stored in advance in a memory. the weighting coefficients stored are established so that the fire related information value for a particular set of detection information values approximates a desired fire related information value.,1992-12-01,The title of the patent is fire alarm system and its abstract is a fire alarm system employs a neural network for obtaining one or more types of fire related information values. a plurality of detection information values are time-serially collected from plural fire phenomenon detectors. the detection information values are signal processed such that a weighting coefficient is assigned thereto in accordance with a relative significance of the detection information value to the desired fire related information value. the various weighting coefficients are stored in advance in a memory. the weighting coefficients stored are established so that the fire related information value for a particular set of detection information values approximates a desired fire related information value. 
dated 1992-12-01 5168352,coloring device for performing adaptive coloring of a monochromatic image,"a coloring device includes an image sampling device for sampling an input signal block representing a group of n.times.m pixels of a monochromatic image and for outputting first signals representing the sampled pixels of the input signal block of the monochromatic image; an artificial neural network, a connection for providing to the artificial neural network, substantially simultaneously, pattern information on patterns to be contained in the monochromatic image and color information on first data indicating colors given to the patterns indicated by the pattern information prior to generation of a color image signal, the artificial neural network having internal state parameters which are adaptively optimized by using a learning algorithm prior to the generation of a color image, the artificial neural network operating for receiving data representing the first signal, for determining which of colors preliminarily and respectively assigned to patterns to be contained in the group of pixels of the monochromatic image represented by the input signal block is given to a pattern actually contained in the group of pixels represented by the input signal block and for outputting second signals representing second data on three primary colors which are used to represent the determined colors given to the patterns actually contained in the group of pixels represented by the input signal block; and a color image storing device for receiving the second signals outputted from the artificial neural network, for storing the received second signals in locations thereof corresponding to the positions of the pixels represented by the input signal block and for outputting third signals representing the three primary color component images of the pixels represented by the input signal block; wherein the image sampling device further functions for scanning the whole of the
monochromatic image by generating successive input signal blocks representing successive groups of n.times.m pixels to be sampled, thereby outputting third signals for all pixels of the monochromatic image.",1992-12-01,"The title of the patent is coloring device for performing adaptive coloring of a monochromatic image and its abstract is a coloring device includes an image sampling device for sampling an input signal block representing a group of n.times.m pixels of a monochromatic image and for outputting first signals representing the sampled pixels of the input signal block of the monochromatic image; an artificial neural network, a connection for providing to the artificial neural network, substantially simultaneously, pattern information on patterns to be contained in the monochromatic image and color information on first data indicating colors given to the patterns indicated by the pattern information prior to generation of a color image signal, the artificial neural network having internal state parameters which are adaptively optimized by using a learning algorithm prior to the generation of a color image, the artificial neural network operating for receiving data representing the first signal, for determining which of colors preliminarily and respectively assigned to patterns to be contained in the group of pixels of the monochromatic image represented by the input signal block is given to a pattern actually contained in the group of pixels represented by the input signal block and for outputting second signals representing second data on three primary colors which are used to represent the determined colors given to the patterns actually contained in the group of pixels represented by the input signal block; and a color image storing device for receiving the second signals outputted from the artificial neural network, for storing the received second signals in locations thereof corresponding to the positions of the pixels represented by the input signal
block and for outputting third signals representing the three primary color component images of the pixels represented by the input signal block; wherein the image sampling device further functions for scanning the whole of the monochromatic image by generating successive input signal blocks representing successive groups of n.times.m pixels to be sampled, thereby outputting third signals for all pixels of the monochromatic image. dated 1992-12-01" 5168549,inference rule determining method and inference device,""" an inference rule determining process according to the present invention sequentially determines, using a learning function of a neural network model, a membership function representing a degree to which the conditions of the if part of each inference rule are satisfied when input data is received to thereby obtain an optimal inference result without using experience rules. the inventive inference device uses an inference rule of the type """"if . . . then . . . """" and includes a membership value determiner (1) which includes all of the if part and has a neural network; individual inference quantity determiners (21)-(2r) which correspond to the respective then parts of the inference rules and determine the corresponding inference quantities for the inference rules; and a final inference quantity determiner which determines these inference quantities synthetically to obtain the final results of the inference. if the individual inference quantity determiners (2) each have a neural network structure, the non-linearity of the neural network models is used to obtain the result of the inference with high inference accuracy even if an object to be inferred is non-linear.
""",1992-12-01,"The title of the patent is inference rule determining method and inference device and its abstract is "" an inference rule determining process according to the present invention sequentially determines, using a learning function of a neural network model, a membership function representing a degree to which the conditions of the if part of each inference rule are satisfied when input data is received to thereby obtain an optimal inference result without using experience rules. the inventive inference device uses an inference rule of the type """"if . . . then . . . """" and includes a membership value determiner (1) which includes all of the if part and has a neural network; individual inference quantity determiners (21)-(2r) which correspond to the respective then parts of the inference rules and determine the corresponding inference quantities for the inference rules; and a final inference quantity determiner which determines these inference quantities synthetically to obtain the final results of the inference. if the individual inference quantity determiners (2) each have a neural network structure, the non-linearity of the neural network models is used to obtain the result of the inference with high inference accuracy even if an object to be inferred is non-linear. "" dated 1992-12-01" 5168551,mos decoder circuit implemented using a neural network architecture,"a decoder circuit based on the concept of a neural network architecture has a unique configuration using a connection structure having cmos inverters, and pmos and nmos bias and synapse transistors.
the decoder circuit consists of m parallel inverter input circuit corresponding to an m-bit digital signal and forming an input neuron group, a 2.sup.m parallel inverter output circuit corresponding to 2.sup.m decoded outputs and forming an output neuron group, and a synapse group connected between the input neuron group and the output neuron group responsive to a bias group and the m-bit digital signal for providing a decoded output signal to one of the 2.sup.m outputs of the output neuron group when a match is detected. hence, only one of the 2.sup.m outputs will be active at any one time.",1992-12-01,"The title of the patent is mos decoder circuit implemented using a neural network architecture and its abstract is a decoder circuit based on the concept of a neural network architecture has a unique configuration using a connection structure having cmos inverters, and pmos and nmos bias and synapse transistors. the decoder circuit consists of m parallel inverter input circuit corresponding to an m-bit digital signal and forming an input neuron group, a 2.sup.m parallel inverter output circuit corresponding to 2.sup.m decoded outputs and forming an output neuron group, and a synapse group connected between the input neuron group and the output neuron group responsive to a bias group and the m-bit digital signal for providing a decoded output signal to one of the 2.sup.m outputs of the output neuron group when a match is detected. hence, only one of the 2.sup.m outputs will be active at any one time. dated 1992-12-01" 5170071,stochastic artificial neuron with multilayer training capability,"a probabilistic or stochastic artificial neuron in which the inputs and synaptic weights are represented as probabilistic or stochastic functions of time, thus providing efficient implementations of the synapses.
stochastic processing removes both the time criticality and the discrete symbol nature of traditional digital processing, while retaining the basic digital processing technology. this provides large gains in relaxed timing design constraints and fault tolerance, while the simplicity of stochastic arithmetic allows for the fabrication of very high densities of neurons. the synaptic weights are individually controlled by a backward error propagation which provides the capability to train multiple layers of neurons in a neural network.",1992-12-08,"The title of the patent is stochastic artificial neuron with multilayer training capability and its abstract is a probabilistic or stochastic artificial neuron in which the inputs and synaptic weights are represented as probabilistic or stochastic functions of time, thus providing efficient implementations of the synapses. stochastic processing removes both the time criticality and the discrete symbol nature of traditional digital processing, while retaining the basic digital processing technology. this provides large gains in relaxed timing design constraints and fault tolerance, while the simplicity of stochastic arithmetic allows for the fabrication of very high densities of neurons. the synaptic weights are individually controlled by a backward error propagation which provides the capability to train multiple layers of neurons in a neural network. dated 1992-12-08" 5172204,artificial ionic synapse,"an artificial neural synapse (10) is constructed to function as a modifiable excitatory synapse. in accordance with an embodiment of the invention the synapse is fabricated as a silicon mosfet that is modified to have ions within a gate oxide. the ions, such as lithium, sodium, potassium or fluoride ions, are selected for their ability to drift within the gate oxide under the influence of an applied electric field.
in response to a positive voltage applied to a gate terminal of the device, positively charged ions, such as sodium or potassium ions, drift to a silicon/silicon dioxide interface, causing an increase in current flow through the device. the invention also pertains to assemblages of such devices that are interconnected to form an artificial neuron and to assemblages of such artificial neurons that form an artificial neural network.",1992-12-15,"The title of the patent is artificial ionic synapse and its abstract is an artificial neural synapse (10) is constructed to function as a modifiable excitatory synapse. in accordance with an embodiment of the invention the synapse is fabricated as a silicon mosfet that is modified to have ions within a gate oxide. the ions, such as lithium, sodium, potassium or fluoride ions, are selected for their ability to drift within the gate oxide under the influence of an applied electric field. in response to a positive voltage applied to a gate terminal of the device, positively charged ions, such as sodium or potassium ions, drift to a silicon/silicon dioxide interface, causing an increase in current flow through the device. the invention also pertains to assemblages of such devices that are interconnected to form an artificial neuron and to assemblages of such artificial neurons that form an artificial neural network. dated 1992-12-15" 5172253,neural network model for reaching a goal state,""" an object, such as a robot, is located at an initial state in a finite state space area and moves under the control of the unsupervised neural network model of the invention. the network instructs the object to move in one of several directions from the initial state. upon reaching another state, the model again instructs the object to move in one of several directions. 
these instructions continue until either: a) the object has completed a cycle by ending up back at a state it has been to previously during this cycle, or b) the object has completed a cycle by reaching the goal state. if the object ends up back at a state it has been to previously during this cycle, the neural network model ends the cycle and immediately begins a new cycle from the present location. when the object reaches the goal state, the neural network model learns that this path is productive towards reaching the goal state, and is given delayed reinforcement in the form of a """"reward"""". upon reaching a state, the neural network model calculates a level of satisfaction with its progress towards reaching the goal state. if the level of satisfaction is low, the neural network model is more likely to override what has been learned thus far and deviate from a path known to lead to the goal state to experiment with new and possibly better paths. """,1992-12-15,"The title of the patent is neural network model for reaching a goal state and its abstract is "" an object, such as a robot, is located at an initial state in a finite state space area and moves under the control of the unsupervised neural network model of the invention. the network instructs the object to move in one of several directions from the initial state. upon reaching another state, the model again instructs the object to move in one of several directions. these instructions continue until either: a) the object has completed a cycle by ending up back at a state it has been to previously during this cycle, or b) the object has completed a cycle by reaching the goal state. if the object ends up back at a state it has been to previously during this cycle, the neural network model ends the cycle and immediately begins a new cycle from the present location. 
when the object reaches the goal state, the neural network model learns that this path is productive towards reaching the goal state, and is given delayed reinforcement in the form of a """"reward"""". upon reaching a state, the neural network model calculates a level of satisfaction with its progress towards reaching the goal state. if the level of satisfaction is low, the neural network model is more likely to override what has been learned thus far and deviate from a path known to lead to the goal state to experiment with new and possibly better paths. "" dated 1992-12-15" 5172490,clothes dryer with neurocontrol device,"a clothes dryer of the dehumidification type is disclosed in which hot air induced by a heater is circulated from a drying compartment through a heat exchanger. a volume, wetness, wetness unevenness, temperature, temperature unevenness of clothes to be dried and the temperature of the hot air blown out of the drying compartment are detected by respective detectors. results of detection are input to a control device incorporating a neural network. the control device operates in the manner of neurocontrol to control a volume of outside air supplied to the heat exchanger and a heating value of the heater.",1992-12-22,"The title of the patent is clothes dryer with neurocontrol device and its abstract is a clothes dryer of the dehumidification type is disclosed in which hot air induced by a heater is circulated from a drying compartment through a heat exchanger. a volume, wetness, wetness unevenness, temperature, temperature unevenness of clothes to be dried and the temperature of the hot air blown out of the drying compartment are detected by respective detectors. results of detection are input to a control device incorporating a neural network. the control device operates in the manner of neurocontrol to control a volume of outside air supplied to the heat exchanger and a heating value of the heater. 
dated 1992-12-22" 5175678,method and procedure for neural control of dynamic processes,a neural network control based on a general multi-variable nonlinear dynamic model incorporating time delays is disclosed. the inverse dynamics of the process being controlled is learned represented by a multi-layer neural network which is used as a feedforward control to achieve a specified closed loop response under varying conditions. the weights between the layers in the neural network are adjusted during the learning process. the learning process is based on minimizing the combined error between the desired process value and the actual process output and the error between the desired process value and the inverse process neural network output.,1992-12-29,The title of the patent is method and procedure for neural control of dynamic processes and its abstract is a neural network control based on a general multi-variable nonlinear dynamic model incorporating time delays is disclosed. the inverse dynamics of the process being controlled is learned represented by a multi-layer neural network which is used as a feedforward control to achieve a specified closed loop response under varying conditions. the weights between the layers in the neural network are adjusted during the learning process. the learning process is based on minimizing the combined error between the desired process value and the actual process output and the error between the desired process value and the inverse process neural network output. dated 1992-12-29 5175793,recognition apparatus using articulation positions for recognizing a voice,"a first voice recognition apparatus includes a device for analyzing frequencies of the input voice and a device coupled to the analyzing unit for determining vowel zones and consonant zones of the analyzed input voice. 
the apparatus further includes a device for determining positions of articulation of an input voice determined from the vowel zones by calculating from frequency components of the input voice in accordance with a predetermined algorithm based on frequency components of monophthongs having known phonation contents and positions of articulation. a second voice recognition apparatus includes a device for analyzing frequencies of the input voice so as to derive acoustic parameters from the input voice. a pattern converting unit is coupled to the analyzing unit and uses a neural network for converting the acoustic parameters to articulatory vectors. the neural network is capable of learning, by the error back propagation method using target data produced by a predetermined sequence based on the acoustic parameters, to create rules for converting the acoustic parameters of the input voice to articulatory vectors having at least two vector elements. a recognizing unit is coupled to the pattern converting unit for recognizing the input voice by comparing a feature pattern of the analyzed input voice having the articulatory vector with reference feature patterns in a predetermined sequence. a storage unit is coupled to the recognizing unit for storing the reference feature patterns having the articulatory vectors created by the pattern converting unit.",1992-12-29,"The title of the patent is recognition apparatus using articulation positions for recognizing a voice and its abstract is a first voice recognition apparatus includes a device for analyzing frequencies of the input voice and a device coupled to the analyzing unit for determining vowel zones and consonant zones of the analyzed input voice.
the apparatus further includes a device for determining positions of articulation of an input voice determined from the vowel zones by calculating from frequency components of the input voice in accordance with a predetermined algorithm based on frequency components of monophthongs having known phonation contents and positions of articulation. a second voice recognition apparatus includes a device for analyzing frequencies of the input voice so as to derive acoustic parameters from the input voice. a pattern converting unit is coupled to the analyzing unit and uses a neural network for converting the acoustic parameters to articulatory vectors. the neural network is capable of learning, by the error back propagation method using target data produced by a predetermined sequence based on the acoustic parameters, to create rules for converting the acoustic parameters of the input voice to articulatory vectors having at least two vector elements. a recognizing unit is coupled to the pattern converting unit for recognizing the input voice by comparing a feature pattern of the analyzed input voice having the articulatory vector with reference feature patterns in a predetermined sequence. a storage unit is coupled to the recognizing unit for storing the reference feature patterns having the articulatory vectors created by the pattern converting unit. dated 1992-12-29" 5175798,digital artificial neuron based on a probabilistic ram,"a neuron for use in a neural processing network, comprises a memory having a plurality of storage locations at each of which a number representing a probability is stored, each of the storage locations being selectively addressable to cause the contents of the location to be read to an input of a comparator. a noise generator inputs to the comparator a random number representing noise.
at an output of the comparator an output signal appears having a first or second value depending on the values of the numbers received from the addressed storage location and the noise generator, the probability of the output signal having a given one of the first and second values being determined by the number at the addressed location. preferably the neuron receives from the environment signals representing success or failure of the network, the value of the number stored at the addressed location being changed in such a way as to increase the probability of the successful action if a success signal is received, and to decrease the probability of the unsuccessful action if a failure signal is received.",1992-12-29,"The title of the patent is digital artificial neuron based on a probabilistic ram and its abstract is a neuron for use in a neural processing network, comprises a memory having a plurality of storage locations at each of which a number representing a probability is stored, each of the storage locations being selectively addressable to cause the contents of the location to be read to an input of a comparator. a noise generator inputs to the comparator a random number representing noise. at an output of the comparator an output signal appears having a first or second value depending on the values of the numbers received from the addressed storage location and the noise generator, the probability of the output signal having a given one of the first and second values being determined by the number at the addressed location. preferably the neuron receives from the environment signals representing success or failure of the network, the value of the number stored at the addressed location being changed in such a way as to increase the probability of the successful action if a success signal is received, and to decrease the probability of the unsuccessful action if a failure signal is received. 
dated 1992-12-29" 5177746,error correction circuit using a design based on a neural network model,an error correction circuit is provided which uses nmos and pmos synapses to form neural network type responses to a coded multi-bit input. use of mos technology logic in error correction circuits allows such devices to be easily interfaced with other like technology circuits without the need to use distinct interface logic as with conventional error correction circuitry.,1993-01-05,The title of the patent is error correction circuit using a design based on a neural network model and its abstract is an error correction circuit is provided which uses nmos and pmos synapses to form neural network type responses to a coded multi-bit input. use of mos technology logic in error correction circuits allows such devices to be easily interfaced with other like technology circuits without the need to use distinct interface logic as with conventional error correction circuitry. dated 1993-01-05 5177994,odor sensing system,"an odor sensing system is comprised of a sensor cell including a plurality of quartz resonator sensors aligned therein to detect odor by variation of resonance frequencies derived from weight loading on surfaces thereof, a recognition line including a neural network which recognizes data obtained by subtraction between an output signal of the sensor as frequency variation and, a reference signal selected by one of the output signals of the sensor. the sensor cell is thermostatically regulated by circulating thermostatic water therein to maintain the temperature higher than an advance line of the system. 
a sample to be recognized is supplied to the sensor cell in a form of vapor generated by blowing a standard gas onto the surface of the sample.",1993-01-12,"The title of the patent is odor sensing system and its abstract is an odor sensing system is comprised of a sensor cell including a plurality of quartz resonator sensors aligned therein to detect odor by variation of resonance frequencies derived from weight loading on surfaces thereof, a recognition line including a neural network which recognizes data obtained by subtraction between an output signal of the sensor as frequency variation and, a reference signal selected by one of the output signals of the sensor. the sensor cell is thermostatically regulated by circulating thermostatic water therein to maintain the temperature higher than an advance line of the system. a sample to be recognized is supplied to the sensor cell in a form of vapor generated by blowing a standard gas onto the surface of the sample. dated 1993-01-12" 5179624,speech recognition apparatus using neural network and fuzzy logic,"a speech recognition apparatus has: a speech input unit for inputting a speech; a speech analysis unit for analyzing the inputted speech to output the time series of a feature vector; a candidates selection unit for inputting the time series of a feature vector from the speech analysis unit to select a plurality of candidates of recognition result from the speech categories; and a discrimination processing unit for discriminating the selected candidates to obtain a final recognition result. 
the discrimination processing unit includes three components in the form of a pair generation unit for generating all of the two combinations of the n-number of candidates selected by said candidate selection unit a pair discrimination unit for discriminating which of the candidates of the combinations is more certain for each of all .sub.n c.sub.2 -number of combinations (or pairs) on the basis of the extracted result of the acoustic feature intrinsic to each of said candidate speeches and a final decision unit for collecting all the pair discrimination results obtained from the pair discrimination unit for each of all the .sub.n c.sub.2 -number of combinations (or pairs) to decide the final result. the pair discrimination unit handles the extracted result of the acoustic feature intrinsic to each of the candidate speeches as fuzzy information and accomplishes the discrimination processing on the basis of fuzzy logic algorithms, and the final decision unit accomplishes its collections on the basis of the fuzzy logic algorithms.",1993-01-12,"The title of the patent is speech recognition apparatus using neural network and fuzzy logic and its abstract is a speech recognition apparatus has: a speech input unit for inputting a speech; a speech analysis unit for analyzing the inputted speech to output the time series of a feature vector; a candidates selection unit for inputting the time series of a feature vector from the speech analysis unit to select a plurality of candidates of recognition result from the speech categories; and a discrimination processing unit for discriminating the selected candidates to obtain a final recognition result. 
the discrimination processing unit includes three components in the form of a pair generation unit for generating all of the two combinations of the n-number of candidates selected by said candidate selection unit a pair discrimination unit for discriminating which of the candidates of the combinations is more certain for each of all .sub.n c.sub.2 -number of combinations (or pairs) on the basis of the extracted result of the acoustic feature intrinsic to each of said candidate speeches and a final decision unit for collecting all the pair discrimination results obtained from the pair discrimination unit for each of all the .sub.n c.sub.2 -number of combinations (or pairs) to decide the final result. the pair discrimination unit handles the extracted result of the acoustic feature intrinsic to each of the candidate speeches as fuzzy information and accomplishes the discrimination processing on the basis of fuzzy logic algorithms, and the final decision unit accomplishes its collections on the basis of the fuzzy logic algorithms. dated 1993-01-12" 5179631,neural network logic system,""" a novel neural network implementation for logic systems has been developed. the neural network can determine whether a particular logic system and knowledge base are self-consistent, which can be a difficult problem for more complex systems. through neural network hardware using parallel computation, valid solutions may be found more rapidly than could be done with previous, software-based implementations. 
this neural network is particularly suited for use in large, real-time problems, such as in a real-time expert system for testing the consistency of a programmable process controller, for testing the consistency of an integrated circuit design, or for testing the consistency of an """"expert system."""" this neural network may also be used as an """"inference engine,"""" i.e., to test the validity of a particular logical expression in the context of a given logic system and knowledge base, or to search for all valid solutions, or to search for valid solutions consistent with given truth values which have been """"clamped"""" as true or false. the neural network may be used with many different types of logic systems: those based on conventional """"truth table"""" logic, those based on a truth maintenance system, or many other types of logic systems. the """"justifications"""" corresponding to a particular logic system and knowledge base may be permanently hard-wired by the manufacturer, or may be supplied by the user, either reversibly or irreversibly. """,1993-01-12,"The title of the patent is neural network logic system and its abstract is "" a novel neural network implementation for logic systems has been developed. the neural network can determine whether a particular logic system and knowledge base are self-consistent, which can be a difficult problem for more complex systems. through neural network hardware using parallel computation, valid solutions may be found more rapidly than could be done with previous, software-based implementations. 
this neural network is particularly suited for use in large, real-time problems, such as in a real-time expert system for testing the consistency of a programmable process controller, for testing the consistency of an integrated circuit design, or for testing the consistency of an """"expert system."""" this neural network may also be used as an """"inference engine,"""" i.e., to test the validity of a particular logical expression in the context of a given logic system and knowledge base, or to search for all valid solutions, or to search for valid solutions consistent with given truth values which have been """"clamped"""" as true or false. the neural network may be used with many different types of logic systems: those based on conventional """"truth table"""" logic, those based on a truth maintenance system, or many other types of logic systems. the """"justifications"""" corresponding to a particular logic system and knowledge base may be permanently hard-wired by the manufacturer, or may be supplied by the user, either reversibly or irreversibly. 
"" dated 1993-01-12" 5180911,parameter measurement systems and methods having a neural network comprising parameter output means,"a system for measuring the value of a parameter, e.g., structural strain, includes an optical waveguide, a laser or equivalent light source for launching coherent light into the waveguide to propagate therein as multi modes, an array of a plurality of spaced apart photodetectors each comprising a light receptor surface and signal output, said array being arranged to have light emitted from said waveguide output portion irradiate said light receptor surfaces, an artificial neural network formed of a plurality of spaced apart neurons, connectors to impose weighted portions of signal outputs from the photodetectors upon the neurons which register the parameter value on a meter or like output device.",1993-01-19,"The title of the patent is parameter measurement systems and methods having a neural network comprising parameter output means and its abstract is a system for measuring the value of a parameter, e.g., structural strain, includes an optical waveguide, a laser or equivalent light source for launching coherent light into the waveguide to propagate therein as multi modes, an array of a plurality of spaced apart photodetectors each comprising a light receptor surface and signal output, said array being arranged to have light emitted from said waveguide output portion irradiate said light receptor surfaces, an artificial neural network formed of a plurality of spaced apart neurons, connectors to impose weighted portions of signal outputs from the photodetectors upon the neurons which register the parameter value on a meter or like output device. 
dated 1993-01-19" 5181171,adaptive network for automated first break picking of seismic refraction events and method of operating the same,"an adaptive, or neural, network and a method of operating the same is disclosed which is particularly adapted for performing first break analysis for seismic shot records. the adaptive network is first trained according to the generalized delta rule. the disclosed training method includes selection of the seismic trace with the highest error, where the backpropagation is performed according to the error of this worst trace. the learning and momentum factors in the generalized delta rule are adjusted according to the value of the worst error, so that the learning and momentum factors increase as the error decreases. the training method further includes detection of slow convergence regions, and methods for escaping such regions including restoration of previously trimmed dormant links, renormalization of the weighting factor values, and the addition of new layers to the network. the network, after the addition of a new layer, includes links between nodes which skip the hidden layer. the error value used in the backpropagation is reduced from that actually calculated, by adjusting the desired output value, in order to reduce the growth of the weighting factors. after the training of the network, data corresponding to an average of the graphical display of a portion of the shot record, including multiple traces over a period of time, is provided to the network. the time of interest of the data is incremented until such time as the network indicates that the time of interest equals the first break time. 
the analysis may be repeated for all of the traces in the shot record.",1993-01-19,"The title of the patent is adaptive network for automated first break picking of seismic refraction events and method of operating the same and its abstract is an adaptive, or neural, network and a method of operating the same is disclosed which is particularly adapted for performing first break analysis for seismic shot records. the adaptive network is first trained according to the generalized delta rule. the disclosed training method includes selection of the seismic trace with the highest error, where the backpropagation is performed according to the error of this worst trace. the learning and momentum factors in the generalized delta rule are adjusted according to the value of the worst error, so that the learning and momentum factors increase as the error decreases. the training method further includes detection of slow convergence regions, and methods for escaping such regions including restoration of previously trimmed dormant links, renormalization of the weighting factor values, and the addition of new layers to the network. the network, after the addition of a new layer, includes links between nodes which skip the hidden layer. the error value used in the backpropagation is reduced from that actually calculated, by adjusting the desired output value, in order to reduce the growth of the weighting factors. after the training of the network, data corresponding to an average of the graphical display of a portion of the shot record, including multiple traces over a period of time, is provided to the network. the time of interest of the data is incremented until such time as the network indicates that the time of interest equals the first break time. the analysis may be repeated for all of the traces in the shot record. dated 1993-01-19" 5181256,pattern recognition device using a neural network,"a pattern recognition device has a dp matching section. 
the dp matching section performs frequency expansion dp matching to a standard pattern and a characteristic pattern obtained from input voice waveform to obtain a dp score and dp path pattern. it is determined by means of a category identification neural network using the dp path pattern obtained from the dp matching section whether a category of the standard pattern and a category of the characteristic pattern are the same, and a determination result corresponding to the degree of identification is obtained. a normalized dp score, which is the dp score normalized for individual differences within a required range, is then obtained in a divider by compensating the dp score using the determination result.",1993-01-19,"The title of the patent is pattern recognition device using a neural network and its abstract is a pattern recognition device has a dp matching section. the dp matching section performs frequency expansion dp matching to a standard pattern and a characteristic pattern obtained from input voice waveform to obtain a dp score and dp path pattern. it is determined by means of a category identification neural network using the dp path pattern obtained from the dp matching section whether a category of the standard pattern and a category of the characteristic pattern are the same, and a determination result corresponding to the degree of identification is obtained. a normalized dp score, which is the dp score normalized for individual differences within a required range, is then obtained in a divider by compensating the dp score using the determination result. dated 1993-01-19" 5182794,recurrent neural networks teaching system,"a teaching method for a recurrent neural network having hidden, output and input neurons calculates weighting errors over a limited number of propagations of the network. this process permits the use of conventional teaching sets, such as are used with feedforward networks, to be used with recurrent networks. 
the teaching outputs are substituted for the computed activations of the output neurons in the forward propagation and error correction stages. back propagated error from the last propagation is assumed to be zero for the hidden neurons. a method of reducing drift of the network with respect to a modeled process is also described and a forced cycling method to eliminate the time lag between network input and output.",1993-01-26,"The title of the patent is recurrent neural networks teaching system and its abstract is a teaching method for a recurrent neural network having hidden, output and input neurons calculates weighting errors over a limited number of propagations of the network. this process permits the use of conventional teaching sets, such as are used with feedforward networks, to be used with recurrent networks. the teaching outputs are substituted for the computed activations of the output neurons in the forward propagation and error correction stages. back propagated error from the last propagation is assumed to be zero for the hidden neurons. a method of reducing drift of the network with respect to a modeled process is also described and a forced cycling method to eliminate the time lag between network input and output. dated 1993-01-26" 5184218,bandwidth compression and expansion system,a bandwidth compression and expansion system is provided in which analog data is processed in real time using a sub-sampling technique in which pixels or other data values within a sub-sampling region determine the value of a corresponding signal which also denotes trends or patterns in accordance with the other pixels or signal values within a sampling region encompassing the sub-sampling region. neural networks are used to implement the sub-sampling process both during bandwidth compression and during bandwidth expansion in which interpolation and extrapolation are employed to reverse the sub-sampling process used during compression. 
the neural network forms part of an arrangement in which analog input signals are converted to digital signals that are then stored in a random access memory which operates in conjunction with an address generator for identifying a succession of sampling and sub-sampling regions within the memory. the output of the memory is converted to an analog signal before being held in a sample and hold memory for use in the neural network.,1993-02-02,The title of the patent is bandwidth compression and expansion system and its abstract is a bandwidth compression and expansion system is provided in which analog data is processed in real time using a sub-sampling technique in which pixels or other data values within a sub-sampling region determine the value of a corresponding signal which also denotes trends or patterns in accordance with the other pixels or signal values within a sampling region encompassing the sub-sampling region. neural networks are used to implement the sub-sampling process both during bandwidth compression and during bandwidth expansion in which interpolation and extrapolation are employed to reverse the sub-sampling process used during compression. the neural network forms part of an arrangement in which analog input signals are converted to digital signals that are then stored in a random access memory which operates in conjunction with an address generator for identifying a succession of sampling and sub-sampling regions within the memory. the output of the memory is converted to an analog signal before being held in a sample and hold memory for use in the neural network. 
dated 1993-02-02 5185816,method of selecting characteristics data for a data processing system,"a method of selecting characteristics data for a data processing system from a group of input data for reducing data volume of each input data by said data processing system having a neural network structure or a structure equivalent thereto, where each input data consists of a plurality of said characteristics data. the method includes the steps of storing outputs of said input data; selecting a specific characteristics data of a pair of different input data; exchanging said characteristics data of said input data pair with each other; comparing outputs from said data processing system in response to said input data before and after the exchange of said characteristics data; and removing said characteristics data from said input data in said group, when a difference between said outputs before and after is comparatively small.",1993-02-09,"The title of the patent is method of selecting characteristics data for a data processing system and its abstract is a method of selecting characteristics data for a data processing system from a group of input data for reducing data volume of each input data by said data processing system having a neural network structure or a structure equivalent thereto, where each input data consists of a plurality of said characteristics data. the method includes the steps of storing outputs of said input data; selecting a specific characteristics data of a pair of different input data; exchanging said characteristics data of said input data pair with each other; comparing outputs from said data processing system in response to said input data before and after the exchange of said characteristics data; and removing said characteristics data from said input data in said group, when a difference between said outputs before and after is comparatively small.
dated 1993-02-09" 5185848,noise reduction system using neural network,"a noise reduction system used for transmission and/or recognition of speech includes a speech analyzer for analyzing a noisy speech input signal thereby converting the speech signal into feature vectors such as autocorrelation coefficients, and a neural network for receiving the feature vectors of the noisy speech signal as its input. the neural network extracts from a codebook an index of prototype vectors corresponding to a noise-free equivalent to the noisy speech input signal. feature vectors of speech are read out from the codebook on the basis of the index delivered as an output from the neural network, thereby causing the speech input to be reproduced on the basis of the feature vectors of speech read out from the codebook.",1993-02-09,"The title of the patent is noise reduction system using neural network and its abstract is a noise reduction system used for transmission and/or recognition of speech includes a speech analyzer for analyzing a noisy speech input signal thereby converting the speech signal into feature vectors such as autocorrelation coefficients, and a neural network for receiving the feature vectors of the noisy speech signal as its input. the neural network extracts from a codebook an index of prototype vectors corresponding to a noise-free equivalent to the noisy speech input signal. feature vectors of speech are read out from the codebook on the basis of the index delivered as an output from the neural network, thereby causing the speech input to be reproduced on the basis of the feature vectors of speech read out from the codebook. 
dated 1993-02-09" 5185850,color transformation method and apparatus for transforming physical to psychological attribute using a neural network,"to practice a method of transforming color sensation informations such that multidimensional physical informations and color sensation informations sensed by living bodies in response to the physical informations are non-linearly transformed therebetween, a multilayer feedforward type neural network is used for the purpose of accomplishing the foregoing transformation. the physical informations are provided in the form of data derived from multidimensional spectral distribution of light and the color sensation informations are provided in the form of sensitive colors each sensed by the living bodies as a psychological quantity relative to a certain color. an apparatus for carrying out the method includes an input section into which a physical quantity is inputted as an electrical signal, an information transforming section in which the inputted signal is transformed into a color sensation information representing psychological quantity of color and an output section from which the transformed color information is outputted. the information transforming section includes a multilayer feedforward type neural network.",1993-02-09,"The title of the patent is color transformation method and apparatus for transforming physical to psychological attribute using a neural network and its abstract is to practice a method of transforming color sensation informations such that multidimensional physical informations and color sensation informations sensed by living bodies in response to the physical informations are non-linearly transformed therebetween, a multilayer feedforward type neural network is used for the purpose of accomplishing the foregoing transformation. 
the physical informations are provided in the form of data derived from multidimensional spectral distribution of light and the color sensation informations are provided in the form of sensitive colors each sensed by the living bodies as a psychological quantity relative to a certain color. an apparatus for carrying out the method includes an input section into which a physical quantity is inputted as an electrical signal, an information transforming section in which the inputted signal is transformed into a color sensation information representing psychological quantity of color and an output section from which the transformed color information is outputted. the information transforming section includes a multilayer feedforward type neural network. dated 1993-02-09" 5195169,control device for controlling learning of a neural network,"a control device for controlling the learning of a neural network includes a monitor for monitoring weight values of synapse connections between units of the neural network during learning of the neural network so as to update these weight values. when one of the weight values satisfies a preset condition, the weight value is updated to a predetermined value such that configuration of the neural network is determined in an optimum manner.",1993-03-16,"The title of the patent is control device for controlling learning of a neural network and its abstract is a control device for controlling the learning of a neural network includes a monitor for monitoring weight values of synapse connections between units of the neural network during learning of the neural network so as to update these weight values. when one of the weight values satisfies a preset condition, the weight value is updated to a predetermined value such that configuration of the neural network is determined in an optimum manner. 
dated 1993-03-16" 5195170,neural-network dedicated processor for solving assignment problems,"a neural network processor for solving first-order competitive assignment problems consists of a matrix of n.times.m processing units, each of which corresponds to the pairing of a first number of elements of {r.sub.i } with a second number of elements {c.sub.j }, wherein limits of the first number are programmed in row control superneurons, and limits of the second number are programmed in column superneurons as min and max values. the cost (weight) w.sub.ij of the pairings is programmed separately into each pu. for each row and column of pus, a dedicated constraint superneuron insures that the number of active neurons within the associated row or column fall within a specified range. annealing is provided by gradually increasing the pu gain for each row and column or increasing positive feedback to each pu, the latter being effective to increase hysteresis of each pu or by combining both of these techniques.",1993-03-16,"The title of the patent is neural-network dedicated processor for solving assignment problems and its abstract is a neural network processor for solving first-order competitive assignment problems consists of a matrix of n.times.m processing units, each of which corresponds to the pairing of a first number of elements of {r.sub.i } with a second number of elements {c.sub.j }, wherein limits of the first number are programmed in row control superneurons, and limits of the second number are programmed in column superneurons as min and max values. the cost (weight) w.sub.ij of the pairings is programmed separately into each pu. for each row and column of pus, a dedicated constraint superneuron insures that the number of active neurons within the associated row or column fall within a specified range. 
annealing is provided by gradually increasing the pu gain for each row and column or increasing positive feedback to each pu, the latter being effective to increase hysteresis of each pu or by combining both of these techniques. dated 1993-03-16" 5197114,computer neural network regulatory process control system and method,"a computer neural network regulatory process control system and method allows for the elimination of a human operator from real time control of the process. the present invention operates in three modes: training, operation (prediction), and retraining. in the training mode, training input data is produced by the control adjustment made to the process by the human operator. the neural network of the present invention is trained by producing output data using input data for prediction. the output data is compared with the training input data to produce error data, which is used to adjust the weight(s) of the neural network. when the error data is less than a preselected criterion, training has been completed. in the operation mode, the neural network of the present invention provides output data based upon predictions using the input data. the output data is used to control a state of the process via an actuator. in the retraining mode, retraining data is supplied by monitoring the supplemental actions of the human operator. the retraining data is used by the neural network for adjusting the weight(s) of the neural network.",1993-03-23,"The title of the patent is computer neural network regulatory process control system and method and its abstract is a computer neural network regulatory process control system and method allows for the elimination of a human operator from real time control of the process. the present invention operates in three modes: training, operation (prediction), and retraining. in the training mode, training input data is produced by the control adjustment made to the process by the human operator. 
the neural network of the present invention is trained by producing output data using input data for prediction. the output data is compared with the training input data to produce error data, which is used to adjust the weight(s) of the neural network. when the error data is less than a preselected criterion, training has been completed. in the operation mode, the neural network of the present invention provides output data based upon predictions using the input data. the output data is used to control a state of the process via an actuator. in the retraining mode, retraining data is supplied by monitoring the supplemental actions of the human operator. the retraining data is used by the neural network for adjusting the weight(s) of the neural network. dated 1993-03-23" 5200816,method and apparatus for color processing with neural networks,""" a method and apparatus for constructing, training and utilizing an artificial neural network (also termed herein a """"neural network"""", an ann, or an nn) in order to transform a first color value in a first color coordinate system into a second color value in a second color coordinate system. """,1993-04-06,"The title of the patent is method and apparatus for color processing with neural networks and its abstract is "" a method and apparatus for constructing, training and utilizing an artificial neural network (also termed herein a """"neural network"""", an ann, or an nn) in order to transform a first color value in a first color coordinate system into a second color value in a second color coordinate system. "" dated 1993-04-06" 5200898,method of controlling motor vehicle,"a motor vehicle is controlled with a neural network which has a data learning capability. a present value of the throttle valve opening of the engine on the motor vehicle and a rate of change of the present value of the throttle valve opening are periodically supplied to the neural network. 
the neural network is controlled to learn the present value of the throttle valve opening when the rate of change of the present value of the throttle valve opening becomes zero so that a predicted value of the throttle valve opening approaches the actual value of the throttle valve opening at the time the rate of change thereof becomes zero. an operating condition of the motor vehicle is controlled based on the predicted value of the throttle valve opening, which is represented by a periodically produced output signal from the neural network.",1993-04-06,"The title of the patent is method of controlling motor vehicle and its abstract is a motor vehicle is controlled with a neural network which has a data learning capability. a present value of the throttle valve opening of the engine on the motor vehicle and a rate of change of the present value of the throttle valve opening are periodically supplied to the neural network. the neural network is controlled to learn the present value of the throttle valve opening when the rate of change of the present value of the throttle valve opening becomes zero so that a predicted value of the throttle valve opening approaches the actual value of the throttle valve opening at the time the rate of change thereof becomes zero. an operating condition of the motor vehicle is controlled based on the predicted value of the throttle valve opening, which is represented by a periodically produced output signal from the neural network. 
dated 1993-04-06" 5200908,placement optimizing method/apparatus and apparatus for designing semiconductor devices,"a method of finding the optimal placement of circuit elements is disclosed in which the optimal position of each circuit element is determined from the results of arithmetic operations performed by a processor network where a plurality of processors are interconnected so as to form a neural network, and each processor takes in its own output and the outputs of all other processors to solve a problem.",1993-04-06,"The title of the patent is placement optimizing method/apparatus and apparatus for designing semiconductor devices and its abstract is a method of finding the optimal placement of circuit elements is disclosed in which the optimal position of each circuit element is determined from the results of arithmetic operations performed by a processor network where a plurality of processors are interconnected so as to form a neural network, and each processor takes in its own output and the outputs of all other processors to solve a problem. 
dated 1993-04-06" 5201026,method of architecting multiple neural network and system therefor,"to facilitate architecting of a multiple neural network, irrespective of the quantity of cases and the complexity of case dependence relationship, sets of input instances and the desirable outputs corresponding thereto are stored; the stored sets are read in sequence to discriminate whether all variables included in the input instances of the read set are included in the outputs of any given stored set, to mark variables not included in the outputs of any sets; the sets whose input instances include only the marked variables are selected from among the read sets; unit neural networks for learning the selected sets are formed and simultaneously variables included in the outputs of the formed unit neural networks are marked; a unit neural network for learning any given set is formed; and the formed unit neural networks are connected to each other to architect a multiple neural network.",1993-04-06,"The title of the patent is method of architecting multiple neural network and system therefor and its abstract is to facilitate architecting of a multiple neural network, irrespective of the quantity of cases and the complexity of case dependence relationship, sets of input instances and the desirable outputs corresponding thereto are stored; the stored sets are read in sequence to discriminate whether all variables included in the input instances of the read set are included in the outputs of any given stored set, to mark variables not included in the outputs of any sets; the sets whose input instances include only the marked variables are selected from among the read sets; unit neural networks for learning the selected sets are formed and simultaneously variables included in the outputs of the formed unit neural networks are marked; a unit neural network for learning any given set is formed; and the formed unit neural networks are connected to each other to architect a multiple 
neural network. dated 1993-04-06" 5202956,semiconductor neural network and operating method thereof,"a semiconductor neural network includes a coupling matrix having coupling elements arranged in a matrix which couple with specific coupling strengths internal data input lines to internal data output lines. the internal data output lines are divided into groups. the neural network further comprises weighting addition circuits provided corresponding to the groups of the internal data output lines. a weighting addition circuit includes weighting elements for adding weights to signals on the internal data output lines in the corresponding group and outputting the weighted signals, and an addition circuit for outputting a total sum of the outputs of those weighting elements. the internal data output lines are arranged to form pairs and the addition circuit has a first input terminal for receiving one weighting element output of each of the pairs in common, a second input terminal for receiving the other weighting element output of each of the pairs in common, and a sense amplifier for differentially amplifying signals at the first and second input terminals. the neural network further includes a circuit for detecting a change time of an input signal, a circuit responsive to an input signal change for equalizing the first and second input terminals for a predetermined period, and a circuit for activating the sense amplifier after the equalization is completed. the information retention capability of each coupling element is set according to the weight of an associated weighting element. 
this neural network can provide multi-valued expression of coupling strength with fewer coupling elements.",1993-04-13,"The title of the patent is semiconductor neural network and operating method thereof and its abstract is a semiconductor neural network includes a coupling matrix having coupling elements arranged in a matrix which couple with specific coupling strengths internal data input lines to internal data output lines. the internal data output lines are divided into groups. the neural network further comprises weighting addition circuits provided corresponding to the groups of the internal data output lines. a weighting addition circuit includes weighting elements for adding weights to signals on the internal data output lines in the corresponding group and outputting the weighted signals, and an addition circuit for outputting a total sum of the outputs of those weighting elements. the internal data output lines are arranged to form pairs and the addition circuit has a first input terminal for receiving one weighting element output of each of the pairs in common, a second input terminal for receiving the other weighting element output of each of the pairs in common, and a sense amplifier for differentially amplifying signals at the first and second input terminals. the neural network further includes a circuit for detecting a change time of an input signal, a circuit responsive to an input signal change for equalizing the first and second input terminals for a predetermined period, and a circuit for activating the sense amplifier after the equalization is completed. the information retention capability of each coupling element is set according to the weight of an associated weighting element. this neural network can provide multi-valued expression of coupling strength with fewer coupling elements. 
dated 1993-04-13" 5203984,monitoring system for plant operation condition and its in-situ electrochemical electrode,"a plant operational status monitoring supervisory system comprising; means for extracting information directly relating to water quality of an objective portion consecutively for a period of time by means of an electrochemical water quality sensor installed in an objective portion to monitor in-situ in a plant; means for evaluating water quality based on thus extracted information; means for comparing an obtained water quality evaluation result with a reference value for a predetermined plant operation procedure; and means for displaying or storing necessary portion out of said comparison results; is disclosed. an electrochemical reference electrode used in this system being provided with an electrolyte layer containing ion of the electrode member; a porous ceramic layer surrounding the same without permeating liquid; and an electrode member electrochemically contacting with said electrolyte layer; and a terminal electrically contacting with said electrode member; and further having a long life in high temperature water, various status of high temperature water in objective portions and that of nearby constituent members in a plant are possible to be monitored online by means of this reference electrode. 
further, because monitored data are processed by means of a neural network, the higher precision level of monitoring has been achieved.",1993-04-20,"The title of the patent is monitoring system for plant operation condition and its in-situ electrochemical electrode and its abstract is a plant operational status monitoring supervisory system comprising; means for extracting information directly relating to water quality of an objective portion consecutively for a period of time by means of an electrochemical water quality sensor installed in an objective portion to monitor in-situ in a plant; means for evaluating water quality based on thus extracted information; means for comparing an obtained water quality evaluation result with a reference value for a predetermined plant operation procedure; and means for displaying or storing necessary portion out of said comparison results; is disclosed. an electrochemical reference electrode used in this system being provided with an electrolyte layer containing ion of the electrode member; a porous ceramic layer surrounding the same without permeating liquid; and an electrode member electrochemically contacting with said electrolyte layer; and a terminal electrically contacting with said electrode member; and further having a long life in high temperature water, various status of high temperature water in objective portions and that of nearby constituent members in a plant are possible to be monitored online by means of this reference electrode. further, because monitored data are processed by means of a neural network, the higher precision level of monitoring has been achieved. 
dated 1993-04-20" 5204872,control system for electric arc furnace,"an improved arc furnace regulator employs neural circuits connected in a multi-layer network configuration with various weighted relationships between the successive layers which are automatically changed over time as a function of an error signal by means of the back-propagation method so that the regulator gradually improves its control algorithm as a result of accumulated experience. the network is implemented in software which can be developed and run on a pc with extra co-computing capability for greater execution speed. a second trainable neural network which emulates the arc furnace is used to develop the error signal, and is trained in mutually exclusive time periods with the training of the regulator network.",1993-04-20,"The title of the patent is control system for electric arc furnace and its abstract is an improved arc furnace regulator employs neural circuits connected in a multi-layer network configuration with various weighted relationships between the successive layers which are automatically changed over time as a function of an error signal by means of the back-propagation method so that the regulator gradually improves its control algorithm as a result of accumulated experience. the network is implemented in software which can be developed and run on a pc with extra co-computing capability for greater execution speed. a second trainable neural network which emulates the arc furnace is used to develop the error signal, and is trained in mutually exclusive time periods with the training of the regulator network. dated 1993-04-20" 5204938,method of implementing a neural network on a digital computer,"a digital computer architecture specifically tailored for implementing a neural network. 
several simultaneously operable processors (10) each have their own local memory (17) for storing weight and connectivity information corresponding to nodes of the neural network whose output values will be calculated by said processor (10). a global memory (55,56) is coupled to each of the processors (10) via a common data bus (30). output values corresponding to a first layer of the neural network are broadcast from the global memory (55,56) into each of the processors (10). the processors (10) calculate output values for a set of nodes of the next higher-ordered layer of the neural network. said newly-calculated output values are broadcast from each processor (10) to the global memory (55,56) and to all the other processors (10), which use the output values as a head start in calculating a new set of output values corresponding to the next layer of the neural network.",1993-04-20,"The title of the patent is method of implementing a neural network on a digital computer and its abstract is a digital computer architecture specifically tailored for implementing a neural network. several simultaneously operable processors (10) each have their own local memory (17) for storing weight and connectivity information corresponding to nodes of the neural network whose output values will be calculated by said processor (10). a global memory (55,56) is coupled to each of the processors (10) via a common data bus (30). output values corresponding to a first layer of the neural network are broadcast from the global memory (55,56) into each of the processors (10). the processors (10) calculate output values for a set of nodes of the next higher-ordered layer of the neural network. said newly-calculated output values are broadcast from each processor (10) to the global memory (55,56) and to all the other processors (10), which use the output values as a head start in calculating a new set of output values corresponding to the next layer of the neural network. 
dated 1993-04-20" 5208900,digital neural network computation ring,"an artificial neural network is provided using a digital architecture having feedforward and feedback processors interconnected with a digital computation ring or data bus to handle complex neural feedback arrangements. the feedforward processor receives a sequence of digital input signals and multiplies each by a weight in a predetermined manner and stores the results in an accumulator. the accumulated values may be shifted around the computation ring and read from a tap point thereof, or reprocessed through the feedback processor with predetermined scaling factors and combined with the feedforward outcomes for providing various types of neural network feedback computations. alternately, the feedforward outcomes may be placed sequentially on a data bus for feedback processing through the network. the digital architecture includes a predetermined number of data input terminals for the digital input signal irrespective of the number of synapses per neuron and the number of neurons per neural network, and allows the synapses to share a common multiplier and thereby reduce the physical area of the neural network. a learning circuit may be utilized in the feedforward processor for real-time updating the weights thereof to reflect changes in the environment.",1993-05-04,"The title of the patent is digital neural network computation ring and its abstract is an artificial neural network is provided using a digital architecture having feedforward and feedback processors interconnected with a digital computation ring or data bus to handle complex neural feedback arrangements. the feedforward processor receives a sequence of digital input signals and multiplies each by a weight in a predetermined manner and stores the results in an accumulator. 
the accumulated values may be shifted around the computation ring and read from a tap point thereof, or reprocessed through the feedback processor with predetermined scaling factors and combined with the feedforward outcomes for providing various types of neural network feedback computations. alternately, the feedforward outcomes may be placed sequentially on a data bus for feedback processing through the network. the digital architecture includes a predetermined number of data input terminals for the digital input signal irrespective of the number of synapses per neuron and the number of neurons per neural network, and allows the synapses to share a common multiplier and thereby reduce the physical area of the neural network. a learning circuit may be utilized in the feedforward processor for real-time updating the weights thereof to reflect changes in the environment. dated 1993-05-04" 5210798,vector neural network for low signal-to-noise ratio detection of a target,"a vector neural network (vnn) of interconnected neurons is provided in transition mappings of potential targets wherein the threshold (energy) of a single frame does not provide adequate information (energy) to declare a target position. the vnn enhances the signal-to-noise ratio (snr) by integrating target energy over multiple frames including the steps of postulating massive numbers of target tracks (the hypotheses), propagating these target tracks over multiple frames, and accommodating different velocity targets by pixel quantization. the vnn then defers thresholding to subsequent target stages when higher snr's are prevalent so that the loss of target information is minimized, and the vnn can declare both target location and velocity. 
the vnn can further include target maneuver detection by a process of energy balancing hypotheses.",1993-05-11,"The title of the patent is vector neural network for low signal-to-noise ratio detection of a target and its abstract is a vector neural network (vnn) of interconnected neurons is provided in transition mappings of potential targets wherein the threshold (energy) of a single frame does not provide adequate information (energy) to declare a target position. the vnn enhances the signal-to-noise ratio (snr) by integrating target energy over multiple frames including the steps of postulating massive numbers of target tracks (the hypotheses), propagating these target tracks over multiple frames, and accommodating different velocity targets by pixel quantization. the vnn then defers thresholding to subsequent target stages when higher snr's are prevalent so that the loss of target information is minimized, and the vnn can declare both target location and velocity. the vnn can further include target maneuver detection by a process of energy balancing hypotheses. dated 1993-05-11" 5212741,preprocessing of dot-matrix/ink-jet printed text for optical character recognition,"method and apparatus are disclosed for processing image data of dot-matrix/ink-jet printed text to perform optical character recognition (ocr) of such image data. in the method and apparatus, the image data is viewed for detecting if dot-matrix/ink-jet printed text is present. any detected dot-matrix/ink-jet produced text is then pre-processed by determining the image characteristic thereof by forming a histogram of pixel density values in the image data. a 2-d spatial averaging operation as a second pre-processing step smooths the dots of the characters into strokes and reduces the dynamic range of the image data. 
the resultant spatially averaged image data is then contrast stretched in a third pre-processing step to darken dark regions of the image data and lighten light regions of the image data. edge enhancement is then applied to the contrast stretched image data in a fourth pre-processing step to bring out higher frequency line details. the edge enhanced image data is then binarized and applied to a dot-matrix/ink jet neural network classifier for recognizing characters in the binarized image data from a predetermined set of symbols prior to ocr.",1993-05-18,"The title of the patent is preprocessing of dot-matrix/ink-jet printed text for optical character recognition and its abstract is method and apparatus are disclosed for processing image data of dot-matrix/ink-jet printed text to perform optical character recognition (ocr) of such image data. in the method and apparatus, the image data is viewed for detecting if dot-matrix/ink-jet printed text is present. any detected dot-matrix/ink-jet produced text is then pre-processed by determining the image characteristic thereof by forming a histogram of pixel density values in the image data. a 2-d spatial averaging operation as a second pre-processing step smooths the dots of the characters into strokes and reduces the dynamic range of the image data. the resultant spatially averaged image data is then contrast stretched in a third pre-processing step to darken dark regions of the image data and lighten light regions of the image data. edge enhancement is then applied to the contrast stretched image data in a fourth pre-processing step to bring out higher frequency line details. the edge enhanced image data is then binarized and applied to a dot-matrix/ink jet neural network classifier for recognizing characters in the binarized image data from a predetermined set of symbols prior to ocr. 
dated 1993-05-18" 5212765,on-line training neural network system for process control,"an on-line training neural network for process control system and method trains by retrieving training sets from the stream of process data. the neural network detects the availability of new training data, and constructs a training set by retrieving the corresponding input data. the neural network is trained using the training set. over time, many training sets are presented to the neural network. when multiple presentations are needed to effectively train, a buffer of training sets is filled and updated as new training data becomes available. the size of the buffer is selected in accordance with the training needs of the neural network. once the buffer is full, a new training set bumps the oldest training set off the top of the buffer stack. the training sets in the buffer stack can be presented one or more times each time a new training set is constructed. an historical database of timestamped data can be used to construct training sets when training input data has a time delay from sample time to availability for the neural network. the timestamps of the training input data are used to select the appropriate timestamp at which input data is retrieved for use in the training set. using the historical database, the neural network can be trained retrospectively by searching the historical database and constructing training sets based on past data.",1993-05-18,"The title of the patent is on-line training neural network system for process control and its abstract is an on-line training neural network for process control system and method trains by retrieving training sets from the stream of process data. the neural network detects the availability of new training data, and constructs a training set by retrieving the corresponding input data. the neural network is trained using the training set. over time, many training sets are presented to the neural network. 
when multiple presentations are needed to effectively train, a buffer of training sets is filled and updated as new training data becomes available. the size of the buffer is selected in accordance with the training needs of the neural network. once the buffer is full, a new training set bumps the oldest training set off the top of the buffer stack. the training sets in the buffer stack can be presented one or more times each time a new training set is constructed. an historical database of timestamped data can be used to construct training sets when training input data has a time delay from sample time to availability for the neural network. the timestamps of the training input data are used to select the appropriate timestamp at which input data is retrieved for use in the training set. using the historical database, the neural network can be trained retrospectively by searching the historical database and constructing training sets based on past data. dated 1993-05-18" 5212766,neural network representing apparatus having self-organizing function,a neural network representing apparatus includes a plurality of neuron expressing units and a plurality of synapse load expressing units. each of the synapse load expressing units couples two neuron expressing units through a synapse load which is specific thereto. the synapse load of the synapse load expressing unit is adjusted in accordance with a prescribed learning rule in learning of the neural network representing apparatus. this learning rule includes a learning coefficient which defines the amount of a synapse load to be changed in a single learning cycle. this learning coefficient is set according to a spatial or physical distance between two neurons expressed by two neuron expressing units which are coupled by a synapse load expressing unit. 
the learning coefficient is provided by a monotone decreasing function of the distance between the two neurons.,1993-05-18,The title of the patent is neural network representing apparatus having self-organizing function and its abstract is a neural network representing apparatus includes a plurality of neuron expressing units and a plurality of synapse load expressing units. each of the synapse load expressing units couples two neuron expressing units through a synapse load which is specific thereto. the synapse load of the synapse load expressing unit is adjusted in accordance with a prescribed learning rule in learning of the neural network representing apparatus. this learning rule includes a learning coefficient which defines the amount of a synapse load to be changed in a single learning cycle. this learning coefficient is set according to a spatial or physical distance between two neurons expressed by two neuron expressing units which are coupled by a synapse load expressing unit. the learning coefficient is provided by a monotone decreasing function of the distance between the two neurons. dated 1993-05-18 5212767,multi-layer network and learning method therefor,"a multi-layer neural network comprising an input layer, a hidden layer and an output layer and a learning method for such a network are disclosed. a processor belonging to the hidden layer stores both the factors of multiplication or weights of link for a successive layer nearer to the input layer and the factors of multiplication or weights of link for a preceding layer nearer to the output layer. namely, the weight for a certain connection is doubly stored in processors which are at opposite ends of that connection. upon forward calculation, the access to the weights for the successive layer among the weights stored in the processors of the hidden layer can be made by the processors independently from each other. 
similarly, upon backward calculation, the access to weights for the preceding layer can be made by the processors independently from each other.",1993-05-18,"The title of the patent is multi-layer network and learning method therefor and its abstract is a multi-layer neural network comprising an input layer, a hidden layer and an output layer and a learning method for such a network are disclosed. a processor belonging to the hidden layer stores both the factors of multiplication or weights of link for a successive layer nearer to the input layer and the factors of multiplication or weights of link for a preceding layer nearer to the output layer. namely, the weight for a certain connection is doubly stored in processors which are at opposite ends of that connection. upon forward calculation, the access to the weights for the successive layer among the weights stored in the processors of the hidden layer can be made by the processors independently from each other. similarly, upon backward calculation, the access to weights for the preceding layer can be made by the processors independently from each other. dated 1993-05-18" 5214715,predictive self-organizing neural network,"an a pattern recognition subsystem responds to an a feature representation input to select an a-category-representation and predict a b-category-representation and its associated b feature representation input. during learning trials, a predicted b-category-representation is compared to that obtained through a b pattern recognition subsystem. with mismatch, a vigilance parameter of the a-pattern-recognition subsystem is increased to cause reset of the first-category-representation selection. 
inputs to the pattern recognition subsystems may be preprocessed to complement code the inputs.",1993-05-25,"The title of the patent is predictive self-organizing neural network and its abstract is an a pattern recognition subsystem responds to an a feature representation input to select an a-category-representation and predict a b-category-representation and its associated b feature representation input. during learning trials, a predicted b-category-representation is compared to that obtained through a b pattern recognition subsystem. with mismatch, a vigilance parameter of the a-pattern-recognition subsystem is increased to cause reset of the first-category-representation selection. inputs to the pattern recognition subsystems may be preprocessed to complement code the inputs. dated 1993-05-25" 5214744,method and apparatus for automatically identifying targets in sonar images,"a method and apparatus for automatically identifying targets in sonar images utilizes three processing systems which preferably operate simultaneously. after the image has been filtered and fourier transformed, a highlight-shadow detector classifies portions of the image as a highlight, a shadow or background according to greyness levels of pixels in such portions. 
a statistical cuer selects those portions which have been classified as a highlight or a shadow. a neural network then classifies the sets of highlight and shadow reports as targets or background. dated 1993-05-25" 5214746,method and apparatus for training a neural network using evolutionary programming,"a method and apparatus for training neural networks using evolutionary programming. a network is adjusted to operate in a weighted configuration defined by a set of weight values and a plurality of training patterns are input to the network to generate evaluations of the training patterns as network outputs. each evaluation is compared to a desired output to obtain a corresponding error. from all of the errors, an overall error value corresponding to the set of weight values is determined. the above steps are repeated with different weighted configurations to obtain a plurality of overall error values. then, for each set of weight values, a score is determined by selecting error comparison values from a predetermined variable probability distribution and comparing them to the corresponding overall error value. a predetermined number of the sets of weight values determined to have the best scores are selected and copies are made. the copies are mutated by adding random numbers to their weights and the above steps are repeated with the best sets and the mutated copies defining the weighted configurations. this procedure is repeated until the overall error values diminish to below an acceptable threshold. 
the random numbers added to the weight values of copies are obtained from a continuous random distribution of numbers having zero mean and variance determined such that it would be expected to converge to zero as the different sets of weight values in successive iterations converge toward sets of weight values yielding the desired neural network performance.",1993-05-25,"The title of the patent is method and apparatus for training a neural network using evolutionary programming and its abstract is a method and apparatus for training neural networks using evolutionary programming. a network is adjusted to operate in a weighted configuration defined by a set of weight values and a plurality of training patterns are input to the network to generate evaluations of the training patterns as network outputs. each evaluation is compared to a desired output to obtain a corresponding error. from all of the errors, an overall error value corresponding to the set of weight values is determined. the above steps are repeated with different weighted configurations to obtain a plurality of overall error values. then, for each set of weight values, a score is determined by selecting error comparison values from a predetermined variable probability distribution and comparing them to the corresponding overall error value. a predetermined number of the sets of weight values determined to have the best scores are selected and copies are made. the copies are mutated by adding random numbers to their weights and the above steps are repeated with the best sets and the mutated copies defining the weighted configurations. this procedure is repeated until the overall error values diminish to below an acceptable threshold. 
the random numbers added to the weight values of copies are obtained from a continuous random distribution of numbers having zero mean and variance determined such that it would be expected to converge to zero as the different sets of weight values in successive iterations converge toward sets of weight values yielding the desired neural network performance. dated 1993-05-25" 5214747,segmented neural network with daisy chain control,"the present invention is a direct digitally implemented network system in which neural nodes 24, 26 and 28 which output to the same destination node 22 in the network share the same channel 30. if a set of nodes does not output any data to any node to which a second set of nodes outputs data (the two sets of nodes do not overlap or intersect), the two sets of nodes are independent and do not share a channel and have separate channels 120 and 122. the network is configured as parallel operating non-intersecting segments or independent sets where each segment has a segment communication channel or bus 30. each node in the independent set or segment is sequentially activated to produce an output by a daisy chain control signal. the outputs are thereby time division multiplexed over the channel 30 to the destination node 22. the nodes are implemented on integrated circuits 158 with multiple nodes per circuit. the outputs of the nodes on the circuits in a segment are connected to the segment channel. each node includes a memory array 136 that stores the weights applied to each input via a multiplier 152. the multiplied inputs are accumulated and applied to a lookup table 132 that performs any threshold comparison operation. 
the output of the lookup table 134 is placed on a common bus serving as the channel for the independent set of nodes by a tristate driver 44 controlled by the daisy chain control signal.",1993-05-25,"The title of the patent is segmented neural network with daisy chain control and its abstract is the present invention is a direct digitally implemented network system in which neural nodes 24, 26 and 28 which output to the same destination node 22 in the network share the same channel 30. if a set of nodes does not output any data to any node to which a second set of nodes outputs data (the two sets of nodes do not overlap or intersect), the two sets of nodes are independent and do not share a channel and have separate channels 120 and 122. the network is configured as parallel operating non-intersecting segments or independent sets where each segment has a segment communication channel or bus 30. each node in the independent set or segment is sequentially activated to produce an output by a daisy chain control signal. the outputs are thereby time division multiplexed over the channel 30 to the destination node 22. the nodes are implemented on integrated circuits 158 with multiple nodes per circuit. the outputs of the nodes on the circuits in a segment are connected to the segment channel. each node includes a memory array 136 that stores the weights applied to each input via a multiplier 152. the multiplied inputs are accumulated and applied to a lookup table 132 that performs any threshold comparison operation. the output of the lookup table 134 is placed on a common bus serving as the channel for the independent set of nodes by a tristate driver 44 controlled by the daisy chain control signal. 
dated 1993-05-25" 5216463,electrophotographic process control device using a neural network to control an amount of exposure,"an electrophotographic process control device capable of controlling the supply of a toner in such a manner as to stabilize an image against changes in the characteristics of a photoconductive element and in toner density. at the learning stage of a neural network, data from sensors are applied to the input layer of the network while a latent image gamma characteristic indicative of a relation between the amount of exposure and the potential of an image area is used as learning data to be given via the output layer of the network. at a control stage, the data from the sensors are applied to the input layer of the network, as at the learning stage, and the amount of exposure is so controlled as to set up a desired potential in an image area on the basis of a latent image gamma characteristic obtainable from the output layer of the network.",1993-06-01,"The title of the patent is electrophotographic process control device using a neural network to control an amount of exposure and its abstract is an electrophotographic process control device capable of controlling the supply of a toner in such a manner as to stabilize an image against changes in the characteristics of a photoconductive element and in toner density. at the learning stage of a neural network, data from sensors are applied to the input layer of the network while a latent image gamma characteristic indicative of a relation between the amount of exposure and the potential of an image area is used as learning data to be given via the output layer of the network. at a control stage, the data from the sensors are applied to the input layer of the network, as at the learning stage, and the amount of exposure is so controlled as to set up a desired potential in an image area on the basis of a latent image gamma characteristic obtainable from the output layer of the network. 
dated 1993-06-01" 5216746,error absorbing system in a neuron computer,"an error absorbing system for absorbing errors through a weight correction is provided in a neuron computer for receiving an analog input signal through a first analog bus in a time divisional manner, performing a sum-of-the-products operation, and outputting an analog output signal to a second analog bus. the error absorbing system includes a dummy node for producing a fixed voltage to an analog bus in a test mode. the dummy node is connected to the analog bus of the neural network. an error measuring unit compulsorily inputs 0 volts to the first analog bus through the dummy node in a first state of a test mode and detects an offset voltage produced in an analog neuron processor through the second analog bus. a weight correcting unit, in a second state of the test mode, determines a temporary weight between the dummy node and the neuron processor. the temporary weight is multiplied by the fixed voltage produced by the dummy node, based on an offset voltage of respective neuron processors. the weight correcting unit calculates a correct weight using a gain based on the detection output voltage output from the second analog bus. a weight memory stores the weight corrected by the weight correcting unit.",1993-06-01,"The title of the patent is error absorbing system in a neuron computer and its abstract is an error absorbing system for absorbing errors through a weight correction is provided in a neuron computer for receiving an analog input signal through a first analog bus in a time divisional manner, performing a sum-of-the-products operation, and outputting an analog output signal to a second analog bus. the error absorbing system includes a dummy node for producing a fixed voltage to an analog bus in a test mode. the dummy node is connected to the analog bus of the neural network. 
an error measuring unit compulsorily inputs 0 volts to the first analog bus through the dummy node in a first state of a test mode and detects an offset voltage produced in an analog neuron processor through the second analog bus. a weight correcting unit, in a second state of the test mode, determines a temporary weight between the dummy node and the neuron processor. the temporary weight is multiplied by the fixed voltage produced by the dummy node, based on an offset voltage of respective neuron processors. the weight correcting unit calculates a correct weight using a gain based on the detection output voltage output from the second analog bus. a weight memory stores the weight corrected by the weight correcting unit. dated 1993-06-01" 5216750,computation system and method using hamming distance,preferred embodiments include systems with neural network processors (58) having input encoders (56) that encode integers as binary vectors so that close integers encode as close binary vectors by requiring adjacent integers have encoded binary vectors that differ in a fixed fraction of their bits.,1993-06-01,The title of the patent is computation system and method using hamming distance and its abstract is preferred embodiments include systems with neural network processors (58) having input encoders (56) that encode integers as binary vectors so that close integers encode as close binary vectors by requiring adjacent integers have encoded binary vectors that differ in a fixed fraction of their bits. dated 1993-06-01 5216751,digital processing element in an artificial neural network,"an artificial neural network is provided using a digital architecture having feedforward and feedback processors interconnected with a digital computation ring or data bus to handle complex neural feedback arrangements. the feedforward processor receives a sequence of digital input signals and multiplies each by a weight in a predetermined manner and stores the results in an accumulator. 
the accumulated values may be shifted around the computation ring and read from a tap point thereof, or reprocessed through the feedback processor with predetermined scaling factors and combined with the feedforward outcomes for providing various types of neural network feedback computations. alternately, the feedforward outcomes may be placed sequentially on a data bus for feedback processing through the network. the digital architecture includes a predetermined number of data input terminals for the digital input signal irrespective of the number of synapses per neuron and the number of neurons per neural network, and allows the synapses to share a common multiplier and thereby reduce the physical area of the neural network. a learning circuit may be utilized in the feedforward processor for real-time updating the weights thereof to reflect changes in the environment.",1993-06-01,"The title of the patent is digital processing element in an artificial neural network and its abstract is an artificial neural network is provided using a digital architecture having feedforward and feedback processors interconnected with a digital computation ring or data bus to handle complex neural feedback arrangements. the feedforward processor receives a sequence of digital input signals and multiplies each by a weight in a predetermined manner and stores the results in an accumulator. the accumulated values may be shifted around the computation ring and read from a tap point thereof, or reprocessed through the feedback processor with predetermined scaling factors and combined with the feedforward outcomes for providing various types of neural network feedback computations. alternately, the feedforward outcomes may be placed sequentially on a data bus for feedback processing through the network. 
the digital architecture includes a predetermined number of data input terminals for the digital input signal irrespective of the number of synapses per neuron and the number of neurons per neural network, and allows the synapses to share a common multiplier and thereby reduce the physical area of the neural network. a learning circuit may be utilized in the feedforward processor for real-time updating the weights thereof to reflect changes in the environment. dated 1993-06-01" 5216752,interspike interval decoding neural network,a multi-layered neural network is disclosed that converts an incoming temporally coded spike train into a spatially distributed topographical map from which interspike-interval and bandwidth information may be extracted. this neural network may be used to decode multiplexed pulse-coded signals embedded serially in an incoming spike train into parallel distributed topographically mapped channels. a signal processing and code conversion algorithm not requiring learning is provided.,1993-06-01,The title of the patent is interspike interval decoding neural network and its abstract is a multi-layered neural network is disclosed that converts an incoming temporally coded spike train into a spatially distributed topographical map from which interspike-interval and bandwidth information may be extracted. this neural network may be used to decode multiplexed pulse-coded signals embedded serially in an incoming spike train into parallel distributed topographically mapped channels. a signal processing and code conversion algorithm not requiring learning is provided. dated 1993-06-01 5218245,programmable neural logic device,"a programmable logic cell, compatible with lssd (level sensitive scan design) technique, is described whose internal logic function can be initially loaded from an eprom or external processor. 
the output or contents of one cell can be connected to another cell to alter the logic operation of the second cell even while this second cell is in operation. the cells can be connected together to form a neural network.",1993-06-08,"The title of the patent is programmable neural logic device and its abstract is a programmable logic cell, compatible with lssd (level sensitive scan design) technique, is described whose internal logic function can be initially loaded from an eprom or external processor. the output or contents of one cell can be connected to another cell to alter the logic operation of the second cell even while this second cell is in operation. the cells can be connected together to form a neural network. dated 1993-06-08" 5218440,switched resistive neural network for sensor fusion,"an electronic image processing system uses data provided by one or more sensors to perform cooperative computations and improve image recognition performance. a smoothing resistive network, which may comprise an integrated circuit chip, has switching elements connected to each node. the system uses a first sensory output comprising primitives, such as discontinuities or object boundaries, detected by at least a first sensor to define a region for smoothing of a second sensory output comprising at least a second, distinct output of the first sensor or a distinct output of at least a second sensor. a bit pattern for controlling the switches is generated from the detected image discontinuities in the first sensory output. the second sensory output is applied to the resistive network for data smoothing. the switches turned off by the data from the first sensory output define regional boundaries for smoothing of the data provided by the second sensory output. 
smoothing operations based on this sensor fusion can proceed without spreading object characteristics beyond the object boundaries.",1993-06-08,"The title of the patent is switched resistive neural network for sensor fusion and its abstract is an electronic image processing system uses data provided by one or more sensors to perform cooperative computations and improve image recognition performance. a smoothing resistive network, which may comprise an integrated circuit chip, has switching elements connected to each node. the system uses a first sensory output comprising primitives, such as discontinuities or object boundaries, detected by at least a first sensor to define a region for smoothing of a second sensory output comprising at least a second, distinct output of the first sensor or a distinct output of at least a second sensor. a bit pattern for controlling the switches is generated from the detected image discontinuities in the first sensory output. the second sensory output is applied to the resistive network for data smoothing. the switches turned off by the data from the first sensory output define regional boundaries for smoothing of the data provided by the second sensory output. smoothing operations based on this sensor fusion can proceed without spreading object characteristics beyond the object boundaries. dated 1993-06-08" 5218529,neural network system and methods for analysis of organic materials and structures using spectral data,"apparatus and processes for recognizing and identifying materials. characteristic spectra are obtained for the materials via spectroscopy techniques including nuclear magnetic resonance spectroscopy, infrared absorption analysis, x-ray analysis, mass spectroscopy and gas chromatography. desired portions of the spectra may be selected and then placed in proper form and format for presentation to a number of input layer neurons in an offline neural network. 
the network is first trained according to a predetermined training process; it may then be employed to identify particular materials. such apparatus and processes are particularly useful for recognizing and identifying organic compounds such as complex carbohydrates, whose spectra conventionally require a high level of training and many hours of hard work to identify, and are frequently indistinguishable from one another by human interpretation.",1993-06-08,"The title of the patent is neural network system and methods for analysis of organic materials and structures using spectral data and its abstract is apparatus and processes for recognizing and identifying materials. characteristic spectra are obtained for the materials via spectroscopy techniques including nuclear magnetic resonance spectroscopy, infrared absorption analysis, x-ray analysis, mass spectroscopy and gas chromatography. desired portions of the spectra may be selected and then placed in proper form and format for presentation to a number of input layer neurons in an offline neural network. the network is first trained according to a predetermined training process; it may then be employed to identify particular materials. such apparatus and processes are particularly useful for recognizing and identifying organic compounds such as complex carbohydrates, whose spectra conventionally require a high level of training and many hours of hard work to identify, and are frequently indistinguishable from one another by human interpretation. dated 1993-06-08" 5218646,"classification procedure implemented in a hierarchical neural network, and hierarchical neural network","classification procedure implemented in a tree-like neural network which, in the course of learning steps, determines with the aid of a tree-like structure the number of neurons and their synaptic coefficients required for the processing of problems of classification of multi-class examples. 
each neuron tends to distinguish, from the examples, two groups of examples approximating as well as possible to a division into two predetermined groups of classes. this division can be obtained through a principal component analysis of the distribution of examples. the neural network comprises a directory of addresses of successor neurons which is loaded in learning mode then read in exploitation mode. a memory stores example classes associated with the ends of the branches of the tree.",1993-06-08,"The title of the patent is classification procedure implemented in a hierarchical neural network, and hierarchical neural network and its abstract is classification procedure implemented in a tree-like neural network which, in the course of learning steps, determines with the aid of a tree-like structure the number of neurons and their synaptic coefficients required for the processing of problems of classification of multi-class examples. each neuron tends to distinguish, from the examples, two groups of examples approximating as well as possible to a division into two predetermined groups of classes. this division can be obtained through a principal component analysis of the distribution of examples. the neural network comprises a directory of addresses of successor neurons which is loaded in learning mode then read in exploitation mode. a memory stores example classes associated with the ends of the branches of the tree. dated 1993-06-08" 5220202,memory device and memory apparatus using the same suitable for neural network,"a memory device includes a nonlinear electric conductivity element, a charge accumulation element, and a switching element. the nonlinear electric conductivity element has an insulating layer having opposite surfaces, and first and second conductive layers respectively formed on the opposite surfaces of the insulating layer. 
the nonlinear electric conductivity element receives an external write signal applied to one of the first and second conductive layers, and outputs a signal having nonlinear electric conductivity characteristics from the other of the first and second conductive layers. the charge accumulation element has charge accumulation characteristics and is connected to receive and store the signal output from the other of the first and second conductive layers. the switching element is on/off-controlled upon reception of the signal charge stored in the charge accumulation element. the switching element receives an external read voltage to read out the signal charge stored in the charge accumulation element as storage data. a memory apparatus includes a plurality of memory devices each having the nonlinear electric conductivity element, the charge accumulation element, and the switching element. the plurality of memory devices are connected in a matrix form such that the switching elements in at least two memory devices can commonly receive the read voltage and can commonly read out the storage data.",1993-06-15,"The title of the patent is memory device and memory apparatus using the same suitable for neural network and its abstract is a memory device includes a nonlinear electric conductivity element, a charge accumulation element, and a switching element. the nonlinear electric conductivity element has an insulating layer having opposite surfaces, and first and second conductive layers respectively formed on the opposite surfaces of the insulating layer. the nonlinear electric conductivity element receives an external write signal applied to one of the first and second conductive layers, and outputs a signal having nonlinear electric conductivity characteristics from the other of the first and second conductive layers. 
the charge accumulation element has charge accumulation characteristics and is connected to receive and store the signal output from the other of the first and second conductive layers. the switching element is on/off-controlled upon reception of the signal charge stored in the charge accumulation element. the switching element receives an external read voltage to read out the signal charge stored in the charge accumulation element as storage data. a memory apparatus includes a plurality of memory devices each having the nonlinear electric conductivity element, the charge accumulation element, and the switching element. the plurality of memory devices are connected in a matrix form such that the switching elements in at least two memory devices can commonly receive the read voltage and can commonly read out the storage data. dated 1993-06-15" 5220373,electrophotographic process control device using a neural network for estimating states of the device,"an electrophotographic process control device for an electrophotographic image forming apparatus. a neural network is incorporated in the control device for estimating the state of the image forming unit. parameters of the kind which should not be frequently measured, e.g., the surface potential of a photoconductive element and the amount of toner deposition thereon and parameters which are not easy to measure are determined by inference so as to control each section of the apparatus in an optimum way.",1993-06-15,"The title of the patent is electrophotographic process control device using a neural network for estimating states of the device and its abstract is an electrophotographic process control device for an electrophotographic image forming apparatus. a neural network is incorporated in the control device for estimating the state of the image forming unit. 
parameters of the kind which should not be frequently measured, e.g., the surface potential of a photoconductive element and the amount of toner deposition thereon and parameters which are not easy to measure are determined by inference so as to control each section of the apparatus in an optimum way. dated 1993-06-15" 5220618,classification method implemented in a layered neural network for multiclass classification and layered neural network,"classification method implemented in a layered neural network, comprising learning steps during which at least one layer is constructed by the addition of the successive neurons necessary for operating, by successive dichotomies, a classification of examples distributed over classes. in order to create at least one layer starting with a group of examples distributed over more than two classes, each successive neuron tends to distinguish its input data according to two predetermined sub-groups of classes peculiar to the said neuron according to a principal components analysis of the distribution of the said input data subjected to the learning of the neuron of the layer in question.",1993-06-15,"The title of the patent is classification method implemented in a layered neural network for multiclass classification and layered neural network and its abstract is classification method implemented in a layered neural network, comprising learning steps during which at least one layer is constructed by the addition of the successive neurons necessary for operating, by successive dichotomies, a classification of examples distributed over classes. 
in order to create at least one layer starting with a group of examples distributed over more than two classes, each successive neuron tends to distinguish its input data according to two predetermined sub-groups of classes peculiar to the said neuron according to a principal components analysis of the distribution of the said input data subjected to the learning of the neuron of the layer in question. dated 1993-06-15" 5220640,neural net architecture for rate-varying inputs,"a neural net architecture provides for the recognition of an input signal which is a rate variant of a learned signal pattern, reducing the neural net training requirements. the duration of a digital sampling of the input signal is scaled by a time-scaling network, creating a multiplicity of scaled signals which are then compared to memorized signal patterns contained in a self-organizing feature map. the feature map outputs values which indicate how well the scaled input signals match various learned signal patterns. a comparator determines which one of the values is greatest, thus indicating a best match between the input signal and one of the learned signal patterns.",1993-06-15,"The title of the patent is neural net architecture for rate-varying inputs and its abstract is a neural net architecture provides for the recognition of an input signal which is a rate variant of a learned signal pattern, reducing the neural net training requirements. the duration of a digital sampling of the input signal is scaled by a time-scaling network, creating a multiplicity of scaled signals which are then compared to memorized signal patterns contained in a self-organizing feature map. the feature map outputs values which indicate how well the scaled input signals match various learned signal patterns. a comparator determines which one of the values is greatest, thus indicating a best match between the input signal and one of the learned signal patterns. 
dated 1993-06-15" 5220643,monolithic neural network element,"a neural plane, which can form the basis of a neural network or a component thereof, is comprised by an optical modulator, an electrical non-linearity circuit and an optical detector interconnected whereby in use the non-linearity circuit controls the modulator in dependence on the detector output. there are parallel arrays (10, 11, 12) of such modulators, non-linearity circuits and detectors (m, t, d, 30, 33, 34). the modulator, non-linearity circuits and detectors have components formed in a common semiconductor substrate (20), for example by vlsi techniques with a silicon substrate, the modulators (30) may be comprised by liquid crystal on silicon in that case (figs. 4, 7).",1993-06-15,"The title of the patent is monolithic neural network element and its abstract is a neural plane, which can form the basis of a neural network or a component thereof, is comprised by an optical modulator, an electrical non-linearity circuit and an optical detector interconnected whereby in use the non-linearity circuit controls the modulator in dependence on the detector output. there are parallel arrays (10, 11, 12) of such modulators, non-linearity circuits and detectors (m, t, d, 30, 33, 34). the modulator, non-linearity circuits and detectors have components formed in a common semiconductor substrate (20), for example by vlsi techniques with a silicon substrate, the modulators (30) may be comprised by liquid crystal on silicon in that case (figs. 4, 7). dated 1993-06-15" 5220644,optical neural network system,"an optical system of an optical neural network model for parallel data processing is disclosed. 
taking advantage of the fact that an auto-correlation matrix is symmetric with respect to a main diagonal and the weights for modulating the values of diagonals of the auto-correlation matrix are equal to each other, the configuration of an optical modulation unit is simplified by a one-dimensional modulation array on the one hand, and both positive and negative weights are capable of being computed at the same time on the other hand. in particular, the optical system makes up a second-order neural network exhibiting invariant characteristics against the translation and scale.",1993-06-15,"The title of the patent is optical neural network system and its abstract is an optical system of an optical neural network model for parallel data processing is disclosed. taking advantage of the fact that an auto-correlation matrix is symmetric with respect to a main diagonal and the weights for modulating the values of diagonals of the auto-correlation matrix are equal to each other, the configuration of an optical modulation unit is simplified by a one-dimensional modulation array on the one hand, and both positive and negative weights are capable of being computed at the same time on the other hand. in particular, the optical system makes up a second-order neural network exhibiting invariant characteristics against the translation and scale. 
dated 1993-06-15" 5222194,neural network with modification of neuron weights and reaction coefficient,"a neural network computation apparatus having a plurality of layers, each of the plurality of layers has at least an input layer and an output layer, each layer having a plurality of units, a plurality of links, each of the plurality of links connecting units on the plurality of layers, and changing means for changing input and output characteristics of a particular unit of the plurality of units and/or the weight of a particular link of the plurality of links in accordance with an output of the output layer after learning an example and with a particular rule. after the neural network computation apparatus learns an example, the changing means changes input and output characteristics of units and weights of links in accordance with outputs of the output layer and a particular rule. thus, a mutual operation between a logical knowledge and a pattern recognizing performance can be accomplished and thereby a determination close to that of a specialist can be accomplished. in other words, a proper determination in accordance with an experience can be made so as to deal with unknown patterns with high flexibility.",1993-06-22,"The title of the patent is neural network with modification of neuron weights and reaction coefficient and its abstract is a neural network computation apparatus having a plurality of layers, each of the plurality of layers has at least an input layer and an output layer, each layer having a plurality of units, a plurality of links, each of the plurality of links connecting units on the plurality of layers, and changing means for changing input and output characteristics of a particular unit of the plurality of units and/or the weight of a particular link of the plurality of links in accordance with an output of the output layer after learning an example and with a particular rule. 
after the neural network computation apparatus learns an example, the changing means changes input and output characteristics of units and weights of links in accordance with outputs of the output layer and a particular rule. thus, a mutual operation between a logical knowledge and a pattern recognizing performance can be accomplished and thereby a determination close to that of a specialist can be accomplished. in other words, a proper determination in accordance with an experience can be made so as to deal with unknown patterns with high flexibility. dated 1993-06-22" 5222195,dynamically stable associative learning neural system with one fixed weight,"a dynamically stable associative learning neural network system includes a plurality of synapses (122,22-28), a non-linear function circuit (30) and an adaptive weight circuit (150) for adjusting the weight of each synapse based upon the present signal and the prior history of signals applied to the input of the particular synapse and the present signal and the prior history of signals applied to the input of a predetermined set of other collateral synapses. a flow-through neuron circuit (1110) embodiment includes a flow-through synapse (122) having a predetermined fixed weight. a neural network is formed by a set of flow-through neuron circuits connected by flow-through synapses to form separate paths between each input (215) and a corresponding output (245). in one embodiment (200), the neural network is initialized by setting the adjustable synapses at some value near the minimum weight and setting the flow-through neuron circuits at some arbitrarily high weight. 
the neural network embodiments are taught by successive application of sets of input signals to the input terminals until a dynamic equilibrium is reached.",1993-06-22,"The title of the patent is dynamically stable associative learning neural system with one fixed weight and its abstract is a dynamically stable associative learning neural network system includes a plurality of synapses (122,22-28), a non-linear function circuit (30) and an adaptive weight circuit (150) for adjusting the weight of each synapse based upon the present signal and the prior history of signals applied to the input of the particular synapse and the present signal and the prior history of signals applied to the input of a predetermined set of other collateral synapses. a flow-through neuron circuit (1110) embodiment includes a flow-through synapse (122) having a predetermined fixed weight. a neural network is formed by a set of flow-through neuron circuits connected by flow-through synapses to form separate paths between each input (215) and a corresponding output (245). in one embodiment (200), the neural network is initialized by setting the adjustable synapses at some value near the minimum weight and setting the flow-through neuron circuits at some arbitrarily high weight. the neural network embodiments are taught by successive application of sets of input signals to the input terminals until a dynamic equilibrium is reached. dated 1993-06-22" 5222196,neural network shell for application programs,"a neural network shell has a defined interface to an application program. by interfacing with the neural network shell, any application program becomes a neural network application program. the neural network shell contains a set of utility programs that transfers data into and out of a neural network data structure. 
this set of utility programs allows an application program to define a new neural network model, create a neural network data structure, train a neural network, and run a neural network. once trained, the neural network data structure can be transported to other computer systems or to application programs written in different computing languages running on similar or different computer systems.",1993-06-22,"The title of the patent is neural network shell for application programs and its abstract is a neural network shell has a defined interface to an application program. by interfacing with the neural network shell, any application program becomes a neural network application program. the neural network shell contains a set of utility programs that transfers data into and out of a neural network data structure. this set of utility programs allows an application program to define a new neural network model, create a neural network data structure, train a neural network, and run a neural network. once trained, the neural network data structure can be transported to other computer systems or to application programs written in different computing languages running on similar or different computer systems. dated 1993-06-22" 5222210,method of displaying the state of an artificial neural network,"a computer simulator is provided for displaying the state of an artificial neural network in a simplified yet meaningful manner on a computer display terminal. the user may enter commands to select one or more areas of interest within the neural network for further information regarding its state of learning and operation. one display mode illustrates the output activity of each neuron as representatively sized and shaded boxes within the border of the neuron, while another display mode shows the connectivity as weighted synapses between a user-selected neuron and the remaining neurons of the network in a similar manner. 
a third display mode provides a tuning curve wherein the synapses associated with each of the neurons are represented within the borders of the same. both grid block and line graph type characterization are supported. the methodology allows large neural networks on the order of thousands of neurons to be displayed in a meaningful manner.",1993-06-22,"The title of the patent is method of displaying the state of an artificial neural network and its abstract is a computer simulator is provided for displaying the state of an artificial neural network in a simplified yet meaningful manner on a computer display terminal. the user may enter commands to select one or more areas of interest within the neural network for further information regarding its state of learning and operation. one display mode illustrates the output activity of each neuron as representatively sized and shaded boxes within the border of the neuron, while another display mode shows the connectivity as weighted synapses between a user-selected neuron and the remaining neurons of the network in a similar manner. a third display mode provides a tuning curve wherein the synapses associated with each of the neurons are represented within the borders of the same. both grid block and line graph type characterization are supported. the methodology allows large neural networks on the order of thousands of neurons to be displayed in a meaningful manner. dated 1993-06-22" 5224203,on-line process control neural network using data pointers,"an on-line process control neural network using data pointers allows the neural network to be easily configured to use data in a process control environment. the inputs, outputs, training inputs and errors can be retrieved and/or stored from any available data source without programming. 
the user of the neural network specifies data pointers indicating the particular computer system in which the data resides or will be stored; the type of data to be retrieved and/or stored; and the specific data value or storage location to be used. the data pointers include maximum, minimum, and maximum change limits, which can also serve as scaling limits for the neural network. data pointers indicating time-dependent data, such as time averages, also include time boundary specifiers. the data pointers are entered by the user of the neural network using pop-up menus and by completing fields in a template. an historical database provides both a source of input data and a storage function for output and error data.",1993-06-29,"The title of the patent is on-line process control neural network using data pointers and its abstract is an on-line process control neural network using data pointers allows the neural network to be easily configured to use data in a process control environment. the inputs, outputs, training inputs and errors can be retrieved and/or stored from any available data source without programming. the user of the neural network specifies data pointers indicating the particular computer system in which the data resides or will be stored; the type of data to be retrieved and/or stored; and the specific data value or storage location to be used. the data pointers include maximum, minimum, and maximum change limits, which can also serve as scaling limits for the neural network. data pointers indicating time-dependent data, such as time averages, also include time boundary specifiers. the data pointers are entered by the user of the neural network using pop-up menus and by completing fields in a template. an historical database provides both a source of input data and a storage function for output and error data. 
dated 1993-06-29" 5226092,method and apparatus for learning in a neural network,""" a method and apparatus for speeding and enhancing the """"learning"""" function of a computer configured as a multilayered, feed format artificial neural network using logistic functions as an activation function. the enhanced learning method provides a linear probing method for determining local minima values computed first along the gradient of the weight space and then adjusting the slope and direction of a linear probe line after determining the likelihood that a """"ravine"""" has been encountered in the terrain of the weight space. """,1993-07-06,"The title of the patent is method and apparatus for learning in a neural network and its abstract is "" a method and apparatus for speeding and enhancing the """"learning"""" function of a computer configured as a multilayered, feed format artificial neural network using logistic functions as an activation function. the enhanced learning method provides a linear probing method for determining local minima values computed first along the gradient of the weight space and then adjusting the slope and direction of a linear probe line after determining the likelihood that a """"ravine"""" has been encountered in the terrain of the weight space. "" dated 1993-07-06" 5227830,automatic camera,"in a focus detection device for a camera, a plurality of distance sensors each for detecting a distance to an object in a plurality of areas of a photographing image plane are provided and distance data obtained by the distance sensors for each object in each area is supplied to a main object detection circuit and a normalizing circuit. the normalizing circuit normalizes the distance data into a real number ranging from 0 to 1 and then supplies the same to a neural network. 
the neural network formed of single-layered neuron units of which the synapse connection weighting factors are previously obtained by the learning process, calculates a vector difference between the distance data and the synapse connection weighting factors of each neuron unit, detects the minimum vector difference, and outputs position data of a main object corresponding to a neuron unit which gives the minimum vector difference. the position data of the main object is input to a main object detection circuit. one of the outputs from the distance sensors corresponding to the main object is selected by the main object detection circuit and is supplied to a focus detection circuit for effecting the calculation to detect the focus. an output of the focus detection circuit is supplied to a lens driving mechanism so as to adjust the focus.",1993-07-13,"The title of the patent is automatic camera and its abstract is in a focus detection device for a camera, a plurality of distance sensors each for detecting a distance to an object in a plurality of areas of a photographing image plane are provided and distance data obtained by the distance sensors for each object in each area is supplied to a main object detection circuit and a normalizing circuit. the normalizing circuit normalizes the distance data into a real number ranging from 0 to 1 and then supplies the same to a neural network. the neural network formed of single-layered neuron units of which the synapse connection weighting factors are previously obtained by the learning process, calculates a vector difference between the distance data and the synapse connection weighting factors of each neuron unit, detects the minimum vector difference, and outputs position data of a main object corresponding to a neuron unit which gives the minimum vector difference. the position data of the main object is input to a main object detection circuit. 
one of the outputs from the distance sensors corresponding to the main object is selected by the main object detection circuit and is supplied to a focus detection circuit for effecting the calculation to detect the focus. an output of the focus detection circuit is supplied to a lens driving mechanism so as to adjust the focus. dated 1993-07-13" 5227835,teachable camera,a teachable camera 8 which includes an alterable template matching neural network 40 positioned between a microprocessor 10 that performs camera picture taking algorithms and the units 24-32 such as the shutter which control the characteristics of the picture. the network 40 alters the output of the algorithms to match the picture characteristics desired by the photographer. the network 40 is altered by a rule based expert system executing in a personal computer 70 which determines how to alter the matching template of the network 40.,1993-07-13,The title of the patent is teachable camera and its abstract is a teachable camera 8 which includes an alterable template matching neural network 40 positioned between a microprocessor 10 that performs camera picture taking algorithms and the units 24-32 such as the shutter which control the characteristics of the picture. the network 40 alters the output of the algorithms to match the picture characteristics desired by the photographer. the network 40 is altered by a rule based expert system executing in a personal computer 70 which determines how to alter the matching template of the network 40. dated 1993-07-13 5228113,accelerated training apparatus for back propagation networks,a supervised procedure for obtaining weight values for back-propagation neural networks is described. the method according to the invention performs a sequence of partial optimizations in order to determine values for the network connection weights. 
the partial optimization depends on a constrained representation of hidden weights derived from a singular value decomposition of the input space as well as an iterative least squares optimization solution for the output weights.,1993-07-13,The title of the patent is accelerated training apparatus for back propagation networks and its abstract is a supervised procedure for obtaining weight values for back-propagation neural networks is described. the method according to the invention performs a sequence of partial optimizations in order to determine values for the network connection weights. the partial optimization depends on a constrained representation of hidden weights derived from a singular value decomposition of the input space as well as an iterative least squares optimization solution for the output weights. dated 1993-07-13 5229623,"electric circuit using multiple differential negative resistance elements, semiconductor device and neuro chip using the same","a semiconductor device is disclosed, which includes a multiple negative differential resistance element having negative differential resistance characteristics at at least two places in the current-voltage characteristics, and which is suitable for constructing a neural network having a high density integration and a high reliability.",1993-07-20,"The title of the patent is electric circuit using multiple differential negative resistance elements, semiconductor device and neuro chip using the same and its abstract is a semiconductor device is disclosed, which includes a multiple negative differential resistance element having negative differential resistance characteristics at at least two places in the current-voltage characteristics, and which is suitable for constructing a neural network having a high density integration and a high reliability. 
dated 1993-07-20" 5235339,radar target discrimination systems using artificial neural network topology,"a system for distinguishing between a target and clutter analyzes frequency components of returned wave energy by one or more networks each having inputs receiving successive samples of the returned energy and having outputs individually connected to the inputs through multiplier elements providing selectable factors. the multipliers corresponding to each output are connected to the output through a summing element and a selectable and generally sigmoidal activation function. the factors may be bandpass filter coefficients or discrete fourier transform coefficients so as to generate frequency components of the energy. predetermined frequency characteristics of the returned energy may be detected by providing the outputs of a network to a network in which the factors are selected as correlation or convolution coefficients, are selected to integrate fed back outputs, or are selected to sum several outputs within a predetermined range. the activation functions may be selected for thresholding, linearity, limiting, or generation of logarithms.",1993-08-10,"The title of the patent is radar target discrimination systems using artificial neural network topology and its abstract is a system for distinguishing between a target and clutter analyzes frequency components of returned wave energy by one or more networks each having inputs receiving successive samples of the returned energy and having outputs individually connected to the inputs through multiplier elements providing selectable factors. the multipliers corresponding to each output are connected to the output through a summing element and a selectable and generally sigmoidal activation function. the factors may be bandpass filter coefficients or discrete fourier transform coefficients so as to generate frequency components of the energy. 
predetermined frequency characteristics of the returned energy may be detected by providing the outputs of a network to a network in which the factors are selected as correlation or convolution coefficients, are selected to integrate fed back outputs, or are selected to sum several outputs within a predetermined range. the activation functions may be selected for thresholding, linearity, limiting, or generation of logarithms. dated 1993-08-10" 5235440,"optical interconnector and highly interconnected, learning neural network incorporating optical interconnector therein","a variable weight optical interconnector is disclosed to include a projecting device and an interconnection weighting device remote from the projecting device. the projecting device projects a distribution of interconnecting light beams when illuminated by a spatially-modulated light pattern. the weighting device includes a photosensitive screen provided in optical alignment with the projecting device to independently control the intensity of each projected interconnecting beam to thereby assign an interconnection weight to each such beam. further in accordance with the present invention, a highly-interconnected optical neural network having learning capability is disclosed as including a spatial light modulator, a detecting device, an interconnector according to the present invention, and a device responsive to detection signals generated by the detecting device to modify the interconnection weights assigned by the photosensitive screen of the interconnector.",1993-08-10,"The title of the patent is optical interconnector and highly interconnected, learning neural network incorporating optical interconnector therein and its abstract is a variable weight optical interconnector is disclosed to include a projecting device and an interconnection weighting device remote from the projecting device. 
the projecting device projects a distribution of interconnecting light beams when illuminated by a spatially-modulated light pattern. the weighting device includes a photosensitive screen provided in optical alignment with the projecting device to independently control the intensity of each projected interconnecting beam to thereby assign an interconnection weight to each such beam. further in accordance with the present invention, a highly-interconnected optical neural network having learning capability is disclosed as including a spatial light modulator, a detecting device, an interconnector according to the present invention, and a device responsive to detection signals generated by the detecting device to modify the interconnection weights assigned by the photosensitive screen of the interconnector. dated 1993-08-10" 5235650,pattern classifier for character recognition,a pattern classifier for character recognition is constructed in accordance with a neural network model. the pattern classifier comprises (2n+1).times.(2n+1) input buffer amplifiers and m output buffer amplifiers. the input buffer amplifiers have an inverted output line and a non-inverted output line which intersect input lines to the output buffers. synapses are selectively arranged at the intersections of the output and input lines in accordance with predetermined mask patterns used in character recognition. pmos and nmos transistors are employed for the synapses.,1993-08-10,The title of the patent is pattern classifier for character recognition and its abstract is a pattern classifier for character recognition is constructed in accordance with a neural network model. the pattern classifier comprises (2n+1).times.(2n+1) input buffer amplifiers and m output buffer amplifiers. the input buffer amplifiers have an inverted output line and a non-inverted output line which intersect input lines to the output buffers. 
synapses are selectively arranged at the intersections of the output and input lines in accordance with predetermined mask patterns used in character recognition. pmos and nmos transistors are employed for the synapses. dated 1993-08-10 5235672,hardware for electronic neural network,"this application discloses hardware suitable for use in a neural network system. it makes use of z-technology modules, each containing densely packaged electronic circuitry. the modules provide access planes which are electrically connected to circuitry located on planar surfaces interfacing with such access planes. one such planar surface comprises a resistive feedback network. by combining two z-technology modules, whose stacked chips are in planes perpendicular to one another, and using switching networks between the two modules, the system provides bidirectional accessibility of each individual electronic element in the neural network to most or all of the other individual electronic elements in the system.",1993-08-10,"The title of the patent is hardware for electronic neural network and its abstract is this application discloses hardware suitable for use in a neural network system. it makes use of z-technology modules, each containing densely packaged electronic circuitry. the modules provide access planes which are electrically connected to circuitry located on planar surfaces interfacing with such access planes. one such planar surface comprises a resistive feedback network. by combining two z-technology modules, whose stacked chips are in planes perpendicular to one another, and using switching networks between the two modules, the system provides bidirectional accessibility of each individual electronic element in the neural network to most or all of the other individual electronic elements in the system. dated 1993-08-10" 5235673,enhanced neural network shell for application programs,"an enhanced neural network shell for application programs is disclosed. 
the user is prompted to enter in non-technical information about the specific problem type that the user wants solved by a neural network. the user also is prompted to indicate the input data usage information to the neural network. based on this information, the neural network shell creates a neural network data structure by automatically selecting an appropriate neural network model and automatically generating an appropriate number of inputs, outputs, and/or other model-specific parameters for the selected neural network model. the user is no longer required to have expertise in neural network technology to create a neural network data structure.",1993-08-10,"The title of the patent is enhanced neural network shell for application programs and its abstract is an enhanced neural network shell for application programs is disclosed. the user is prompted to enter in non-technical information about the specific problem type that the user wants solved by a neural network. the user also is prompted to indicate the input data usage information to the neural network. based on this information, the neural network shell creates a neural network data structure by automatically selecting an appropriate neural network model and automatically generating an appropriate number of inputs, outputs, and/or other model-specific parameters for the selected neural network model. the user is no longer required to have expertise in neural network technology to create a neural network data structure. dated 1993-08-10" 5237210,neural network accomodating parallel synaptic weight adjustments for correlation learning algorithms,a neural network providing correlation learning in a synapse cell coupled to a circuit for parallel implementation of weight adjustment in a broad class of learning algorithms. 
the circuit provides the learning portion of the synaptic operation and includes a pair of floating gate devices sharing a common floating gate member that stores the connection weight of the cell. parallel weight adjustments are performed in a predetermined number of cycles utilizing a novel debiasing technique.,1993-08-17,The title of the patent is neural network accomodating parallel synaptic weight adjustments for correlation learning algorithms and its abstract is a neural network providing correlation learning in a synapse cell coupled to a circuit for parallel implementation of weight adjustment in a broad class of learning algorithms. the circuit provides the learning portion of the synaptic operation and includes a pair of floating gate devices sharing a common floating gate member that stores the connection weight of the cell. parallel weight adjustments are performed in a predetermined number of cycles utilizing a novel debiasing technique. dated 1993-08-17 5239593,optical pattern recognition using detector and locator neural networks,"a system for performing optical pattern recognition includes a first detector neural network for detecting the presence of a particular optical pattern in an input image and a second locator neural network for locating and/or removing the particular optical pattern from the input image. the detector network and the locator network both comprise nodes which can take on the -1, +1, or undefined states. the nodes are arranged in layers and each node in a layer has a location corresponding to a pixel in the input image. 
a particular application of this neural network is in finding the amount field on a check and removing the line which borders the amount field.",1993-08-24,"The title of the patent is optical pattern recognition using detector and locator neural networks and its abstract is a system for performing optical pattern recognition includes a first detector neural network for detecting the presence of a particular optical pattern in an input image and a second locator neural network for locating and/or removing the particular optical pattern from the input image. the detector network and the locator network both comprise nodes which can take on the -1, +1, or undefined states. the nodes are arranged in layers and each node in a layer has a location corresponding to a pixel in the input image. a particular application of this neural network is in finding the amount field on a check and removing the line which borders the amount field. dated 1993-08-24" 5239594,self-organizing pattern classification neural network system,"a self-organizing pattern classification neural network system includes means for receiving incoming pattern of signals that were processed by feature extractors that extract feature vectors from the incoming signal. these feature vectors correspond to information regarding certain features of the incoming signal. the extracted feature vectors then each pass to separate self-organizing neural network classifiers. the classifiers compare the feature vectors to templates corresponding to respective classes and output the results of their comparisons. the output from the classifier for each class enter a discriminator. the discriminator generates a classification response indicating the best class for the input signal. the classification response includes information indicative of whether the classification is possible and also includes the identified best class. 
lastly, the system includes a learning trigger for transferring a correct class signal to the self-organizing classifiers so that they can determine the validity of their classification results.",1993-08-24,"The title of the patent is self-organizing pattern classification neural network system and its abstract is a self-organizing pattern classification neural network system includes means for receiving incoming pattern of signals that were processed by feature extractors that extract feature vectors from the incoming signal. these feature vectors correspond to information regarding certain features of the incoming signal. the extracted feature vectors then each pass to separate self-organizing neural network classifiers. the classifiers compare the feature vectors to templates corresponding to respective classes and output the results of their comparisons. the output from the classifier for each class enter a discriminator. the discriminator generates a classification response indicating the best class for the input signal. the classification response includes information indicative of whether the classification is possible and also includes the identified best class. lastly, the system includes a learning trigger for transferring a correct class signal to the self-organizing classifiers so that they can determine the validity of their classification results. dated 1993-08-24" 5239597,nearest neighbor dither image processing circuit,"a conversion circuit of binary dither image to multilevel image comprises a counter utilizing concepts of a neural network, an 8 bit register and 8 or gates, resulting in high speed of operation. 
the counter uses a neural network based on the hopfield model and is made up of an input synapse group, a first bias synapse group, a feedback synapse group, a second bias synapse group, a neuron group and an invertor group.",1993-08-24,"The title of the patent is nearest neighbor dither image processing circuit and its abstract is a conversion circuit of binary dither image to multilevel image comprises a counter utilizing concepts of a neural network, an 8 bit register and 8 or gates, resulting in high speed of operation. the counter uses a neural network based on the hopfield model and is made up of an input synapse group, a first bias synapse group, a feedback synapse group, a second bias synapse group, a neuron group and an invertor group. dated 1993-08-24" 5239618,data processing device with network structure and its learning processing method,"an output layer in a layered neural network uses a linear function or a designated region (linear region) of a threshold function instead of the threshold function to convert an input signal to an analog output signal when the basic unit uses the linear function, a limiter for limiting the output to a region between 1.0 and 0. when the basic unit uses the designated linear region of the threshold function, a limiter limits the output to a region between 0.8 and 0.2. 
upon a learning operation, the error propagation coefficient is determined as a constant value such as 1/6 and when the majority of the desired values are 1 or near 1, an error value regarding the opposite desired value 0 is amplified, and when the output values become equal to or more than 1, it is deemed that there is no error with regard to the output of more than 1 in case of many outputs 1, thereby speeding up an operation of updating the weight.",1993-08-24,"The title of the patent is data processing device with network structure and its learning processing method and its abstract is an output layer in a layered neural network uses a linear function or a designated region (linear region) of a threshold function instead of the threshold function to convert an input signal to an analog output signal when the basic unit uses the linear function, a limiter for limiting the output to a region between 1.0 and 0. when the basic unit uses the designated linear region of the threshold function, a limiter limits the output to a region between 0.8 and 0.2. upon a learning operation, the error propagation coefficient is determined as a constant value such as 1/6 and when the majority of the desired values are 1 or near 1, an error value regarding the opposite desired value 0 is amplified, and when the output values become equal to or more than 1, it is deemed that there is no error with regard to the output of more than 1 in case of many outputs 1, thereby speeding up an operation of updating the weight. dated 1993-08-24" 5239619,learning method for a data processing system having a multi-layer neural network,""" a learning method for a neural network having at least an input neuron layer, an output neuron layer, and a middle neuron layer between the input and output layers. each of the layers include a plurality of neurons which are coupled to corresponding neurons in adjacent neural layers. 
the learning method performs a learning function on the neurons of the middle layer on the basis of the respective outputs, or """"ignition patterns"""", of the neurons in the neural layers adjacent to the middle layer. the ignition pattern of neurons in the input layer is decided artificially according to a preferable image pattern to be input. the ignition pattern of neurons in the output layer is decided artificially according to the ignition pattern of the input layer neurons, wherein the ignition pattern of the output layer neurons is predetermined to correspond to a code or pattern preferable for a user. the ignition pattern of the middle layer neurons, coupled to the associated neurons of the respective input and output layers, is then decided according to the ignition pattern of the input layer and the output layer. """,1993-08-24,"The title of the patent is learning method for a data processing system having a multi-layer neural network and its abstract is "" a learning method for a neural network having at least an input neuron layer, an output neuron layer, and a middle neuron layer between the input and output layers. each of the layers include a plurality of neurons which are coupled to corresponding neurons in adjacent neural layers. the learning method performs a learning function on the neurons of the middle layer on the basis of the respective outputs, or """"ignition patterns"""", of the neurons in the neural layers adjacent to the middle layer. the ignition pattern of neurons in the input layer is decided artificially according to a preferable image pattern to be input. the ignition pattern of neurons in the output layer is decided artificially according to the ignition pattern of the input layer neurons, wherein the ignition pattern of the output layer neurons is predetermined to correspond to a code or pattern preferable for a user. 
the ignition pattern of the middle layer neurons, coupled to the associated neurons of the respective input and output layers, is then decided according to the ignition pattern of the input layer and the output layer. "" dated 1993-08-24" 5241509,arrangement of data cells and neural network system utilizing such an arrangement,"an arrangement of data cells which stores at least one matrix of data words which are arranged in rows and columns, the matrix being distributed in the arrangement in order to deliver/receive, via a single bus, permuted data words which correspond either to a row or to a column of the matrix. each data cell is connected to the single bus via series-connected switches which are associated with a respective addressing mode, the switches which address a same word of a same mode being directly controlled by a same selection signal. circulation members enable the original order of the data on the bus to be restored. an arrangement of this kind is used in a layered neural network system for executing the error backpropagation algorithm.",1993-08-31,"The title of the patent is arrangement of data cells and neural network system utilizing such an arrangement and its abstract is an arrangement of data cells which stores at least one matrix of data words which are arranged in rows and columns, the matrix being distributed in the arrangement in order to deliver/receive, via a single bus, permuted data words which correspond either to a row or to a column of the matrix. each data cell is connected to the single bus via series-connected switches which are associated with a respective addressing mode, the switches which address a same word of a same mode being directly controlled by a same selection signal. circulation members enable the original order of the data on the bus to be restored. an arrangement of this kind is used in a layered neural network system for executing the error backpropagation algorithm. 
dated 1993-08-31" 5241620,embedding neural networks into spreadsheet applications,"the present invention relates to a method of embedding a neural network into an application program such as a spreadsheet program. the method comprises providing an application program in which information is stored in rows and columns or a database containing fields and records and embedding a neural network in the application program or database using the stored information. the embedding step includes allocating unused memory in the application program and creating both a neural network engine and an application interface structure from the unused memory. once the neural network engine and an application interface structure have been created, the neural network may be trained using variable numerical and symbolic data stored within the application program. once training is completed, the neural network is ready for use, merely by using a recall function built into the applications program.",1993-08-31,"The title of the patent is embedding neural networks into spreadsheet applications and its abstract is the present invention relates to a method of embedding a neural network into an application program such as a spreadsheet program. the method comprises providing an application program in which information is stored in rows and columns or a database containing fields and records and embedding a neural network in the application program or database using the stored information. the embedding step includes allocating unused memory in the application program and creating both a neural network engine and an application interface structure from the unused memory. once the neural network engine and an application interface structure have been created, the neural network may be trained using variable numerical and symbolic data stored within the application program. 
once training is completed, the neural network is ready for use, merely by using a recall function built into the applications program. dated 1993-08-31" 5241845,neurocontrol for washing machines,"a fully automatic washing machine includes a detector for detecting a cloth volume, cloth type, soil degree and soil type in regard to clothes contained in a wash tub. a control device calculates a wash water stream in a wash step and a period of the wash step in the washing operation by a neurocontrol in which data of the cloth volume, the cloth type, soil degree and soil type are supplied to a neural network as input data. the neurocontrol is compensated for in accordance with the turbidity of a wash liquid detected at the time of completion of the wash step.",1993-09-07,"The title of the patent is neurocontrol for washing machines and its abstract is a fully automatic washing machine includes a detector for detecting a cloth volume, cloth type, soil degree and soil type in regard to clothes contained in a wash tub. a control device calculates a wash water stream in a wash step and a period of the wash step in the washing operation by a neurocontrol in which data of the cloth volume, the cloth type, soil degree and soil type are supplied to a neural network as input data. the neurocontrol is compensated for in accordance with the turbidity of a wash liquid detected at the time of completion of the wash step. dated 1993-09-07" 5243688,virtual neurocomputer architectures for neural networks,"the architectures for a scalable neural processor (snap) and a triangular scalable neural array processor (t-snap) are expanded to handle network simulations where the number of neurons to be modeled exceeds the number of physical neurons implemented. 
this virtual neural processing is described for three general virtual architectural approaches for handling the virtual neurons, one for snap and one for tsnap, and a third approach applied to both snap and tsnap.",1993-09-07,"The title of the patent is virtual neurocomputer architectures for neural networks and its abstract is the architectures for a scalable neural processor (snap) and a triangular scalable neural array processor (t-snap) are expanded to handle network simulations where the number of neurons to be modeled exceeds the number of physical neurons implemented. this virtual neural processing is described for three general virtual architectural approaches for handling the virtual neurons, one for snap and one for tsnap, and a third approach applied to both snap and tsnap. dated 1993-09-07" 5245672,object/anti-object neural network segmentation,"the system of the present invention applies self-organizing and/or supervised learning network methods to the problem of segmentation. the segmenter receives a visual field, implemented as a sliding window and distinguishes occurrences of complete characters from occurrences of parts of neighboring characters. images of isolated whole characters are true objects and the opposite of true objects are anti-objects, centered on the space between two characters. the window is moved across a line of text producing a sequence of images and the segmentation system distinguishes true objects from anti-objects. frames classified as anti-objects demarcate character boundaries, and frames classified as true objects represent detected character images. the system of the present invention may be a feedforward adaptation using a symmetric triggering network. inputs to the network are applied directly to the separate associative memories of the network. the associative memories produce a best match pattern output for each part of the input data. 
the associative memories provide two or more subnetworks which define data subsets, such as objects or anti-objects, according to previously learned examples. multi-layer perceptron architecture may also be used in the system of the present invention rather than the symmetrically triggered feedforward adaptation with tradeoffs in training time but advantages in speed.",1993-09-14,"The title of the patent is object/anti-object neural network segmentation and its abstract is the system of the present invention applies self-organizing and/or supervised learning network methods to the problem of segmentation. the segmenter receives a visual field, implemented as a sliding window and distinguishes occurrences of complete characters from occurrences of parts of neighboring characters. images of isolated whole characters are true objects and the opposite of true objects are anti-objects, centered on the space between two characters. the window is moved across a line of text producing a sequence of images and the segmentation system distinguishes true objects from anti-objects. frames classified as anti-objects demarcate character boundaries, and frames classified as true objects represent detected character images. the system of the present invention may be a feedforward adaptation using a symmetric triggering network. inputs to the network are applied directly to the separate associative memories of the network. the associative memories produce a best match pattern output for each part of the input data. the associative memories provide two or more subnetworks which define data subsets, such as objects or anti-objects, according to previously learned examples. multi-layer perceptron architecture may also be used in the system of the present invention rather than the symmetrically triggered feedforward adaptation with tradeoffs in training time but advantages in speed. 
dated 1993-09-14" 5245697,neural network processing apparatus for identifying an unknown image pattern as one of a plurality of instruction image patterns,"a neural network processing apparatus calculates an average of the absolute values of differences between the output values of all neurons and a center value whenever the output value of all neurons change, and calculates the difference between the average and the previous average. if the average is larger than a threshold or the previous average, the gain of a function in the network is decreased. if the average is smaller than the threshold or the previous average, the gain of the function is increased. then the controlled function is set to each neuron and the neural network is activated repeatedly to correctly identify an unknown multivalued image pattern.",1993-09-14,"The title of the patent is neural network processing apparatus for identifying an unknown image pattern as one of a plurality of instruction image patterns and its abstract is a neural network processing apparatus calculates an average of the absolute values of differences between the output values of all neurons and a center value whenever the output value of all neurons change, and calculates the difference between the average and the previous average. if the average is larger than a threshold or the previous average, the gain of a function in the network is decreased. if the average is smaller than the threshold or the previous average, the gain of the function is increased. then the controlled function is set to each neuron and the neural network is activated repeatedly to correctly identify an unknown multivalued image pattern. 
dated 1993-09-14" 5247206,neural network accommodating parallel synaptic weight adjustments in a single cycle,a neural network providing correlation learning in a synapse cell coupled to a circuit for parallel implementation of weight adjustment provides the learning portion of the synaptic operation and includes a floating gate device having a corresponding floating gate member that stores the connection weight of the cell. parallel weight adjustments are performed in a single operational cycle utilizing floating gate technology and control signals that facilitate programming/erasing operations.,1993-09-21,The title of the patent is neural network accommodating parallel synaptic weight adjustments in a single cycle and its abstract is a neural network providing correlation learning in a synapse cell coupled to a circuit for parallel implementation of weight adjustment provides the learning portion of the synaptic operation and includes a floating gate device having a corresponding floating gate member that stores the connection weight of the cell. parallel weight adjustments are performed in a single operational cycle utilizing floating gate technology and control signals that facilitate programming/erasing operations. dated 1993-09-21 5247445,control unit of an internal combustion engine control unit utilizing a neural network to reduce deviations between exhaust gas constituents and predetermined values,a control unit for an internal combustion engine that compensates for variations in injection valve flow rate characteristics by detecting an operation status of the engine and then using this status information to calculate a supply air amount or supply fuel amount in accordance with the detected status. exhaust gas constituents are detected and then used to correct the calculated supply air or supply fuel amount. 
the control unit compares the exhaust gas constituents with predetermined values and then uses a neural network to control the supply air amount or supply fuel amount to make any deviation between the exhaust gas constituents and the predetermined value approach zero.,1993-09-21,The title of the patent is control unit of an internal combustion engine control unit utilizing a neural network to reduce deviations between exhaust gas constituents and predetermined values and its abstract is a control unit for an internal combustion engine that compensates for variations in injection valve flow rate characteristics by detecting an operation status of the engine and then using this status information to calculate a supply air amount or supply fuel amount in accordance with the detected status. exhaust gas constituents are detected and then used to correct the calculated supply air or supply fuel amount. the control unit compares the exhaust gas constituents with predetermined values and then uses a neural network to control the supply air amount or supply fuel amount to make any deviation between the exhaust gas constituents and the predetermined value approach zero. dated 1993-09-21 5247584,signal processing unit for classifying objects on the basis of signals from sensors,"in a signal processing arrangement for classifying objects on the basis of signals from a plurality of different sensors each of the signals from the sensors is applied to a pair of neural networks. one neural network of each pair processes predetermined characteristics of the object and the other neural network processes movement or special data of the object such that these networks provide detection, identification and movement information specific for the sensors. feature vectors formed from this information specific for the sensors are applied to a neural network for determining the associations of the identification and movement information. 
the information obtained by this network is applied together with the feature vectors to a network for identifying and classifying the object. the information from the association and identification networks, respectively, are supplied together with the information specific for the sensors to an expert system which, by using further knowledge about data and facts of the potential objects, makes final decisions and conclusions for identification.",1993-09-21,"The title of the patent is signal processing unit for classifying objects on the basis of signals from sensors and its abstract is in a signal processing arrangement for classifying objects on the basis of signals from a plurality of different sensors each of the signals from the sensors is applied to a pair of neural networks. one neural network of each pair processes predetermined characteristics of the object and the other neural network processes movement or special data of the object such that these networks provide detection, identification and movement information specific for the sensors. feature vectors formed from this information specific for the sensors are applied to a neural network for determining the associations of the identification and movement information. the information obtained by this network is applied together with the feature vectors to a network for identifying and classifying the object. the information from the association and identification networks, respectively, are supplied together with the information specific for the sensors to an expert system which, by using further knowledge about data and facts of the potential objects, makes final decisions and conclusions for identification. dated 1993-09-21" 5247606,adaptively setting analog weights in a neural network and the like,"a method for adaptively setting analog weights in analog cells of a neural network and the like. the process starts by addressing a synapse cell in the network. 
a target weight for said addressed synapse cell is selected, and the current weight present on the synapse cell is measured. the amplitude and duration of a voltage pulse to be applied to said synapse cell to adjust said synapse cell in the direction of said target weight is calculated using a set of coefficients representing the physical characteristics of the synapse cell. the voltage pulse is applied to the addressed synapse cell and the new weight of the synapse cell is re-measured. if the synapse cell weight is within acceptable limits of the target weight, the values of the coefficients are saved and the next adjacent synapse cell is addressed until all synapse cells are set. if the synapse cell is not within acceptable limits, new values for the coefficients are calculated in relation to the re-measured weight. a new voltage pulse is generated and applied to the synapse cell. the process is repeated until the weight of the synapse cell is set within an acceptable limit of the target weight.",1993-09-21,"The title of the patent is adaptively setting analog weights in a neural network and the like and its abstract is a method for adaptively setting analog weights in analog cells of a neural network and the like. the process starts by addressing a synapse cell in the network. a target weight for said addressed synapse cell is selected, and the current weight present on the synapse cell is measured. the amplitude and duration of a voltage pulse to be applied to said synapse cell to adjust said synapse cell in the direction of said target weight is calculated using a set of coefficients representing the physical characteristics of the synapse cell. the voltage pulse is applied to the addressed synapse cell and the new weight of the synapse cell is re-measured. if the synapse cell weight is within acceptable limits of the target weight, the values of the coefficients are saved and the next adjacent synapse cell is addressed until all synapse cells are set.
if the synapse cell is not within acceptable limits, new values for the coefficients are calculated in relation to the re-measured weight. a new voltage pulse is generated and applied to the synapse cell. the process is repeated until the weight of the synapse cell is set within an acceptable limit of the target weight. dated 1993-09-21" 5248899,neural network using photoelectric substance,"a neural network, and a method of storing information and retrieving it by such network. the network comprises neurons, synapses and switches, and when required also rectifying means. the network is based on a substance which undergoes a reversible change from stable state a to stable state b, and this substance can also be changed from state a to another state c, which change is also reversible, where each change provides a measurable electrical pulse. the change of state is brought about by means of illumination for a predetermined period of time at a certain wavelength, it being possible to convert a desired part of the substance from one state to the other.",1993-09-28,"The title of the patent is neural network using photoelectric substance and its abstract is a neural network, and a method of storing information and retrieving it by such network. the network comprises neurons, synapses and switches, and when required also rectifying means. the network is based on a substance which undergoes a reversible change from stable state a to stable state b, and this substance can also be changed from state a to another state c, which change is also reversible, where each change provides a measurable electrical pulse. the change of state is brought about by means of illumination for a predetermined period of time at a certain wavelength, it being possible to convert a desired part of the substance from one state to the other. 
dated 1993-09-28" 5249259,genetic algorithm technique for designing neural networks,"a genetic algorithm search is applied to determine an optimum set of values (e.g., interconnection weights in a neural network), each value being associated with a pair of elements drawn from a universe of n elements, n an integer greater than zero, where the utility of any possible set of said values may be measured. an initial possible set of values is assembled, the values being organized in a matrix whose rows and columns correspond to the elements. a genetic algorithm operator is applied to generate successor matrices from said matrix. matrix computations are performed on the successor matrices to generate measures of the relative utilities of the successor matrices. a surviving matrix is selected from the successor matrices on the basis of the metrics. the steps are repeated until the metric of the surviving matrix is satisfactory.",1993-09-28,"The title of the patent is genetic algorithm technique for designing neural networks and its abstract is a genetic algorithm search is applied to determine an optimum set of values (e.g., interconnection weights in a neural network), each value being associated with a pair of elements drawn from a universe of n elements, n an integer greater than zero, where the utility of any possible set of said values may be measured. an initial possible set of values is assembled, the values being organized in a matrix whose rows and columns correspond to the elements. a genetic algorithm operator is applied to generate successor matrices from said matrix. matrix computations are performed on the successor matrices to generate measures of the relative utilities of the successor matrices. a surviving matrix is selected from the successor matrices on the basis of the metrics. the steps are repeated until the metric of the surviving matrix is satisfactory.
dated 1993-09-28" 5249954,integrated imaging sensor/neural network controller for combustion systems,"disclosed is an integrated imaging sensor/neural network controller for combustion control systems. the controller uses electronic imaging sensing of chemiluminescence from a combustion system, combined with neural network image processing, to sensitively identify and control a complex combustion system. the imaging system used is not adversely affected by the normal emissions variations caused by changes in burner load and flame position. by incorporating neural networks to learn emission patterns associated with combustor performance, control using image technology is fast enough to be used in a real time, closed loop control system. this advance in sensing and control strategy allows use of the spatial distribution of important parameters in the combustion system in identifying the overall operation condition of a given combustor and in formulating a control response according to a pre-determined control model.",1993-10-05,"The title of the patent is integrated imaging sensor/neural network controller for combustion systems and its abstract is disclosed is an integrated imaging sensor/neural network controller for combustion control systems. the controller uses electronic imaging sensing of chemiluminescence from a combustion system, combined with neural network image processing, to sensitively identify and control a complex combustion system. the imaging system used is not adversely affected by the normal emissions variations caused by changes in burner load and flame position. by incorporating neural networks to learn emission patterns associated with combustor performance, control using image technology is fast enough to be used in a real time, closed loop control system.
this advance in sensing and control strategy allows use of the spatial distribution of important parameters in the combustion system in identifying the overall operation condition of a given combustor and in formulating a control response according to a pre-determined control model. dated 1993-10-05" 5250766,elevator control apparatus using neural network to predict car direction reversal floor,"an elevator control apparatus capable of predicting reversion floors of elevator cages accurately. the control apparatus comprises a neural network, in which traffic state data are fetched into the neural network, so that predicted values of floors where the moving direction of each cage is reversed are calculated as predicted reversion floors. in the elevator control apparatus, reversion floors near true reversion floors can be predicted flexibly correspondingly to traffic state and traffic volume.",1993-10-05,"The title of the patent is elevator control apparatus using neural network to predict car direction reversal floor and its abstract is an elevator control apparatus capable of predicting reversion floors of elevator cages accurately. the control apparatus comprises a neural network, in which traffic state data are fetched into the neural network, so that predicted values of floors where the moving direction of each cage is reversed are calculated as predicted reversion floors. in the elevator control apparatus, reversion floors near true reversion floors can be predicted flexibly correspondingly to traffic state and traffic volume. dated 1993-10-05" 5251269,multi-layer neural network modelled after the striate cortex for recognizing visual patterns,"a pattern recognition system includes at least one pair of basic associative units each having at least first and second unit ports for receiving pattern signal groups, respectively and a third unit port for outputting a pattern signal group.
the pattern recognition system has characteristics of the type of pattern recognition carried out by living organisms. each of the basic units operates to derive weighting values for respective signals of the pattern signal groups inputted to the first and second unit ports of the basic unit itself in accordance with the degree of consistency between a previously given weighting pattern and respective patterns specified by the pattern signal groups inputted to the first and second unit ports of the basic unit itself. each of the basic units also operates to modulate the respective signals of the pattern signal groups inputted to the first and second unit ports of the basic unit in accordance with the derived weighting values and to totalize the modulated signals so as to form an output signal outputted from the third unit port of the basic unit itself. the third unit port of one of the basic unit pair is coupled to the first unit port of the other basic unit, and the third unit port of the other basic unit is coupled to the second unit port of the one basic unit. thus, the third unit port of one of the basic unit pair gives a recognition output.",1993-10-05,"The title of the patent is multi-layer neural network modelled after the striate cortex for recognizing visual patterns and its abstract is a pattern recognition system includes at least one pair of basic associative units each having at least first and second unit ports for receiving pattern signal groups, respectively and a third unit port for outputting a pattern signal group. the pattern recognition system has characteristics of the type of pattern recognition carried out by living organisms.
each of the basic units operates to derive weighting values for respective signals of the pattern signal groups inputted to the first and second unit ports of the basic unit itself in accordance with the degree of consistency between a previously given weighting pattern and respective patterns specified by the pattern signal groups inputted to the first and second unit ports of the basic unit itself. each of the basic units also operates to modulate the respective signals of the pattern signal groups inputted to the first and second unit ports of the basic unit in accordance with the derived weighting values and to totalize the modulated signals so as to form an output signal outputted from the third unit port of the basic unit itself. the third unit port of one of the basic unit pair is coupled to the first unit port of the other basic unit, and the third unit port of the other basic unit is coupled to the second unit port of the one basic unit. thus, the third unit port of one of the basic unit pair gives a recognition output. dated 1993-10-05" 5251287,apparatus and method for neural processing,"the neural computing paradigm is characterized as a dynamic and highly computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. herein is described neural network architecture for a scalable neural array processor (snap) which uses a unique interconnection and communication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. the array processor is made up of multiple sets of orthogonal interconnections and activity generators. each activity generator is responsive to an output signal in order to generate a neuron value.
the interconnection structure also uses special adder trees which respond in a first state to generate an output signal and in a second state to communicate a neuron value back to the input of the array processor.",1993-10-05,"The title of the patent is apparatus and method for neural processing and its abstract is the neural computing paradigm is characterized as a dynamic and highly computationally intensive system typically consisting of input weight multiplications, product summation, neural state calculations, and complete connectivity among the neurons. herein is described neural network architecture for a scalable neural array processor (snap) which uses a unique interconnection and communication scheme within an array structure that provides high performance for completely connected network models such as the hopfield model. snap's packaging and expansion capabilities are addressed, demonstrating snap's scalability to larger networks. the array processor is made up of multiple sets of orthogonal interconnections and activity generators. each activity generator is responsive to an output signal in order to generate a neuron value. the interconnection structure also uses special adder trees which respond in a first state to generate an output signal and in a second state to communicate a neuron value back to the input of the array processor. dated 1993-10-05" 5251626,apparatus and method for the detection and treatment of arrhythmias using a neural network,"an apparatus and method for the detection and treatment of arrhythmias using a processor having a neural network with a hierarchical arrangement including a first lower level for classifying individual waveforms, a second higher level for diagnosing detected arrhythmias and a third higher level for the application of therapy in response to a diagnosed arrhythmia. the neural network may be a back propagation neural network or an associative memory type neural network.
the arrhythmias detected may be at least one of bradycardia, tachycardia and fibrillation. the apparatus may include a cardioverting/defibrillating pacemaker. in general, the apparatus acquires physiological signals representative of heart activity in a patient. a neural network receives the physiological signals and determines if any arrhythmia is present, and if present, selects therapy to be applied to the heart. a therapy generator then applies the therapy selected by the neural network. the physiological signals may be processed or unprocessed ecg signals, signals indicative of the properties of the blood including the presence of gases, blood temperature, and blood flow signals or signals representative of ventricular wall impedance or ventricular volume.",1993-10-12,"The title of the patent is apparatus and method for the detection and treatment of arrhythmias using a neural network and its abstract is an apparatus and method for the detection and treatment of arrhythmias using a processor having a neural network with a hierarchical arrangement including a first lower level for classifying individual waveforms, a second higher level for diagnosing detected arrhythmias and a third higher level for the application of therapy in response to a diagnosed arrhythmia. the neural network may be a back propagation neural network or an associative memory type neural network. the arrhythmias detected may be at least one of bradycardia, tachycardia and fibrillation. the apparatus may include a cardioverting/defibrillating pacemaker. in general, the apparatus acquires physiological signals representative of heart activity in a patient. a neural network receives the physiological signals and determines if any arrhythmia is present, and if present, selects therapy to be applied to the heart. a therapy generator then applies the therapy selected by the neural network.
the physiological signals may be processed or unprocessed ecg signals, signals indicative of the properties of the blood including the presence of gases, blood temperature, and blood flow signals or signals representative of ventricular wall impedance or ventricular volume. dated 1993-10-12" 5252829,method of determining urea in milk,"the concentration of urea in a concentration range of 0-0.1% in a milk sample containing at least 1% fat, at least 1% dissolved lactose, and at least 1% protein, is determined with an accuracy better than 0.007%, expressed as standard error of prediction, by an infrared attenuation measuring technique, by determining, on the sample, the attenuation in the region of infrared radiation from 1000 cm⁻¹ (10.0 μm) to 4000 cm⁻¹ (2.50 μm), at least one determination being made in a waveband in the region from 1000 cm⁻¹ (10.0 μm) to 1800 cm⁻¹ (5.56 μm) in which urea absorbs, at least one other determination being made in a waveband in which fat absorbs, at least one further determination being made in a waveband where lactose absorbs, and at least one further determination being made in a waveband where protein absorbs; determining, on the basis of the thus determined attenuations and predetermined parameters established by multivariate calibration, the contribution from fat, lactose, and protein in the waveband where urea absorbs, and quantitatively assessing the concentration of urea in the sample on the basis of the absorption in the waveband where urea absorbs, and on the basis of the determined contribution from fat, lactose and protein in said waveband. the multivariate calibration may be performed by a partial least squares algorithm, principal component regression, multiple linear regression, or artificial neural network learning.
using the method according to the invention, compensation for the influence on the urea measurement may further be performed for one or several of the following components: citric acid, free fatty acids, antibiotics, phosphates, somatic cells, bacteria, preservatives and casein.",1993-10-12,"The title of the patent is method of determining urea in milk and its abstract is the concentration of urea in a concentration range of 0-0.1% in a milk sample containing at least 1% fat, at least 1% dissolved lactose, and at least 1% protein, is determined with an accuracy better than 0.007%, expressed as standard error of prediction, by an infrared attenuation measuring technique, by determining, on the sample, the attenuation in the region of infrared radiation from 1000 cm⁻¹ (10.0 μm) to 4000 cm⁻¹ (2.50 μm), at least one determination being made in a waveband in the region from 1000 cm⁻¹ (10.0 μm) to 1800 cm⁻¹ (5.56 μm) in which urea absorbs, at least one other determination being made in a waveband in which fat absorbs, at least one further determination being made in a waveband where lactose absorbs, and at least one further determination being made in a waveband where protein absorbs; determining, on the basis of the thus determined attenuations and predetermined parameters established by multivariate calibration, the contribution from fat, lactose, and protein in the waveband where urea absorbs, and quantitatively assessing the concentration of urea in the sample on the basis of the absorption in the waveband where urea absorbs, and on the basis of the determined contribution from fat, lactose and protein in said waveband. the multivariate calibration may be performed by a partial least squares algorithm, principal component regression, multiple linear regression, or artificial neural network learning.
using the method according to the invention, compensation for the influence on the urea measurement may further be performed for one or several of the following components: citric acid, free fatty acids, antibiotics, phosphates, somatic cells, bacteria, preservatives and casein. dated 1993-10-12" 5253327,optimization apparatus,"an optimization apparatus using a layered neural network having an input layer formed of input units and supplied with input data and an output layer formed of output units connected to the individual input units with specified synaptic weights, which comprises a calculator circuit for calculating, for each output unit, the degree of similarity between the input data and the synaptic weight as well as the evaluation function value by causing the optimization problem to correspond to the fired units in the output layer, a detector for detecting the best matching optimum output unit on the basis of the output of the calculator circuit, and a self-organization circuit for changing the synaptic weights of a group of the output units associated with the optimum unit detected by the detector.",1993-10-12,"The title of the patent is optimization apparatus and its abstract is an optimization apparatus using a layered neural network having an input layer formed of input units and supplied with input data and an output layer formed of output units connected to the individual input units with specified synaptic weights, which comprises a calculator circuit for calculating, for each output unit, the degree of similarity between the input data and the synaptic weight as well as the evaluation function value by causing the optimization problem to correspond to the fired units in the output layer, a detector for detecting the best matching optimum output unit on the basis of the output of the calculator circuit, and a self-organization circuit for changing the synaptic weights of a group of the output units associated with the optimum unit detected by the 
detector. dated 1993-10-12" 5253328,neural-network content-addressable memory,"a neural network content-addressable error-correcting memory system is disclosed including a plurality of hidden and visible processing units interconnected via a linear interconnection matrix. the network is symmetric and no self-connections are present. all connections between processing units are present, except those connecting hidden units to other hidden units. each visible unit is connected to each other visible unit and to each hidden unit. a mean field theory learning and retrieval algorithm is also provided. bit patterns or code words are stored in the network via the learning algorithm. the retrieval algorithm retrieves error-corrected bit patterns in response to noisy or error-containing input bit patterns.",1993-10-12,"The title of the patent is neural-network content-addressable memory and its abstract is a neural network content-addressable error-correcting memory system is disclosed including a plurality of hidden and visible processing units interconnected via a linear interconnection matrix. the network is symmetric and no self-connections are present. all connections between processing units are present, except those connecting hidden units to other hidden units. each visible unit is connected to each other visible unit and to each hidden unit. a mean field theory learning and retrieval algorithm is also provided. bit patterns or code words are stored in the network via the learning algorithm. the retrieval algorithm retrieves error-corrected bit patterns in response to noisy or error-containing input bit patterns. dated 1993-10-12" 5253329,neural network for processing both spatial and temporal data with time based back-propagation,"neural network algorithms have impressively demonstrated the capability of modelling spatial information.
on the other hand, the application of parallel distributed models to processing of temporal data has been severely restricted. the invention introduces a novel technique which adds the dimension of time to the well known back-propagation algorithm. origin of the invention: the invention described herein was made by employees of the united states government and may be manufactured and used by or for the government of the united states of america for governmental purposes without payment of any royalties thereon or therefor.",1993-10-12,"The title of the patent is neural network for processing both spatial and temporal data with time based back-propagation and its abstract is neural network algorithms have impressively demonstrated the capability of modelling spatial information. on the other hand, the application of parallel distributed models to processing of temporal data has been severely restricted. the invention introduces a novel technique which adds the dimension of time to the well known back-propagation algorithm. origin of the invention: the invention described herein was made by employees of the united states government and may be manufactured and used by or for the government of the united states of america for governmental purposes without payment of any royalties thereon or therefor. dated 1993-10-12" 5253330,network architecture for the programmable emulation of artificial neural networks having digital operation,"a network architecture for the programmable emulation of large artificial neural networks ann having digital operation employs a plurality l of neuron units of identical structure, each equipped with m neurons, the inputs (e) thereof being connected to network inputs (e_n) multiplied or branching via individual input registers (reg_e).
the outputs (a) of the neuron units are connectable to network outputs (a_n) at different points in time via individual multiplexers (mux) and individual output registers (reg_a) and the neuron units have individual auxiliary inputs via which signals can be supplied to them that represent weighting values (w) for weighting the appertaining neural connections and represent thresholds (θ) for weighting input signals.",1993-10-12,"The title of the patent is network architecture for the programmable emulation of artificial neural networks having digital operation and its abstract is a network architecture for the programmable emulation of large artificial neural networks ann having digital operation employs a plurality l of neuron units of identical structure, each equipped with m neurons, the inputs (e) thereof being connected to network inputs (e_n) multiplied or branching via individual input registers (reg_e). the outputs (a) of the neuron units are connectable to network outputs (a_n) at different points in time via individual multiplexers (mux) and individual output registers (reg_a) and the neuron units have individual auxiliary inputs via which signals can be supplied to them that represent weighting values (w) for weighting the appertaining neural connections and represent thresholds (θ) for weighting input signals. dated 1993-10-12" 5255342,pattern recognition system and method using neural network,"an inner product computing unit computes inner products of an input pattern whose category is unknown, and orthogonalized dictionary sets of a plurality of reference patterns whose categories are known. a nonlinear converting unit nonlinearly converts the inner products in accordance with a positive-negative symmetrical nonlinear function.
a neural network unit or a statistical discriminant function computing unit performs predetermined computations of the nonlinearly converted values on the basis of preset coefficients in units of categories using a neural network or a statistical discriminant function. a determining section compares values calculated in units of categories using the preset coefficients with each other to discriminate a category to which the input pattern belongs.",1993-10-19,"The title of the patent is pattern recognition system and method using neural network and its abstract is an inner product computing unit computes inner products of an input pattern whose category is unknown, and orthogonalized dictionary sets of a plurality of reference patterns whose categories are known. a nonlinear converting unit nonlinearly converts the inner products in accordance with a positive-negative symmetrical nonlinear function. a neural network unit or a statistical discriminant function computing unit performs predetermined computations of the nonlinearly converted values on the basis of preset coefficients in units of categories using a neural network or a statistical discriminant function. a determining section compares values calculated in units of categories using the preset coefficients with each other to discriminate a category to which the input pattern belongs. dated 1993-10-19" 5255344,inference rule determining method and inference device,""" an inference rule determining process according to the present invention sequentially determines, using a learning function of a neural network model, a membership function representing a degree which the conditions of the if part of each inference rule is satisfied when input data is received to thereby obtain an optimal inference result without using experience rules. the inventive inference device uses an inference rule of the type """"if . . . then . . 
."""" and includes a membership value determiner (1) which includes all of if part and has a neural network; individual inference quantity determiners (21)-(2r) which correspond to the respective then parts of the inference rules and determine the corresponding inference quantities for the inference rules; and a final inference quantity determiner which determines these inference quantities synthetically to obtain the final results of the inference. if the individual inference quantity determiners (2) each has a neural network structure, the non-linearity of the neural network models is used to obtain the result of the inference with high inference accuracy even if an object to be inferred is non-linear. """,1993-10-19,"The title of the patent is inference rule determining method and inference device and its abstract is "" an inference rule determining process according to the present invention sequentially determines, using a learning function of a neural network model, a membership function representing a degree which the conditions of the if part of each inference rule is satisfied when input data is received to thereby obtain an optimal inference result without using experience rules. the inventive inference device uses an inference rule of the type """"if . . . then . . ."""" and includes a membership value determiner (1) which includes all of if part and has a neural network; individual inference quantity determiners (21)-(2r) which correspond to the respective then parts of the inference rules and determine the corresponding inference quantities for the inference rules; and a final inference quantity determiner which determines these inference quantities synthetically to obtain the final results of the inference. 
if the individual inference quantity determiners (2) each has a neural network structure, the non-linearity of the neural network models is used to obtain the result of the inference with high inference accuracy even if an object to be inferred is non-linear. "" dated 1993-10-19" 5255346,method and apparatus for design of a vector quantizer,"a method and apparatus for the design of a robust vector quantizer is disclosed. the initial output vector set is equal to the centroid of a training sequence of input vectors. a neural-network simulation and neighborhood functions are utilized for splitting and optimizing the output vectors. in this manner, the entire output vector set is sensitive to each input vector and therefore optimal output vector locations with respect to specified distortion criteria are obtained. the resulting vector quantizer is robust for the class of signals represented by the training sequence.",1993-10-19,"The title of the patent is method and apparatus for design of a vector quantizer and its abstract is a method and apparatus for the design of a robust vector quantizer is disclosed. the initial output vector set is equal to the centroid of a training sequence of input vectors. a neural-network simulation and neighborhood functions are utilized for splitting and optimizing the output vectors. in this manner, the entire output vector set is sensitive to each input vector and therefore optimal output vector locations with respect to specified distortion criteria are obtained. the resulting vector quantizer is robust for the class of signals represented by the training sequence. 
dated 1993-10-19" 5255347,neural network with learning function,"a neural network system capable of performing integrated processing of a plurality of information includes a feature extractor group for extracting a plurality of learning feature data from learning data in a learning mode and a plurality of object feature data from object data to be processed in an execution mode, and an information processing unit for learning features of the learning data, based on the plurality of learning feature data from the feature extractor group and corresponding teacher data in the learning mode, and determining final learning result data from the plurality of object feature data from the feature extractor group in accordance with the learning result, including a logic representing relation between the plurality of object feature data in the execution mode.",1993-10-19,"The title of the patent is neural network with learning function and its abstract is a neural network system capable of performing integrated processing of a plurality of information includes a feature extractor group for extracting a plurality of learning feature data from learning data in a learning mode and a plurality of object feature data from object data to be processed in an execution mode, and an information processing unit for learning features of the learning data, based on the plurality of learning feature data from the feature extractor group and corresponding teacher data in the learning mode, and determining final learning result data from the plurality of object feature data from the feature extractor group in accordance with the learning result, including a logic representing relation between the plurality of object feature data in the execution mode. dated 1993-10-19" 5255348,"neural network for learning, recognition and recall of pattern sequences","a sequence processor for rapidly learning, recognizing and recalling temporal sequences. 
the processor, called the katamic system, is a biologically inspired artificial neural network based on a model of the functions of the cerebellum in the brain. the katamic system utilizes three basic types of neuron-like elements with different functional characteristics called predictrons, recognitrons and bi-stable switches. the katamic system is clock operated, processing input sequences pattern by pattern to produce an output pattern which is a prediction of the next pattern in the input sequence. the katamic system learns rapidly, has a large memory capacity, exhibits sequence completion and sequence recognition capability, and is fault and noise tolerant. the system's modular construction permits straightforward scalability.",1993-10-19,"The title of the patent is neural network for learning, recognition and recall of pattern sequences and its abstract is a sequence processor for rapidly learning, recognizing and recalling temporal sequences. the processor, called the katamic system, is a biologically inspired artificial neural network based on a model of the functions of the cerebellum in the brain. the katamic system utilizes three basic types of neuron-like elements with different functional characteristics called predictrons, recognitrons and bi-stable switches. the katamic system is clock operated, processing input sequences pattern by pattern to produce an output pattern which is a prediction of the next pattern in the input sequence. the katamic system learns rapidly, has a large memory capacity, exhibits sequence completion and sequence recognition capability, and is fault and noise tolerant. the system's modular construction permits straightforward scalability.
dated 1993-10-19" 5255349,"""electronic neural network for solving """"traveling salesman"""" and similar global optimization problems""",""" this invention is a novel high-speed neural network based processor for solving the """"traveling salesman"""" and other global optimization problems. it comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem, such as the number of cities to be visited. the array is prompted by analog voltages representing variables such as distances. the processor incorporates two interconnected feedback networks, each of which solves part of the problem independently and simultaneously, yet which exchange information dynamically. """,1993-10-19,"The title of the patent is ""electronic neural network for solving """"traveling salesman"""" and similar global optimization problems"" and its abstract is "" this invention is a novel high-speed neural network based processor for solving the """"traveling salesman"""" and other global optimization problems. it comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem, such as the number of cities to be visited. the array is prompted by analog voltages representing variables such as distances. the processor incorporates two interconnected feedback networks, each of which solves part of the problem independently and simultaneously, yet which exchange information dynamically. "" dated 1993-10-19" 5255362,photo stimulated and controlled imaging neural network,"a photo stimulated and controlled imaging neural network for providing self generating learning sets and associative memory and programmability. an image to be recognized or detected is transferred to an imaging plane, which can be as simple as a lens or as complicated as a cathode ray tube. 
the imaging plane whose contents forms the input for a photo receptor array transfers the stimulus from the object to the photoreceptor array. the photoreceptor array responds to the stimulus provided by the imaging plane with various couplings between an array of neuron amplifiers. the photo receptor array comprises a plurality of synaptic photo controlled resistors which respond to the stimulus provided by the imaging plane. the individual neuron amplifiers settle into a set of on or off binary states based on the couplings of the photo controlled resistors which comprise the receptor array. the output states are equally weighted and as a whole constitute a particular learning set which is then passed onto a gate array where it can be utilized to make various decisions.",1993-10-19,"The title of the patent is photo stimulated and controlled imaging neural network and its abstract is a photo stimulated and controlled imaging neural network for providing self generating learning sets and associative memory and programmability. an image to be recognized or detected is transferred to an imaging plane, which can be as simple as a lens or as complicated as a cathode ray tube. the imaging plane whose contents forms the input for a photo receptor array transfers the stimulus from the object to the photoreceptor array. the photoreceptor array responds to the stimulus provided by the imaging plane with various couplings between an array of neuron amplifiers. the photo receptor array comprises a plurality of synaptic photo controlled resistors which respond to the stimulus provided by the imaging plane. the individual neuron amplifiers settle into a set of on or off binary states based on the couplings of the photo controlled resistors which comprise the receptor array. the output states are equally weighted and as a whole constitute a particular learning set which is then passed onto a gate array where it can be utilized to make various decisions. 
dated 1993-10-19" 5256911,neural network with multiplexed snyaptic processing,"in an apparatus for multiplexed operation of multi-cell neural network, the reference vector component values are stored as differential values in pairs of floating gate transistors. a long-tail pair differential transconductance multiplier is synthesized by selectively using the floating gate transistor pairs as the current source. appropriate transistor pairs are multiplexed into the network for forming a differential output current representative of the product of the input vector component applied to the differential input and the stored reference vector component stored in the multiplexed transistor pair that is switched into the multiplier network to function as the differential current source. pipelining and output multiplexing is also described in other preferred embodiments for increasing the effective output bandwidth of the network.",1993-10-26,"The title of the patent is neural network with multiplexed snyaptic processing and its abstract is in an apparatus for multiplexed operation of multi-cell neural network, the reference vector component values are stored as differential values in pairs of floating gate transistors. a long-tail pair differential transconductance multiplier is synthesized by selectively using the floating gate transistor pairs as the current source. appropriate transistor pairs are multiplexed into the network for forming a differential output current representative of the product of the input vector component applied to the differential input and the stored reference vector component stored in the multiplexed transistor pair that is switched into the multiplier network to function as the differential current source. pipelining and output multiplexing is also described in other preferred embodiments for increasing the effective output bandwidth of the network. 
dated 1993-10-26" 5257342,learning method for a data processing system with neighborhoods,"a learning method for a neural network type data processing system determines activation patterns in an input layer and output layer arbitrarily, increases weights of synapses in a middle layer and the output layer so that neurons activate at more than a certain rate among those corresponding to neurons in the input layer and the output layer and repeats the same process for each neuron in the middle layer. the input layer and output layer possess a plurality of neurons which activate and output certain data according to a specific result and the middle layer is between the input layer and output layer. the middle layer also possesses a plurality of neurons which are connected to each neuron in the input layer and output layer.",1993-10-26,"The title of the patent is learning method for a data processing system with neighborhoods and its abstract is a learning method for a neural network type data processing system determines activation patterns in an input layer and output layer arbitrarily, increases weights of synapses in a middle layer and the output layer so that neurons activate at more than a certain rate among those corresponding to neurons in the input layer and the output layer and repeats the same process for each neuron in the middle layer. the input layer and output layer possess a plurality of neurons which activate and output certain data according to a specific result and the middle layer is between the input layer and output layer. the middle layer also possesses a plurality of neurons which are connected to each neuron in the input layer and output layer. dated 1993-10-26" 5257343,intelligence information processing system,"an intelligence information processing system is composed of an associative memory and a serial processing-type computer.
input pattern information is associated with the associative memory, and pattern recognition based on the computer evaluates an associative output. in accordance with this evaluation, an associative and restrictive condition is repeatedly added to the energy function of a neural network constituting the associative memory, thereby converging the associative output on a stable state of the energy. the converged associative output is verified with intelligence information stored in a computer memory. the associative and restrictive condition is again repeatedly added to the energy function in accordance with the verification so as to produce an output from the system.",1993-10-26,"The title of the patent is intelligence information processing system and its abstract is an intelligence information processing system is composed of an associative memory and a serial processing-type computer. input pattern information is associated with the associative memory, and pattern recognition based on the computer evaluates an associative output. in accordance with this evaluation, an associative and restrictive condition is repeatedly added to the energy function of a neural network constituting the associative memory, thereby converging the associative output on a stable state of the energy. the converged associative output is verified with intelligence information stored in a computer memory. the associative and restrictive condition is again repeatedly added to the energy function in accordance with the verification so as to produce an output from the system. dated 1993-10-26" 5258903,control circuit and power supply for televisions,"an adaptive feed forward control circuit and power supply for a television comprises a circuit for supplying energy from a source to a load, the load having energy requirements which vary in response to an input signal, for example a video signal. 
a feedback circuit generates a first correction signal indicative of a difference between an operating voltage or current level and a reference level. a neural network generates a second correction signal indicative of anticipated energy requirement variation by processing information in present values of the input signal. a control circuit, for example a pulse width modulating circuit, is responsive to the correction signals for controlling operation of the energy supplying circuit. the first and second correction signals are combined by a summing circuit. the neural network comprises a first signal adaptive circuit for the input signal and a second signal adaptive circuit for a processed version of the input signal. the processed input signal is linearly independent of the input signal to avoid redundancy in the weight factors. the square root of the input signal, for example, is appropriate for a switched mode power supply. the combination of outputs from the first and second signal adaptive circuits defines the second correction signal. a microprocessor can embody the neural network and provide the processed version of the input signal. the microprocessor can also embody the feedback circuit. the predictive correction signal can be adjusted responsive to the size and polarity of the energy requirement variation.",1993-11-02,"The title of the patent is control circuit and power supply for televisions and its abstract is an adaptive feed forward control circuit and power supply for a television comprises a circuit for supplying energy from a source to a load, the load having energy requirements which vary in response to an input signal, for example a video signal. a feedback circuit generates a first correction signal indicative of a difference between an operating voltage or current level and a reference level. 
a neural network generates a second correction signal indicative of anticipated energy requirement variation by processing information in present values of the input signal. a control circuit, for example a pulse width modulating circuit, is responsive to the correction signals for controlling operation of the energy supplying circuit. the first and second correction signals are combined by a summing circuit. the neural network comprises a first signal adaptive circuit for the input signal and a second signal adaptive circuit for a processed version of the input signal. the processed input signal is linearly independent of the input signal to avoid redundancy in the weight factors. the square root of the input signal, for example, is appropriate for a switched mode power supply. the combination of outputs from the first and second signal adaptive circuits defines the second correction signal. a microprocessor can embody the neural network and provide the processed version of the input signal. the microprocessor can also embody the feedback circuit. the predictive correction signal can be adjusted responsive to the size and polarity of the energy requirement variation. dated 1993-11-02" 5258934,charge domain bit serial vector-matrix multiplier and method thereof,a charge domain bit serial vector matrix multiplier for real time signal processing of mixed digital/analog signals for implementing opto-electronic neural networks and other signal processing functions. a combination of ccd and dcsd arrays permits vector/matrix multiplication with better than 10.sup.11 multiply accumulates per second on a one square centimeter chip. the ccd array portion of the invention is used to load and move charge packets into the dcsd array for processing therein. the ccd array is also used to empty the matrix of unwanted charge. 
the dcsd array is designed to store a plurality of charge packets representing the respective matrix values such as the synaptic interaction matrix of a neural network. the vector multiplicand may be applied in bit serial format. the row or sensor lines of the dcsd array are used to accumulate the results of the multiply operation. each such row output line is provided with a divide-by-two/accumulate ccd circuit which automatically compensates for the increasing value of the input vector element's bits from least significant bit to most significant bit. in a similar fashion each row output line can be provided with a multiply-by-two/accumulate ccd circuit which automatically accounts for the decreasing value of the input vector element's bits from most significant bit to least significant bit. the accumulated charge packet output of the array may be preferably converted to a digital signal compatible with the input vector configuration by utilizing a plurality of analog-to-digital converters.,1993-11-02,The title of the patent is charge domain bit serial vector-matrix multiplier and method thereof and its abstract is a charge domain bit serial vector matrix multiplier for real time signal processing of mixed digital/analog signals for implementing opto-electronic neural networks and other signal processing functions. a combination of ccd and dcsd arrays permits vector/matrix multiplication with better than 10.sup.11 multiply accumulates per second on a one square centimeter chip. the ccd array portion of the invention is used to load and move charge packets into the dcsd array for processing therein. the ccd array is also used to empty the matrix of unwanted charge. the dcsd array is designed to store a plurality of charge packets representing the respective matrix values such as the synaptic interaction matrix of a neural network. the vector multiplicand may be applied in bit serial format. 
the row or sensor lines of the dcsd array are used to accumulate the results of the multiply operation. each such row output line is provided with a divide-by-two/accumulate ccd circuit which automatically compensates for the increasing value of the input vector element's bits from least significant bit to most significant bit. in a similar fashion each row output line can be provided with a multiply-by-two/accumulate ccd circuit which automatically accounts for the decreasing value of the input vector element's bits from most significant bit to least significant bit. the accumulated charge packet output of the array may be preferably converted to a digital signal compatible with the input vector configuration by utilizing a plurality of analog-to-digital converters. dated 1993-11-02 5259064,signal processing apparatus having at least one neural network having pulse density signals as inputs and outputs,"a signal processing apparatus for controlling an object includes an input unit, a neural network, an output unit, a teaching unit, and an error signal generator for generating a teaching signal that makes the neural network learn in real time. an error signal generator generates an error signal from the teaching signal and information contained in the network output signal. the error signal controls the neural network so that the control output signal has correct control information with respect to the output signal from the controlled object.",1993-11-02,"The title of the patent is signal processing apparatus having at least one neural network having pulse density signals as inputs and outputs and its abstract is a signal processing apparatus for controlling an object includes an input unit, a neural network, an output unit, a teaching unit, and an error signal generator for generating a teaching signal that makes the neural network learn in real time. 
an error signal generator generates an error signal from the teaching signal and information contained in the network output signal. the error signal controls the neural network so that the control output signal has correct control information with respect to the output signal from the controlled object. dated 1993-11-02" 5259065,data processing system,"a data processing system of the neural network type. the system recognizes a predetermined shape by providing some connections that are inhibitory between a plurality of neurons in a neural layer of the neural network. if data is found in the inhibitory area, it makes it harder for the neurons in the correct area to fire. only when the neurons in the correct area fire is the predetermined shape recognized.",1993-11-02,"The title of the patent is data processing system and its abstract is a data processing system of the neural network type. the system recognizes a predetermined shape by providing some connections that are inhibitory between a plurality of neurons in a neural layer of the neural network. if data is found in the inhibitory area, it makes it harder for the neurons in the correct area to fire. only when the neurons in the correct area fire is the predetermined shape recognized. dated 1993-11-02" 5259384,ultrasonic bone-assessment apparatus and method,"non-invasive, quantitative in-vivo ultrasonic evaluation of bone is performed by subjecting bone to an acoustic excitation pulse supplied to one of two transducers on opposite sides of the bone, and involving a composite sine-wave signal consisting of repetitions of plural discrete ultrasonic frequencies that are spaced at approximately 2 mhz. signal-processing of received signal output of the other transducer is operative to sequentially average the most recently received given number of successive signals to obtain an averaged per-pulse signal and to produce a fourier transform of this signal. 
in a separate operation, the same transducer responds to the transmission and reception of the same excitation signal via a medium of known acoustic properties and path length to establish a reference signal, which is processed to produce its fourier transform. the two fourier transforms are comparatively evaluated to produce a bone-transfer function, which is then processed to derive the frequency-dependent specific-attenuation and group-velocity functions .mu.(f) and vg(f) associated with the bone-transfer function. the function vg(f) is related to the derivative of the phase of the bone-transfer function, as a function of frequency. a neural network, configured to generate an estimate of one or more of the desired bone-related quantities, is connected for response to the functions .mu.(f) and vg(f), whereby to generate the indicated estimates of bone status, namely, bone-density, bone-strength and fracture risk.",1993-11-09,"The title of the patent is ultrasonic bone-assessment apparatus and method and its abstract is non-invasive, quantitative in-vivo ultrasonic evaluation of bone is performed by subjecting bone to an acoustic excitation pulse supplied to one of two transducers on opposite sides of the bone, and involving a composite sine-wave signal consisting of repetitions of plural discrete ultrasonic frequencies that are spaced at approximately 2 mhz. signal-processing of received signal output of the other transducer is operative to sequentially average the most recently received given number of successive signals to obtain an averaged per-pulse signal and to produce a fourier transform of this signal. in a separate operation, the same transducer responds to the transmission and reception of the same excitation signal via a medium of known acoustic properties and path length to establish a reference signal, which is processed to produce its fourier transform. 
the two fourier transforms are comparatively evaluated to produce a bone-transfer function, which is then processed to derive the frequency-dependent specific-attenuation and group-velocity functions .mu.(f) and vg(f) associated with the bone-transfer function. the function vg(f) is related to the derivative of the phase of the bone-transfer function, as a function of frequency. a neural network, configured to generate an estimate of one or more of the desired bone-related quantities, is connected for response to the functions .mu.(f) and vg(f), whereby to generate the indicated estimates of bone status, namely, bone-density, bone-strength and fracture risk. dated 1993-11-09" 5260706,priority encoder,"a priority encoder using a mos array and neural network concepts is composed of an input side neuron group, an output side neuron group, a synapse group, a bias group and inverters. the encoder is simple in its construction and fast in its operating speed compared with the conventional priority encoders utilizing simple boolean logic.",1993-11-09,"The title of the patent is priority encoder and its abstract is a priority encoder using a mos array and neural network concepts is composed of an input side neuron group, an output side neuron group, a synapse group, a bias group and inverters. the encoder is simple in its construction and fast in its operating speed compared with the conventional priority encoders utilizing simple boolean logic. dated 1993-11-09" 5260871,method and apparatus for diagnosis of breast tumors,"an apparatus for distinguishing benign from malignant tumors in ultrasonic images of candidate tissue taken from a patient. a region of interest is located and defined on the ultrasonic image, including substantially all of the candidate tissue and excluding substantially all the normal tissue. the region of interest is digitized, generating an array of pixels intensity values. 
a first feature is generated from the array of pixels corresponding to the angular second moment of the pixel intensity values. a second feature is generated from the array of pixels corresponding to the inverse contrast of the pixel intensity values. a third feature is generated from the array of pixels corresponding to the short run emphasis of the pixel intensity values. the first, second and third feature values are provided to a neural network. a set of trained weights is applied to the feature values, which generates a network output between 0 and 1, whereby the output values tend toward 1 when the candidate tissue is malignant and the output values tend toward 0 when the candidate tissue is benign.",1993-11-09,"The title of the patent is method and apparatus for diagnosis of breast tumors and its abstract is an apparatus for distinguishing benign from malignant tumors in ultrasonic images of candidate tissue taken from a patient. a region of interest is located and defined on the ultrasonic image, including substantially all of the candidate tissue and excluding substantially all the normal tissue. the region of interest is digitized, generating an array of pixels intensity values. a first feature is generated from the array of pixels corresponding to the angular second moment of the pixel intensity values. a second feature is generated from the array of pixels corresponding to the inverse contrast of the pixel intensity values. a third feature is generated from the array of pixels corresponding to the short run emphasis of the pixel intensity values. the first, second and third feature values are provided to a neural network. a set of trained weights is applied to the feature values, which generates a network output between 0 and 1, whereby the output values tend toward 1 when the candidate tissue is malignant and the output values tend toward 0 when the candidate tissue is benign.
dated 1993-11-09" 5261035,neural network architecture based on summation of phase-coherent alternating current signals,""" a neural network architecture has phase-coherent alternating current neural input signals. each input v.sub.k.sup.in is a two-phase pair of signals 180 degrees out of phase. capacitive coupling of both signals of n input pairs to a summation line gives a non-dissipative realization of the weighted sum ##equ1## with general real neural weights w.sub.ik. an alternating current offset signal proportional to u.sub.i is also capacitively coupled to the summation line. the signal on the summation line is passed through a low input capacitance follower/amplifier, a rectifier and a filter, producing a direct current signal proportional to the magnitude ##equ2## this signal is compared with a direct current threshold proportional to t.sub.i, and the resultant is used to gate a two-phase alternating current output signal. the output is therefore functionally related to the inputs by ##equ3## with .theta. the heaviside step function. this generalized neuron can directly compute the """"exclusive or"""" (xor) logical operation. alternative forms of the alternating current neuron using phase-shifters permit complex number inputs, outputs and neural weightings. """,1993-11-09,"The title of the patent is neural network architecture based on summation of phase-coherent alternating current signals and its abstract is "" a neural network architecture has phase-coherent alternating current neural input signals. each input v.sub.k.sup.in is a two-phase pair of signals 180 degrees out of phase. capacitive coupling of both signals of n input pairs to a summation line gives a non-dissipative realization of the weighted sum ##equ1## with general real neural weights w.sub.ik. an alternating current offset signal proportional to u.sub.i is also capacitively coupled to the summation line. 
the signal on the summation line is passed through a low input capacitance follower/amplifier, a rectifier and a filter, producing a direct current signal proportional to the magnitude ##equ2## this signal is compared with a direct current threshold proportional to t.sub.i, and the resultant is used to gate a two-phase alternating current output signal. the output is therefore functionally related to the inputs by ##equ3## with .theta. the heaviside step function. this generalized neuron can directly compute the """"exclusive or"""" (xor) logical operation. alternative forms of the alternating current neuron using phase-shifters permit complex number inputs, outputs and neural weightings. "" dated 1993-11-09" 5262632,integrated circuit for achieving pattern recognition,"an apparatus for massive computation in integrated circuits provides the ability to calculate multiple dot products between an image focused on the integrated circuit surface and many reference patterns built into the integrated circuit, and then give an output indication for all those reference patterns where the dot product exceeds a threshold. the implementation, using current mirrors for multiplication with fixed constants, permits the integrated circuit to achieve large amounts of computation per unit area. this apparatus permits a large input data bandwidth, and by virtue of having enough computation capacity to complete a processing task on one chip, the output bandwidth is greatly reduced as well. the apparatus is employed, as an example, in a neural network. a set of connections between nodes that modify the value of the signal passed from one node to the next. often many connections impinge on a node, and the summation of values at the node is further modified by a nonlinear function such as a threshold and amplitude limiter. values at the input nodes represent the signals to be evaluated by the network, and values at the outputs represent an evaluation by the network of the input signals. 
for instance, the input could be image pixels and the outputs could represent possible patterns to which the image could be assigned. the connections between weights are often determined and modified by training data, but they can also be prespecified in total or in part based on other information about the task of the network.",1993-11-16,"The title of the patent is integrated circuit for achieving pattern recognition and its abstract is an apparatus for massive computation in integrated circuits provides the ability to calculate multiple dot products between an image focused on the integrated circuit surface and many reference patterns built into the integrated circuit, and then give an output indication for all those reference patterns where the dot product exceeds a threshold. the implementation, using current mirrors for multiplication with fixed constants, permits the integrated circuit to achieve large amounts of computation per unit area. this apparatus permits a large input data bandwidth, and by virtue of having enough computation capacity to complete a processing task on one chip, the output bandwidth is greatly reduced as well. the apparatus is employed, as an example, in a neural network. a set of connections between nodes that modify the value of the signal passed from one node to the next. often many connections impinge on a node, and the summation of values at the node is further modified by a nonlinear function such as a threshold and amplitude limiter. values at the input nodes represent the signals to be evaluated by the network, and values at the outputs represent an evaluation by the network of the input signals. for instance, the input could be image pixels and the outputs could represent possible patterns to which the image could be assigned. the connections between weights are often determined and modified by training data, but they can also be prespecified in total or in part based on other information about the task of the network. 
dated 1993-11-16" 5263107,receptive field neural network with shift-invariant pattern recognition,"a neural network system and method of operating same wherein input data are initialized, then mapped onto a predetermined array for learning or recognition. the mapped information is divided into sub-input data or receptive fields, which are used for comparison of the input information with prelearned information having similar features, thereby allowing for correct classification of the input information. the receptive fields are shifted before the classification process, in order to generate a closest match between features which may be shifted at the time of input, and weights of the input information are updated based upon the closest-matching shifted position of the input information.",1993-11-16,"The title of the patent is receptive field neural network with shift-invariant pattern recognition and its abstract is a neural network system and method of operating same wherein input data are initialized, then mapped onto a predetermined array for learning or recognition. the mapped information is divided into sub-input data or receptive fields, which are used for comparison of the input information with prelearned information having similar features, thereby allowing for correct classification of the input information. the receptive fields are shifted before the classification process, in order to generate a closest match between features which may be shifted at the time of input, and weights of the input information are updated based upon the closest-matching shifted position of the input information. dated 1993-11-16" 5263121,neural network solution for interconnection apparatus,a neural network solution for routing calls through a three stage interconnection network selects an open path through the interconnection network if one exists. the neural network solution uses a neural network with a binary threshold. 
the weights of the neural network are fixed for all time and therefore are independent of the current state of the interconnection network. preferential call placement strategies are implemented by selecting appropriate external inputs to the neural network. an interconnection network controller stores information reflecting the current usage of the interconnection network and interfaces between the interconnection network and the neural network.,1993-11-16,The title of the patent is neural network solution for interconnection apparatus and its abstract is a neural network solution for routing calls through a three stage interconnection network selects an open path through the interconnection network if one exists. the neural network solution uses a neural network with a binary threshold. the weights of the neural network are fixed for all time and therefore are independent of the current state of the interconnection network. preferential call placement strategies are implemented by selecting appropriate external inputs to the neural network. an interconnection network controller stores information reflecting the current usage of the interconnection network and interfaces between the interconnection network and the neural network. 
dated 1993-11-16 5263122,neural network architecture,a frequency-based neural network in which the state of a neuron is indicated by the frequency of an impulse stream emitted by the neuron uses an interconnectivity structure employing a frequency-modulation multiplexing scheme to weight and communicate the pulse stream from an emitting neuron to receiving neurons in another network level.,1993-11-16,The title of the patent is neural network architecture and its abstract is a frequency-based neural network in which the state of a neuron is indicated by the frequency of an impulse stream emitted by the neuron uses an interconnectivity structure employing a frequency-modulation multiplexing scheme to weight and communicate the pulse stream from an emitting neuron to receiving neurons in another network level. dated 1993-11-16 5264734,difference calculating neural network utilizing switched capacitors,a difference calculating neural network is disclosed having an array of synapse cells arranged in rows and columns along pairs of row and column lines. the cells include a pair of floating gate devices which have their control gates coupled to receive one of a pair of complementary input voltages. the floating gate devices also have complementary threshold voltages such that packets of charge are produced from the synapse cells that are proportional to the difference between the input and voltage threshold. the charge packets are accumulated by the pairs of column lines in the array.,1993-11-23,The title of the patent is difference calculating neural network utilizing switched capacitors and its abstract is a difference calculating neural network is disclosed having an array of synapse cells arranged in rows and columns along pairs of row and column lines. the cells include a pair of floating gate devices which have their control gates coupled to receive one of a pair of complementary input voltages. 
the floating gate devices also have complementary threshold voltages such that packets of charge are produced from the synapse cells that are proportional to the difference between the input and voltage threshold. the charge packets are accumulated by the pairs of column lines in the array. dated 1993-11-23 5265192,method for the automated editing of seismic traces using an adaptive network,"an adaptive, or neural, network and a method of operating the same is disclosed which is particularly adapted for performing seismic trace editing for seismic shot records. the adaptive network is first trained according to the generalized delta rule. the disclosed training method includes backpropagation performed according to the worst case error trace, including adjustment of the learning and momentum factors to increase as the worst case error decreases. slow convergence regions are detected, and methods applied to escape such regions including restoration of previously trimmed dormant links, renormalization of the weighting factor values, and the addition of new network layers with links between nodes that skip the hidden layer. after the training of the network, data corresponding to a discrete fast fourier transform of each trace, and to certain other attributes of the trace and adjacent traces thereto, are presented to the network. the network classifies the trace as good or noisy according to the inputs thereto, and to the weighting factors therewithin, such classification useful for ignoring noisy traces in subsequent data analysis. the analysis may be repeated for all of the traces in the shot record, and in multiple shot records.",1993-11-23,"The title of the patent is method for the automated editing of seismic traces using an adaptive network and its abstract is an adaptive, or neural, network and a method of operating the same is disclosed which is particularly adapted for performing seismic trace editing for seismic shot records.
the adaptive network is first trained according to the generalized delta rule. the disclosed training method includes backpropagation performed according to the worst case error trace, including adjustment of the learning and momentum factors to increase as the worst case error decreases. slow convergence regions are detected, and methods applied to escape such regions including restoration of previously trimmed dormant links, renormalization of the weighting factor values, and the addition of new network layers with links between nodes that skip the hidden layer. after the training of the network, data corresponding to a discrete fast fourier transform of each trace, and to certain other attributes of the trace and adjacent traces thereto, are presented to the network. the network classifies the trace as good or noisy according to the inputs thereto, and to the weighting factors therewithin, such classification useful for ignoring noisy traces in subsequent data analysis. the analysis may be repeated for all of the traces in the shot record, and in multiple shot records. dated 1993-11-23" 5267151,method and apparatus for detecting and identifying a condition,"a method and apparatus for sensing and classifying a condition of interest in a system from background noise in which a parameter representative of the condition of interest is sensed and an electrical signal representative of the sensed parameter is produced. the electrical signal is converted into a digital signal, this digital signal containing a signal of interest representative of the condition of interest and background noise.
the digital signal is received by an artificial neural network which filters out the background noise to produce a filtered signal from the digital signal, and classifies the signal of interest from the filtered signal to produce an output representative of the classified signal.",1993-11-30,"The title of the patent is method and apparatus for detecting and identifying a condition and its abstract is a method and apparatus for sensing and classifying a condition of interest in a system from background noise in which a parameter representative of the condition of interest is sensed and an electrical signal representative of the sensed parameter is produced. the electrical signal is converted into a digital signal, this digital signal containing a signal of interest representative of the condition of interest and background noise. the digital signal is received by an artificial neural network which filters out the background noise to produce a filtered signal from the digital signal, and classifies the signal of interest from the filtered signal to produce an output representative of the classified signal. dated 1993-11-30" 5267165,data processing device and method for selecting data words contained in a dictionary,"a data processing device for selecting data words which are contained in a dictionary and which are nearest to a data word to be processed according to a correspondence criterion. 
the device includes: first apparatus for segmenting the space enclosing the assembly of data words of the dictionary; second apparatus for generating, for each segment, sub-dictionaries by making an arbitrary segment correspond, in accordance with the correspondence criterion, to words of a sub-dictionary; third apparatus for utilising the sub-dictionaries by determining, for an arbitrary data word to be processed, the segment with which it is associated, followed by determination, in accordance with the correspondence criterion, of that word or words among the words of the sub-dictionary associated with the segment which corresponds (correspond) best to the arbitrary data word to be processed. segmentation can be realised by means of a layered or tree-like neural network. the device may be used for data compression or data classification.",1993-11-30,"The title of the patent is data processing device and method for selecting data words contained in a dictionary and its abstract is a data processing device for selecting data words which are contained in a dictionary and which are nearest to a data word to be processed according to a correspondence criterion. the device includes: first apparatus for segmenting the space enclosing the assembly of data words of the dictionary; second apparatus for generating, for each segment, sub-dictionaries by making an arbitrary segment correspond, in accordance with the correspondence criterion, to words of a sub-dictionary; third apparatus for utilising the sub-dictionaries by determining, for an arbitrary data word to be processed, the segment with which it is associated, followed by determination, in accordance with the correspondence criterion, of that word or words among the words of the sub-dictionary associated with the segment which corresponds (correspond) best to the arbitrary data word to be processed. segmentation can be realised by means of a layered or tree-like neural network. 
the device may be used for data compression or data classification. dated 1993-11-30" 5267347,information processing element,"an information processing element for processing information with a function of neural network includes a semiconductor integrated circuit element portion comprising a plurality of neuron circuit regions constituting a neuron function among the neural network function, a molecular film element having a light-electricity function, provided on the circuit element portion, and the combination between the plurality of neurons is realized by utilizing a photoconductivity property of the molecular film element.",1993-11-30,"The title of the patent is information processing element and its abstract is an information processing element for processing information with a function of neural network includes a semiconductor integrated circuit element portion comprising a plurality of neuron circuit regions constituting a neuron function among the neural network function, a molecular film element having a light-electricity function, provided on the circuit element portion, and the combination between the plurality of neurons is realized by utilizing a photoconductivity property of the molecular film element. dated 1993-11-30" 5267502,weapons systems future muzzle velocity neural network,"in a device and method for predicting a future muzzle velocity of an indirect fire weapon 3, 7 means 9, 11 responsive to a measurement of muzzle velocity are adapted to implement an adaptive empirical prediction method to predict the future muzzle velocity. the invention also relates to an aiming system and method for an indirect-fire weapon 3, 7. the system comprises a muzzle velocity measuring device 5, and predictor means 9, 11 responsive to an output of the muzzle velocity measuring device 5 for determining a new elevation setting from the weapon. 
preferably, the predictor means utilizes an adaptive empirical prediction method such as a kalman filter or neural network.",1993-12-07,"The title of the patent is weapons systems future muzzle velocity neural network and its abstract is in a device and method for predicting a future muzzle velocity of an indirect fire weapon 3, 7 means 9, 11 responsive to a measurement of muzzle velocity are adapted to implement an adaptive empirical prediction method to predict the future muzzle velocity. the invention also relates to an aiming system and method for an indirect-fire weapon 3, 7. the system comprises a muzzle velocity measuring device 5, and predictor means 9, 11 responsive to an output of the muzzle velocity measuring device 5 for determining a new elevation setting from the weapon. preferably, the predictor means utilizes an adaptive empirical prediction method such as a kalman filter or neural network. dated 1993-12-07" 5268320,method of increasing the accuracy of an analog circuit employing floating gate memory devices,"a method for increasing the accuracy of an analog neural network which computes a sum-of-products between an input vector and a stored weight pattern is described. in one embodiment of the present invention, the method comprises initially training the network by programming the synapses with a certain weight pattern. the training may be carried out using any standard learning algorithm. preferably, a back-propagation learning algorithm is employed. next, the network is baked at an elevated temperature to effectuate a change in the weight pattern previously programmed during initial training. this change results from a charge redistribution which occurs within each of the synapses of the network. after baking, the network is then retrained to compensate for the change resulting from the charge redistribution.
the baking and retraining steps may be successively repeated to increase the accuracy of the neural network to any desired level.",1993-12-07,"The title of the patent is method of increasing the accuracy of an analog circuit employing floating gate memory devices and its abstract is a method for increasing the accuracy of an analog neural network which computes a sum-of-products between an input vector and a stored weight pattern is described. in one embodiment of the present invention, the method comprises initially training the network by programming the synapses with a certain weight pattern. the training may be carried out using any standard learning algorithm. preferably, a back-propagation learning algorithm is employed. next, the network is baked at an elevated temperature to effectuate a change in the weight pattern previously programmed during initial training. this change results from a charge redistribution which occurs within each of the synapses of the network. after baking, the network is then retrained to compensate for the change resulting from the charge redistribution. the baking and retraining steps may be successively repeated to increase the accuracy of the neural network to any desired level. dated 1993-12-07" 5268684,apparatus for a neural network one-out-of-n encoder/decoder,"an artificial network for encoding the binary on-state of one-out-of-n inputs, say j, when only one state is on at a time wherein the jth on-state is represented by a suitable output level of an n-input mp type neuron operating in the non-saturated region of the neuron output nonlinearity. a single line transmits the encoded amplitude level signal to a decoder having n single input neural networks.
the n outputs of the decoder are in the off-state except for the output corresponding to the active input node of the encoder.",1993-12-07,"The title of the patent is apparatus for a neural network one-out-of-n encoder/decoder and its abstract is an artificial network for encoding the binary on-state of one-out-of-n inputs, say j, when only one state is on at a time wherein the jth on-state is represented by a suitable output level of an n-input mp type neuron operating in the non-saturated region of the neuron output nonlinearity. a single line transmits the encoded amplitude level signal to a decoder having n single input neural networks. the n outputs of the decoder are in the off-state except for the output corresponding to the active input node of the encoder. dated 1993-12-07" 5268834,stable adaptive neural network controller,"an adaptive control system uses a neural network to provide adaptive control when the plant is operating within a normal operating range, but shifts to other types of control as the plant operating conditions move outside of the normal operating range. the controller uses a structure which allows the neural network parameters to be determined from minimal information about plant structure and the neural network is trained on-line during normal plant operation. the resulting system can be proven to be stable over all possible conditions. further, with the inventive techniques, the tracking accuracy can be controlled by appropriate network design.",1993-12-07,"The title of the patent is stable adaptive neural network controller and its abstract is an adaptive control system uses a neural network to provide adaptive control when the plant is operating within a normal operating range, but shifts to other types of control as the plant operating conditions move outside of the normal operating range.
the controller uses a structure which allows the neural network parameters to be determined from minimal information about plant structure and the neural network is trained on-line during normal plant operation. the resulting system can be proven to be stable over all possible conditions. further, with the inventive techniques, the tracking accuracy can be controlled by appropriate network design. dated 1993-12-07" 5270950,apparatus and a method for locating a source of acoustic emission in a material,"an apparatus for locating a source of acoustic emission in a material comprises four spaced transducers coupled to the material. each transducer produces an output signal corresponding to a detected acoustic emission activity, and each output signal is amplified, rectified and enveloped before being supplied to a processor. artificially induced acoustic emission events, of known location, are generated in the material. the processor measures the times taken for each output signal corresponding to artificially induced acoustic emission events, to exceed two predetermined amplitudes from a datum time. a neural network analyzes the measured times to exceed the predetermined amplitudes for the output signals corresponding to the artificially induced acoustic emission events and infers the mathematical relationship between values of time and location of acoustic emission event. the times taken for each output signal, corresponding to acoustic emission events of unknown source location, to exceed two predetermined amplitudes from the datum are measured and are used to calculate the location of the unknown source with the mathematical relationship deduced by the neural network.",1993-12-14,"The title of the patent is apparatus and a method for locating a source of acoustic emission in a material and its abstract is an apparatus for locating a source of acoustic emission in a material comprises four spaced transducers coupled to the material. 
each transducer produces an output signal corresponding to a detected acoustic emission activity, and each output signal is amplified, rectified and enveloped before being supplied to a processor. artificially induced acoustic emission events, of known location, are generated in the material. the processor measures the times taken for each output signal corresponding to artificially induced acoustic emission events, to exceed two predetermined amplitudes from a datum time. a neural network analyzes the measured times to exceed the predetermined amplitudes for the output signals corresponding to the artificially induced acoustic emission events and infers the mathematical relationship between values of time and location of acoustic emission event. the times taken for each output signal, corresponding to acoustic emission events of unknown source location, to exceed two predetermined amplitudes from the datum are measured and are used to calculate the location of the unknown source with the mathematical relationship deduced by the neural network. dated 1993-12-14" 5271090,operational speed improvement for neural network,"higher operational speed is obtained without sacrificing computational accuracy and reliability in a neural network by interchanging a computationally complex nonlinear function with a similar but less complex nonlinear function in each neuron or computational element after each neuron of the network has been trained by an appropriate training algorithm for the classifying problem addressed by the neural network. 
in one exemplary embodiment, a hyperbolic tangent function is replaced by a piecewise linear threshold logic function.",1993-12-14,"The title of the patent is operational speed improvement for neural network and its abstract is higher operational speed is obtained without sacrificing computational accuracy and reliability in a neural network by interchanging a computationally complex nonlinear function with a similar but less complex nonlinear function in each neuron or computational element after each neuron of the network has been trained by an appropriate training algorithm for the classifying problem addressed by the neural network. in one exemplary embodiment, a hyperbolic tangent function is replaced by a piecewise linear threshold logic function. dated 1993-12-14" 5272723,waveform equalizer using a neural network,"a waveform equalizer for equalizing a distorted signal, contains a sampling unit, a time series generating unit, and an equalization neural network unit. the sampling unit samples the level of a distorted signal at a predetermined rate. the time series generating unit serially receives the sampled level and outputs in parallel a predetermined number of the levels which have been last received. the equalization neural network unit receives the outputs of the time series generating unit, and generates an equalized signal of the distorted signal based on the outputs of the time series generating unit using a set of equalization network weights which are preset therein. the waveform equalizer may further contain a distortion characteristic detecting unit, an equalization network weight holding unit, and a selector unit. the distortion characteristic detecting unit detects a distortion characteristic of the distorted signal. the equalization network weight holding unit holds a plurality of sets of equalization network weights each for being set in the equalization neural network unit. 
the selector unit selects one of the plurality of sets of equalization network weights according to the distortion characteristic which is detected in the distortion characteristic detecting unit, and supplies the selected set in the equalization neural network unit to set the selected set therein.",1993-12-21,"The title of the patent is waveform equalizer using a neural network and its abstract is a waveform equalizer for equalizing a distorted signal, contains a sampling unit, a time series generating unit, and an equalization neural network unit. the sampling unit samples the level of a distorted signal at a predetermined rate. the time series generating unit serially receives the sampled level and outputs in parallel a predetermined number of the levels which have been last received. the equalization neural network unit receives the outputs of the time series generating unit, and generates an equalized signal of the distorted signal based on the outputs of the time series generating unit using a set of equalization network weights which are preset therein. the waveform equalizer may further contain a distortion characteristic detecting unit, an equalization network weight holding unit, and a selector unit. the distortion characteristic detecting unit detects a distortion characteristic of the distorted signal. the equalization network weight holding unit holds a plurality of sets of equalization network weights each for being set in the equalization neural network unit. the selector unit selects one of the plurality of sets of equalization network weights according to the distortion characteristic which is detected in the distortion characteristic detecting unit, and supplies the selected set in the equalization neural network unit to set the selected set therein. 
dated 1993-12-21" 5274714,method and apparatus for determining and organizing feature vectors for neural network recognition,"a pattern recognition method and apparatus utilizes a neural network to recognize input images which are sufficiently similar to a database of previously stored images. images are first processed and subjected to a fourier transform which yields a power spectrum. an in-class to out-of-class study is performed on a typical collection of images in order to determine the most discriminatory regions of the fourier transform. a feature vector consisting of the (most discriminatory) information from the power spectrum of the fourier transform of the image is formed. feature vectors are input to a neural network having preferably two hidden layers, input dimensionality of the number of elements in the feature vector and output dimensionality of the number of data elements stored in the database. unique identifier numbers are preferably stored along with the feature vector. application of a query feature vector to the neural network results in an output vector. the output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate a successful identification whereupon a unique identifier number may be displayed.",1993-12-28,"The title of the patent is method and apparatus for determining and organizing feature vectors for neural network recognition and its abstract is a pattern recognition method and apparatus utilizes a neural network to recognize input images which are sufficiently similar to a database of previously stored images. images are first processed and subjected to a fourier transform which yields a power spectrum. an in-class to out-of-class study is performed on a typical collection of images in order to determine the most discriminatory regions of the fourier transform. 
a feature vector consisting of the (most discriminatory) information from the power spectrum of the fourier transform of the image is formed. feature vectors are input to a neural network having preferably two hidden layers, input dimensionality of the number of elements in the feature vector and output dimensionality of the number of data elements stored in the database. unique identifier numbers are preferably stored along with the feature vector. application of a query feature vector to the neural network results in an output vector. the output vector is subjected to statistical analysis to determine if a sufficiently high confidence level exists to indicate a successful identification whereupon a unique identifier number may be displayed. dated 1993-12-28" 5274742,combination problem solving method and apparatus,"by using the state transition of a highly interconnected neural network, in order to solve a combination problem, an energy function is set by the following procedure: (i) the energy function is set in correspondence to the size of the combination problem; (ii) the energy function is set for a combination problem to be solved by using an energy function which solved another combination problem of a different size from the combination problem to be solved. 
also, in order to solve a problem involving the cutting out of a specific image from a whole image, as a combination problem when obtaining pixels representing a contour of an object, the energy function is set by either (i) or (ii) above.",1993-12-28,"The title of the patent is combination problem solving method and apparatus and its abstract is by using the state transition of a highly interconnected neural network, in order to solve a combination problem, an energy function is set by the following procedure: (i) the energy function is set in correspondence to the size of the combination problem; (ii) the energy function is set for a combination problem to be solved by using an energy function which solved another combination problem of a different size from the combination problem to be solved. also, in order to solve a problem involving the cutting out of a specific image from a whole image, as a combination problem when obtaining pixels representing a contour of an object, the energy function is set by either (i) or (ii) above. dated 1993-12-28" 5274744,neural network for performing a relaxation process,"in accordance with the present invention, a neural network comprising an array of neurons (i.e. processing nodes) interconnected by synapses (i.e. weighted transmission links) is utilized to carry out a probabilistic relaxation process. the inventive neural network is especially suited for carrying out a variety of image processing tasks such as thresholding.",1993-12-28,"The title of the patent is neural network for performing a relaxation process and its abstract is in accordance with the present invention, a neural network comprising an array of neurons (i.e. processing nodes) interconnected by synapses (i.e. weighted transmission links) is utilized to carry out a probabilistic relaxation process. the inventive neural network is especially suited for carrying out a variety of image processing tasks such as thresholding. 
dated 1993-12-28" 5274745,method of processing information in artificial neural networks,"a method of processing information in an artificial neural network including a plurality of artificial neurons and weighted links coupling the neurons. in the method, those of the artificial neurons whose output values change by a value greater than a threshold value are selected. the output values of the selected neurons are calculated, and the influence which the changes in the output values of the selected neurons impose on the input values of the other artificial neurons is computed. the threshold value is changed such that an appropriate number of neurons are selected. the information processing in the artificial neural network is stopped when the threshold value decreases below a predetermined small value and the values output by all artificial neurons change by a value equal to or less than the threshold value.",1993-12-28,"The title of the patent is method of processing information in artificial neural networks and its abstract is a method of processing information in an artificial neural network including a plurality of artificial neurons and weighted links coupling the neurons. in the method, those of the artificial neurons whose output values change by a value greater than a threshold value are selected. the output values of the selected neurons are calculated, and the influence which the changes in the output values of the selected neurons impose on the input values of the other artificial neurons is computed. the threshold value is changed such that an appropriate number of neurons are selected. the information processing in the artificial neural network is stopped when the threshold value decreases below a predetermined small value and the values output by all artificial neurons change by a value equal to or less than the threshold value. 
dated 1993-12-28" 5274746,coupling element for semiconductor neural network device,"a neural network device includes internal data input lines, internal data output lines, coupling elements provided at the connections of the internal data input lines and the internal data output lines, word lines each for selecting one row of coupling elements. the coupling elements couple, with specific programmable coupling strengths, the associated internal data input lines to the associated internal data output lines. in a program mode, the internal data output lines serve as signal lines for transmitting the coupling strength information. each of the coupling elements includes memories constituted of cross-coupled inverters for storing the coupling strength information, first switching transistors responsive to signal potentials on associated word lines for connecting the memories to associated internal data output lines, second switching elements responsive to signal potentials on associated internal data input lines for transmitting the storage information in the memories to the associated internal data output lines. each of the internal data output lines has a pair of first and second internal data output lines.",1993-12-28,"The title of the patent is coupling element for semiconductor neural network device and its abstract is a neural network device includes internal data input lines, internal data output lines, coupling elements provided at the connections of the internal data input lines and the internal data output lines, word lines each for selecting one row of coupling elements. the coupling elements couple, with specific programmable coupling strengths, the associated internal data input lines to the associated internal data output lines. in a program mode, the internal data output lines serve as signal lines for transmitting the coupling strength information. 
each of the coupling elements includes memories constituted of cross-coupled inverters for storing the coupling strength information, first switching transistors responsive to signal potentials on associated word lines for connecting the memories to associated internal data output lines, second switching elements responsive to signal potentials on associated internal data input lines for transmitting the storage information in the memories to the associated internal data output lines. each of the internal data output lines has a pair of first and second internal data output lines. dated 1993-12-28" 5274748,electronic synapse circuit for artificial neural network,"an electronic synapse circuit is disclosed for multiplying an analog weight signal value by a digital state signal value to achieve a signed product value as a current which is capable of being summed with other such synapse circuit outputs. the circuit employs a storage multiplying digital-to-analog converter which provides storage for the analog weight signal value. additional circuitry permits programming different analog weight signal values into the circuit, performing four-quadrant multiplication, generating a current summable output, and maintaining the stored analog weight signal value at a substantially constant value independent of the digital state signal values.",1993-12-28,"The title of the patent is electronic synapse circuit for artificial neural network and its abstract is an electronic synapse circuit is disclosed for multiplying an analog weight signal value by a digital state signal value to achieve a signed product value as a current which is capable of being summed with other such synapse circuit outputs. the circuit employs a storage multiplying digital-to-analog converter which provides storage for the analog weight signal value. 
additional circuitry permits programming different analog weight signal values into the circuit, performing four-quadrant multiplication, generating a current summable output, and maintaining the stored analog weight signal value at a substantially constant value independent of the digital state signal values. dated 1993-12-28" 5276769,neural network learning apparatus and method,"a learning apparatus for use in a neural network system which has a plurality of classes representing different meanings. the learning apparatus is provided for learning a number of different patterns, inputted by input vectors, and classified in different classes. the learning apparatus is constructed by a computer and it includes a section for producing a plurality of output vectors representing different classes in response to an input vector, a section for obtaining a first largest output vector of all the output vectors, a section for obtaining a second largest output vector of all the output vectors, and a section for setting predetermined weights to the first and second largest output vectors, respectively, such that the first largest output vector is made larger, and the second largest output vector is made smaller. furthermore, a section for determining a ratio of the weighted first and second largest output vectors, respectively, is included. if the determined ratio is smaller than a predetermined value, the weighted first and second largest output vectors are further weighted to be made further larger and smaller, respectively.",1994-01-04,"The title of the patent is neural network learning apparatus and method and its abstract is a learning apparatus for use in a neural network system which has a plurality of classes representing different meanings. the learning apparatus is provided for learning a number of different patterns, inputted by input vectors, and classified in different classes. 
the learning apparatus is constructed by a computer and it includes a section for producing a plurality of output vectors representing different classes in response to an input vector, a section for obtaining a first largest output vector of all the output vectors, a section for obtaining a second largest output vector of all the output vectors, and a section for setting predetermined weights to the first and second largest output vectors, respectively, such that the first largest output vector is made larger, and the second largest output vector is made smaller. furthermore, a section for determining a ratio of the weighted first and second largest output vectors, respectively, is included. if the determined ratio is smaller than a predetermined value, the weighted first and second largest output vectors are further weighted to be made further larger and smaller, respectively. dated 1994-01-04" 5276770,training of neural network for multi-source data fusion,"a method of training a multilayer perceptron type neural network to provide a processor for fusion of target angle data detected by a plurality of sensors. the neural network includes a layer of input neurons at least equal in number to the number of sensors plus the maximum number of targets, at least one layer of inner neurons, and a plurality of output neurons forming an output layer. each neuron is connected to every neuron in adjacent layers by adjustable weighted synaptic connections. 
the method of training comprises the steps of (a) for each sensor, designating a plurality of the input neurons for receiving any target angle data, the number of designated input neurons for each sensor being at least as large as the maximum number of targets to be detected by the sensor; (b) for a known set of targets having a known target angle for each sensor, applying a signal related to each known target angle to the designated input neurons for each of the sensors, wherein the output neurons will produce an initial output; (c) for a selected one of the sensors, designating a plurality of the output neurons to correspond to the input neurons designated for the selected sensor and applying the signal related to the known target angles for the selected sensor to the designated output neurons to provide a designated output signal wherein the difference between the initial output and the designated output signal is used to adapt the weights throughout the neural network to provide an adjusted output signal; and (d) repeating steps (a)-(c) until the adjusted output signal corresponds to a desired output signal.",1994-01-04,"The title of the patent is training of neural network for multi-source data fusion and its abstract is a method of training a multilayer perceptron type neural network to provide a processor for fusion of target angle data detected by a plurality of sensors. the neural network includes a layer of input neurons at least equal in number to the number of sensors plus the maximum number of targets, at least one layer of inner neurons, and a plurality of output neurons forming an output layer. each neuron is connected to every neuron in adjacent layers by adjustable weighted synaptic connections. 
the method of training comprises the steps of (a) for each sensor, designating a plurality of the input neurons for receiving any target angle data, the number of designated input neurons for each sensor being at least as large as the maximum number of targets to be detected by the sensor; (b) for a known set of targets having a known target angle for each sensor, applying a signal related to each known target angle to the designated input neurons for each of the sensors, wherein the output neurons will produce an initial output; (c) for a selected one of the sensors, designating a plurality of the output neurons to correspond to the input neurons designated for the selected sensor and applying the signal related to the known target angles for the selected sensor to the designated output neurons to provide a designated output signal wherein the difference between the initial output and the designated output signal is used to adapt the weights throughout the neural network to provide an adjusted output signal; and (d) repeating steps (a)-(c) until the adjusted output signal corresponds to a desired output signal. dated 1994-01-04" 5276771,rapidly converging projective neural network,"a data processing system and method for solving pattern classification problems and function-fitting problems includes a neural network in which n-dimensional input vectors are augmented with at least one element to form an n+j-dimensional projected input vector, whose magnitude is then preferably normalized to lie on the surface of a hypersphere. weight vectors of at least a lowest intermediate layer of network nodes are preferably also constrained to lie on the n+j-dimensional surface. to train the network, the system compares network output values with known goal vectors, and an error function (which depends on all weights and threshold values of the intermediate and output nodes) is then minimized. 
in order to decrease the network's learning time even further, the weight vectors for the intermediate nodes are initially preferably set equal to known prototypes for the various classes of input vectors. furthermore, the invention also allows separation of the network into sub-networks, which are then trained individually and later recombined. the network is able to use both hyperspheres and hyperplanes to form decision boundaries, and, indeed, can converge to the one even if it initially assumes the other.",1994-01-04,"The title of the patent is rapidly converging projective neural network and its abstract is a data processing system and method for solving pattern classification problems and function-fitting problems includes a neural network in which n-dimensional input vectors are augmented with at least one element to form an n+j-dimensional projected input vector, whose magnitude is then preferably normalized to lie on the surface of a hypersphere. weight vectors of at least a lowest intermediate layer of network nodes are preferably also constrained to lie on the n+j-dimensional surface. to train the network, the system compares network output values with known goal vectors, and an error function (which depends on all weights and threshold values of the intermediate and output nodes) is then minimized. in order to decrease the network's learning time even further, the weight vectors for the intermediate nodes are initially preferably set equal to known prototypes for the various classes of input vectors. furthermore, the invention also allows separation of the network into sub-networks, which are then trained individually and later recombined. the network is able to use both hyperspheres and hyperplanes to form decision boundaries, and, indeed, can converge to the one even if it initially assumes the other. 
dated 1994-01-04" 5276772,real time adaptive probabilistic neural network system and method for data sorting,"an adaptive probabilistic neural network (apnn) includes a cluster processor circuit which generates a signal which represents a probability density function estimation value which is used to sort input pulse parameter data signals based upon a probability of obtaining a correct match with a group of input pulse parameter data signals that have already been sorted. in the apnn system, a pulse buffer memory circuit is contained within the cluster processor circuit and temporarily stores the assigned input pulse parameter data signals. the pulse buffer memory circuit is initially empty. as the input pulse parameter data signals are presented to the apnn, the system sorts the incoming data signals based on the probability density function estimation value signal generated by each currently operating cluster processor circuit. the current input pulse parameter data signal is sorted and stored in the pulse buffer memory circuit of the cluster processor circuit. a small probability density function estimation value signal indicates the current unassigned input pulse parameter data signal is not recognized by the apnn system. a large probability density function estimation value signal indicates a match and the current input pulse parameter data signal will be included within a particular cluster processor circuit.",1994-01-04,"The title of the patent is real time adaptive probabilistic neural network system and method for data sorting and its abstract is an adaptive probabilistic neural network (apnn) includes a cluster processor circuit which generates a signal which represents a probability density function estimation value which is used to sort input pulse parameter data signals based upon a probability of obtaining a correct match with a group of input pulse parameter data signals that have already been sorted. 
in the apnn system, a pulse buffer memory circuit is contained within the cluster processor circuit and temporarily stores the assigned input pulse parameter data signals. the pulse buffer memory circuit is initially empty. as the input pulse parameter data signals are presented to the apnn, the system sorts the incoming data signals based on the probability density function estimation value signal generated by each currently operating cluster processor circuit. the current input pulse parameter data signal is sorted and stored in the pulse buffer memory circuit of the cluster processor circuit. a small probability density function estimation value signal indicates the current unassigned input pulse parameter data signal is not recognized by the apnn system. a large probability density function estimation value signal indicates a match and the current input pulse parameter data signal will be included within a particular cluster processor circuit. dated 1994-01-04" 5276773,digital neural network executed in integrated circuit technology,"a digital neural network has a plurality of neurons (nr) completely meshed with one another, each of which comprises an evaluation stage having a plurality of evaluators (b) that is equal in number to the plurality of neurons (nr) and each of which comprises a decision stage having a decision unit (e). an adjustment information (inf.sub.e) that effects a defined pre-adjustment of the decision unit (e) can be supplied to every decision unit (e) by a pre-processing means via an information input. a weighting information (inf.sub.g) can be supplied to every evaluator (b) by a pre-processing means via an individual information input. an output information (inf.sub.a) can be output by every decision unit (e) to a post-processing means via a respective individual information output. 
the information outputs of the decision units (e) are each connected to an individual processing input of all evaluators (b) allocated to the appertaining decision unit (e). individual processing outputs of the evaluators (b) are connected to individual processing inputs of the decision unit (e) in the appertaining neuron (n), so that every output information (inf.sub.a) can be indirectly fed back onto every neuron (nr).",1994-01-04,"The title of the patent is digital neural network executed in integrated circuit technology and its abstract is a digital neural network has a plurality of neurons (nr) completely meshed with one another, each of which comprises an evaluation stage having a plurality of evaluators (b) that is equal in number to the plurality of neurons (nr) and each of which comprises a decision stage having a decision unit (e). an adjustment information (inf.sub.e) that effects a defined pre-adjustment of the decision unit (e) can be supplied to every decision unit (e) by a pre-processing means via an information input. a weighting information (inf.sub.g) can be supplied to every evaluator (b) by a pre-processing means via an individual information input. an output information (inf.sub.a) can be output by every decision unit (e) to a post-processing means via a respective individual information output. the information outputs of the decision units (e) are each connected to an individual processing input of all evaluators (b) allocated to the appertaining decision unit (e). individual processing outputs of the evaluators (b) are connected to individual processing inputs of the decision unit (e) in the appertaining neuron (n), so that every output information (inf.sub.a) can be indirectly fed back onto every neuron (nr). 
dated 1994-01-04" 5278755,method for determining image points in object images using neural networks,"an image point located in the region inside of an object image is determined from an image signal made up of a series of image signal components representing respective picture elements in a radiation image, which includes the object image and which has been recorded on a recording medium in accordance with a predetermined image recording menu. a plurality of different neural networks are prepared for a plurality of different image recording menus. each of the neural networks receives an image signal and generates outputs which represent an image point. a neural network, which is optimum for the predetermined image recording menu, is selected from the plurality of the neural networks. outputs, which represent the image point located in the region inside of the object image, are then obtained from the selected neural network.",1994-01-11,"The title of the patent is method for determining image points in object images using neural networks and its abstract is an image point located in the region inside of an object image is determined from an image signal made up of a series of image signal components representing respective picture elements in a radiation image, which includes the object image and which has been recorded on a recording medium in accordance with a predetermined image recording menu. a plurality of different neural networks are prepared for a plurality of different image recording menus. each of the neural networks receives an image signal and generates outputs which represent an image point. a neural network, which is optimum for the predetermined image recording menu, is selected from the plurality of the neural networks. outputs, which represent the image point located in the region inside of the object image, are then obtained from the selected neural network. 
dated 1994-01-11" 5278945,neural processor apparatus,"a neural processor apparatus implements a neural network at a low cost and with high efficiency by simultaneously processing a plurality of neurons using the same synaptic inputs. weight data is sequentially accessed from an external weight ram memory to minimize space on the ic. the input data and weight data may be configured as either a single, high-resolution input or a plurality of inputs having a lower resolution, whereby the plurality of inputs are processed simultaneously. a dynamic approximation method is implemented using a minimal amount of circuitry to provide high-resolution transformations in accordance with the transfer function of a given neuron model. the neural processor apparatus may be used to implement an entire neural network, or may be implemented using a plurality of devices, each device implementing a predetermined number of neural layers.",1994-01-11,"The title of the patent is neural processor apparatus and its abstract is a neural processor apparatus implements a neural network at a low cost and with high efficiency by simultaneously processing a plurality of neurons using the same synaptic inputs. weight data is sequentially accessed from an external weight ram memory to minimize space on the ic. the input data and weight data may be configured as either a single, high-resolution input or a plurality of inputs having a lower resolution, whereby the plurality of inputs are processed simultaneously. a dynamic approximation method is implemented using a minimal amount of circuitry to provide high-resolution transformations in accordance with the transfer function of a given neuron model. the neural processor apparatus may be used to implement an entire neural network, or may be implemented using a plurality of devices, each device implementing a predetermined number of neural layers. 
dated 1994-01-11" 5280564,neural network having an optimized transfer function for each neuron,"the characteristic data for determining the characteristics of the transfer functions (for example, sigmoid functions) of the neurons of the hidden layer and the output layer (the gradients of the sigmoid functions) of a neural network are learned and corrected in a manner similar to the correction of weighting data and threshold values. since at least one characteristic data which determines the characteristics of the transfer function of each neuron is learned, the transfer function characteristics can be different for different neurons in the network independently of the problem and/or the number of neurons, and be optimum. accordingly, a learning with high precision can be performed in a short time.",1994-01-18,"The title of the patent is neural network having an optimized transfer function for each neuron and its abstract is the characteristic data for determining the characteristics of the transfer functions (for example, sigmoid functions) of the neurons of the hidden layer and the output layer (the gradients of the sigmoid functions) of a neural network are learned and corrected in a manner similar to the correction of weighting data and threshold values. since at least one characteristic data which determines the characteristics of the transfer function of each neuron is learned, the transfer function characteristics can be different for different neurons in the network independently of the problem and/or the number of neurons, and be optimum. accordingly, a learning with high precision can be performed in a short time. dated 1994-01-18" 5280792,method and system for automatically classifying intracardiac electrograms,"the application is directed to a method for automatically classifying intracardiac electrograms, and a system for performing the method. 
in a further aspect, it concerns an implantable cardioverter defibrillator which incorporates the system and uses the method to monitor cardiac activity and deliver appropriate treatment. the method uses a combination of timing analysis and pattern matching using a neural network in order to correctly classify the electrograms. this technique allows both changes in rate and morphology to be taken into account.",1994-01-25,"The title of the patent is method and system for automatically classifying intracardiac electrograms and its abstract is the application is directed to a method for automatically classifying intracardiac electrograms, and a system for performing the method. in a further aspect, it concerns an implantable cardioverter defibrillator which incorporates the system and uses the method to monitor cardiac activity and deliver appropriate treatment. the method uses a combination of timing analysis and pattern matching using a neural network in order to correctly classify the electrograms. this technique allows both changes in rate and morphology to be taken into account. dated 1994-01-25" 5282131,control system for controlling a pulp washing system using a neural network controller,a control system for a countercurrent pulp washing process in which the pulp is formed as a pulp mat on at least one moving filter surface and the mat is supplied with rinse water to replace water in the pulp mat thereby reducing the soda loss in the mat before it is removed from the filter surface. the process is characterized by at least one predictable process variable including dissolved solids retained in the pulp mat. 
the system comprises a trainable neural network having a plurality of input neurons having input values applied thereto and output neurons for providing output values and means for training the neural network to provide predicted values for the predictable process variables.,1994-01-25,The title of the patent is control system for controlling a pulp washing system using a neural network controller and its abstract is a control system for a countercurrent pulp washing process in which the pulp is formed as a pulp mat on at least one moving filter surface and the mat is supplied with rinse water to replace water in the pulp mat thereby reducing the soda loss in the mat before it is removed from the filter surface. the process is characterized by at least one predictable process variable including dissolved solids retained in the pulp mat. the system comprises a trainable neural network having a plurality of input neurons having input values applied thereto and output neurons for providing output values and means for training the neural network to provide predicted values for the predictable process variables. dated 1994-01-25 5282261,neural network process measurement and control,"a computer neural network process measurement and control system and method uses real-time output data from a neural network to replace a sensor or laboratory input to a controller. the neural network can use readily available, inexpensive and reliable measurements from sensors as inputs, and produce predicted values of product properties as output data for input to the controller. the system and method overcome process deadtime, measurement deadtime, infrequent measurements, and measurement variability in laboratory data, thus providing improved control. an historical database can be used to provide a history of sensor and laboratory measurements to the neural network. 
the neural network can detect the appearance of new laboratory measurements in the history and automatically initiate retraining, on-line and in real-time. the system and method can use either a regulatory controller or a supervisory control architecture. a modular software implementation simplifies the building of multiple neural networks, and also optionally provides other control functions, such as supervisory controllers, expert systems, and statistical data filtering, thus allowing powerful extensions of the system and method. template specification for the neural network, and data specification using data pointers allow the system and method to be more easily implemented.",1994-01-25,"The title of the patent is neural network process measurement and control and its abstract is a computer neural network process measurement and control system and method uses real-time output data from a neural network to replace a sensor or laboratory input to a controller. the neural network can use readily available, inexpensive and reliable measurements from sensors as inputs, and produce predicted values of product properties as output data for input to the controller. the system and method overcome process deadtime, measurement deadtime, infrequent measurements, and measurement variability in laboratory data, thus providing improved control. an historical database can be used to provide a history of sensor and laboratory measurements to the neural network. the neural network can detect the appearance of new laboratory measurements in the history and automatically initiate retraining, on-line and in real-time. the system and method can use either a regulatory controller or a supervisory control architecture. 
a modular software implementation simplifies the building of multiple neural networks, and also optionally provides other control functions, such as supervisory controllers, expert systems, and statistical data filtering, thus allowing powerful extensions of the system and method. template specification for the neural network, and data specification using data pointers allow the system and method to be more easily implemented. dated 1994-01-25" 5283418,automated rotor welding processes using neural networks,"methods and apparatus for monitoring an arc welding process are disclosed. in a preferred embodiment, the present invention creates a digital representation of the arc created during welding and, using a neural network computer, determines if the arc is representative of normal or abnormal welding conditions. the neural network disclosed is trained to identify abnormal conditions and normal conditions and may be adaptively retrained to classify images that are not in the initial set of normal and abnormal images. in certain embodiments, other data, such as current, weld wire emission spectra, or shielding gas flow rate are also collected and the neural network is trained to monitor these data. also, in certain embodiments, an audio signal is collected from the vicinity of the welding process and is used by the neural network computer to further classify the arc as normal or abnormal. the present invention is most preferably implemented in repetitive and continuous welding operations, such as those encountered in the manufacture and rebuilding of steam turbines.",1994-02-01,"The title of the patent is automated rotor welding processes using neural networks and its abstract is methods and apparatus for monitoring an arc welding process are disclosed. 
in a preferred embodiment, the present invention creates a digital representation of the arc created during welding and, using a neural network computer, determines if the arc is representative of normal or abnormal welding conditions. the neural network disclosed is trained to identify abnormal conditions and normal conditions and may be adaptively retrained to classify images that are not in the initial set of normal and abnormal images. in certain embodiments, other data, such as current, weld wire emission spectra, or shielding gas flow rate are also collected and the neural network is trained to monitor these data. also, in certain embodiments, an audio signal is collected from the vicinity of the welding process and is used by the neural network computer to further classify the arc as normal or abnormal. the present invention is most preferably implemented in repetitive and continuous welding operations, such as those encountered in the manufacture and rebuilding of steam turbines. dated 1994-02-01" 5283746,manufacturing adjustment during article fabrication,"the use of neural networks has been employed to adjust processing during the fabrication of articles. for example, in the production of photolithographic masks by electron beam irradiation of a mask blank in a desired pattern, electrons scattered from the mask substrate cause distortion of the pattern. adjustment for such scattering is possible during the manufacturing process by employing an adjustment function determined by a neural network whose parameters are established relative to a prototypical mask pattern.",1994-02-01,"The title of the patent is manufacturing adjustment during article fabrication and its abstract is the use of neural networks has been employed to adjust processing during the fabrication of articles. 
for example, in the production of photolithographic masks by electron beam irradiation of a mask blank in a desired pattern, electrons scattered from the mask substrate cause distortion of the pattern. adjustment for such scattering is possible during the manufacturing process by employing an adjustment function determined by a neural network whose parameters are established relative to a prototypical mask pattern. dated 1994-02-01" 5283838,neural network apparatus,"when performing learning for a neural network, a plurality of learning vectors which belong to an arbitrary category are used, and self-organization learning in the category is carried out. as a result, the plurality of learning vectors which belong to the category are automatically clustered, and the contents of weight vectors in the neural network are set to representative vectors which exhibit common features of the learning vectors of each cluster. then, teacher-supervised learning is carried out for the neural network, using the thus set contents of the weight vectors as initial values thereof. in the learning process, an initial value of each weight vector is set to the representative vector of each cluster obtained by clustering. therefore, the number of calculations required until the teacher-supervised learning is converged is greatly reduced.",1994-02-01,"The title of the patent is neural network apparatus and its abstract is when performing learning for a neural network, a plurality of learning vectors which belong to an arbitrary category are used, and self-organization learning in the category is carried out. as a result, the plurality of learning vectors which belong to the category are automatically clustered, and the contents of weight vectors in the neural network are set to representative vectors which exhibit common features of the learning vectors of each cluster. 
then, teacher-supervised learning is carried out for the neural network, using the thus set contents of the weight vectors as initial values thereof. in the learning process, an initial value of each weight vector is set to the representative vector of each cluster obtained by clustering. therefore, the number of calculations required until the teacher-supervised learning is converged is greatly reduced. dated 1994-02-01" 5283855,neural network and method for training the neural network,"a method and apparatus are disclosed that modify [ies] and generalize [s] the use in artificial neural networks of the error backpropagation algorithm. each neuron unit first divides a plurality of weighted inputs into more than one group, then sums up weighted inputs in each group to provide each group's intermediate outputs, and finally processes the intermediate outputs to produce an output of the neuron unit. since the method uses, when modifying each weight, a partial differential coefficient generated by partially-differentiating the output of the neuron unit by each weighted input, the weight can be properly modified even if the output of a neuron unit as a function of intermediate outputs has a plurality of variables corresponding to the number of groups. since the conventional method uses only one differential coefficient, that is, the differential coefficient of the output of a neuron unit differentiated by the sum of all weighted inputs in a neuron unit, for all weights in a neuron unit, it may be said that the method according to the present invention generalizes the conventional method. 
the present invention is especially useful for pulse density neural networks which express data as an on-bit density of a bit string.",1994-02-01,"The title of the patent is neural network and method for training the neural network and its abstract is a method and apparatus are disclosed that modify [ies] and generalize [s] the use in artificial neural networks of the error backpropagation algorithm. each neuron unit first divides a plurality of weighted inputs into more than one group, then sums up weighted inputs in each group to provide each group's intermediate outputs, and finally processes the intermediate outputs to produce an output of the neuron unit. since the method uses, when modifying each weight, a partial differential coefficient generated by partially-differentiating the output of the neuron unit by each weighted input, the weight can be properly modified even if the output of a neuron unit as a function of intermediate outputs has a plurality of variables corresponding to the number of groups. since the conventional method uses only one differential coefficient, that is, the differential coefficient of the output of a neuron unit differentiated by the sum of all weighted inputs in a neuron unit, for all weights in a neuron unit, it may be said that the method according to the present invention generalizes the conventional method. the present invention is especially useful for pulse density neural networks which express data as an on-bit density of a bit string. dated 1994-02-01" 5285297,apparatus and method for color calibration,""" a method and apparatus for constructing, training and utilizing an artificial neural network (also termed herein a """"neural network"""", an ann, or an nn) in order to transform a first color value in a first color coordinate system into a second color value in a second color coordinate system. 
""",1994-02-08,"The title of the patent is apparatus and method for color calibration and its abstract is "" a method and apparatus for constructing, training and utilizing an artificial neural network (also termed herein a """"neural network"""", an ann, or an nn) in order to transform a first color value in a first color coordinate system into a second color value in a second color coordinate system. "" dated 1994-02-08" 5285523,apparatus for recognizing driving environment of vehicle,"an apparatus for recognizing driving environments of a vehicle including a plurality of sensors for detecting various parameters relating to driving conditions of the vehicle such as throttle valve open angle, vehicle running speed, brake pedal depression amount and gear shift range of an automatic transmission, first and second neuron interfaces for converting parameter values detected by the sensors into a plurality of input patterns having predetermined configuration, first and second neural networks having input layers to which corresponding input patterns are applied, hidden layers and output layers for producing recognition results, and a multiplexer for selecting one of the recognition results produced on the output layers of the first and second neural networks. the first neural network has a superior separating or recognizing and learning faculty, while the second neural network has a superior associating faculty. an accelerating pedal depression amount is detected by a sensor and a variation of the thus detected amount is compared with a reference value.
when the variation is larger than the reference value, the recognition result produced by the first neural network is selected and when the variation is smaller than the reference value, the recognition result from the second neural network is selected.",1994-02-08,"The title of the patent is apparatus for recognizing driving environment of vehicle and its abstract is an apparatus for recognizing driving environments of a vehicle including a plurality of sensors for detecting various parameters relating to driving conditions of the vehicle such as throttle valve open angle, vehicle running speed, brake pedal depression amount and gear shift range of an automatic transmission, first and second neuron interfaces for converting parameter values detected by the sensors into a plurality of input patterns having predetermined configuration, first and second neural networks having input layers to which corresponding input patterns are applied, hidden layers and output layers for producing recognition results, and a multiplexer for selecting one of the recognition results produced on the output layers of the first and second neural networks. the first neural network has a superior separating or recognizing and learning faculty, while the second neural network has a superior associating faculty. an accelerating pedal depression amount is detected by a sensor and a variation of the thus detected amount is compared with a reference value. when the variation is larger than the reference value, the recognition result produced by the first neural network is selected and when the variation is smaller than the reference value, the recognition result from the second neural network is selected. dated 1994-02-08" 5285524,neural network with daisy chain control,"the present invention is a direct digitally implemented network system in which neural nodes 24, 26 and 28 which output to the same destination node 22 in the network share the same channel 30.
if a set of nodes does not output any data to any node to which a second set of nodes outputs data (the two sets of nodes do not overlap or intersect), the two sets of nodes are independent and do not share a channel and have separate channels 120 and 122. the network is configured as parallel operating non-intersecting segments or independent sets where each segment has a segment communication channel or bus 30. each node in the independent set or segment is sequentially activated to produce an output by a daisy chain control signal. the outputs are thereby time division multiplexed over the channel 30 to the destination node 22.",1994-02-08,"The title of the patent is neural network with daisy chain control and its abstract is the present invention is a direct digitally implemented network system in which neural nodes 24, 26 and 28 which output to the same destination node 22 in the network share the same channel 30. if a set of nodes does not output any data to any node to which a second set of nodes outputs data (the two sets of nodes do not overlap or intersect), the two sets of nodes are independent and do not share a channel and have separate channels 120 and 122. the network is configured as parallel operating non-intersecting segments or independent sets where each segment has a segment communication channel or bus 30. each node in the independent set or segment is sequentially activated to produce an output by a daisy chain control signal. the outputs are thereby time division multiplexed over the channel 30 to the destination node 22. dated 1994-02-08" 5286947,apparatus and method for monitoring material removal from a workpiece,"an apparatus and method for monitoring material removal from a workpiece by a beam of energy during a material processing operation are disclosed. a detector is positioned for sensing optical emissions from the workpiece caused by removal of material when an energy beam pulse is incident upon the surface of the workpiece.
a computing circuit, algorithm or artificial neural network is provided for determining a quantity of material removed from the sensed optical emissions in real-time during the material processing operation. analysis of the optical emission pulses caused by the material removal provides an indication of the efficiency of the material processing system and provides feedback for manual or automatic adjustment of material processing parameters during the material processing operation.",1994-02-15,"The title of the patent is apparatus and method for monitoring material removal from a workpiece and its abstract is an apparatus and method for monitoring material removal from a workpiece by a beam of energy during a material processing operation are disclosed. a detector is positioned for sensing optical emissions from the workpiece caused by removal of material when an energy beam pulse is incident upon the surface of the workpiece. a computing circuit, algorithm or artificial neural network is provided for determining a quantity of material removed from the sensed optical emissions in real-time during the material processing operation. analysis of the optical emission pulses caused by the material removal provides an indication of the efficiency of the material processing system and provides feedback for manual or automatic adjustment of material processing parameters during the material processing operation. dated 1994-02-15" 5287272,automated cytological specimen classification system and method,an automated screening system and method for cytological specimen classification in which a neural network is utilized in performance of the classification function. 
also included is an automated microscope and associated image processing circuitry.,1994-02-15,The title of the patent is automated cytological specimen classification system and method and its abstract is an automated screening system and method for cytological specimen classification in which a neural network is utilized in performance of the classification function. also included is an automated microscope and associated image processing circuitry. dated 1994-02-15 5287430,signal discrimination device using neural network,"a signal discrimination device using a neural network for discriminating input signals such as radar reception signals includes an adaptive code generator means for generating codes for representing the discrimination categories. the distances between the codes for closely related categories are smaller than the distances between the codes for remotely related categories. during the learning stage, the neural network is trained to output the codes for respective inputs. the discrimination result judgment means determines the categories by comparing the outputs of the neural network and the codes for the respective categories.",1994-02-15,"The title of the patent is signal discrimination device using neural network and its abstract is a signal discrimination device using a neural network for discriminating input signals such as radar reception signals includes an adaptive code generator means for generating codes for representing the discrimination categories. the distances between the codes for closely related categories are smaller than the distances between the codes for remotely related categories. during the learning stage, the neural network is trained to output the codes for respective inputs. the discrimination result judgment means determines the categories by comparing the outputs of the neural network and the codes for the respective categories. 
dated 1994-02-15" 5287431,neural network using liquid crystal for threshold and amplifiers for weights,a neural network type data processing system in which an optical input is received and a normalized optical output is generated. a plurality of light receiving regions of a photovoltaic material generate signals which are fed into amplifiers and summed. the gains of the amplifiers represent the synaptic weights. the output of the summed amplified signals is then sent to a portion of a liquid crystal light valve where that portion of the liquid crystal light valve is used to produce a normalized light output.,1994-02-15,The title of the patent is neural network using liquid crystal for threshold and amplifiers for weights and its abstract is a neural network type data processing system in which an optical input is received and a normalized optical output is generated. a plurality of light receiving regions of a photovoltaic material generate signals which are fed into amplifiers and summed. the gains of the amplifiers represent the synaptic weights. the output of the summed amplified signals is then sent to a portion of a liquid crystal light valve where that portion of the liquid crystal light valve is used to produce a normalized light output. dated 1994-02-15 5287533,apparatus for changing individual weight value of corresponding synaptic connection for succeeding learning process when past weight values satisfying predetermined condition,"the past record of the synaptic weight values set in the learning of a neural network is stored in a weight record memory. the past record stored in the weight record memory is supplied to a control unit.
if there exists a synaptic connection representing a record of weight values which have been used in a predetermined number of learning processes just prior to the present learning process and which satisfy a predetermined condition, the synaptic weight value used in the succeeding learning processes for the synaptic connection is re-set to a predetermined value by a weight setting unit. that is, the past record of the synaptic weight values is monitored, and the synaptic weight value which has been set in a learning process can be re-set as required.",1994-02-15,"The title of the patent is apparatus for changing individual weight value of corresponding synaptic connection for succeeding learning process when past weight values satisfying predetermined condition and its abstract is the past record of the synaptic weight values set in the learning of a neural network is stored in a weight record memory. the past record stored in the weight record memory is supplied to a control unit. if there exists a synaptic connection representing a record of weight values which have been used in a predetermined number of learning processes just prior to the present learning process and which satisfy a predetermined condition, the synaptic weight value used in the succeeding learning processes for the synaptic connection is re-set to a predetermined value by a weight setting unit. that is, the past record of the synaptic weight values is monitored, and the synaptic weight value which has been set in a learning process can be re-set as required. dated 1994-02-15" 5289401,analog storage device for artificial neural network system,"an analog storage device employs an electrically erasable programmable transistor as its memory cell.
the memory cell transistor has a source and a drain which are disposed spaced apart from each other on a semiconductive substrate to define a channel region therebetween, an insulated floating gate electrode which at least overlaps the channel region, and an insulated control gate electrode disposed above the insulated floating gate electrode. minority carriers are allowed to tunnel between the channel region and the insulated floating gate. the amount of carriers to be stored on the floating gate electrode is controlled such that it is in proportion to analog data to be stored therein. a variation in the internal field of the transistor which may occur when its floating gate electrode is being charged with minority carriers is monitored. when a field variation is detected, a voltage for compensating for the detected field variation is applied to the control gate electrode, whereby the linearity of analog storage is ensured.",1994-02-22,"The title of the patent is analog storage device for artificial neural network system and its abstract is an analog storage device employs an electrically erasable programmable transistor as its memory cell. the memory cell transistor has a source and a drain which are disposed spaced apart from each other on a semiconductive substrate to define a channel region therebetween, an insulated floating gate electrode which at least overlaps the channel region, and an insulated control gate electrode disposed above the insulated floating gate electrode. minority carriers are allowed to tunnel between the channel region and the insulated floating gate. the amount of carriers to be stored on the floating gate electrode is controlled such that it is in proportion to analog data to be stored therein. a variation in the internal field of the transistor which may occur when its floating gate electrode is being charged with minority carriers is monitored. 
when a field variation is detected, a voltage for compensating for the detected field variation is applied to the control gate electrode, whereby the linearity of analog storage is ensured. dated 1994-02-22" 5293453,error control codeword generating system and method based on a neural network,"a communication system and method that translates a first plurality of information symbols into a plurality of code words, transmits the plurality of code words through a communication channel, receives the plurality of code words transmitted through the communication channel, deciphers the plurality of code words transmitted through the communication channel into a second plurality of information symbols that correspond to the first plurality of information symbols, wherein the plurality of code words are derived from a reverse dynamical flow within a first neural network.",1994-03-08,"The title of the patent is error control codeword generating system and method based on a neural network and its abstract is a communication system and method that translates a first plurality of information symbols into a plurality of code words, transmits the plurality of code words through a communication channel, receives the plurality of code words transmitted through the communication channel, deciphers the plurality of code words transmitted through the communication channel into a second plurality of information symbols that correspond to the first plurality of information symbols, wherein the plurality of code words are derived from a reverse dynamical flow within a first neural network.
dated 1994-03-08" 5293454,learning method of neural network,"a learning method of a neural network, in which from a set of learning patterns belonging to one category, specific learning patterns located at a region close to learning patterns belonging to another category are selected and learning of the neural network is performed by using the specific learning patterns so as to discriminate the categories from each other.",1994-03-08,"The title of the patent is learning method of neural network and its abstract is a learning method of a neural network, in which from a set of learning patterns belonging to one category, specific learning patterns located at a region close to learning patterns belonging to another category are selected and learning of the neural network is performed by using the specific learning patterns so as to discriminate the categories from each other. dated 1994-03-08" 5293456,object recognition system employing a sparse comparison neural network,"a neural network for comparing a known input to an unknown input comprises a first layer for receiving a first known input tensor and a first unknown input tensor. a second layer receives the first known and unknown input tensors. the second layer has at least one first trainable weight tensor associated with the first known input tensor and at least one second trainable weight tensor associated with the first unknown input tensor. the second layer includes at least one first processing element for transforming the first known input tensor on the first trainable weight tensor to produce a first known output and at least one second processing element for transforming the first unknown input tensor on the second trainable weight tensor to produce a first unknown output. the first known output comprises a first known output tensor of at least rank zero and has a third trainable weight tensor associated therewith. 
the first unknown output comprises a first unknown output tensor of at least rank zero and has a fourth trainable weight tensor associated therewith. the first known output tensor and the first unknown tensor are combined to form a second input tensor. a third layer receives the second input tensor. the third layer has at least one fifth trainable weight tensor associated with the second input tensor. the third layer includes at least one third processing element for transforming the second input tensor on the fifth trainable weight tensor, thereby comparing the first known output with the first unknown output and producing a resultant output. the resultant output is indicative of the degree of similarity between the first known input tensor and the first unknown input tensor.",1994-03-08,"The title of the patent is object recognition system employing a sparse comparison neural network and its abstract is a neural network for comparing a known input to an unknown input comprises a first layer for receiving a first known input tensor and a first unknown input tensor. a second layer receives the first known and unknown input tensors. the second layer has at least one first trainable weight tensor associated with the first known input tensor and at least one second trainable weight tensor associated with the first unknown input tensor. the second layer includes at least one first processing element for transforming the first known input tensor on the first trainable weight tensor to produce a first known output and at least one second processing element for transforming the first unknown input tensor on the second trainable weight tensor to produce a first unknown output. the first known output comprises a first known output tensor of at least rank zero and has a third trainable weight tensor associated therewith. the first unknown output comprises a first unknown output tensor of at least rank zero and has a fourth trainable weight tensor associated therewith. 
the first known output tensor and the first unknown tensor are combined to form a second input tensor. a third layer receives the second input tensor. the third layer has at least one fifth trainable weight tensor associated with the second input tensor. the third layer includes at least one third processing element for transforming the second input tensor on the fifth trainable weight tensor, thereby comparing the first known output with the first unknown output and producing a resultant output. the resultant output is indicative of the degree of similarity between the first known input tensor and the first unknown input tensor. dated 1994-03-08" 5293457,neural network integrated circuit device having self-organizing function,"an extension directed integrated circuit device having a learning function on a boltzmann model, includes a plurality of synapse representing units arrayed in a matrix, a plurality of neuron representing units, a plurality of educator signal control circuits, and a plurality of buffer circuits. each synapse representing unit is connected to a pair of axon signal transfer lines and a pair of dendrite signal transfer lines. 
each synapse representing unit includes a learning control circuit which derives synapse load change value data in accordance with predetermined learning rules in response to a first axon signal si and a second axon signal sj, a synapse load representing circuit which corrects a synapse load in response to the synapse load change value data and holds the corrected synapse load value wij, a first synapse coupling operating circuit which derives a current signal indicating a product wij.multidot.si from the synapse load wij and the first axon signal si and transfers the same to a first dendrite signal line, and a second product signal indicating a product wij.multidot.sj from the synapse load wij and the second axon signal sj and transfers the same onto a second dendrite signal line.",1994-03-08,"The title of the patent is neural network integrated circuit device having self-organizing function and its abstract is an extension directed integrated circuit device having a learning function on a boltzmann model, includes a plurality of synapse representing units arrayed in a matrix, a plurality of neuron representing units, a plurality of educator signal control circuits, and a plurality of buffer circuits. each synapse representing unit is connected to a pair of axon signal transfer lines and a pair of dendrite signal transfer lines.
each synapse representing unit includes a learning control circuit which derives synapse load change value data in accordance with predetermined learning rules in response to a first axon signal si and a second axon signal sj, a synapse load representing circuit which corrects a synapse load in response to the synapse load change value data and holds the corrected synapse load value wij, a first synapse coupling operating circuit which derives a current signal indicating a product wij.multidot.si from the synapse load wij and the first axon signal si and transfers the same to a first dendrite signal line, and a second product signal indicating a product wij.multidot.sj from the synapse load wij and the second axon signal sj and transfers the same onto a second dendrite signal line. dated 1994-03-08" 5293458,mos multi-layer neural network and its design method,"disclosed is a multi-layer neural network and circuit design method. the multi-layer neural network receiving an m-bit input and generating an n-bit output comprises a neuron having a cascaded pair of cmos inverters and having an output node of the preceding cmos inverter among the pair of cmos inverters as its inverted output node and an output node of the succeeding cmos inverter as its non-inverted output node, an input layer having m neurons to receive the m-bit input, an output layer having n neurons to generate the n-bit output, at least one hidden layer provided with n neurons to transfer the input received from the input layer to the directly upper hidden layer or the output layer, an input synapse group in a matrix having each predetermined weight value to connect each output of neurons on the input layer to each neuron of the output layer and at least one hidden layer, at least one transfer synapse group in a matrix having each predetermined weight value to connect each output of neurons of the hidden layer to each neuron of its directly upper hidden layer or of the output layer, and a bias synapse
group for biasing each input node of neurons of the hidden layers and the output layer.",1994-03-08,"The title of the patent is mos multi-layer neural network and its design method and its abstract is disclosed is a multi-layer neural network and circuit design method. the multi-layer neural network receiving an m-bit input and generating an n-bit output comprises a neuron having a cascaded pair of cmos inverters and having an output node of the preceding cmos inverter among the pair of cmos inverters as its inverted output node and an output node of the succeeding cmos inverter as its non-inverted output node, an input layer having m neurons to receive the m-bit input, an output layer having n neurons to generate the n-bit output, at least one hidden layer provided with n neurons to transfer the input received from the input layer to the directly upper hidden layer or the output layer, an input synapse group in a matrix having each predetermined weight value to connect each output of neurons on the input layer to each neuron of the output layer and at least one hidden layer, at least one transfer synapse group in a matrix having each predetermined weight value to connect each output of neurons of the hidden layer to each neuron of its directly upper hidden layer or of the output layer, and a bias synapse group for biasing each input node of neurons of the hidden layers and the output layer. dated 1994-03-08" 5293459,neural integrated circuit comprising learning means,"a neural integrated circuit, comprising a synaptic coefficient memory, a neuron state memory, resolving means and learning means which simultaneously operate in parallel on each of the synaptic coefficients in order to determine new synaptic coefficients. 
the learning means comprise means for performing a learning function on the states vj of input neurons and on a correction element si which is associated with each output neuron, and also comprise incrementation/decrementation elements which determine the new synaptic coefficients in parallel. the learning functions may be formed by logic and-gates and exclusive-or gates. the integrated circuit is used in a neural network system comprising a processing device.",1994-03-08,"The title of the patent is neural integrated circuit comprising learning means and its abstract is a neural integrated circuit, comprising a synaptic coefficient memory, a neuron state memory, resolving means and learning means which simultaneously operate in parallel on each of the synaptic coefficients in order to determine new synaptic coefficients. the learning means comprise means for performing a learning function on the states vj of input neurons and on a correction element si which is associated with each output neuron, and also comprise incrementation/decrementation elements which determine the new synaptic coefficients in parallel. the learning functions may be formed by logic and-gates and exclusive-or gates. the integrated circuit is used in a neural network system comprising a processing device. dated 1994-03-08" 5295130,apparatus and method for signal reproduction,an apparatus and method for reproducing pit information precisely from a magnetic optical disk without being adversely affected by heat accumulation. the signal reproducing apparatus reproduces signals using a neural network constituting a decoder that decodes pits on the disk. 
the signal reproducing method provides learning using a sigmoid function and carries out signal reproduction using a step function.,1994-03-15,The title of the patent is apparatus and method for signal reproduction and its abstract is an apparatus and method for reproducing pit information precisely from a magnetic optical disk without being adversely affected by heat accumulation. the signal reproducing apparatus reproduces signals using a neural network constituting a decoder that decodes pits on the disk. the signal reproducing method provides learning using a sigmoid function and carries out signal reproduction using a step function. dated 1994-03-15 5295197,information processing system using neural network learning function,"an information processing apparatus using a neural network learning function has, in one embodiment, a computer system and a pattern recognition apparatus associated with each other via a communication cable. the computer system includes a learning section having a first neural network and serves to adjust the weights of connection therein as a result of learning with a learning data signal supplied thereto from the pattern recognition apparatus via the communication cable. the pattern recognition apparatus includes an associative output section having a second neural network and receives data on the adjusted weights from the learning section via the communication cable to reconstruct the second neural network with the data on the adjusted weights. 
the pattern recognition apparatus with the associative output section having the reconstructed second neural network performs pattern recognition independently of the computer system with the communication cable being brought into an electrical isolation mode.",1994-03-15,"The title of the patent is information processing system using neural network learning function and its abstract is an information processing apparatus using a neural network learning function has, in one embodiment, a computer system and a pattern recognition apparatus associated with each other via a communication cable. the computer system includes a learning section having a first neural network and serves to adjust the weights of connection therein as a result of learning with a learning data signal supplied thereto from the pattern recognition apparatus via the communication cable. the pattern recognition apparatus includes an associative output section having a second neural network and receives data on the adjusted weights from the learning section via the communication cable to reconstruct the second neural network with the data on the adjusted weights. the pattern recognition apparatus with the associative output section having the reconstructed second neural network performs pattern recognition independently of the computer system with the communication cable being brought into an electrical isolation mode. dated 1994-03-15" 5295227,neural network learning system,"a neural network learning system is applied to extensive use in applications such as pattern and character recognizing operations, various controls, etc. the neural network learning system operates on, for example, a plurality of neural networks each having a different number of intermediate layer units to efficiently perform a learning process at a high speed with a reduced amount of hardware. 
a neural network system having a plurality of hierarchical neural networks each having an input layer, one or more intermediate layers and output layers is formed from a common input layer shared among two or more neural networks, or the common input layer and one or more intermediate layers and a learning controller for controlling a learning process performed by a plurality of neural networks.",1994-03-15,"The title of the patent is neural network learning system and its abstract is a neural network learning system is applied to extensive use in applications such as pattern and character recognizing operations, various controls, etc. the neural network learning system operates on, for example, a plurality of neural networks each having a different number of intermediate layer units to efficiently perform a learning process at a high speed with a reduced amount of hardware. a neural network system having a plurality of hierarchical neural networks each having an input layer, one or more intermediate layers and output layers is formed from a common input layer shared among two or more neural networks, or the common input layer and one or more intermediate layers and a learning controller for controlling a learning process performed by a plurality of neural networks. dated 1994-03-15" 5297232,wireless neural network and a wireless neural processing element,"a neural network is disclosed in which communication between processing elements occurs by radio waves in a waveguide. radio wave communication using common carrier signals by transceivers in a waveguide allows processing elements to communicate wirelessly and simultaneously. each processing element includes a radio frequency transceiver and an accompanying antenna which performs the neuron summing operation because input signals simultaneously received from plural processing elements by the antenna add. 
the weights on each input are provided by different spatial relationships between the transmitting processing elements and the receiving processing element which causes signal strength losses through the waveguide to be different. each receiving processing element performs a neural threshold or sigmoid operation on the summed signal received from the transceiver and then a strength (amplitude scaling) can be applied to the output before the processing element transmits that output to the other processing elements in the system. processing elements are grouped, allowing one group to transmit while the other group is receiving. wafer scale electronics including transceivers and analog processing elements are combined with a comparably sized waveguide to produce a compact device.",1994-03-22,"The title of the patent is wireless neural network and a wireless neural processing element and its abstract is a neural network is disclosed in which communication between processing elements occurs by radio waves in a waveguide. radio wave communication using common carrier signals by transceivers in a waveguide allows processing elements to communicate wirelessly and simultaneously. each processing element includes a radio frequency transceiver and an accompanying antenna which performs the neuron summing operation because input signals simultaneously received from plural processing elements by the antenna add. the weights on each input are provided by different spatial relationships between the transmitting processing elements and the receiving processing element which causes signal strength losses through the waveguide to be different. each receiving processing element performs a neural threshold or sigmoid operation on the summed signal received from the transceiver and then a strength (amplitude scaling) can be applied to the output before the processing element transmits that output to the other processing elements in the system.
processing elements are grouped, allowing one group to transmit while the other group is receiving. wafer scale electronics including transceivers and analog processing elements are combined with a comparably sized waveguide to produce a compact device. dated 1994-03-22" 5298796,nonvolatile programmable neural network synaptic array,"a floating-gate mos transistor is implemented for use as a nonvolatile analog storage element of a synaptic cell used to implement an array of processing synaptic cells based on a four-quadrant analog multiplier requiring both x and y differential inputs, where one y input is uv programmable. these nonvolatile synaptic cells are disclosed fully connected in a 32.times.32 synaptic cell array using standard vlsi cmos technology.",1994-03-29,"The title of the patent is nonvolatile programmable neural network synaptic array and its abstract is a floating-gate mos transistor is implemented for use as a nonvolatile analog storage element of a synaptic cell used to implement an array of processing synaptic cells based on a four-quadrant analog multiplier requiring both x and y differential inputs, where one y input is uv programmable. these nonvolatile synaptic cells are disclosed fully connected in a 32.times.32 synaptic cell array using standard vlsi cmos technology. dated 1994-03-29" 5299285,neural network with dynamically adaptable neurons,"this invention is an adaptive neuron for use in neural network processors. the adaptive neuron participates in the supervised learning phase of operation on a coequal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse io elements. in this manner, training time is decreased by as much as three orders of magnitude.",1994-03-29,"The title of the patent is neural network with dynamically adaptable neurons and its abstract is this invention is an adaptive neuron for use in neural network processors. 
the adaptive neuron participates in the supervised learning phase of operation on a coequal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse io elements. in this manner, training time is decreased by as much as three orders of magnitude. dated 1994-03-29" 5299286,data processing system for implementing architecture of neural network subject to learning process,"data processing system implementing architecture of a neural network which is subject to a learning process, wherein the data processing system includes n.times.n synapses arranged in an array of j rows and i columns. a plurality of operational amplifiers respectively corresponding to the rows of the array are provided, with each operational amplifier defining a neuron. the input terminals of all of the synapses arranged in a respective column of the array are connected together and define n inputs of the neural network. the output terminals of the synapses arranged in a respective row of the array are connected together and serve as the inputs to a corresponding one of the plurality of operational amplifiers. each synapse includes a capacitor connected between ground potential and the input terminal for weighting the synapse by storing a weighting voltage applied thereto. a random access memory has digitally stored voltage values for weighting all of the synapses. a plurality of digital-analog converters, one for each column of the array of synapses, are connected to the random access memory for converting the digital voltage values for weighting the synapses into analog voltage values. the digital-analog converters provide respective outputs to the weighting terminals of the synapses of a column via respective electronic switches for each synapse. 
each row of the array includes a bistable circuit for driving the respective electronic switches under the control of a control section which also provides function commands and data to the random access memory.",1994-03-29,"The title of the patent is data processing system for implementing architecture of neural network subject to learning process and its abstract is data processing system implementing architecture of a neural network which is subject to a learning process, wherein the data processing system includes n.times.n synapses arranged in an array of j rows and i columns. a plurality of operational amplifiers respectively corresponding to the rows of the array are provided, with each operational amplifier defining a neuron. the input terminals of all of the synapses arranged in a respective column of the array are connected together and define n inputs of the neural network. the output terminals of the synapses arranged in a respective row of the array are connected together and serve as the inputs to a corresponding one of the plurality of operational amplifiers. each synapse includes a capacitor connected between ground potential and the input terminal for weighting the synapse by storing a weighting voltage applied thereto. a random access memory has digitally stored voltage values for weighting all of the synapses. a plurality of digital-analog converters, one for each column of the array of synapses, are connected to the random access memory for converting the digital voltage values for weighting the synapses into analog voltage values. the digital-analog converters provide respective outputs to the weighting terminals of the synapses of a column via respective electronic switches for each synapse. each row of the array includes a bistable circuit for driving the respective electronic switches under the control of a control section which also provides function commands and data to the random access memory. 
dated 1994-03-29" 5300770,apparatus for producing a porosity log of a subsurface formation corrected for detector standoff,"a borehole logging tool is lowered into a borehole traversing a subsurface formation and a neutron detector measures the die-away of nuclear radiation in the formation. intensity signals are produced representing the die-away of nuclear radiation as the logging tool traverses the borehole. a signal processor, employing at least one neural network, processes the intensity signals and produces a standoff-corrected epithermal neutron lifetime signal to correct for standoff from the borehole wall encountered by the detector as the logging tool traverses the borehole. the signal processor further generates a porosity signal from the standoff-corrected epithermal neutron lifetime signal derived from measurements in borehole models at known porosities and conditions of detector standoff. a log is generated of such porosity signal versus depth as the logging tool traverses the borehole.",1994-04-05,"The title of the patent is apparatus for producing a porosity log of a subsurface formation corrected for detector standoff and its abstract is a borehole logging tool is lowered into a borehole traversing a subsurface formation and a neutron detector measures the die-away of nuclear radiation in the formation. intensity signals are produced representing the die-away of nuclear radiation as the logging tool traverses the borehole. a signal processor, employing at least one neural network, processes the intensity signals and produces a standoff-corrected epithermal neutron lifetime signal to correct for standoff from the borehole wall encountered by the detector as the logging tool traverses the borehole. the signal processor further generates a porosity signal from the standoff-corrected epithermal neutron lifetime signal derived from measurements in borehole models at known porosities and conditions of detector standoff.
a log is generated of such porosity signal versus depth as the logging tool traverses the borehole. dated 1994-04-05" 5301257,neural network,"to enable the pattern matching between a shifted input pattern and the standard pattern, a plurality of standard patterns are stored in a standard pattern associative memory network 12. a pattern shifted relative to the standard pattern is inputted to the input pattern network 11 and a restriction condition of when the input pattern is shifted relative to the standard pattern is stored in a coordinate associated network 14. in an association network 13, weights and biases are determined so that the respective units of the network 13 are activated most intensely when the input pattern and the standard pattern match correctly each other in response to the signals from the respective networks 11, 12, and 14.",1994-04-05,"The title of the patent is neural network and its abstract is to enable the pattern matching between a shifted input pattern and the standard pattern, a plurality of standard patterns are stored in a standard pattern associative memory network 12. a pattern shifted relative to the standard pattern is inputted to the input pattern network 11 and a restriction condition of when the input pattern is shifted relative to the standard pattern is stored in a coordinate associated network 14. in an association network 13, weights and biases are determined so that the respective units of the network 13 are activated most intensely when the input pattern and the standard pattern match correctly each other in response to the signals from the respective networks 11, 12, and 14. dated 1994-04-05" 5301681,device for detecting cancerous and precancerous conditions in a breast,"the present invention relates to a device for detecting and monitoring physiological conditions in mammalian tissue, and method for using the same. 
the device includes sensors for sensing physiological conditions and generating signals in response thereto and processor operatively associated with the sensors for receiving and manipulating the signals to produce a generalization indicative of normal and abnormal physiological condition of mammalian tissue. the processor is characterized to include a neural network having a predetermined solution space memory, the solution space memory including regions indicative of two (2) or more physiological conditions, wherein the generalization is characterized by the signals projected into the regions.",1994-04-12,"The title of the patent is device for detecting cancerous and precancerous conditions in a breast and its abstract is the present invention relates to a device for detecting and monitoring physiological conditions in mammalian tissue, and method for using the same. the device includes sensors for sensing physiological conditions and generating signals in response thereto and processor operatively associated with the sensors for receiving and manipulating the signals to produce a generalization indicative of normal and abnormal physiological condition of mammalian tissue. the processor is characterized to include a neural network having a predetermined solution space memory, the solution space memory including regions indicative of two (2) or more physiological conditions, wherein the generalization is characterized by the signals projected into the regions. dated 1994-04-12" 5303269,optically maximum a posteriori demodulator,"a system and method for optimal maximum a posteriori (map) demodulation.
the present invention incorporates neural network technology, i.e., a hopfield network, (1) to replace the function of the traditional, suboptimal phase-locked loop in an fm receiver and/or (2) to optimally estimate a discrete phase value using an expected value (obtained from the mean of the prior probability distribution of the phase) and statistical dependence between different phase values in a block of samples (described by the covariance matrix of the prior phase distribution). the definition of the hopfield network includes particular bias currents, feedback weights and a sigmoid function for solving the nonlinear integral equation associated with optimal demodulation. the present invention also includes a signal classifier having a plurality of angled modulators for modeling different phase modulation processes.",1994-04-12,"The title of the patent is optically maximum a posteriori demodulator and its abstract is a system and method for optimal maximum a posteriori (map) demodulation. the present invention incorporates neural network technology, i.e., a hopfield network, (1) to replace the function of the traditional, suboptimal phase-locked loop in an fm receiver and/or (2) to optimally estimate a discrete phase value using an expected value (obtained from the mean of the prior probability distribution of the phase) and statistical dependence between different phase values in a block of samples (described by the covariance matrix of the prior phase distribution). the definition of the hopfield network includes particular bias currents, feedback weights and a sigmoid function for solving the nonlinear integral equation associated with optimal demodulation. the present invention also includes a signal classifier having a plurality of angled modulators for modeling different phase modulation processes. 
dated 1994-04-12" 5303311,method and apparatus for recognizing characters,a character recognition system identifies characters including hand written characters with a high degree of accuracy by use of spiral view codes for pels in the scanned character image. the spiral view codes are developed by comparing stroke length or distance to a remote stroke from a first radial view from a character pel to stroke length or distance to a remote stroke from a counterclockwise adjacent view for the same character pel. these spiral view codes are collected into a spiral view pattern for each character pel. the spiral view patterns for a character are accumulated to form a character vector. the character vector is analyzed by a linear decision network or a neural network.,1994-04-12,The title of the patent is method and apparatus for recognizing characters and its abstract is a character recognition system identifies characters including hand written characters with a high degree of accuracy by use of spiral view codes for pels in the scanned character image. the spiral view codes are developed by comparing stroke length or distance to a remote stroke from a first radial view from a character pel to stroke length or distance to a remote stroke from a counterclockwise adjacent view for the same character pel. these spiral view codes are collected into a spiral view pattern for each character pel. the spiral view patterns for a character are accumulated to form a character vector. the character vector is analyzed by a linear decision network or a neural network. dated 1994-04-12 5303328,neural network system for determining optimal solution,"a neural network system includes an input unit, an operation control unit, a parameter setting unit, a neural network group unit, and a display unit. the network group unit includes first and second neural networks. 
the first neural network operates according to the mean field approximation method to which the annealing is added, whereas the second neural network operates in accordance with the simulated annealing. each of the first and second neural networks includes a plurality of neurons each connected via synapses to neurons so as to weight outputs from the neurons based on synapse weights, thereby computing an output related to a total of weighted outputs from the neurons according to an output function. the parameter setting unit is responsive to a setting instruction to generate neuron parameters including synapse weights, threshold values, and output functions, which are set to the first neural network and which are selectively set to the second neural network. the operation control unit responsive to an input of a problem analyzes the problem and then generates a setting instruction based on a result of the analysis to output the result to the parameter setting unit. after the neuron parameters are set thereto, in order for the first and second neural network to selectively or to iteratively operate, the operation control unit controls operations of computations in the network group unit in accordance with the analysis result and then presents results of the computations in the network group unit on the display unit.",1994-04-12,"The title of the patent is neural network system for determining optimal solution and its abstract is a neural network system includes an input unit, an operation control unit, a parameter setting unit, a neural network group unit, and a display unit. the network group unit includes first and second neural networks. the first neural network operates according to the mean field approximation method to which the annealing is added, whereas the second neural network operates in accordance with the simulated annealing.
each of the first and second neural networks includes a plurality of neurons each connected via synapses to neurons so as to weight outputs from the neurons based on synapse weights, thereby computing an output related to a total of weighted outputs from the neurons according to an output function. the parameter setting unit is responsive to a setting instruction to generate neuron parameters including synapse weights, threshold values, and output functions, which are set to the first neural network and which are selectively set to the second neural network. the operation control unit responsive to an input of a problem analyzes the problem and then generates a setting instruction based on a result of the analysis to output the result to the parameter setting unit. after the neuron parameters are set thereto, in order for the first and second neural network to selectively or to iteratively operate, the operation control unit controls operations of computations in the network group unit in accordance with the analysis result and then presents results of the computations in the network group unit on the display unit. dated 1994-04-12" 5303330,hybrid multi-layer neural networks,"a hybrid network 100 which combines a neural network of the self-organized type 110 with a plurality of neural networks of the supervised learning type 150,160,170 to successfully retrieve building address information from a database using imperfect textual retrieval keys. generally, the self-organized type is a kohonen feature map network, whereas each supervised learning type is a back propagation network.
a user query 105 produces an activation response 111,112,113 from the self-organized network 110 and this response, along with a new query 151,161,171 derived from the original query 105, activates a selected one of the learning networks r.sub.1,r.sub.2,r.sub.m to retrieve the requested information.",1994-04-12,"The title of the patent is hybrid multi-layer neural networks and its abstract is a hybrid network 100 which combines a neural network of the self-organized type 110 with a plurality of neural networks of the supervised learning type 150,160,170 to successfully retrieve building address information from a database using imperfect textual retrieval keys. generally, the self-organized type is a kohonen feature map network, whereas each supervised learning type is a back propagation network. a user query 105 produces an activation response 111,112,113 from the self-organized network 110 and this response, along with a new query 151,161,171 derived from the original query 105, activates a selected one of the learning networks r.sub.1,r.sub.2,r.sub.m to retrieve the requested information. dated 1994-04-12" 5305204,digital image display apparatus with automatic window level and window width adjustment,"a digital image display apparatus for converting a pixel value of medical digital image data such as mri image data or ct image data into brightness in accordance with a display window including a window level and a window width of a display unit, determines the optimum window level and width for each image as follows. 
the apparatus obtains a histogram of pixel values from the digital image data and calculates brightness data of a pixel value having a highest frequency, brightness data of a pixel value at a boundary between a background and an image, area data of a portion having middle brightness within a display brightness range, area data of a portion having maximum brightness, and data indicating a ratio between an area of a portion having higher brightness than the middle brightness and an area of a portion having lower brightness than that, obtained when the digital image is to be displayed by a given display window on the basis of the histogram. the apparatus obtains image quality indicating clarity of the image displayed by the given window on the basis of the above data by using arithmetic operations or by using a neural network, thereby determining the optimum display window which provides a maximum image quality.",1994-04-19,"The title of the patent is digital image display apparatus with automatic window level and window width adjustment and its abstract is a digital image display apparatus for converting a pixel value of medical digital image data such as mri image data or ct image data into brightness in accordance with a display window including a window level and a window width of a display unit, determines the optimum window level and width for each image as follows.
the apparatus obtains a histogram of pixel values from the digital image data and calculates brightness data of a pixel value having a highest frequency, brightness data of a pixel value at a boundary between a background and an image, area data of a portion having middle brightness within a display brightness range, area data of a portion having maximum brightness, and data indicating a ratio between an area of a portion having higher brightness than the middle brightness and an area of a portion having lower brightness than that, obtained when the digital image is to be displayed by a given display window on the basis of the histogram. the apparatus obtains image quality indicating clarity of the image displayed by the given window on the basis of the above data by using arithmetic operations or by using a neural network, thereby determining the optimum display window which provides a maximum image quality. dated 1994-04-19" 5305230,process control system and power plant process control system,"a process control system controls a large scale plant such as thermal power plant.
this process control system includes a target setting unit for setting an operation target, a control unit for receiving a signal indicating the operation target and for outputting a controlled variable to operate the process, an evaluation unit for quantitatively evaluating operation characteristics corresponding to the operation target of the process operated on the basis of a signal indicating the controlled variable supplied from the control unit, a modification unit for extracting an optimum operation process qualitatively squaring or conforming with the evaluated value derived by the evaluation unit out of a modification rule predetermined in qualitative relation between the operation characteristics and the operation target of the process and for determining the modification rate of the control unit, a storage unit having a model of a neural network for storing a relation between the operation target and the modification rate derived by the modification unit as a connection state within a circuit, and a learning unit for making the model of the neural network learn the relation between the operation target and the modification rate.",1994-04-19,"The title of the patent is process control system and power plant process control system and its abstract is a process control system controls a large scale plant such as thermal power plant.
this process control system includes a target setting unit for setting an operation target, a control unit for receiving a signal indicating the operation target and for outputting a controlled variable to operate the process, an evaluation unit for quantitatively evaluating operation characteristics corresponding to the operation target of the process operated on the basis of a signal indicating the controlled variable supplied from the control unit, a modification unit for extracting an optimum operation process qualitatively squaring or conforming with the evaluated value derived by the evaluation unit out of a modification rule predetermined in qualitative relation between the operation characteristics and the operation target of the process and for determining the modification rate of the control unit, a storage unit having a model of a neural network for storing a relation between the operation target and the modification rate derived by the modification unit as a connection state within a circuit, and a learning unit for making the model of the neural network learn the relation between the operation target and the modification rate. dated 1994-04-19" 5305235,monitoring diagnosis device for electrical appliance,"a monitoring diagnostic device for an electrical appliance such as gas insulated switchgear includes a sensor, such as an acceleration sensor, and a neural network including an input layer, an intermediate layer, and an output layer, each consisting of a plurality of neural elements. the input, intermediate and output layers are coupled to each other via a plurality of connection weights. the output of the sensor is first processed and then is supplied to the neural elements of the input layer.
the connection weights are adjusted by means of learning data such that the output from the neural elements of the output layer of the neural network correctly identifies the causes of abnormality of the electrical appliance.",1994-04-19,"The title of the patent is monitoring diagnosis device for electrical appliance and its abstract is a monitoring diagnostic device for an electrical appliance such as gas insulated switchgear includes a sensor, such as an acceleration sensor, and a neural network including an input layer, an intermediate layer, and an output layer, each consisting of a plurality of neural elements. the input, intermediate and output layers are coupled to each other via a plurality of connection weights. the output of the sensor is first processed and then is supplied to the neural elements of the input layer. the connection weights are adjusted by means of learning data such that the output from the neural elements of the output layer of the neural network correctly identifies the causes of abnormality of the electrical appliance. dated 1994-04-19" 5305250,analog continuous-time mos vector multiplier circuit and a programmable mos realization for feedback neural networks,"a neuron circuit and a neural network including a four quadrant analog multiplier/summer circuit constructed in field effect transistors. the neuron circuit includes the analog multiplier/summer formed of an operational amplifier, plural sets of four field effect transistors, an rc circuit and a double inverter. the multiplier/summer circuit includes a set of four identical field effect transistors for each product implemented. this produces a four quadrant multiplication if the four field effect transistors operate in the triode mode. the output of the multiplier/summer is the sum of these products. the neural network includes a plurality of these neuron circuits. each neuron circuit receives an input and a set of synaptic weight inputs. 
the output of each neuron circuit is supplied to the corresponding feedback input of each neuron circuit. the multiplier/summer of each neuron circuit produces the sum of the product of each neuron circuit output and its corresponding synaptic weight. the individual neuron circuits and the neural network can be constructed in mos vlsi.",1994-04-19,"The title of the patent is analog continuous-time mos vector multiplier circuit and a programmable mos realization for feedback neural networks and its abstract is a neuron circuit and a neural network including a four quadrant analog multiplier/summer circuit constructed in field effect transistors. the neuron circuit includes the analog multiplier/summer formed of an operational amplifier, plural sets of four field effect transistors, an rc circuit and a double inverter. the multiplier/summer circuit includes a set of four identical field effect transistors for each product implemented. this produces a four quadrant multiplication if the four field effect transistors operate in the triode mode. the output of the multiplier/summer is the sum of these products. the neural network includes a plurality of these neuron circuits. each neuron circuit receives an input and a set of synaptic weight inputs. the output of each neuron circuit is supplied to the corresponding feedback input of each neuron circuit. the multiplier/summer of each neuron circuit produces the sum of the product of each neuron circuit output and its corresponding synaptic weight. the individual neuron circuits and the neural network can be constructed in mos vlsi. dated 1994-04-19" 5306893,weld acoustic monitor,"a system for real-time analysis of weld quality in an arc welding process. the system includes a transducer which receives acoustic signals generated during the welding process. the acoustic signals are then sampled and digitized. 
a signal processor calculates the root mean square and peak amplitudes of the digitized signals and transforms the digitized signal into a frequency domain signal. a data processor divides the frequency domain signal into a plurality of frequency bands and calculates the average power for each band. the average power values, in addition to the peak and root mean square amplitude values, are input to an artificial neural network for analysis of weld quality. arc current and/or arc voltage signals may be input to the a/d converter alone or in combination with the acoustic signal data for subsequent signal processing and neural network analysis.",1994-04-26,"The title of the patent is weld acoustic monitor and its abstract is a system for real-time analysis of weld quality in an arc welding process. the system includes a transducer which receives acoustic signals generated during the welding process. the acoustic signals are then sampled and digitized. a signal processor calculates the root mean square and peak amplitudes of the digitized signals and transforms the digitized signal into a frequency domain signal. a data processor divides the frequency domain signal into a plurality of frequency bands and calculates the average power for each band. the average power values, in addition to the peak and root mean square amplitude values, are input to an artificial neural network for analysis of weld quality. arc current and/or arc voltage signals may be input to the a/d converter alone or in combination with the acoustic signal data for subsequent signal processing and neural network analysis. dated 1994-04-26" 5307260,order entry apparatus for automatic estimation and its method,"order entry apparatus for automatic estimation uses a transformation model comprising a pattern composed of a plurality of parameters representing custom product specifications, production line conditions and factors for composing the estimates for production cost and completion date.
the parameters of this transformation model are specified by learning, leading to estimation for the requested product specifications, in broad consideration of conditions such as those of the production line. the use of a neural network model as this pattern transformation model makes pattern transformation more flexible and pattern learning more efficient. estimation accuracy is also increased by entering values of predicted changes in production line conditions such as loads or stock obtained from resource requirements planning, process design, capacity requirements planning, etc.",1994-04-26,"The title of the patent is order entry apparatus for automatic estimation and its method and its abstract is order entry apparatus for automatic estimation uses a transformation model comprising a pattern composed of a plurality of parameters representing custom product specifications, production line conditions and factors for composing the estimates for production cost and completion date. the parameters of this transformation model are specified by learning, leading to estimation for the requested product specifications, in broad consideration of conditions such as those of the production line. the use of a neural network model as this pattern transformation model makes pattern transformation more flexible and pattern learning more efficient. estimation accuracy is also increased by entering values of predicted changes in production line conditions such as loads or stock obtained from resource requirements planning, process design, capacity requirements planning, etc.
dated 1994-04-26" 5307444,voice analyzing system using hidden markov model and having plural neural network predictors,"an analyzing system analyzes object signals, particularly voice signals, by estimating a generation likelihood of an observation vector sequence being a time series of feature vectors with use of a markov model having a plurality of states and given transition probabilities from state to state. a state designation section determines a state i at a time t stochastically using the markov model. plural predictors, each of which is composed of a neural network and is provided per each state of the markov model, are provided for generating a predictional vector of the feature vector x.sub.t in the state i at the time t based on values of the feature vectors other than the feature vector x.sub.t. a first calculation section calculates an error vector of the predictional vector to the feature vector x.sub.t. a second calculation section calculates a generation likelihood of the error vector using a predetermined probability distribution of the error vector according to which the error vector is generated.",1994-04-26,"The title of the patent is voice analyzing system using hidden markov model and having plural neural network predictors and its abstract is an analyzing system analyzes object signals, particularly voice signals, by estimating a generation likelihood of an observation vector sequence being a time series of feature vectors with use of a markov model having a plurality of states and given transition probabilities from state to state. a state designation section determines a state i at a time t stochastically using the markov model. plural predictors, each of which is composed of a neural network and is provided per each state of the markov model, are provided for generating a predictional vector of the feature vector x.sub.t in the state i at the time t based on values of the feature vectors other than the feature vector x.sub.t. 
a first calculation section calculates an error vector of the predictional vector to the feature vector x.sub.t. a second calculation section calculates a generation likelihood of the error vector using a predetermined probability distribution of the error vector according to which the error vector is generated. dated 1994-04-26" 5309525,image processing apparatus using neural network,"disclosed is an image processing apparatus having an input device for inputting binary image data comprising a plurality of pixels which include a pixel of interest that is to be subjected to multivalued conversion, the plurality of pixels being contained in an area that is asymmetrical with respect to the position of the pixel of interest, and a multivalued converting device for executing processing, by a neural network, to restore the input binary image data to multivalued image data for the pixel of interest, whereby multivalued image data is estimated from binarized image data. it is possible to reduce the number of pixels referred to in arithmetic operations performed in the neural network.",1994-05-03,"The title of the patent is image processing apparatus using neural network and its abstract is disclosed is an image processing apparatus having an input device for inputting binary image data comprising a plurality of pixels which include a pixel of interest that is to be subjected to multivalued conversion, the plurality of pixels being contained in an area that is asymmetrical with respect to the position of the pixel of interest, and a multivalued converting device for executing processing, by a neural network, to restore the input binary image data to multivalued image data for the pixel of interest, whereby multivalued image data is estimated from binarized image data. it is possible to reduce the number of pixels referred to in arithmetic operations performed in the neural network.
dated 1994-05-03" 5311182,method and apparatus for regenerating a distorted binary signal stream,"apparatus is shown for regenerating a signal stream of binary digits which has been distorted by intersymbol interference during passage through a channel (10 and 12) having insufficient channel bandwidth such that the channel output waveform comprises substantially an analog signal. (fig. 2 at b and d.) after equalization (24) the channel output is converted to a digital sample signal stream at analog-to-digital converter (26). the converter (26) output is supplied to shift register (28) from which successive groups of digital sample signals produced over a plurality of bit intervals of channel output are shifted to decoder (22). initialization bits that immediately precede the first group of binary digits to be regenerated also are supplied to decoder (22) through sector header reader (20) for use in decoding the first group of digital sample signals supplied to the decoder. during decoding of subsequent groups of digital sample signals, end bits (3,4 and 5) from the preceding group of regenerated binary digits are supplied to the decoder (22). the decoder includes a plurality of trained networks (40-1 through 40-5 and 50-1 through 50-m) of either the neural network or binary tree type.",1994-05-10,"The title of the patent is method and apparatus for regenerating a distorted binary signal stream and its abstract is apparatus is shown for regenerating a signal stream of binary digits which has been distorted by intersymbol interference during passage through a channel (10 and 12) having insufficient channel bandwidth such that the channel output waveform comprises substantially an analog signal. (fig. 2 at b and d.) after equalization (24) the channel output is converted to a digital sample signal stream at analog-to-digital converter (26). 
the converter (26) output is supplied to shift register (28) from which successive groups of digital sample signals produced over a plurality of bit intervals of channel output are shifted to decoder (22). initialization bits that immediately precede the first group of binary digits to be regenerated also are supplied to decoder (22) through sector header reader (20) for use in decoding the first group of digital sample signals supplied to the decoder. during decoding of subsequent groups of digital sample signals, end bits (3,4 and 5) from the preceding group of regenerated binary digits are supplied to the decoder (22). the decoder includes a plurality of trained networks (40-1 through 40-5 and 50-1 through 50-m) of either the neural network or binary tree type. dated 1994-05-10" 5311421,process control method and system for performing control of a controlled system by use of a neural network,"a method for controlling a controlled system by a controller such that a controlled variable can be brought into conformity with a desired value. with respect to at least one of input/output variables for a combined controlling-controlled system, which includes in combination the controller and the controlled system, and input/output variables for the controlled system, information containing its characteristics is taken out from the combined controlling-controlled system. the information with the characteristics contained therein is inputted to a neural network which has been caused beforehand to learn a correlation between the information containing the characteristics and control parameters. 
from the neural network, one or more of the control parameters, said one or more control parameters corresponding to a corresponding number of inputs to the neural network, are outputted to the controller.",1994-05-10,"The title of the patent is process control method and system for performing control of a controlled system by use of a neural network and its abstract is a method for controlling a controlled system by a controller such that a controlled variable can be brought into conformity with a desired value. with respect to at least one of input/output variables for a combined controlling-controlled system, which includes in combination the controller and the controlled system, and input/output variables for the controlled system, information containing its characteristics is taken out from the combined controlling-controlled system. the information with the characteristics contained therein is inputted to a neural network which has been caused beforehand to learn a correlation between the information containing the characteristics and control parameters. from the neural network, one or more of the control parameters, said one or more control parameters corresponding to a corresponding number of inputs to the neural network, are outputted to the controller. dated 1994-05-10" 5311600,method of edge detection in optical images using neural network classifier,"an image processor employing a camera, frame grabber and a new algorithm for detecting straight edges in optical images is disclosed. the algorithm is based on using a self-organizing unsupervised neural network learning to classify pixels on a digitized image and then extract the corresponding line parameters. the image processor is demonstrated on the specific application of edge detection for linewidth measurement in semiconductor lithography. 
the results are compared to results obtained by a standard straight edge detector based on the radon transform; good consistency is observed; however, superior speed is achieved for the proposed image processor. the results obtained by the proposed approach are also shown to be in agreement with scanning electron microscope (sem) measurements, which is known to have excellent accuracy but is an invasive measurement instrument. the method can thus be used for on-line measurement and control of microlithography processes and for alignment tasks as well.",1994-05-10,"The title of the patent is method of edge detection in optical images using neural network classifier and its abstract is an image processor employing a camera, frame grabber and a new algorithm for detecting straight edges in optical images is disclosed. the algorithm is based on using a self-organizing unsupervised neural network learning to classify pixels on a digitized image and then extract the corresponding line parameters. the image processor is demonstrated on the specific application of edge detection for linewidth measurement in semiconductor lithography. the results are compared to results obtained by a standard straight edge detector based on the radon transform; good consistency is observed; however, superior speed is achieved for the proposed image processor. the results obtained by the proposed approach are also shown to be in agreement with scanning electron microscope (sem) measurements, which is known to have excellent accuracy but is an invasive measurement instrument. the method can thus be used for on-line measurement and control of microlithography processes and for alignment tasks as well. 
dated 1994-05-10" 5313407,integrated active vibration cancellation and machine diagnostic system,a machine analyzer is connected to an active vibration cancellation system in order to identify the operating status of the moving machinery while using a minimum of additional parts and taking advantage of signal processing already occurring in the active vibration cancellation system. a preferred embodiment employs a neural network pattern classifier in connection with detecting operating states such as cylinder misfires in an internal combustion engine.,1994-05-17,The title of the patent is integrated active vibration cancellation and machine diagnostic system and its abstract is a machine analyzer is connected to an active vibration cancellation system in order to identify the operating status of the moving machinery while using a minimum of additional parts and taking advantage of signal processing already occurring in the active vibration cancellation system. a preferred embodiment employs a neural network pattern classifier in connection with detecting operating states such as cylinder misfires in an internal combustion engine. dated 1994-05-17 5313558,system for spatial and temporal pattern learning and recognition,"a neural network simulator that comprises a sensory window memory capable of providing a system of neuron elements with an input consisting of data generated from sequentially sampled spatial and/or temporal patterns. each neuron element comprises multiple levels, each of which is independently connected to the sensory window and/or to other neuron elements for receiving information corresponding to spatial and/or temporal patterns to be learned or recognized. each neuron level comprises a multiplicity of pairs of synaptic connections that record ratios of input information so received and compare them to prerecorded ratios corresponding to learned patterns. 
the comparison is carried out for each synaptic pair according to empirical activation functions that produce maximum activation of a particular pair when the current ratio matches the learned ratio. when a sufficiently large number of synaptic pairs in a level registers a high activation, the corresponding neuron is taken to have recognized the learned pattern and produces a recognition signal.",1994-05-17,"The title of the patent is system for spatial and temporal pattern learning and recognition and its abstract is a neural network simulator that comprises a sensory window memory capable of providing a system of neuron elements with an input consisting of data generated from sequentially sampled spatial and/or temporal patterns. each neuron element comprises multiple levels, each of which is independently connected to the sensory window and/or to other neuron elements for receiving information corresponding to spatial and/or temporal patterns to be learned or recognized. each neuron level comprises a multiplicity of pairs of synaptic connections that record ratios of input information so received and compare them to prerecorded ratios corresponding to learned patterns. the comparison is carried out for each synaptic pair according to empirical activation functions that produce maximum activation of a particular pair when the current ratio matches the learned ratio. when a sufficiently large number of synaptic pairs in a level registers a high activation, the corresponding neuron is taken to have recognized the learned pattern and produces a recognition signal. dated 1994-05-17" 5313559,method of and system for controlling learning in neural network,"a learning control method reduces overall learning time by displaying data related to an appropriate determination of learning protraction and a proper restoring method. 
prior to initiating the learning, the user is inquired about the current problem and a problem data set representing items associated with the problem is obtained. evaluation data indicating a state of learning obtained during the learning on the current problem is sequentially stored and displayed. when there is a high possibility of learning protraction during the learning, a message informing the user is displayed. when the learning is stopped by the user in this case, the problem data set and evaluation data set are stored. then, a list of restoring methods is displayed and a particular restoring method is selected by the user once the learning is stopped. the learning is restarted on the current problem in accordance with the selected restoring method.",1994-05-17,"The title of the patent is method of and system for controlling learning in neural network and its abstract is a learning control method reduces overall learning time by displaying data related to an appropriate determination of learning protraction and a proper restoring method. prior to initiating the learning, the user is inquired about the current problem and a problem data set representing items associated with the problem is obtained. evaluation data indicating a state of learning obtained during the learning on the current problem is sequentially stored and displayed. when there is a high possibility of learning protraction during the learning, a message informing the user is displayed. when the learning is stopped by the user in this case, the problem data set and evaluation data set are stored. then, a list of restoring methods is displayed and a particular restoring method is selected by the user once the learning is stopped. the learning is restarted on the current problem in accordance with the selected restoring method. 
dated 1994-05-17" 5315162,electrochemical synapses for artificial neural networks,an electrochemical synapse adapted for use in a neural network which includes an input terminal and an output terminal located at a distance of less than 100 microns from the input terminal. a permanent interconnect having controllable conductivity is located between the two inputs. the conductivity of the permanent interconnect is controlled by either growing or eliminating metallic whiskers between the inputs. the growth and elimination of whiskers provides a rapid and controllable electrochemical synapse. partial neural network systems are disclosed utilizing the electrochemical synapse.,1994-05-24,The title of the patent is electrochemical synapses for artificial neural networks and its abstract is an electrochemical synapse adapted for use in a neural network which includes an input terminal and an output terminal located at a distance of less than 100 microns from the input terminal. a permanent interconnect having controllable conductivity is located between the two inputs. the conductivity of the permanent interconnect is controlled by either growing or eliminating metallic whiskers between the inputs. the growth and elimination of whiskers provides a rapid and controllable electrochemical synapse. partial neural network systems are disclosed utilizing the electrochemical synapse. dated 1994-05-24 5315704,speech/voiceband data discriminator,"input signals are processed to generate a plurality of signals having different features according to whether the input signals are speech signals or voiceband data signals, and these plural signals are entered into a neural network to determine whether they have features closer to those of speech signals or of voiceband data signals.
the classifying function of the neural network is achieved by inputting samples of speech signals and voiceband data signals and learning how to obtain correct classification results.",1994-05-24,"The title of the patent is speech/voiceband data discriminator and its abstract is input signals are processed to generate a plurality of signals having different features according to whether the input signals are speech signals or voiceband data signals, and these plural signals are entered into a neural network to be determined whether they have features closer to those of speech signals or of voiceband data signals. the classifying function of the neural network is achieved by inputting samples of speech signals and voiceband data signals and learning how to obtain correct classification results. dated 1994-05-24" 5317675,neural network pattern recognition learning method,"a neural network includes an input layer composed of a plurality of cells receiving respective components of an input vector, an output layer composed of a plurality of cells representing attribute of the input vector, and an intermediate layer composed of a plurality of cells connected to all the cells of the input and output layers for producing a mapping to map a given input vector to its correct attribute. 
a learning method utilizing such neural network is carried out by image projecting the input vector into the partial dimensional space by a projection image operating means preliminarily prepared and by storing a coupling vector on the image projection space as well as the threshold and attribute vector.",1994-05-31,"The title of the patent is neural network pattern recognition learning method and its abstract is a neural network includes an input layer composed of a plurality of cells receiving respective components of an input vector, an output layer composed of a plurality of cells representing attribute of the input vector, and an intermediate layer composed of a plurality of cells connected to all the cells of the input and output layers for producing a mapping to map a given input vector to its correct attribute. a learning method utilizing such neural network is carried out by image projecting the input vector into the partial dimensional space by a projection image operating means preliminarily prepared and by storing a coupling vector on the image projection space as well as the threshold and attribute vector. dated 1994-05-31" 5317676,apparatus and method for facilitating use of a neural network,"a neural network development utility assists a developer in generating one or more filters for data to be input to or output from a neural network. a filter is a device which translates data in accordance with a data transformation definition contained in a translate template. source data for the neural network may be expressed in any arbitrary combination of symbolic or numeric fields in a data base. the developer selects those fields to be used from an interactive menu. the utility scans the selected field entries in the source data base to identify the logical type of each field, and creates a default translate template based on this scan. numeric data is automatically scaled. the developer may use the default template, or edit it from an interactive editor. 
when editing the template, the developer may select from a menu of commonly used neural network data formats, and from a menu of commonly used primitive mathematical operations. the developer may interactively define additional filters to perform data transformations in series, thus achieving more complex mathematical operations on the data. templates may be edited at any time during the development process. if a network does not appear to be giving satisfactory results, the developer may easily alter the template to present inputs in some other format.",1994-05-31,"The title of the patent is apparatus and method for facilitating use of a neural network and its abstract is a neural network development utility assists a developer in generating one or more filters for data to be input to or output from a neural network. a filter is a device which translates data in accordance with a data transformation definition contained in a translate template. source data for the neural network may be expressed in any arbitrary combination of symbolic or numeric fields in a data base. the developer selects those fields to be used from an interactive menu. the utility scans the selected field entries in the source data base to identify the logical type of each field, and creates a default translate template based on this scan. numeric data is automatically scaled. the developer may use the default template, or edit it from an interactive editor. when editing the template, the developer may select from a menu of commonly used neural network data formats, and from a menu of commonly used primitive mathematical operations. the developer may interactively define additional filters to perform data transformations in series, thus achieving more complex mathematical operations on the data. templates may be edited at any time during the development process. 
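The development-utility record above describes a "filter" driven by a translate template: the utility scans selected source fields, infers each field's logical type, auto-scales numeric data, and translates rows into network inputs. A minimal sketch under those assumptions; the function names, the [0, 1] scaling range, and one-hot encoding for symbolic fields are illustrative choices, not taken from the patent:

```python
# Hypothetical sketch of the translate-template "filter" described above:
# the template is built by scanning the selected fields, then applied to
# translate each source row into a flat numeric input vector. Numeric fields
# are scaled to [0, 1]; symbolic fields are one-hot encoded. All names and
# encoding choices here are assumptions, not from the patent.

def default_template(rows, fields):
    """Scan the selected field entries and build a default translate template."""
    template = {}
    for f in fields:
        values = [r[f] for r in rows]
        if all(isinstance(v, (int, float)) for v in values):
            template[f] = ("numeric", min(values), max(values))   # scaling range
        else:
            template[f] = ("symbolic", sorted(set(values)))       # symbol set
    return template

def translate(row, template):
    """Apply the template to one row, yielding the network's input vector."""
    out = []
    for f, spec in template.items():
        if spec[0] == "numeric":
            _, lo, hi = spec
            out.append((row[f] - lo) / (hi - lo) if hi > lo else 0.0)
        else:
            _, symbols = spec
            out.extend(1.0 if row[f] == s else 0.0 for s in symbols)
    return out
```

Chaining several such filters in series, as the abstract suggests, would just mean feeding one `translate` output into another template's transformation; editing the template and re-translating corresponds to presenting the inputs in some other format.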
if a network does not appear to be giving satisfactory results, the developer may easily alter the template to present inputs in some other format. dated 1994-05-31" 5319587,computing element for neural networks,"a computing element for use in an array in a neural network. each computing element has k (k>1) input signal terminals, k input backpropagated signal terminals, k output backpropagated signal terminals and at least one output terminal. the input terminals of the computing element located in row i, column j of the array of computing elements receive a sequence of concurrent input signals on k parallel input lines representing a parallel input signal s.sub.ij having vector elements (s.sub.ij1, s.sub.ij2, s.sub.ij3, . . . , s.sub.ijk).sup.t. the k input backpropagated signal terminals are coupled to receive an m-dimensional (m1) input signal terminals, k input backpropagated signal terminals, k output backpropagated signal terminals and at least one output terminal. the input terminals of the computing element located in row i, column j of the array of computing elements receive a sequence of concurrent input signals on k parallel input lines representing a parallel input signal s.sub.ij having vector elements (s.sub.ij1, s.sub.ij2, s.sub.ij3, . . . , s.sub.ijk).sup.t. the k input backpropagated signal terminals are coupled to receive an m-dimensional (m
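The computing-element record above describes a unit with k weighted input lines, one output, and k backpropagated signal terminals that carry an error signal back through the element. A minimal sketch of one such element; the gradient-style update rule and learning rate are illustrative assumptions, not the patent's circuit:

```python
# Minimal sketch of one computing element as described above: k parallel
# input signals are combined into a single weighted-sum output, and the
# k output backpropagated signal terminals route an error signal back
# through the same weights. The weight-update rule and learning rate are
# illustrative assumptions, not taken from the patent.

class ComputingElement:
    def __init__(self, weights):
        self.w = list(weights)   # one weight per input line (k of them)
        self.s = None            # last input vector s_ij, kept for learning

    def forward(self, s):
        """Weighted sum of the k parallel input signals."""
        self.s = list(s)
        return sum(wi * si for wi, si in zip(self.w, s))

    def backward(self, delta, lr=0.1):
        """Emit the k output backpropagated signals and adjust the weights."""
        back = [wi * delta for wi in self.w]   # error routed back per input line
        self.w = [wi - lr * delta * si for wi, si in zip(self.w, self.s)]
        return back
```

An array of such elements, indexed by row i and column j as in the abstract, would pass each element's backpropagated outputs to the input backpropagated terminals of the preceding column.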