The random statistical, or "Monte Carlo", test is commonly applied in engineering analysis and provides one of the more rigorous techniques for evaluating system performance. The comparison table below illustrates convergence (reduction in recall error) given five stimulus variables (5 input dimensions). Random test values are uniformly distributed within the range 0.0 to 10.0. Convergence results are shown for learning of 500 stimulus-response patterns.
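A test set of this kind is straightforward to construct. The sketch below generates the dataset described above: 500 stimulus-response patterns whose five input values are drawn uniformly from [0.0, 10.0]. The choice of uniform random responses is a hypothetical one for illustration; the source does not specify how the response values were produced.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 stimulus-response patterns; each stimulus has 5 input
# dimensions drawn uniformly from [0.0, 10.0], as described above.
n_patterns, n_inputs = 500, 5
stimuli = rng.uniform(0.0, 10.0, size=(n_patterns, n_inputs))

# Responses are random as well (an assumed choice here: uniform in
# [0, 1]), so there is no simple functional mapping to exploit.
responses = rng.uniform(0.0, 1.0, size=n_patterns)
```

Because both stimuli and responses are random, any recall accuracy above chance reflects genuine associative storage rather than interpolation of a smooth underlying function.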
Learning within an HNeT cerebellar assembly is, for this instance, compared against a conventional genetic neural network. As is typical within the nonlinear realm, the conventional neural network is unable to converge (i.e. cannot learn the patterns).
Random statistical tests, and many real-world problems, are highly nonlinear: the number of associative patterns learned can greatly exceed the dimensionality of the stimulus input (in the above example, by two orders of magnitude). Conventional neural nets then run into a problem colloquially termed "hitting the wall", and adding more cells, layers, or interconnections does not resolve this limitation.
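The capacity limit is easy to see with a simple baseline. The sketch below (an illustration not taken from the source) fits an ordinary linear least-squares model to 500 random patterns over 5 inputs: with only six free parameters, the model cannot memorize 500 arbitrary targets, and its mean recall error stays close to that of simply guessing the mean response.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same shape of problem as above: 500 random patterns, 5 inputs.
X = rng.uniform(0.0, 10.0, size=(500, 5))
y = rng.uniform(0.0, 1.0, size=500)

# Least-squares fit with a bias column: only 6 free parameters
# against 500 arbitrary targets.
A = np.hstack([X, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Mean absolute recall error; guessing the mean of uniform [0, 1]
# targets would give roughly 0.25, and the fit barely improves on it.
recall_error = np.abs(A @ w - y).mean()
```

The same wall appears, in softened form, for multilayer networks: extra capacity helps only up to a point, which is the limitation the passage above describes.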
For neuroholographic assemblies, learning convergence on nonlinear datasets occurs at a rate comparable to that of linear problem solutions, as shown by the graph to the right. Such rapid convergence in nonlinear problem spaces is, mildly put, dramatic in its implications for virtually all fields of engineering.
