Submitted by twovests in programming
-
deep learning neural networks are 'universal function approximators'.
-
this means that, given enough input/output samples from a function and a big enough brain, a neural network can approximate that function.
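here's a minimal sketch of that claim in plain numpy: fit a tiny MLP to input/output samples of some function. the target (y = x²), the network size, and the training settings are all arbitrary picks for illustration, not anything canonical.

```python
import numpy as np

rng = np.random.default_rng(0)

# input/output samples of the function we want to approximate: y = x^2
X = rng.uniform(-1, 1, (256, 1))
y = (X ** 2).ravel()

# one hidden layer of 32 tanh units, linear output
W1 = rng.normal(0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

# full-batch gradient descent on mean squared error
lr = 0.1
for _ in range(5000):
    h, pred = forward(X)
    d = (pred - y)[:, None] * 2 / len(X)   # grad of MSE wrt the output
    dW2 = h.T @ d; db2 = d.sum(0)
    dh = (d @ W2.T) * (1 - h ** 2)         # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(forward(np.array([[0.5]]))[1][0])    # should land near 0.25
```

same recipe scales up to the fancier examples below: only the inputs and outputs change.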
-
maybe we can use this to encode a number of things?
e.g. 3D manifolds (models with a well-defined inside/outside) can be approximated. input: (x, y, z), output: whether the point is inside the model.
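a toy version of that idea: train a tiny numpy MLP to learn the inside/outside function of a unit sphere. the sphere, the network size, and the training settings are all made-up illustrative choices, not anyone's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# training data: random points in [-1.5, 1.5]^3, labeled inside/outside
X = rng.uniform(-1.5, 1.5, size=(4000, 3))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)  # 1 = inside the sphere

# one hidden layer of 64 tanh units, sigmoid output = "probability inside"
W1 = rng.normal(0, 1.0, (3, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return h, p.ravel()

# full-batch gradient descent on mean binary cross-entropy
lr = 0.5
for step in range(3000):
    h, p = forward(X)
    d = (p - y)[:, None] / len(X)          # grad of cross-entropy wrt logits
    dW2 = h.T @ d; db2 = d.sum(0)
    dh = (d @ W2.T) * (1 - h ** 2)         # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# the origin should read "inside", a far corner "outside"
_, p = forward(np.array([[0.0, 0.0, 0.0], [1.4, 1.4, 1.4]]))
print(p[0] > 0.5, p[1] < 0.5)
```

once trained, the weights *are* the model: you can query any (x, y, z) you like, at any resolution.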
e.g. music: given a time (t), output the amplitude of the sample at that time.
you can also use supplementary data. given a time (t) and the amplitudes of the previous 5 samples, a neural network might be able to better predict the next sample. (it could also diverge over time, giving a wonderfully distorting song).
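a toy version of the previous-5-samples idea: fit a tiny MLP on windows of a synthetic sine "song", then let it free-run on its own predictions. the signal, the window size, and all the network settings are made-up picks for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

sr = 200                                   # sample rate (Hz)
t = np.arange(0, 2.0, 1 / sr)
signal = np.sin(2 * np.pi * 5 * t)         # stand-in "song": a 5 Hz sine

# training pairs: (previous K samples) -> (next sample)
K = 5
X = np.stack([signal[i:i + K] for i in range(len(signal) - K)])
y = signal[K:]

W1 = rng.normal(0, 0.5, (K, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

# full-batch gradient descent on mean squared error
lr = 0.1
for step in range(3000):
    h, pred = forward(X)
    d = (pred - y)[:, None] * 2 / len(X)   # grad of MSE wrt the output
    dW2 = h.T @ d; db2 = d.sum(0)
    dh = (d @ W2.T) * (1 - h ** 2)         # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# free-run: seed with 5 real samples, then feed the net its own output.
# small errors compound here, which is where the "distorting song" comes from.
window = list(signal[:K])
generated = []
for _ in range(200):
    _, nxt = forward(np.array([window[-K:]]))
    generated.append(nxt[0])
    window.append(nxt[0])

print(np.mean((forward(X)[1] - y) ** 2))   # training error on the windows
```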
i might work on this later today instead of my schoolwork if anybody is interested
voxpoplar wrote
I mean, this is basically just what human memory is, right?