An artificial neural network is a computing system that can learn from examples, much as people do, instead of being explicitly programmed for each task. But such a network often needs thousands of examples before it can figure something out on its own.
At the University of Michigan, engineers have drastically reduced the number of examples, and shortened the time, needed to teach a neural net. The engineers started with a device called a memristor, which regulates the flow of electricity and remembers how much current has flowed through it, allowing it to process data as well as store it. They combined memristors with a technique called reservoir computing, in which most of the network is a fixed "reservoir" of connections that transforms incoming data, so only a small output layer has to be trained. Reservoir systems have typically required bulky, optically based hardware to enter and hold data, but the Michigan design makes it faster and easier to manage data and train the network.
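To make the reservoir idea concrete, here is a minimal software sketch of an echo state network, one common form of reservoir computing. It is an illustrative analogue in NumPy, not the Michigan team's memristor hardware; the 88-node reservoir size, the leaking rate, and the toy sine-wave prediction task are assumptions chosen only for the example. The fixed random reservoir does the heavy lifting, and training amounts to a single least-squares fit of the readout, which is why such systems need far less training time and data than a fully trained network.

```python
# Minimal reservoir-computing (echo state network) sketch in plain NumPy.
# Illustrative only -- a software analogue, not the memristor hardware.
import numpy as np

rng = np.random.default_rng(0)

N_RESERVOIR = 88      # node count chosen to echo the 88-memristor demo
N_INPUT = 1
LEAK = 0.3            # leaking rate (assumed value for this sketch)

# Fixed random weights -- these are never trained.
W_in = rng.uniform(-0.5, 0.5, size=(N_RESERVOIR, N_INPUT))
W = rng.uniform(-0.5, 0.5, size=(N_RESERVOIR, N_RESERVOIR))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    state = np.zeros(N_RESERVOIR)
    states = []
    for u in inputs:
        pre = W_in @ np.atleast_1d(u) + W @ state
        state = (1 - LEAK) * state + LEAK * np.tanh(pre)
        states.append(state.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)
X = run_reservoir(signal[:-1])   # reservoir states for each input step
y = signal[1:]                   # next-step targets

# "Training" is just a linear least-squares fit of the readout weights.
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ W_out
print("mean squared error:", np.mean((pred - y) ** 2))
```

Because only the readout layer is fit, swapping in a different prediction task means rerunning one cheap linear solve rather than retraining the whole network.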
In a test, the system needed only 88 memristors to recognize handwritten characters with 91 percent accuracy; a conventional neural network would have needed thousands of nodes.
The developers plan to apply their creation first to tasks such as speech recognition and to analyzing past patterns to predict future ones. That may include telling you what word you're going to say before you say it.
TRENDPOST: Speeding up and simplifying reservoir computing brings us closer to a time when computers can more accurately predict everything from stock-market moves to election results.