Inexact Processing? This channels my inner nerd.

Tip of the hat to Fark for this gem.

Some processing tasks just don’t require absolute precision or accuracy in calculation, namely human interface tasks. Enter “inexact processing”. Through some interesting cuts to the processor hardware, fiddling with power, and managing which types of errors are allowed to creep in, designers were able to wring up to 15x more efficiency out of a processor.

The concept is deceptively simple: Slash power use by allowing processing components — like hardware for adding and multiplying numbers — to make a few mistakes. By cleverly managing the probability of errors and limiting which calculations produce errors, the designers have found they can simultaneously cut energy demands and dramatically boost performance.
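To make the idea concrete, here’s a minimal Python sketch of one common approximate-arithmetic trick: an adder whose low-order bits are pruned away, as if that part of the carry chain had been cut from the circuit. The function name, the bit width, and the pruning scheme are my own illustrative assumptions, not the actual design from the article; real inexact hardware removes circuitry and saves power, while this only models the numeric effect.

```python
# Toy model of an "inexact" adder (an illustrative assumption, not the
# actual hardware design): zero out the low-order bits of each operand,
# as if the least-significant part of the adder had been pruned away.

def inexact_add(a: int, b: int, pruned_bits: int = 4) -> int:
    """Add two non-negative ints while ignoring their low pruned_bits bits.

    The error is confined to the low-order bits and bounded by
    2 * (2**pruned_bits - 1), so a big mistake can never occur.
    """
    mask = ~((1 << pruned_bits) - 1)  # clears the pruned low-order bits
    return (a & mask) + (b & mask)

if __name__ == "__main__":
    exact = 200 + 155
    approx = inexact_add(200, 155)
    print(exact, approx, exact - approx)  # 355 336 19 (error <= 30)
```

That bounded, low-order error is the whole trick: the mistakes land in the bits that matter least, which for audio or video output is exactly where our senses are least likely to notice.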

This may go down as one of those ideas that in 25 years everyone says “well, duh” about. For tasks like video or audio processing at the interface (speaker, monitor), the level of accuracy necessary for human consumption simply does not approach what is needed on the back end. Our eyes and ears are lossy sensors anyway, and as has been proven time and time again, more “fidelity” is not necessarily “better”. Sight and hearing tend to be context dependent due to ambient noise, light, contrast, and other distractions, which is why the most important part of the system is the giant error correction device between and behind the sensors: our brains. We also lose acuity in sight and hearing over time, further reducing the need for high fidelity. There are plenty of exceptions, but the vast majority of human consumption of computational output is watching or listening to content that is perfectly acceptable to mung up with an occasional error. Our modern telephone systems have limits like these built in; extending the same thinking to general computational tasks seems a natural next step.

The designers are already aiming processors like these at low-cost tablet platforms, but the use cases are potentially much wider. The implications for battery life, or simply for lowering the cost of extending human interfaces to applications that are cost prohibitive today, seem pretty exciting.
