Completed Theses - Details

Fault Tolerance Analysis of Approximate Artificial Neural Networks

Student: Emanuel Rheinert
Supervisor: Alexander Sprenger

Abstract

It has often been claimed that artificial neural networks (ANNs) are inherently fault tolerant, but most research only considers high-level errors, for instance random noise in signals or parameters. Little effort has been made to investigate the effect of low-level hardware faults. For this thesis, I have simulated gate-level stuck-at faults in a hardware implementation of ANNs and measured their effect on the high-level functional performance. I can report that most faults are indeed tolerated.
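The fault-injection methodology described above can be illustrated with a small sketch. Everything here is hypothetical and heavily simplified: the "netlist" is a half adder standing in for one piece of a hardware ANN's arithmetic, and the names and tolerance criterion are illustrative, not taken from the thesis.

```python
from itertools import product

# Hypothetical gate-level netlist (a half adder): name -> (function, input nets).
# Gates are listed in topological order, so each gate's inputs are already computed.
NETLIST = {
    "sum":   (lambda a, b: a ^ b, ("a", "b")),
    "carry": (lambda a, b: a & b, ("a", "b")),
}
OUTPUTS = ("sum", "carry")

def evaluate(inputs, stuck_at=None):
    """Evaluate the netlist; optionally force one net to 0 or 1 (a stuck-at fault)."""
    values = dict(inputs)
    for name, (fn, ins) in NETLIST.items():
        values[name] = fn(*(values[i] for i in ins))
        if stuck_at and stuck_at[0] == name:
            values[name] = stuck_at[1]  # override the net with the stuck value
    return tuple(values[o] for o in OUTPUTS)

def fault_is_tolerated(fault):
    """Here a fault counts as tolerated if no input pattern changes the output;
    for an ANN one would instead compare functional metrics such as accuracy."""
    for a, b in product((0, 1), repeat=2):
        if evaluate({"a": a, "b": b}) != evaluate({"a": a, "b": b}, stuck_at=fault):
            return False
    return True

faults = [(net, v) for net in NETLIST for v in (0, 1)]
tolerated = [f for f in faults if fault_is_tolerated(f)]
```

For this tiny exact circuit every stuck-at fault is observable, so `tolerated` ends up empty; the thesis's point is that at the network level, with a functional performance metric instead of bit-exact comparison, most faults do pass such a check.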

To reduce the hardware cost, the precise implementation can be replaced with approximate hardware, which introduces random noise in signals and parameters. I have found that the fault tolerance of such an approximate ANN is still present, but reduced.

As an application of this fault tolerance, I propose a test time reduction strategy: only test for faults which cause significant performance degradation. I can report that the test time for a precise hardware ANN can be reduced by more than 80 %, and by more than 40 % using approximate hardware.
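The proposed strategy amounts to pruning the fault list before test generation: faults whose simulated effect on functional performance stays below a threshold need not be tested. A minimal sketch, with entirely made-up per-fault accuracy drops and an illustrative 1-percentage-point threshold:

```python
# Hypothetical per-fault accuracy drops (percentage points), as one might
# obtain from fault simulation; the numbers are invented for illustration.
accuracy_drop = {
    "f1": 0.0, "f2": 0.1, "f3": 12.5, "f4": 0.0,
    "f5": 3.7, "f6": 0.0, "f7": 0.2, "f8": 25.0,
}

THRESHOLD = 1.0  # only faults degrading accuracy by more than 1 pp are tested

# Prune the fault list: these are the only faults test patterns must cover.
must_test = sorted(f for f, drop in accuracy_drop.items() if drop > THRESHOLD)

# Rough proxy for test time saving: fraction of faults dropped from the list.
saving = 1 - len(must_test) / len(accuracy_drop)
```

In this toy example 5 of 8 faults are pruned (a 62.5 % reduction); the thesis reports savings of over 80 % for a precise hardware ANN, and over 40 % for an approximate one, where the added noise leaves fewer faults benign.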