Completed Bachelor Theses

Student: Alisa Stiballe

Supervisor: Somayeh Sadeghi-Kohan

Student: Magdalina Ustimova

Supervisor: Somayeh Sadeghi-Kohan

Student: Christopher Schenk

Supervisor: Jan Dennis Reimer

Student: Khalil Errahman Ben Ameur

Supervisor: Dr. Somayeh Sadeghi-Kohan

Student: Abdulkarim Ghazal

Supervisor: Dr. Somayeh Sadeghi-Kohan

Student: Abdalrham Badran

Supervisor: Dr. Somayeh Sadeghi-Kohan

Student: Alexander Piel
Supervisor: Alexander Sprenger


By compacting circuit structures and making the best possible use of space during chip design, the distance between neighboring interconnects is reduced. This can lead to parasitic effects such as crosstalk, which becomes noticeable in the form of delay faults. Since delay faults can also be caused by other factors such as process variations (e.g. deviating channel lengths in transistors), it is important to distinguish between these causes during test. Chips with delays caused by process variations can still be used in applications with lower performance requirements, while delay faults caused by crosstalk lead to accelerated aging of the chip and must therefore be sorted out.
In order to identify fault locations susceptible to crosstalk and to parameterize the fault sizes, this work develops a tool based on data from previous layout syntheses.
The tool analyzes the sizes of the parasitic capacitances in a SPEF (Standard Parasitic Exchange Format) file and localizes them using a DEF (Design Exchange Format) file. Whenever a predefined critical value is exceeded, the position and size are reported.
It was shown that the majority of the coupling capacitances are ≤ 2 fF and that a maximum value of 6 fF is not exceeded. Furthermore, it could be shown that the number of coupling defects is proportional to the number of elements within a circuit.
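The core of such a threshold scan can be illustrated with a short sketch. The input format below is a simplified stand-in for real SPEF coupling-capacitance entries (a real *CAP section is more involved), and all net names and values are invented:

```python
def find_critical_couplings(spef_lines, threshold_ff):
    """Return (net_a, net_b, cap_fF) for coupling caps above threshold_ff."""
    critical = []
    for line in spef_lines:
        parts = line.split()
        # simplified entry: <index> <net_a> <net_b> <cap in fF>
        if len(parts) == 4:
            net_a, net_b, cap = parts[1], parts[2], float(parts[3])
            if cap > threshold_ff:
                critical.append((net_a, net_b, cap))
    return critical

example = [
    "1 n42:3 n17:1 0.8",
    "2 n42:4 n99:2 2.6",   # exceeds a 2 fF threshold
]
print(find_critical_couplings(example, 2.0))
```

In the actual tool, the reported net names would then be looked up in the DEF file to obtain the physical position of each critical coupling.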

Student: Alexander Coers

Supervisor: Dr. Somayeh Sadeghi-Kohan


In today's digital systems, interconnects play an important role and affect the performance, power consumption, and reliability of the circuit. Coupling elements in the interconnects cause glitch and delay faults and therefore functional errors. It has also been shown that crosstalk increases the current density of wires; it thus contributes to the reliability degradation of the wire and accelerates wire aging.

Analyzing crosstalk for wires in logic circuits is particularly challenging since all wires in a logic unit need to be considered. This is because all wires between logic gates experience crosstalk and hidden interconnect defects, which are normally not considered. One of the most powerful tools for modeling and simulating wires and logic elements is HSPICE. However, it suffers from very slow simulation times because it considers all details of the elements, which are usually not required in the first design steps. Therefore, higher-level models are required to accelerate the simulation for the first design exploration steps.

In this project, our goal is to extract a minimum set of useful information from HSPICE modeling to measure the current and delay of the wire elements, find the relation between them, and derive formulas to be used in high-level modeling. We need this information because a full HSPICE analysis, including this low-level information in a complex system, requires a lot of effort. To assess the effectiveness of our models, they will be tested by utilizing them in a high-level model simulator (C++), and the simulation time and the precision of the results will be compared with the results of the same system implemented in HSPICE.
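The planned formula extraction can be illustrated by a least-squares fit on (capacitance, delay) pairs of the kind one would obtain from HSPICE runs; all numbers here are made-up illustrations, not measured data:

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b from paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

caps   = [1.0, 2.0, 3.0, 4.0]       # coupling capacitance in fF (hypothetical)
delays = [12.0, 14.0, 16.0, 18.0]   # wire delay in ps (hypothetical, exactly linear)
a, b = fit_linear(caps, delays)
print(a, b)  # slope 2.0 ps/fF, intercept 10.0 ps
```

A high-level simulator can then evaluate such a fitted formula per wire instead of re-running a transistor-level simulation.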

Student: Melanie Jung
Supervisor: Jan Dennis Reimer

Acceleration of the PaderSAT ATPG using incremental instance generation.

Student: Melanie Jung
Supervisor: Jan Dennis Reimer

In this Bachelor thesis, the existing WaveSAT algorithm is to be extended by enabling test pattern generation for selected observation times.

Student: Jan van den Heuvel
Supervisor: Alexander Sprenger


Due to the increasing density of circuit structures in modern chip design, unintentional interactions between wires are increasing. This phenomenon, called crosstalk, can cause delays. Delays can also be induced by process variations (such as inconsistencies in the channel length of transistors). Whereas crosstalk leads to higher current consumption and, as a possible consequence, to an early failure of the device, this is not the case for process variations. Chips in which the only reason for delays is process variations can potentially be used in other, less demanding applications.
To distinguish the cause of delays, this work utilizes measurements at different operating points. An operating point is characterized by a certain temperature and supply voltage. On the basis of these measurements and by means of an artificial neural network, it is determined whether crosstalk occurs or not. In this study, fully connected and convolutional architectures are applied.
With regard to practical usage, it is also investigated which measurements could be excluded and how such a reduction of input data (feature reduction) impacts the reliability of the classifications.
This study was able to show that it is in principle possible to identify or exclude crosstalk as a source of delays by taking measurements at different operating points into account and using neural networks. Especially on small circuits such as the benchmark circuit s27, accuracies around 99% were reached. It turned out that fully connected architectures are particularly suitable for this task. Even after a massive feature reduction, the results were satisfying (in parts even improved).
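The classification idea can be illustrated with a toy linear classifier on synthetic data: measurements at several operating points form a feature vector, and a learned decision boundary separates "crosstalk" from "process variation". The thesis used fully connected and convolutional networks; the perceptron and all feature values below are illustrative assumptions only:

```python
def train_perceptron(samples, labels, epochs=50):
    """Classic perceptron training; labels are -1 / +1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # update only on misclassified (or boundary) samples
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# feature = (delay at operating point 1, delay at operating point 2);
# crosstalk-affected samples (+1) show larger delays (invented values)
samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.8)]
labels  = [-1, -1, 1, 1]
w, b = train_perceptron(samples, labels)
pred = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        for x in samples]
print(pred)
```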

Student: Konstantins Franckevics
Supervisor: Matthias Kampmann

The goal of this thesis is to analyze pseudo-random pattern generation for Faster-than-at-Speed Test (FAST). Several approaches from the literature should be implemented, compared and evaluated with respect to their usefulness for FAST.

Student: Manuel Boschmann
Supervisor: Alexander Sprenger


The Faster-than-At-Speed Test (FAST) examines the circuit under test for small delay faults. These small delay faults can indicate early life failures, which is why it is important, especially in application areas with high reliability requirements, that these faults are detected. One challenge is varying X-rates, which arise through the use of different observation times. Existing approaches for processing X-values are not capable of handling these varying X-rates efficiently. Therefore, the reconfigurable MISR introduced in this thesis was implemented and analyzed. The approach is to use a reconfigurable MISR and sort the scan chains in ascending order of their X-rates. During FAST, e.g. after half of the observation times, the reconfigurable MISR splits into several small MISRs. Since the scan chains are sorted in ascending order, the scan chains with the lowest X-rates run into the first MISR and the scan chains with the highest X-rates run into the last MISR. If only the first MISR is taken into account and the remaining MISRs are neglected, a high fault efficiency with a small intermediate signature storage should be achieved.
Since the characteristic polynomial of the reconfigurable MISR has to be adjusted when it splits, the effect of the characteristic polynomial on the fault efficiency and the intermediate signature storage size is examined. The simulation results have shown that the selection of the characteristic polynomial can increase the fault efficiency. In addition, it has been found that a high fault efficiency also requires much intermediate signature storage. In one example, the fault efficiency was increased by 6.7 % by changing the characteristic polynomial. However, the intermediate signature storage size increased by 4.5 %. The MISR size also strongly influences these observed quantities: the smaller the MISR, the higher the fault efficiency.
In the analysis of the reconfigurable MISR, it was found early on that many X-bits are associated with many D-bits. Therefore, the fault efficiency of the first MISR, into which the scan chains with low X-rates run, is too low, so the reconfigurable MISR must be considered as a whole. The fault efficiency of a reconfigurable MISR in which no single MISR is neglected is much higher than the fault efficiency of a non-reconfigurable MISR of the same size. This can be exploited to increase the fault efficiency during FAST by reusing the MISR of another test.
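The basic MISR operation underlying this approach can be sketched as follows. The 4-bit register and the characteristic polynomial x^4 + x + 1 are chosen for illustration only, not taken from the thesis:

```python
def misr_step(state, response_bits, taps=(1,)):
    """One MISR cycle: shift, feed the last bit back into the tap
    positions, and XOR in one slice of the test response in parallel."""
    fb = state[-1]
    nxt = [fb ^ response_bits[0]]
    for i in range(1, len(state)):
        bit = state[i - 1] ^ response_bits[i]
        if i in taps:          # tap positions of the characteristic polynomial
            bit ^= fb
        nxt.append(bit)
    return nxt

state = [0, 0, 0, 0]
for slice_ in ([1, 0, 1, 1], [1, 1, 0, 0]):   # two test response slices
    state = misr_step(state, slice_)
print(state)   # final signature
```

A reconfigurable MISR would additionally allow this register to be split into smaller independent MISRs (with adjusted tap sets) partway through the test.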

External Bachelor Thesis in cooperation with dSPACE GmbH, Rathenaustraße 26, 33102 Paderborn

Student: Fabian Winkel
Supervisor: Prof. Dr. Sybille Hellebrand


Autonomously driving vehicles will significantly shape the future of passenger transport. Within this Bachelor thesis, the problem of automatic distance control is investigated. For this purpose, a software component is developed that can determine the distance to other road users from the images of a stereo camera.

After the development of the image processing methods, they are ported to an embedded systems platform and tested for their suitability for highly automated driving with the help of a real-time capable PC. Since the platform has several CPUs and GPUs, the evaluation focuses in particular on comparing runtime and energy consumption on CPUs and GPUs.

Student: Thomas Mertens
Supervisor: Matthias Kampmann

Modern manufacturing processes and technologies allow circuit elements to be integrated ever smaller and more densely on chips. However, the associated advantages (lower power consumption, higher operating frequency) come at the price of these circuits becoming susceptible even to very small, natural parameter variations in the manufacturing process. In addition, the risk of suffering an early life failure (ELF) increases. One indicator for ELFs are small delay defects, which are modeled as small delay faults (SDFs). SDFs can be hidden, i.e. they cannot be detected at the operating frequency of the device. Therefore, Faster-than-at-Speed Test (FAST) is used to detect such hidden delay faults (HDFs). Essentially, the circuit is overclocked while the test is performed.

One challenge of FAST is that, when the circuit is overclocked, some outputs have not yet finished their computations at the observation time. In this case, the logic value of the output must be assumed to be unknown. This unknown logic value is also called an X-value. X-values can occur frequently under FAST, so configuring the scan chains with these X-values in mind is reasonable. The Computer Engineering (Datentechnik) research group has already developed approaches for scan chain configuration for FAST. In addition, new methods are being developed to take X-values into account when compacting test responses.


In this Bachelor thesis, an environment is to be developed in which the various components of the FAST flow are brought together and analyzed. In particular, it should be investigated how masking scan chains with many X-values affects the fault coverage and the required test time. In addition, the cost of the additional infrastructure required for masking the scan chains should be examined.


  • Familiarization with the state of the art of FAST
  • Implementation of a simulation environment for scan chain configurations
  • Computing the fault coverage based on the simulations
  • Computing additional test data with tools for Automatic Test Pattern Generation (ATPG)


  • Interest in working on a current research topic
  • Interest in test methods for highly integrated circuits
  • Knowledge of programming languages such as C++ (preferred) or Java


  • S. Hellebrand et al. FAST-BIST: Faster-than-at-Speed BIST targeting hidden delay defects. Proceedings of the IEEE International Test Conference (ITC), 2014. pp 1-8
  • A. Singh, C. Han, X. Qian. An output compression scheme for handling X-states from over-clocked delay tests. Proceedings of the IEEE VLSI Test Symposium (VTS), 2010. pp 57-62

Student: Roman Borisenko
Supervisor: Matthias Kampmann


Within this Bachelor thesis, a method for analyzing signatures is to be developed and implemented. For this purpose, a suitable Multiple-Input Signature Register (MISR) is first realized in C++. The test responses are divided into smaller vectors (slices). These are then shifted sequentially into the MISR, which generates the intermediate signatures for the respective Faster-than-at-Speed Test (FAST) groups. The difference vectors are then computed from the fault-free and faulty signatures of possible fault candidates. This information is sufficient to determine whether a suspected fault candidate is responsible for the observed signature.

Student: Viktor Tran
Supervisor: Matthias Kampmann


Advances in technology make it possible to manufacture integrated circuits with higher integration density, since the feature sizes of today's components are becoming smaller. However, due to the small feature sizes, inaccuracies in the manufacturing process can lead to parameter variations. Along with other effects such as crosstalk and noise in the supply voltage, these deviations affect the timing behavior of the circuit elements (gates) in the form of delay defects. These small delay defects (SDDs) are modeled as small delay faults (SDFs) and can indicate the early life failure (ELF) of a system. However, some SDFs cannot be detected when testing at the nominal operating frequency due to the circuit topology, because they lie on paths whose outputs become stable before the next clock edge. To detect these hidden delay faults (HDFs), the Faster-than-at-Speed Test (FAST) is used. Here, the test frequency is increased by overclocking and the observation time is artificially shifted. With FAST, even HDFs on short paths can be detected. The challenge of this method is that there are paths that have not yet finished their computations at the observation time. Their outputs therefore show an unknown logic value, called an X-value, even in the fault-free case. To detect all HDFs, different test frequencies are required. This leads to varying rates of X-values at the outputs. Such unknown logic values pose a problem for test response compaction. Although modern compaction methods can handle high X-rates, they are not able to process varying rates.
The goal of this work is to support the compactor design by rearranging the scan chains with a density-based clustering method (DBSCAN) in order to obtain scan chains that are as equal in length as possible. For the grouping, all scan cells are considered as points in a multi-dimensional space, where the X-rates at the respective frequencies represent the coordinates of the points. Using distance metrics, nearby points are then grouped together. The new arrangement makes it possible to block scan chains with high X-rates. The results show that the created scan chains perform well compared to a random arrangement: the generated scan chain configurations achieve up to three times the X-reduction of random arrangements. Even though the fault efficiency is slightly lower, the deviations are negligibly small.
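The clustering step can be illustrated with a minimal pure-Python DBSCAN on invented X-rate coordinates (the eps and min_pts values are illustrative, not from the thesis):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    neighbors = lambda i: [j for j in range(len(points))
                           if dist(points[i], points[j]) <= eps]
    labels = [None] * len(points)          # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise (may later become border)
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise becomes a border point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:       # core point: keep expanding
                queue.extend(more)
        cluster += 1
    return labels

# scan cells as points whose coordinates are X-rates at two test frequencies
cells = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
         (0.9, 0.9), (1.0, 0.9), (0.9, 1.0),
         (0.5, 0.0)]                       # isolated cell -> noise
print(dbscan(cells, eps=0.3, min_pts=3))
```

Cells that end up in the same cluster have similar X-rate profiles and can be placed into the same scan chain, so chains with high X-rates can be blocked as a whole.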

Student: Helmut Ngawa
Supervisor: Matthias Kampmann


The ever-growing pace of technological development makes it possible to integrate more and more components on a single integrated circuit (IC). However, due to miniaturization, this progress gives rise to new kinds of defects. Small delay faults (SDFs), which model small delay defects (SDDs), indicate a possible early life failure (ELF) of the system.

There are SDFs that cannot be detected at the nominal clock frequency of the system because they are too small or all paths have too much slack. They are called hidden delay faults (HDFs). To detect HDFs, the Faster-than-at-Speed Test (FAST) is used, i.e. the system is overclocked during the test. FAST is an efficient method to detect all HDFs. One challenge of FAST is that some outputs are in an unknown state at the observation time. These unknown logic values are called X-values. The number of X-values depends on the observation time. To continue working under FAST, the X-values are compacted in order to obtain data that is as X-free as possible. This makes FAST an expensive test method.

A naive test procedure applies the complete test pattern set for every observation time. However, this is very expensive. It would be sensible to select only a subset of the initial test pattern set for a given observation time such that all HDFs for the chosen observation time are still detected. This problem is NP-complete and thus hard to solve. Methods already exist that solve this problem without taking X-values into account. In this work, further methods are developed that solve this problem more cleverly and additionally take the number of resulting X-values into account.

Genetic algorithms (GAs), which are used in many fields including the automotive industry, are metaheuristic methods for solving very complex optimization problems. In this work, a GA is successfully implemented. The results of this GA are measured by three factors: the elapsed time to find the solution, the number of selected test patterns, and the number of resulting X-values. This GA can be controlled very precisely, so that with suitable parameter settings any one of these factors can be optimized as desired.

In most cases, the results confirm an improvement in runtime of approximately 75 %. Regarding the number of test patterns, the results come within about 2 % of the optimal solution, and regarding the number of X-values, the results are about 4 % better than the reference. An optimal hypergraph algorithm is used as the reference.
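The selection idea behind such a GA can be sketched on a synthetic instance. The coverage sets, X-counts, and GA parameters below are invented for illustration; this is not the thesis implementation:

```python
import random

# pattern -> (set of detected faults, number of X-values it produces)
PATTERNS = [({0, 1}, 5), ({1, 2}, 1), ({2, 3}, 2), ({0, 3}, 1), ({1, 3}, 4)]
ALL_FAULTS = {0, 1, 2, 3}

def repair(sel):
    """Greedily add patterns until every fault is covered (keeps GA feasible)."""
    covered = set().union(*[PATTERNS[i][0] for i in sel])
    for i, (faults, _) in enumerate(PATTERNS):
        if covered >= ALL_FAULTS:
            break
        if faults - covered:
            sel.add(i)
            covered |= faults
    return sel

def cost(sel):  # fewer patterns and fewer X-values is better
    return len(sel) + 0.1 * sum(PATTERNS[i][1] for i in sel)

def genetic_select(pop_size=8, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [repair({i for i in range(len(PATTERNS)) if rng.random() < 0.5})
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        pop = pop[:pop_size // 2]                            # selection
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:4], 2)
            child = {i for i in a | b if rng.random() < 0.7} # crossover
            if rng.random() < 0.3:                           # mutation
                child ^= {rng.randrange(len(PATTERNS))}
            pop.append(repair(child))
    return min(pop, key=cost)

best = genetic_select()
print(sorted(best))
```

The repair step guarantees that every individual remains a valid cover; the cost weighting corresponds to trading off pattern count against X-values, as described above.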

Student: Jan Dennis Reimer
Supervisor: Dr.-Ing. Thomas Indlekofer

Completed Master Theses

Student: Kai Arne Hannemann

Supervisor: Jan Dennis Reimer


The idea of using neural networks for the simulation of Gate-All-Around (GAA) transistors is an emerging research field that aims to overcome the limitations of traditional simulation methods for GAA transistors. GAA transistors are a promising technology for improving the performance of electronic devices, but their simulation is difficult due to the complex 3D geometry and the varying material properties of the transistors, in particular with regard to the gate capacitance, which is of special importance for this type of transistor. Traditionally, GAA transistors are simulated using SPICE simulations. However, SPICE simulations are infeasible in terms of runtime and are therefore unable to describe the timing behavior of multi-million-transistor netlists. Traditional switch-level simulations, in turn, cannot simulate the timing behavior of GAAs precisely. Therefore, new, more precise simulation techniques are needed that target the range between the switch level and the electrical level. This thesis therefore investigates the feasibility of new, efficient simulation techniques based on neural networks that can be run on GPUs in order to enable the simulation of GAA transistors.

Student: Amjad Alsati
Supervisor: Alexander Sprenger


Small delay faults (SDFs) are a significant problem in high-speed integrated circuits (ICs). Testing SDFs on long paths can easily be done at the nominal frequency of the circuit under test (CUT). SDFs residing on short paths are hidden delay faults (HDFs), since they are not detectable at the nominal frequency. Faster-than-at-speed test (FAST) targets HDFs by overclocking the CUT to a frequency higher than the nominal one. On the one hand, FAST minimizes the slacks of tested short paths and helps to detect HDFs. On the other hand, long paths produce unknown logic values, known as X-values, because their signals are observed before their arrival times. X-values obstruct the compaction of test pattern responses due to their dominance on the XOR gates of the compactors. As a result, faults detected before compaction become undetected after compaction. State-of-the-art compactors such as the stochastic space compactor (SSC) and the X-canceling MISR can tolerate these X-values. In this work, three pattern scheduling approaches are presented to study their effectiveness in supporting the SSC in detecting faults after compaction. A probability-based schedule (PBS) approach is introduced to increase the number of detected faults after stochastic space compaction. The PBS is compared to two simpler scheduling approaches, namely naive and covering schedules. The PBS achieves better fault efficiency than the naive and covering schedules while using significantly fewer FAST patterns than the naive one. The covering schedule is also compared to the naive schedule to demonstrate that scheduling is not a simple covering problem due to the randomness of the stochastic space compaction. Moreover, a pattern ordering optimization technique is presented to show the effectiveness of changing the order of the patterns in supporting the SSC.
The ordering optimization approach shows its effectiveness in increasing the number of detected faults after stochastic space compaction while also reducing the number of X-values on the outputs of the SSC. The case study of the ordering approach also investigates the effectiveness of pattern ordering when an X-canceling MISR is added after the SSC in the compactor chain. The study concludes that attaining a high fault efficiency and X-reduction ratio of the SSC will not always lead to a high fault efficiency when X-canceling MISR is added after the SSC. This suggests that pattern ordering should simultaneously be performed for both compactors.

Student: Viktor Tran
Supervisor: Jan Dennis Reimer

The goal of this work is to combine different SAT-based methods for automatic test pattern generation (ATPG) in order to create a Pareto-optimal tool. Small delay faults are the focus of this work. Possible ATPG optimization goals are

  • Minimizing unknown logic values (X-values) during simulation
  • Minimizing relevant (care) bits in the test pattern
  • Maximizing the sensitized path length for a delay fault
  • ...

With the help of weights, these optimization goals are to be traded off against each other in order to find the Pareto-optimal solutions.

Student: Emanuel Rheinert
Supervisor: Alexander Sprenger


It has often been claimed that artificial neural networks (ANNs) are inherently fault tolerant, but most research only considers high-level errors; for instance, random noise in signals or parameters. Little effort has been made to investigate the effect of low-level hardware faults. For this thesis, I have simulated gate-level stuck-at faults in a hardware implementation of ANNs, and measured their effect on the high-level functional performance. I can report that most faults are indeed tolerated.
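The kind of low-level fault injection described here can be sketched for an integer-quantized weight; the quantization and the toy neuron below are illustrative assumptions, not the thesis's hardware model:

```python
def inject_stuck_at(word, bit, value):
    """Return `word` with bit position `bit` forced to `value` (0 or 1),
    modeling a stuck-at fault on one wire of the weight register."""
    mask = 1 << bit
    return word | mask if value else word & ~mask

def neuron(weights, inputs):
    """Toy integer neuron: weighted sum without activation."""
    return sum(w * x for w, x in zip(weights, inputs))

weights = [0b0101, 0b0011]                                    # fault-free weights
faulty = [inject_stuck_at(weights[0], 1, 1)] + weights[1:]    # stuck-at-1 on bit 1
print(neuron(weights, [1, 1]), neuron(faulty, [1, 1]))
```

Comparing the outputs of the fault-free and faulty networks over a test set is then what decides whether a given gate-level fault is "tolerated" at the functional level.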

To reduce the hardware cost, the precise implementation can be replaced with approximate hardware, which introduces random noise in signals and parameters. I have found that the fault tolerance of such an approximate ANN is still present, but reduced.

To exploit this fault tolerance, I propose a test time reduction strategy: only test for faults which cause significant performance degradation. I can report that the test time for a precise hardware ANN can be reduced by more than 80 %, and by more than 40 % using approximate hardware.

Student: Yuan Zhang
Supervisor: Alexander Sprenger


The topic of this Master thesis is "Convolutional Compaction for Faster-than-At-Speed Test (FAST)". In this thesis, the compaction scheme "convolutional compaction" is applied to the Faster-than-At-Speed Test. For this purpose, the compaction scheme has to be implemented, simulated, investigated and adapted.

Student: Jan Dennis Reimer
Supervisor: Matthias Kampmann

The goal of this thesis is to extend a known SAT-based algorithm for timing-accurate test pattern generation with variable fault sizes. The original method uses multiple literals to model a signal waveform in discrete time steps. However, the effect of a small delay fault can only be injected for a single fixed fault size so far.

This extension should be applied to the use case of circuit diagnosis in which faulty responses are analyzed to determine possible causes (small delay faults with arbitrary size). This allows to possibly determine a minimum fault size that created the faulty behavior as well as an improved diagnostic resolution.

Student: Helmut Ngawa
Supervisor: Alexander Sprenger


Many automobile manufacturers are increasingly working in cooperation with research institutions to make the dream of fully self-driving vehicles possible. There are still many aspects to explore along the way. Among them, special hardware solutions are needed to develop functions for autonomous driving. As a competent provider of hardware and software solutions for the development and testing of electronic control units, dSPACE wants to provide dedicated prototyping systems for the development of autonomous driving functions and for data acquisition.

One system developed by dSPACE for this purpose is the eSPU 2 (embedded Sensor Processor Unit 2), which includes, among other components, an LTE interface. As with any wireless technology, error-free data transmission over LTE cannot always be guaranteed. For example, the data communication can be affected by other nearby radio modules. Therefore, it is advisable to test every device with integrated wireless technology, even if the isolated wireless module has already been tested by the chip manufacturer. The aim of this work is to develop and implement a test concept for the LTE interface of the eSPU 2.

The implemented test concept is divided into two parts. The first part is the product test during product development, which validates the correct integration of the module in the system by measuring transmit power, occupied channel bandwidth and adjacent-channel power. The second part is the production test, in which the operability of the LTE module in the system is verified. The test results validate the module integration in the system, since the tests did not detect any unexpected behavior of the LTE modules, and all measurements were consistent. The production test was also successfully integrated into the dSPACE tool PTFE (Produktion Test Frontend) and is available in dSPACE production.


This work was done in collaboration with dSPACE GmbH.

Student: Mehak Aftab
Supervisor: Matthias Kampmann



Faster-than-at-Speed Test (FAST) is an approach to detect small delay faults, which can indicate an early life failure of a system. During FAST, the circuit under test is overclocked, which causes more unknown logic values (X-values) to appear in the test responses, since some outputs have not finished their calculations at the target observation time. These X-values are a challenge for test response compaction, hence it makes sense to reduce them as much as possible.

One solution to reduce the number of X-values lies in selecting specific test patterns tuned to the target observation time. These patterns should produce only a few X-values. There are already some methods to select patterns out of a base set, which are based on greedy and genetic algorithms.

Solution approach:

In this thesis, the method to select test patterns should be extended to further optimize the reduction of X-values. For instance, one could identify special "essential" patterns, which produce a lot of X-values, and replace them with newly generated patterns (using a commercial pattern generator) optimized for FAST. For a Bachelor's thesis, it is sufficient to check standard approaches (e.g. n-detect). For a Master's thesis, the essential patterns need to be analyzed further, such that the ATPG tool can be guided towards optimized patterns, e.g. by generating special constraints.

Solution aspects:

  • Literature survey of the state of the art of FAST
  • Find essential patterns which require replacement
  • Generate new test patterns with a commercial tool
  • Evaluate the method by means of simulation


  • S. Hellebrand, T. Indlekofer, M. Kampmann, M. A. Kochte, C. Liu and H.-J. Wunderlich. "FAST-BIST: Faster-than-At-Speed BIST Targeting Hidden Delay Defects." Proceedings of the IEEE International Test Conference (ITC), Seattle, USA, October 2014, pp. 1-8
  • M. Kampmann, M. A. Kochte, E. Schneider, T. Indlekofer, S. Hellebrand and H.-J. Wunderlich. "Optimized Selection of Frequencies for Faster-than-at-Speed Test." Proceedings of the 24th IEEE Asian Test Symposium (ATS), Mumbai, India, November 2015, pp. 109-114


Student: Moritz Schniedermann
Supervisor: Matthias Kampmann

In this thesis, an algorithm for Automatic Test Pattern Generation (ATPG) for transition faults in digital circuits is to be developed. The algorithm should be based on the theory of Boolean satisfiability (SAT). Several efficient SAT-based ATPG tools exist in the literature.


  • Converting a circuit into CNF form for SAT solvers (Tseitin transformation)
  • Analysis and selection of available SAT solvers (e.g. open-source solvers such as MiniSAT)
  • Extending the solver with the ability to detect transition faults
  • Evaluation of the algorithm through simulations and comparison with the literature
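The first work item, the Tseitin transformation, can be illustrated for a single AND gate; a circuit's CNF is the union of such per-gate clause sets. The DIMACS-style encoding (integer variables, negative literal = negation) is an assumption for illustration:

```python
from itertools import product

def tseitin_and(out, a, b):
    """Clauses encoding out <-> (a AND b) in DIMACS-style literals."""
    return [[-out, a], [-out, b], [out, -a, -b]]

def satisfies(clauses, assign):
    """Check a full assignment (var -> bool) against a clause list."""
    return all(any(assign[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

# exhaustive check: the clauses hold exactly when out == a AND b
a_var, b_var, out_var = 1, 2, 3
ok = all(
    satisfies(tseitin_and(out_var, a_var, b_var),
              {1: a, 2: b, 3: o}) == (o == (a and b))
    for a, b, o in product([False, True], repeat=3)
)
print(ok)
```

OR and NOT gates get analogous clause sets, so the whole circuit's CNF grows only linearly with the number of gates.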


  • Good C++ skills
  • Knowledge in the area of testing highly integrated circuits
  • Knowledge of Boolean theory / Boolean satisfiability is an advantage (but not a prerequisite)


  • S. Eggersglüß and R. Drechsler, "High Quality Test Pattern Generation and Boolean Satisfiability", Springer, New York, 2012

Student: Ratna Kumar Gari
Supervisor: Matthias Kampmann

State-of-the-art manufacturing processes and technologies allow for much tighter integration densities on chips. This has the advantages of reduced power dissipation and increased operating frequencies, but the drawback is that chips become very sensitive even to small, natural process variations. Furthermore, Early Life Failures (ELFs) are becoming a dominant problem in applications with high reliability requirements. One indicator for ELFs is the Small Delay Fault (SDF). These faults can be hidden when the test is performed at-speed. To overcome this problem, Faster-than-At-Speed Test (FAST) was introduced. Essentially, in FAST the test is performed while overclocking the chip.

FAST can also be implemented as a Built-in Self-Test (BIST). The Computer Engineering research group has published several conference papers about FAST-BIST in cooperation with the University of Stuttgart. However, all these approaches use deterministic test patterns, which need to be stored in an on-chip memory. For BIST, an attractive method of generating test patterns with low hardware overhead is to use a Linear Feedback Shift Register (LFSR). It produces a stream of pseudo-random test patterns which are applied to the chip. In industry, this practice is commonly called Logic Built-in Self-Test (LBIST).
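The pseudo-random pattern stream an LFSR produces can be sketched as follows. This is a hypothetical 4-bit Fibonacci LFSR in Python (a real LBIST generator would be much wider); the tap positions correspond to a maximum-length feedback polynomial, so all 15 non-zero states are visited before the sequence repeats:

```python
# Sketch of a 4-bit Fibonacci LFSR with maximum-length feedback
# (taps chosen so the sequence cycles through all 2^4 - 1 non-zero states).

def lfsr_stream(seed, taps, nbits, count):
    """Yield `count` successive LFSR states as nbits-wide integers."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:                      # XOR of the tapped state bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << nbits) - 1)

states = list(lfsr_stream(seed=0b1000, taps=(3, 2), nbits=4, count=15))
print(len(set(states)))  # -> 15: every non-zero state appears exactly once
```

Each state would be applied to the circuit as one test pattern; finding frequency-compatible sequences inside such a stream is the core task described above.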

Problem description:

In this master’s thesis, the usability of LFSRs should be analyzed with respect to FAST. The challenge here is to find maximum-length sequences in the stream of pseudo-random patterns such that each sequence can be applied to the chip at a single test frequency. Ultimately, the fault coverage should be maximized with this technique while at the same time reducing the required hardware overhead for FAST-BIST.

Key aspects:

  • Analysis of deterministic test patterns to find "strong" patterns
  • Analysis of LFSR output streams
  • Implementation of an algorithm to find sequences in the stream
  • Experimental case study to validate the results


  • Knowledge about VLSI testing, BIST and especially LFSRs
  • Programming skills, preferably in C++
  • Motivation to work on a current research topic


  • S. Hellebrand et al. "FAST-BIST: Faster-than-at-Speed BIST Targeting Hidden Delay Defects." Proceedings of the IEEE International Test Conference (ITC), 2014, pp. 1-8
  • M. Kampmann et al. "Optimized Selection of Frequencies for Faster-than-at-Speed Test." Proceedings of the IEEE Asian Test Symposium (ATS), 2015, pp. 109-114

Student: Sunil Kumar Veerappa
Supervisor: Alexander Sprenger


The error rate of microchips is very high during their early lifetime. Indicators for such early life failures (ELFs) are small delay faults (SDFs). Depending on the size of the delay, SDFs are not detectable at normal speed: a fault may be present, yet the signal is stable at the nominal observation time and the fault therefore goes undetected. Such faults are called hidden delay faults (HDFs).

One possibility to detect HDFs is to overclock the microchip. However, if the chip is overclocked, some outputs have to be considered as unknown, because they have not yet stabilized at the new observation time. These unknown values are also called X-values.

To distinguish between faulty and fault-free chips, the test response of a chip has to be compared with a correct "golden" test response. This is done either using automatic test equipment (ATE) or directly on chip if a Built-In Self-Test (BIST) is used. To reduce the amount of data, compaction schemes such as the so-called X-Canceling MISR are used.
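The idea behind the X-Canceling MISR can be sketched as follows: a MISR is linear over GF(2), so every signature bit is an XOR of response bits. Modelling each bit as a set of symbols (XOR becomes the symmetric difference of sets) shows how XOR-combining signature bits can cancel a shared unknown. The MISR width, tap positions, and response symbols below are made up for illustration:

```python
# Simplified symbolic model of a 4-bit MISR. Each bit is a frozenset of
# symbols ('r3' = known response bit, 'x0' = unknown); XOR over GF(2)
# corresponds to the symmetric difference of the symbol sets.

def misr_cycle(sig, inp, taps=(0, 3)):
    """One clock of the MISR: stage i gets stage i-1 XOR its input bit,
    and the tap stages additionally get the feedback from the last stage."""
    fb = sig[-1]
    new = []
    for i in range(4):
        prev = sig[i - 1] if i > 0 else frozenset()
        bit = prev ^ inp[i]          # symmetric difference == XOR
        if i in taps:
            bit = bit ^ fb
        new.append(bit)
    return new

slices = [                            # four scan-out slices; 'x0' is unknown
    ['r0', 'x0', 'r2', 'r3'],
    ['r4', 'r5', 'r6', 'r7'],
    ['r8', 'r9', 'r10', 'r11'],
    ['r12', 'r13', 'r14', 'r15'],
]
sig = [frozenset()] * 4
for s in slices:
    sig = misr_cycle(sig, [frozenset([b]) for b in s])

print([('x0' in bit) for bit in sig])   # -> [True, False, False, True]
print('x0' in (sig[0] ^ sig[3]))        # -> False: the shared X cancels out
```

The X-Canceling MISR exploits exactly this: it selects GF(2)-linear combinations of signature bits in which all X contributions cancel, leaving X-free values that can be compared against the golden response.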

Job description

In this thesis, the locality of X-values should be exploited. To this end, the so-called X-Canceling MISR should be split into several smaller X-Canceling MISRs, and the effect on the X-reduction and the fault coverage should be investigated. Additionally, the hardware overhead in relation to common compaction schemes should be analyzed.


  • Get familiar with state of the art X-tolerant compaction schemes
  • Design and implement an X-tolerant compaction scheme for FAST
  • Evaluate your scheme with simulations


  • Interest in working on a topical issue
  • Interest in VLSI testing
  • Basic knowledge in programming languages, such as C, C++ or Java
  • Interest in compaction schemes

Student: Mohammad Urf Maaz
Supervisor: Alexander Sprenger


Recent advances in the fabrication and production of devices have also introduced smaller delay defect sizes. Hidden delay defects are of particular interest, as they are not large enough to cause timing failures under normal conditions. Small delay defects would not be of much concern by themselves, since they do not introduce significant delays at the normal operating frequency. However, hidden delay defects are indicative of imperfections in the device. These imperfections can lead to early life failures, and the detection of such defects is therefore crucial.

A solution to this is Faster-than-At-Speed Test (FAST), in which the test is run at significantly higher frequencies than in normal operation. At these high frequencies, the delay defects are detected. However, testing at higher frequencies comes with its own complications: since the test responses are captured at significantly higher frequencies, many intermediate values arrive, particularly from longer paths. This means that the number of unknowns (X's) increases in FAST. These X's appear in the test responses and present a series of issues when evaluating the results.

There are several techniques to handle these unknowns so that the test results can still be evaluated effectively. A few researchers have proposed modifying the circuit under test (CUT), but that is not very practical for general use. More efficient approaches therefore target not the elimination of X's from the circuit, but the handling of X's in the outputs. Bawa et al. present X-Canceling MISR techniques with partial masking in X-chains as an effective approach for removing the effect of several unknown values while losing only a few of the known values. The X-Canceling MISR proposed earlier by Touba is effective for small X densities, while Bawa's improved approach with partial masking handles higher densities more effectively. As a result, fewer test vectors and better compression of the test data are obtained. However, this approach is still not FAST-ready.

Rajski et al. present convolutional compaction of test responses as an approach to handling X's. Datta and Touba present an X-stacking method to reduce the cost of handling unknown values in test responses. There are several other methods, both similar to and different from the aforementioned techniques. This thesis plans extensive research on the mentioned approaches for the reduction of X's in output responses for FAST. In the first stage of the thesis, contemporary approaches will be studied and investigated; this literature review will also involve building an in-depth understanding of FAST. This will be followed by implementing a few of the suitable techniques and comparing their results. The study will provide valuable insight into the strengths of each method and its suitability for the high X density in FAST responses. Each important study must be followed by an implementation in the existing framework using C++ and a comparison with the other contemporary methods. Finally, the thesis documentation will be compiled and presented.

Evaluation Methodology

In order to test and verify the implemented techniques, some parameters need to be set and investigated. Since the approaches differ significantly from each other, a common evaluation basis must be established.

The first, most obvious aspect is the pure reduction or removal of unknowns. The implemented work will serve as a block that receives outputs with a high percentage of X's and reduces them, creating an output stream that contains a tolerable number of X's. This output stream may be fed to an X-Canceling MISR, which works efficiently with a low number of X's. The X-reduction must then be measured across the implemented block, i.e. from the number of X's in the incoming stream to the reduced number of X's in the stream fed to the X-Canceling MISR.
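This X-reduction metric could be computed, for instance, as follows (a hypothetical helper, not part of any of the cited schemes; 'X' marks an unknown bit in a response stream):

```python
# Hypothetical metric: fraction of X's removed by the masking/reduction block,
# measured between its input stream and its output stream.

def x_reduction(stream_in, stream_out, x='X'):
    """Return the fraction of unknowns removed (0.0 if the input has none)."""
    xs_in = stream_in.count(x)
    xs_out = stream_out.count(x)
    return (xs_in - xs_out) / xs_in if xs_in else 0.0

# 4 unknowns enter the block, 1 survives into the stream fed to the MISR.
print(x_reduction("01X0XX10X1", "01X0001001"))  # -> 0.75
```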

A similar aspect is the fault coverage achieved through the masking or reduction procedure. A high fault coverage is the ultimate goal, and evaluating the comparative fault coverage of the implemented schemes therefore serves as a good basis for evaluation.

Another important aspect is the hardware overhead dedicated to the implemented scheme. While the scheme must be effective in itself, it must also be feasible. For example, one of the evaluation parameters in the partial masking technique proposed by Bawa was the total number of control bits required; in their approach, the number of X-chains was also varied in order to see its effect on the number of control bits required. In the X-stacking approach by Datta and Touba, additional test vectors were required to achieve 100% coverage, and their relative percentage increase was investigated. Similarly, for other approaches, a feasible hardware overhead is beneficial and can therefore serve as an additional evaluation parameter.

The implemented schemes shall first be tested on simpler circuits before moving to industrial boards, if possible. Varying the test frequency can also give a good idea of the optimum frequency range, i.e. where the maximum number of faults is covered with the minimum number of X's produced. These are some of the potential evaluation aspects that can be used to compare the contemporary schemes.

Student: Rohan Narkhede
Supervisor: Rüdiger Ibers

Even an error-free design and the verification of a circuit design cannot guarantee that a manufactured circuit will actually work afterwards. Faults also occur during production, e.g. due to dust particles causing short circuits, or due to misaligned masks. Not all of these faults are immediately apparent after production and detectable with appropriate tests. Narrowed wires or insulation layers, for example, must first be driven to breakdown by a burn-in before the faults become detectable. Since ever smaller circuit structures make an effective burn-in more difficult, fault-tolerant circuits can be considered as an alternative. They can not only compensate for some production faults, but also detect and compensate for circuit aging to a certain degree.

The goal of this thesis is to develop, in VHDL, an arithmetic logic unit (ALU) for a processor that is able to tolerate faults through temporal, algorithmic, or hardware redundancy, without excessively increasing the size of the circuit.
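As an illustration of the hardware-redundancy option, a triple modular redundancy (TMR) voter can mask a fault in one of three ALU copies. The following is a behavioral sketch in Python rather than VHDL; the 8-bit adder model and the injected fault are purely illustrative:

```python
# Sketch of triple modular redundancy (TMR): three copies of an ALU operation
# vote bitwise, so a single faulty copy is out-voted by the two healthy ones.

def majority(a, b, c):
    """Bitwise majority vote of three equal-width words."""
    return (a & b) | (a & c) | (b & c)

def alu_add(x, y):
    return (x + y) & 0xFF          # simple 8-bit adder model

r1 = alu_add(0x3A, 0x05)           # healthy copy
r2 = alu_add(0x3A, 0x05)           # healthy copy
r3 = alu_add(0x3A, 0x05) ^ 0x10    # copy with one flipped (faulty) bit
print(hex(majority(r1, r2, r3)))   # -> 0x3f: the fault is masked
```

In the VHDL design, the same majority function would be a small combinational voter placed behind the three ALU instances; the area cost of full TMR is the main reason the thesis also considers temporal and algorithmic redundancy.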


  • Knowledge of VHDL
  • Knowledge of C++

Student: Muhammad Asim Zahid
Supervisor: Prof. Dr. Sybille Hellebrand