Chris J. Myers

Design of Asynchronous Genetic Circuits

University of Colorado Boulder

Bio: Chris J. Myers received a BS in EE and Chinese history from Caltech, and MSEE and PhD degrees from Stanford. Before becoming Chair of ECEE at CU Boulder in 2020, he was a professor and associate chair in ECE at the University of Utah. Myers is the author of over 200 technical papers and the textbooks Asynchronous Circuit Design and Engineering Genetic Circuits. He is also a co-inventor on four patents. His research interests include asynchronous design, formal verification, and genetic circuit design. Myers received an NSF Fellowship in 1991, an NSF CAREER award in 1996, and best paper awards at the 1999 and 2007 IEEE International Symposia on Asynchronous Circuits and Systems, and he is a fellow of the IEEE. He is a leader in the development of standards for systems and synthetic biology. In particular, he has served as an editor for the SBML standard, chair of the steering committee for the SBOL standard, and chair of the COMBINE coordination board.

Abstract: Researchers are now able to engineer synthetic genetic circuits for a range of applications in the environmental, medical, and energy domains. Crucial to the success of these efforts is the development of methods and tools for genetic design automation (GDA). While inspiration can be drawn from experiences with electronic design automation (EDA), design with genetic material poses several challenges. In particular, genetic circuits are composed of very noisy components, making their behavior more asynchronous, analog, and stochastic in nature. This talk presents our research in the development of GDA tools that leverage our past experiences in asynchronous circuit synthesis and formal verification. These tools enable synthetic biologists to construct models, efficiently analyze and visualize them, and synthesize a genetic circuit from a library of parts. Each step of this design process utilizes standard data representation formats, enabling the ready exchange of results.

Matthias Pflanz

Challenges for Highly Reliable High-Performance Processor Development

IBM Research & Development, Boeblingen/ Germany


Master's degree in Electrical Engineering from Brandenburgische Technische Universität Cottbus-Senftenberg
1996 – 2001
Graduate Research and Teaching Assistant at the Dept. of Computer Science (Prof. Vierhaus)
Research in Reliable Computer Systems & Hardware, Hardware Error Detection & Correction, Dependable Computing
PhD in Computer Engineering, degree „summa cum laude”. Doctoral thesis available as a printed book from Springer: „On-line Error Detection and Fast Recover Techniques for Dependable Embedded Processors” (ISSN 0302-9743)
2001 – now
IBM Research & Development Lab Boeblingen (BW), Germany
Logic Design and Design for Test: PlayStation 3 gaming processor (Sony-Toshiba-IBM)
Logic Design and Manager for various System Z and System P high-end processor units and VLSI test
Chip RAS development for the Power (P10) processor, Chip RAS lead for P-Future
Chip RAS lead for the System Z processor in 5nm and 3nm technology

Abstract: More than two thirds of Fortune 100 companies and other large enterprises use IBM Z and IBM Power systems as the backbone for their hybrid cloud infrastructure. The current core pieces of these systems running critical workloads are the Telum processor for Z-systems and the Power10 processor for P-systems. In addition to the best possible performance and high energy efficiency (performance per watt), it is crucial for these processors to offer industry-leading availability and uncompromising reliability. In particular, the high reliability of processor functions is becoming more and more of a challenge for developers due to the increasing number of transistors and the advanced scaling of transistor feature sizes down to 7nm and below.

This talk will outline how IBM uses pre-silicon evaluation methods to develop its processor hardware to be robust and fail-safe against cosmic radiation (soft errors) as well as possible permanent defects (hard errors).

Close collaboration between research and development is the basis for innovative methods and their implementation. The modeling of the latest technologies (7nm and below), the simulation of error probabilities (hard and soft), and the (event- and rule-based) verification of RAS structures are essential elements in the development of an error-robust and reliable processor design.

A call to the academic community to focus research on future challenges in developing highly efficient and highly reliable chips in nanosheet technology (3 nm and below) will conclude this talk.

Tom Waayers

DfT for Achieving 0 DPPB: Are We There Yet?

NXP Semiconductors

Bio: Tom Waayers is part of the central Design Enablement team at NXP Semiconductors. NXP is a major player in the semiconductor industry with a wide range of products; among others, NXP is a leading provider of solutions for the automotive industry, including microcontrollers, power management ICs, and secure in-vehicle communication. Tom received the MSc degree in Electrical Engineering from Eindhoven University of Technology. In 1995 he started working on Design for Test methodology at Philips Research Laboratories. He contributed to IEEE Std 1500 and is co-author of The Core Test Wrapper Handbook (Springer). Tom is an NXP notable inventor with 17 US patents and has presented multiple conference papers on test. Since 2014 he has been leading DfT and Test innovation and design automation at NXP Semiconductors.

Abstract: Achieving 0 dppb (defective parts per billion) in testing and manufacturing requires a very high level of precision and accuracy and is a challenging goal. It demands a thorough understanding of the product and its design, as well as of the manufacturing process and any potential sources of variability or error. To achieve 0 dppb, it is necessary to implement advanced testing and measurement techniques, such as specialized instrumentation and calibration, to ensure that the product is consistent and meets the required specifications. It also requires careful process control and monitoring, along with rigorous quality assurance and control measures to identify and address any issues that may arise during manufacturing. Consequently, achieving 0 dppb requires a robust design-for-test (DfT) strategy to ensure that the product can be easily and accurately tested and debugged. This involves designing testability features into the product, as well as providing test equipment and procedures to accurately measure and evaluate the product's performance. The required high levels of precision and quality come at a cost. The cost of quality is not limited to the cost we incur to produce a certain quality level; it requires a mindset of producing quality, in which DfT is applied both to increase quality and to prevent and further reduce costs across the product life cycle. This talk will introduce the evolution of DfT to address the challenges of achieving 0 DPPB. It will discuss the cost of quality and the role of DfT in cost prevention. It will address the possible impact of functional safety and security features, as well as further market demands such as speed and flexibility in design integration. The talk is unlikely to answer the overall question, but the listener may come to understand where we are, and where we may need to go…
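To put the 0 dppb target in perspective, a short back-of-the-envelope calculation (illustrative only, not from the talk; the volume figures below are assumptions chosen for the example) shows how quickly even single-digit dppb escape rates translate into expected field failures at automotive volumes:

```python
def expected_escapes(dppb: float, parts_shipped: int) -> float:
    """Expected number of defective parts reaching the field,
    given a defect rate in defective parts per billion (dppb)."""
    return dppb * parts_shipped / 1e9

# Assumed example: a platform with 1000 chips per vehicle,
# shipped in 10 million vehicles -> 10 billion chips in the field.
chips = 1000 * 10_000_000

print(expected_escapes(10, chips))  # 10 dppb -> 100.0 expected escapes
print(expected_escapes(1, chips))   # 1 dppb  -> 10.0 expected escapes
```

At these volumes, even a 1 dppb escape rate still implies roughly ten defective parts in customers' hands, which is why the goal of truly zero dppb drives such demanding DfT and quality-control requirements.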