KEYNOTE PRESENTATIONS

First Day Key Talks

March 24, 2026

Dr. Mark M. Tehranipoor

Opening Keynote: The Future of Semiconductor Innovation: GenAI in Design and Security

Fellow of IEEE, ACM, NAI, AAIA, AIIA

Intel Charles E. Young Preeminence Endowed Chair Professor in Cybersecurity

Co-founder: IEEE HOST, IEEE AsianHOST, and IEEE PAINE

Director: ECI Transition Center

Co-director: CYAN and MEST Centers

Chair, Department of Electrical and Computer Engineering

University of Florida, Gainesville, Florida

Date: March 24, 2026

Abstract: Designing functionally correct, high-performance, and provably secure system-on-chips (SoCs) has become a strategic imperative for modern computing infrastructure. However, escalating complexity, heterogeneous integration, and evolving security threats are pushing traditional design and verification methodologies beyond their practical limits. The emergence of large language models (LLMs) offers a transformative opportunity for SoC automation. Beyond code generation, LLMs enable architectural reasoning, specification refinement, vulnerability analysis, and design-space exploration. Yet chip design is inherently multidisciplinary and iterative, requiring more than a single monolithic model. An agentic paradigm, in which specialized AI agents collaborate within a coordinated framework, enables modular reasoning, cross-layer verification, and adaptive security validation across the SoC lifecycle. This keynote introduces a multi-agent intelligent assistant system designed to automate and augment SoC design and security verification. By integrating synthesis, threat modeling, formal reasoning, runtime monitoring, and hardware-software co-verification, this framework moves us toward self-optimizing, security-aware, and continuously verified silicon, redefining how next-generation microelectronic systems are conceived, built, and trusted.

Bio: Mark M. Tehranipoor is the Intel Charles E. Young Preeminence Endowed Chair Professor, the Sachio Semmoto Chair, and a Distinguished Professor in the Department of Electrical and Computer Engineering (ECE) at the University of Florida. His research interests include GenAI security, hardware security and trust, supply chain security, IoT security, and VLSI design, test, and reliability. He has 50+ patents issued/pending, 20 books, and 700+ conference/journal publications. He is a recipient of 25 best paper awards and nominations, as well as the 2008 IEEE Computer Society (CS) Meritorious Service Award, the 2012 IEEE CS Outstanding Contribution Award, the 2009 NSF CAREER Award, the 2014 AFOSR MURI Award, the 2022 IEEE CS TTTC Bob Madge Innovation Award, and the 2026 IEEE CS Taylor Booth Education Award. He received the 2020 University of Florida Innovation of the Year award as well as teacher/scholar of the year awards. He co-founded the IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), IEEE AsianHOST, and the IEEE International Conference on Physical Assurance and Inspection of Electronics (PAINE). He serves on the program committees of more than a dozen leading conferences and workshops, and has served as Program and General Chair of a number of IEEE and ACM sponsored conferences and workshops. He is a founding co-Editor-in-Chief of the Journal on Hardware and Systems Security (HaSS) and has served as Associate Editor for TC, JETTA, JOLPE, TODAES, IEEE D&T, and TVLSI. He was the founding director of the Florida Institute for Cybersecurity Research (FICS) and a number of other centers focused on microelectronics security. Dr. Tehranipoor is the recipient of the Semiconductor Research Corporation (SRC) Aristotle Award, a Fellow of the IEEE, the ACM, the National Academy of Inventors (NAI), AAIA, AIIA, and ASEMFL, a Golden Core Member of the IEEE Computer Society, and a Member of ACM SIGDA.

Dr. Selçuk Köse

A Keynote Talk
Title: Trusting the Quantum Computer: Security Threats from Side-channel Leakage to PUFs and Trojans

Professor- Department of Electrical and Computer Engineering

University of Rochester, Rochester, NY

Date: March 24, 2026

Bio: Dr. Selçuk Köse received his PhD in Electrical and Computer Engineering from the University of Rochester in 2012. After spending nearly seven years at the University of South Florida (USF), he joined the Department of Electrical and Computer Engineering at the University of Rochester, where he is currently a Professor. He previously worked at TUBITAK, NXP Semiconductors, Intel Corporation, and Eastman Kodak. Dr. Köse is a recipient of the NSF CAREER Award (2014), the USF College of Engineering Outstanding Junior Research Achievement Award (2014), the USF Outstanding Faculty Award (2016), Cisco Research Awards (2015, 2016, and 2017), and the USF Outstanding Research Achievement Award (2017). His research interests include VLSI circuit design, hardware security, cryogenic electronics, and quantum computing. His research has been funded by NSF, DARPA, the Department of Energy, SRC, Cisco, Intel, and TSMC.

Abstract: Superconducting digital electronics provide the classical control and readout infrastructure for many quantum computing platforms, operating in close proximity to superconducting qubits at cryogenic temperatures. This talk will begin with an overview of superconducting digital logic and the physical realization of superconducting qubits, establishing the hardware foundations of contemporary quantum computing systems. After reviewing key operating principles, the focus will shift to the interface circuits that connect superconducting digital controllers to qubit and readout subsystems. These interface circuits define critical security boundaries by translating ultrafast superconducting signals into accessible electrical quantities. The talk will examine side-channel leakage mechanisms arising from superconducting digital control and interface circuitry, followed by a discussion of hardware Trojans that may be embedded within the digital control and readout stack. Recent work on physical unclonable functions (PUFs) implemented in superconducting digital control circuits will also be presented as a hardware-rooted approach to device identification and authentication. Looking ahead, this talk argues that hardware trust must be treated as a fundamental design requirement in superconducting quantum computing systems. As these platforms scale, security considerations will increasingly influence the co-design of qubits, superconducting digital logic, and interface circuits, shaping how reliable and trustworthy quantum computers are ultimately built.

Dr. Matthew Areno

CTO of Rickert-Areno Engineering, LLC

Co-Chair, Midwest Microelectronics Consortium (MMEC)

Date: March 24, 2026

Bio: Dr. Matthew Areno is the CEO and co-owner of Rickert-Areno Engineering and Consulting. Dr. Areno completed his Bachelor's and Master's degrees at Utah State University in 2007 and took a position with Sandia National Laboratories. At Sandia, he focused on vulnerability assessment and reverse engineering of embedded systems, primarily those utilizing ARM-core processors. During this time, he also completed his PhD at the University of New Mexico, with dissertation work on strengthening embedded system security through the use of PUF-enhanced cryptographic units. In 2013, Dr. Areno took a position with Raytheon Cyber Security Innovations in Austin, TX, where he served as Chief Architect for a number of anti-tamper solutions, with specific expertise in establishing trust in COTS equipment. In 2019, he joined Intel, where he served as a Senior Principal Engineer and held roles including Senior Director of Security Assurance and Cryptography, Chief Security Architect, and Anti-Tamper Lead. Dr. Areno serves on the Board of Advisors for the Augusta University School of Computer and Cyber Sciences, as co-chair of the Secure Edge Working Group under the Midwest ME-Commons Consortium, and on the Editorial Board of the Journal of Hardware and Systems Security. And if you're still awake at this point, Geaux Tigers!

Dr. Farimah Farahmandi

A Keynote Talk
Title: Agent-Based Security Verification of SoCs

Wally Rhines Endowed Professor of Hardware Security

Associate Director, Florida Institute for Cybersecurity

Department of Electrical and Computer Engineering

University of Florida, Gainesville, Florida

Date: March 24, 2026

Abstract: As modern system-on-chip (SoC) designs grow increasingly complex, ensuring security throughout the silicon development lifecycle has become a critical yet challenging task. Traditional verification techniques often lack security awareness and remain time-consuming, costly, and prone to human error, necessitating a shift toward automation. This talk explores AI-driven security verification as a transformative approach, leveraging machine learning (ML) and generative AI to automate vulnerability detection, enhance formal verification, and strengthen threat modeling. By integrating AI into security workflows, engineers can significantly reduce development costs while improving the accuracy and efficiency of security validation. The discussion will also explore the future outlook of AI-driven security solutions, offering practical strategies for engineers and practitioners to reinforce hardware security at various design and verification stages. Additionally, the talk will highlight open research challenges and opportunities for academics, paving the way for future advancements in AI-powered security verification.

Bio: Dr. Farimah Farahmandi is the Wally Rhines Endowed Professor in Hardware Security in the ECE Department at the University of Florida. She also serves as the Associate Director of the Florida Institute for Cybersecurity (FICS) at UF. Her research focuses on hardware security verification, formal methods, and fault-injection attack analysis, resulting in 8 books and over 150 publications in these fields. For her contributions, she is a recipient of 11 best paper awards and nominations, and was recognized with the Best Assistant Professor Award at UF (2024), the Pramod Khargonekar Excellence Award for the best assistant professor going through the tenure process (2025), the UF 40 Under 40 Gator Outstanding Alumni Award, the ECE Excellence in Service Award (2023), and the ECE Excellence in Research Award (2022) at UF. She also received the prestigious ACM/IEEE DAC Under 40 Innovators Award (2024), the Young Faculty Award from SRC (2022), the NSF CAREER Award (2024), and the Office of Naval Research Young Investigator Award (2026).

Dr. Sandip Ray

A Keynote Talk
Title: AI and Edge Computing: Emerging Security Challenges in the New Era

Warren B. Nelms Endowed Professor

Department of Electrical and Computer Engineering

University of Florida, Gainesville, Florida

Date: March 24, 2026

Abstract: The Internet-of-Things (IoT) regime arguably began about a decade ago, when the number of “smart”, connected electronic devices exceeded the human population. Today, our environment includes billions of such systems, coordinating and communicating to implement applications of unprecedented scale and diversity, including intelligent homes, smart biomedical devices, self-driving automobiles, and smart cities. The trend is toward even further proliferation of these systems, with estimates of trillions within the next decade, representing the fastest growth of any sector at any time in human history. Given the scale of computing in the IoT regime, it is crucial to our well-being to ensure that participating systems operate (or process, store, and communicate information) safely, reliably, securely, and as intended. Unfortunately, traditional techniques for system architecture and design to address these requirements are often inadequate for the needs of the new era. System architecture challenges arise from the complex interplay of a variety of constraints, including reliability, energy efficiency, security, software enablement, validation, in-field configurability, and many others. To address these challenges, fundamentally new approaches are necessary that cut across several traditional areas of computer science and engineering, including computer architecture, hardware/software co-design, verification, and machine learning, in addition to drawing ideas from areas as diverse as mechanical engineering, biomedical engineering, and device physics. In this talk, we will look at architectural and design challenges and approaches for ensuring efficient, reliable, and trustworthy behavior of computing systems in the IoT regime. We will deep-dive into one of these challenges, resulting from the shift in computing paradigm from owned and cloud-based infrastructure to a model where AI is applied to personalized and intimate data at the edge. We will discuss some interesting safety and security challenges specific to AI accelerators at the edge, show impending security vulnerabilities as we move toward this new computing paradigm, and present new results in both offensive and defensive security in this area.

Bio: Dr. Sandip Ray is the Warren B. Nelms Endowed Professor in the Department of Electrical and Computer Engineering, University of Florida, Gainesville, Florida, USA. His research involves developing correct, dependable, secure, and trustworthy computing through the cooperation of specification, synthesis, architecture, and validation technologies. He focuses on next-generation computing applications, including Internet-of-Things applications, autonomous automotive systems, smart homes, and intelligent implants. Before joining the University of Florida, Dr. Ray was a Senior Principal Engineer at NXP Semiconductors, where he led R&D on security architecture and validation of hardware platforms for automotive and IoT applications. Prior to that, he was a Research Scientist at Intel Strategic CAD Labs, where he led research on pre-silicon and post-silicon validation technologies for security and functional correctness of SoC designs, and on design-for-security and design-for-debug architectures. In addition to NXP and Intel, his research has found application at several other companies, including AMD, Galois, IBM, Microsoft, and Collins. Dr. Ray is the author of three books and over 120 publications in international journals and conferences. He has given over 60 invited and keynote presentations at a variety of international conferences and meetings. He has served as a program committee member for more than 80 international conferences; as Track Chair for the International Conference on VLSI Design, the Microprocessor Test and Verification Workshop, and the ACM Great Lakes Symposium on VLSI; and as program chair for Formal Methods in Computer-Aided Design, the International Workshop on the ACL2 Theorem Prover and Its Applications, the IFIP Internet-of-Things Conference, the International Conference on Embedded Software Systems, and the International Symposium on VLSI. He has served as guest editor of special issues of IEEE Design & Test, ACM Transactions on Design Automation of Electronic Systems, the Journal of Electronic Testing: Theory and Applications, and the Journal of Hardware and Systems Security, and as Associate Editor for IEEE Transactions on Multi-Scale Computing Systems and the Springer Journal of Hardware and Systems Security. Dr. Ray holds a Ph.D. from the University of Texas at Austin and is a Senior Member of the IEEE.

Dr. Birhanu Eshete

A Keynote Talk
Title: The Provenance of Trust: Securing the Deep Learning Lifecycle from Data to Decision

Associate Professor of Computer Science

University of Michigan-Dearborn

Date: March 24, 2026

Abstract: As AI systems are increasingly entrusted with high-stakes decisions in healthcare, autonomous systems, cybersecurity, and finance, a fundamental question arises: how do we know why a model made a particular decision, or whether its foundations can be trusted at all? Today's dominant approach to AI assurance relies on testing inputs and evaluating outputs. But testing alone cannot reveal whether training data was subtly corrupted, whether model parameters were influenced by malicious samples, or how information flows through a network at inference time. In short, modern AI systems lack a chain of custody. In this keynote, I introduce a provenance-centric framework for trustworthy AI that treats traceability as a first-class design principle across the model lifecycle. First, I present PoisonSpot, a fine-grained training provenance technique that tracks the lineage of parameter updates to identify and isolate clean-label backdoor attacks that evade traditional detection. By exposing the hidden influence patterns of malicious samples, PoisonSpot secures the model's foundation. I then introduce DeepProv, a system for constructing inference provenance graphs that capture runtime information flow within neural networks. By analyzing structural decision pathways, DeepProv enables behavioral characterization and targeted repair strategies that enhance robustness, privacy, and fairness, transforming opaque neural networks into debuggable and auditable computational artifacts. Together, these works argue that trust in AI cannot be reduced to accuracy metrics or post-hoc audits, but must be traced across the AI pipeline.

Bio: Dr. Birhanu Eshete is an Associate Professor of Computer Science in the College of Engineering and Computer Science at the University of Michigan-Dearborn, where he directs the Data-Driven Security & Privacy Laboratory. His research develops methods and systems to identify, characterize, and mitigate security, privacy, safety, transparency, and ethical risks in AI systems, with emphasis on high-stakes applications such as autonomous vehicles, predictive diagnostics, financial forecasting, and cyber-attack detection. Dr. Eshete's research has been published in leading venues in security, privacy, and AI, including IEEE S&P, ACM CCS, USENIX Security, ISOC NDSS, IEEE/IFIP DSN, ACM PETS, IEEE ACSAC, and IEEE SaTML, and featured in widely accessible venues such as Science Magazine. His expertise has also contributed to national efforts, including the U.S. National Institute of Standards and Technology (NIST) Trustworthy & Responsible AI Resource Center. His contributions have been recognized with highly competitive awards and funding, including the 2024-2025 Fulbright U.S. Scholar Award from the U.S. Department of State, the 2024-2025 Faculty Excellence in Research Award from the College of Engineering and Computer Science, the 2023 NSF CAREER Award from the U.S. National Science Foundation, and the 2018 USENIX Security Symposium Distinguished Paper Award; he was also a finalist for the Best Applied Security Research Award in North America in 2018.

Dr. Boyang Wang

A Keynote Talk
Title: Deep Learning Side-Channel Analysis

Associate Professor

Department of Electrical and Computer Engineering

University of Cincinnati, Cincinnati, OH

Date: March 24, 2026

Abstract: Side-channel analysis can recover encryption keys from a device, such as a microcontroller or an FPGA (Field-Programmable Gate Array), by analyzing correlations between power consumption and intermediate values of an encryption algorithm such as AES (the Advanced Encryption Standard). Recent studies show that machine learning, particularly deep learning, can offer new advantages in side-channel analysis compared to traditional attacks such as Correlation Power Analysis and Template Attacks. Despite recent research progress, deep-learning side-channel analysis still faces challenges in cross-device scenarios or requires large, complex neural networks. In this talk, Dr. Wang will share recent findings on deep-learning side-channel analysis, including (1) the impacts of cross-device scenarios and how to mitigate them and (2) how to reduce the number of parameters in a neural network while still recovering keys successfully. Dr. Wang will also discuss pre-silicon side-channel analysis on simulated traces from hardware designs of AES.
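
The correlation between power consumption and intermediate values described in the abstract can be made concrete with a small simulation of classical Correlation Power Analysis (CPA), the baseline attack that deep-learning methods are compared against. This is an illustrative sketch only, not a method from the talk: the Hamming-weight leakage model, the key byte 0x5A, the trace count, the noise level, and the linear XOR intermediate are all assumptions chosen for demonstration; real attacks typically target a nonlinear intermediate such as the AES S-box output.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(values):
    """Hamming weight (number of set bits) of each byte in a uint8 array."""
    bits = np.unpackbits(np.asarray(values, dtype=np.uint8)[:, None], axis=1)
    return bits.sum(axis=1)

# --- Simulated measurement campaign (all parameters are illustrative) ---
true_key = 0x5A          # hypothetical secret key byte to be recovered
n_traces = 2000
plaintexts = rng.integers(0, 256, n_traces, dtype=np.uint8)
# Leakage model: measured power ~ Hamming weight of the intermediate + noise
traces = hamming_weight(plaintexts ^ true_key) + rng.normal(0.0, 1.0, n_traces)

# --- CPA: correlate hypothetical leakage of every key guess with the traces ---
scores = np.empty(256)
for guess in range(256):
    hypothetical = hamming_weight(plaintexts ^ np.uint8(guess))
    scores[guess] = abs(np.corrcoef(hypothetical, traces)[0, 1])

# The guess whose hypothetical leakage correlates best is the key estimate
recovered_key = int(np.argmax(scores))
print(hex(recovered_key))
```

A deep-learning attack replaces the fixed Hamming-weight model above with a classifier trained on raw traces, which is precisely what makes the cross-device generalization and network-size questions in the abstract challenging.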

Bio: Dr. Boyang Wang is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Cincinnati (UC), Cincinnati, OH. He received his Ph.D. in ECE from the University of Arizona in 2017 and his Ph.D. in Cryptography from Xidian University (China) in 2014. His current research focuses on side-channel analysis, machine learning, network security, wireless security, binary analysis, and applied cryptography. He has published more than 50 peer-reviewed papers at conferences and in journals, including ACM WiSec, ACM CODASPY, ACM AsiaCCS, IEEE HOST, IEEE INFOCOM, IEEE CNS, IEEE TIFS, IEEE TDSC, IEEE TSC, and IEEE TCC. Dr. Wang has received funding from multiple agencies and industry partners, including the National Science Foundation, the NSF IUCRC CHEST Center, IoTeX, and the Ohio Cyber Range. He received the NSF CISE CRII (Research Initiation Initiative) award in 2020. He serves as the Program Director for the Cybersecurity Engineering Program at UC. He is a member of IEEE and ACM.

Dr. William (Will) Zortman

A Keynote Talk
Title: Digital Assurance for High Consequence Systems

Laboratory Directed Research and Development Office

Sandia National Laboratories

Date: March 24, 2026

Abstract: High consequence systems, such as those used in national security, aerospace, energy, and critical infrastructure, must operate with exceptional levels of reliability, resilience, and trust. As these systems become increasingly digital and interconnected, traditional assurance approaches are no longer sufficient to manage evolving cyber and digital risks. This talk introduces the concept of Digital Assurance for High Consequence Systems (DAHCS), a mission-driven research initiative focused on integrating digital assurance directly into the discipline of systems engineering. Rather than treating cybersecurity as a separate compliance function, DAHCS promotes a framework in which digital risk is evaluated alongside performance, cost, schedule, safety, and other system-level trade-offs. The approach enables systems engineers, program managers, and risk decision-makers to make informed engineering trade-offs between digital risk and other mission-critical risks. The presentation will highlight research directions, methodological advancements, and practical strategies for embedding digital assurance throughout the system lifecycle, from design and development to deployment and sustainment, supporting resilient, trustworthy systems in high consequence environments.

Bio: William (Will) Zortman is the Digital Assurance for High Consequence Systems (DAHCS) Campaign Manager for Sandia National Laboratories' Laboratory Directed Research and Development Office. The DAHCS Mission Campaign conducts fundamental and developmental research focused on integrating digital assurance into the discipline of systems engineering, so that systems engineers, program managers, and risk acceptors can make engineering trade-offs between digital risk and other system risks.

Second Day Key Talks

March 25, 2026

Dr. Irfan Ahmed

A Keynote Talk
Title: From Reliability to Risk: Security Debt in Modern Industrial Control Systems

Professor of Computer Science

Director, Security and Forensics Engineering (SAFE) Lab

Virginia Commonwealth University

Date: March 25, 2026

Abstract: Industrial Control Systems (ICS) were engineered for determinism, availability, and safety, not adversarial resilience. Over decades, reliability-driven design assumptions in programmable logic controllers (PLCs) and control networks have accumulated into a form of security debt that modern cyber-physical threats can exploit. This talk synthesizes insights from real-world PLC firmware analysis, memory forensics, and control-logic integrity research to expose structural weaknesses: implicit trust in runtime execution, limited integrity verification, insecure update and communication pathways, and minimal forensic visibility. These systemic gaps enable stealthy manipulation of physical processes while preserving outwardly normal behavior, challenging traditional detection and safety mechanisms. The talk argues that incremental patching cannot resolve insecurity rooted in architecture. Instead, the field must move toward redesigning ICS for security, grounded in verifiable execution semantics, strong integrity guarantees, and forensic-ready industrial infrastructures that can sustain long-term trust and resilience.

Bio: Dr. Irfan Ahmed is a Professor of Computer Science in the College of Engineering at Virginia Commonwealth University (VCU) and a leading researcher in cybersecurity, digital forensics, and the security of industrial control and cyber-physical systems. His work focuses on PLC firmware analysis, control-logic integrity, memory forensics, and adversarial threats to critical infrastructure and advanced manufacturing. He has received multiple national recognitions, including the USCYBERCOM Commander, Guardian, and Defender Awards, for developing innovative defensive technologies for operational technology environments. Dr. Ahmed collaborates with national laboratories, industry partners, and government agencies to advance secure-by-design industrial platforms and workforce development in critical infrastructure protection. His research aims to restore trust, resilience, and forensic readiness across next-generation industrial control systems and advanced manufacturing.

Dr. Jeyavijayan (JV) Rajendran

A Keynote Talk
Title: The Silicon Double Agent: Securing the AI-Augmented Chip Lifecycle

Associate Professor and ASCEND Fellow

Director, Secure and Trustworthy Hardware (SETH) Lab

Department of Electrical and Computer Engineering

Texas A&M University, College Station, Texas

Date: March 25, 2026

Abstract: The semiconductor industry is at a historic inflection point. Integrating Generative AI into mobile ecosystems transforms silicon design, with Large Language Models (LLMs) writing Verilog and Reinforcement Learning (RL) agents optimizing netlists. This revolution creates a paradox: productivity-boosting tools also empower sophisticated attacks. In this talk, we explore the dual role of AI in hardware security, focusing on three pillars. (1) AI-Powered Vulnerability Hunting: we demonstrate how Reinforcement Learning and Graph Neural Networks autonomously red-team chips. These techniques identify exploitable timing vulnerabilities and stealthy Trojans that traditional static analysis misses, finding "hard fails" before they reach silicon. (2) The Evolving Threat Landscape: we examine how LLMs enable black-box IP piracy and hardware intent recovery. (3) Trustworthy Generative Design: we examine leveraging LLMs for RTL while ensuring code is free of copyright poisoning and backdoors, including new pathways to copyright-infringement-free Verilog and the role of watermarking in securing automated design flows. We conclude with how generative hardware fuzzing identifies software-exploitable vulnerabilities at the architecture level, ensuring the future of AI is built on a foundation of trusted silicon.

Bio: Dr. Jeyavijayan (JV) Rajendran is an Associate Professor and an ASCEND Fellow in the Department of Electrical and Computer Engineering at Texas A&M University, where he leads the Secure and Trustworthy Hardware (SETH) Lab. His research sits at the critical intersection of hardware security and artificial intelligence. He focuses on developing AI-driven tools, including Large Language Models and Reinforcement Learning, to automate hardware trust and secure the next generation of high-performance SoCs. Dr. Rajendran is a widely recognized leader in the field. He is the recipient of the NSF CAREER Award, the ONR Young Investigator Award, the IEEE CEDA Ernest Kuh Early Career Award, and the ACM SIGDA Outstanding Young Faculty Award. His commitment to industry-relevant research has been recognized with the Intel Academic Leadership Award and numerous best paper and dissertation awards. Beyond his research, he is a dedicated member of the national engineering community. He is an alumnus of the National Academy of Engineering's Frontiers of Engineering and serves on NASEM and NAE committees. He is also the co-founder of Hack@DAC, the premier hardware security competition, which bridges the gap between academic research and industry practice to secure the global semiconductor supply chain.

Dr. Ibrahim (Abe) Baggili

A Keynote Talk
Title: When Machines Misbehave - The Emerging Science of AI Forensics

Chair, Computer Science Department

Louisiana State University

Date: March 25, 2026

Abstract: Machine Learning (ML) and Artificial Intelligence (AI) have become transformative forces, shaping every aspect of our society, from business and academia to the public and private sectors, including IoT devices. Yet, alongside their benefits, the failures of AI are an undeniable reality, demanding urgent attention from forensic researchers and practitioners. When AI goes rogue, who steps in to investigate? While AI and ML are celebrated for enhancing digital forensic processes, a critical shift is needed: focusing on the forensics of AI itself. In this keynote, we explore the emerging field of AI forensics, a vital sub-discipline within digital forensics. By examining the foundations of this evolving field and highlighting key research challenges, we will shed light on the critical importance of developing forensic methodologies to address AI-related incidents.

Bio: Dr. Ibrahim (Abe) Baggili is a first-generation Arab American. He is the Chair of the Computer Science and Engineering Division and Roger Richardson Professor of Computer Science at Louisiana State University, and the founder of the BiT Lab (Baggili Truth Lab), where he holds a joint appointment between the Division of Computer Science & Engineering and the Center for Computation and Technology. Dr. Baggili has won numerous awards, including the CT Civil Medal of Merit, the Medal of Thor from the Military Cyber Professional Association, and CT 40 Under 40, and is a fellow of the European Alliance for Innovation (EAI). He was also elected to the Connecticut Academy of Science and Engineering (CASE) and has been a TEDx speaker. He received his BSc, MSc, and PhD from Purdue University, where he worked as a researcher in the Center for Education and Research in Information Assurance (CERIAS) and received the Bilsland Dissertation Award during his PhD. Dr. Baggili has been involved in over $14 million in sponsored research and is a prolific scholar in the domains of digital forensics, cybersecurity, and cybersecurity education. Work with his students has uncovered vulnerabilities that impact over a billion people worldwide and has been featured in news and TV outlets in over 20 languages, and he has published extensively in the domain of digital forensics. To learn more about the BiT Lab, visit https://csc.lsu.edu/~baggili.

Dr. Darren Pulsipher

A Keynote Talk
Title: A GEAR-Informed Architectural Approach to Governing Secure and Resilient IT/OT Convergence in the Industry 5.0 Era

Chief Enterprise Architect for Public Sector at Intel

Date: March 25, 2026

Abstract: The convergence of IT and OT environments is exposing deep architectural, cultural, and governance fractures in cybersecurity practices. Misaligned training, inconsistent taxonomies, and competing drivers (uptime, safety, reliability, and security) continue to undermine cyber resilience. OT environments have long relied on the Purdue Model as an isolation boundary, but data-driven operations, Industry 5.0, and AI-enabled systems are collapsing this assumption. This presentation argues that Industry 4.0 failed because it optimized technology without resolving architectural ownership and organizational alignment. It concludes by presenting a GEAR-informed, People-Process-Technology architectural approach that enables secure IT/OT convergence while preserving the distinct operational constraints essential to safe and resilient systems, reframing cybersecurity as an architectural discipline rather than a collection of controls.

Bio: Dr. Darren Pulsipher is the Chief Enterprise Architect for Public Sector at Intel, where he focuses on secure digital transformation across government, critical infrastructure, and mission-critical environments. He is also the Chairman of the Open Digital Transformation Forum at The Open Group, where he leads global efforts to advance architecture-driven approaches to digital transformation, governance, and cybersecurity. Dr. Pulsipher's work centers on the secure convergence of Information Technology (IT) and Operational Technology (OT) systems. He specializes in addressing the architectural, cultural, and governance challenges created by misaligned taxonomies, competing operational drivers such as uptime, safety, and security, and the long-standing organizational separation between IT and OT domains. His research and practice emphasize treating cybersecurity as an architectural discipline, applying People, Process, and Technology principles, grounded in the GEAR model, to enable resilient convergence while preserving the operational individuality required in cyber-physical systems.

Dr. Samir Iqbal

A Keynote Talk
Title: Building Translational Innovation Ecosystems: From Discovery to Real World Impact

Associate Dean of Research

College of Computing

Grand Valley State University

Grand Rapids, MI

Date: March 25, 2026

LinkedIn:

Abstract: Translating research discoveries into real-world impact requires intentional ecosystems that connect invention, innovation, and implementation. As technology advances rapidly across disciplines, academic, industry, and community partners must work collaboratively to ensure that emerging knowledge is transformed into solutions that address societal, economic, and workforce challenges. This keynote explores strategies for strengthening translational research pipelines by focusing on partnership formation, proposal competitiveness, and structured commercialization pathways that support sustainable innovation. Drawing from experience managing national funding programs that advance applied research and entrepreneurial science, this presentation will highlight approaches for bridging the gap between foundational research and deployment-ready technology. Special attention will be given to test-bed models, collaborative networks, and funding mechanisms that enable researchers and innovators to move ideas from concept to application. The session is intended to benefit academic and professional audiences by providing insights into building multidisciplinary partnerships, strengthening translational research capacity, and enhancing the broader societal and economic value of scientific and technological advances.

Bio: Dr. Samir Iqbal is a researcher in nanotechnology, biosensing, and biomedical image analytics. He served as one of the founding Program Directors of the NSF Technology, Innovation, and Partnerships (TIP) Directorate, which focuses on advancing use-inspired research, innovation ecosystems, workforce development, and pathways that connect academic discovery to societal impact. He earned his PhD from Purdue University.

Dr. Bayley King

A Keynote Talk
Title: Side-Channel Attacks on Machine Learning Hardware

Senior Hardware Security and Data Science Researcher

Riverside Research, Dayton, OH

Date: March 25, 2026

LinkedIn:

Abstract: This talk will focus on hardware-based attacks against machine learning systems, with particular emphasis on Side-Channel Analysis (SCA) as a mechanism for extracting sensitive model parameters from embedded AI accelerators. As machine learning models are increasingly deployed on edge platforms such as the Google Coral TPU, they become susceptible to physical-layer attacks traditionally associated with cryptographic hardware. The presentation will introduce the architectural foundations of edge AI implementations, including 8-bit quantization and neural network deployment considerations, before demonstrating how power and timing leakage can be exploited to recover model weights and compromise intellectual property. Practical demonstrations will illustrate how SCA techniques can be adapted from cryptographic attacks to machine learning systems. By grounding the discussion in current open-source research and active R&D efforts, this talk highlights emerging security risks at the intersection of hardware assurance, embedded systems, and applied AI, and identifies critical research challenges in securing next-generation intelligent systems.

Bio: Dr. Bayley King is a Senior Research Scientist at Riverside Research, where he works at the intersection of hardware security, embedded systems, and data science, supporting national security-focused research and development. He serves as a capture manager, principal investigator, and program manager for microelectronics strategy and technology development. He is currently an Adjunct Professor in the Department of Computer Science and Engineering at Wright State University and an Adjunct Professor of Computer Science at the University of Dayton. Prior to joining Riverside Research, Dr. King completed his Ph.D. in Computer Science and Engineering at the University of Cincinnati, where he conducted research in machine learning and security-focused computing for the Air Force Research Laboratory. His work spans hardware assurance, secure embedded systems, and applied AI, with publications including research on securing third-party HDL IP and security-focused data-driven methods.

Call for Papers

Submit Your Research to SATC 2026!

  • Cyberinfrastructure
  • Internet of Things (IoT)
  • Microelectronics
  • Artificial Intelligence
Submit Now

Publishing Partner