Senior Cyber Security Research Engineer
Riverside Research, Dayton, OH
Date: March 24, 2026
Abstract: As embedded systems and hardware platforms increasingly underpin mission-critical defense, aerospace, and consumer technologies, cybersecurity must extend beyond software protections to address vulnerabilities at the hardware layer. This talk will explore emerging threats and assurance challenges in embedded and hardware-focused security, drawing from experience across government, defense, and commercial sectors. The presentation will examine practical methodologies for cyber vulnerability assessment of weapon systems and embedded devices, microelectronics trust and assurance validation, and hardware-centric attack surfaces, including reverse engineering and system-level exploitation. Emphasis will be placed on real-world security validation strategies and lessons learned from both defense research environments and large-scale commercial device security programs. By connecting applied research, national security applications, and hardware system design considerations, this talk highlights the evolving landscape of hardware-rooted cybersecurity risks and the need for integrated assurance approaches across the system lifecycle.
Bio: Dr. Paul Simon is a Senior Cyber Security Research Engineer at Riverside Research in Dayton, Ohio, where he conducts advanced cybersecurity research with an emphasis on embedded systems and hardware-focused security. His work centers on microelectronics trust, system assurance, vulnerability analysis, and secure hardware design for high-consequence and mission-critical systems. Prior to joining Riverside Research in 2025, Dr. Simon served as a Senior Hardware Security Engineer at Amazon (Devices & Services Security), where he was part of a specialized team focused on securing consumer devices and services at scale. Before Amazon, Dr. Simon supported multiple government and defense-related programs in and around the Air Force Institute of Technology (AFIT) and the Air Force Research Laboratory (AFRL) through roles at HII Technical Solutions and Applied Research Solutions. His work included cyber vulnerability assessments of weapon systems, microelectronics trust and assurance validation, reverse engineering, hardware system design, and cybersecurity research for embedded devices. Dr. Simon earned his Bachelor of Science (1997) and Master of Science (2011) degrees in Electrical Engineering from the University of Dayton. He received his Ph.D. in Electrical Engineering from the Air Force Institute of Technology (AFIT) in 2022. He has been a member of IEEE for over 15 years.
Professor and Dean
College of Engineering and Computer Science
Wright State University
Date: March 24, 2026
Abstract: As defense and critical infrastructure systems increasingly integrate cyber, physical, and autonomous components, ensuring these systems are trusted, resilient, and mission-ready requires rigorous test, evaluation, verification, and validation (TEVV) methodologies. Traditional testing approaches are insufficient for cyber-physical systems operating under uncertainty, adaptive threats, and dynamic operational conditions. This talk focuses on the role of testbeds, operational experimentation, and data-driven evaluation frameworks in assessing the performance, reliability, and cyber resilience of trusted cyber-physical systems. Drawing on experience across Department of Defense test and evaluation organizations and academic research environments, it highlights how stochastic modeling, simulation, and analytics can be used to characterize system behavior, quantify risk, and validate performance across realistic mission scenarios. The presentation emphasizes the importance of integrated test architectures that combine cyber testing, physical system evaluation, and operational validation to support trusted autonomy and mission assurance in contested environments.
Bio: Darryl K. Ahner, Ph.D. is the Dean of the College of Engineering and Computer Science at Wright State University, a role he assumed in July 2023. Prior to joining Wright State, he served as Dean for Research at the Air Force Institute of Technology (AFIT) at Wright-Patterson Air Force Base, where he led the Office of Research and Sponsored Programs and managed a $36 million research portfolio. Dr. Ahner is a professor of stochastic operations research and has held senior leadership roles across defense and academic institutions, including Director of the Office of the Secretary of Defense Scientific Test and Analysis Techniques within the Test and Evaluation Center of Excellence, Director of the Center for Operational Analysis, faculty member at the U.S. Military Academy at West Point, and Director of Research at the U.S. Army Research Center at the Naval Postgraduate School. In 2021, he received the Secretary of Defense Medal for Exceptional Civilian Service for his contributions to workforce development and national security research. His research focuses on test and evaluation optimization, autonomous and cyber-physical systems, reliability, stochastic modeling, simulation, and defense analytics. He has authored many peer-reviewed journal articles, conference proceedings, and several book chapters. He earned a B.S. in Mechanical Engineering (Aerospace) from the U.S. Military Academy at West Point, served 22 years in the U.S. Army, and holds a Ph.D. in Systems Engineering from Boston University, along with advanced degrees from Rensselaer Polytechnic Institute and a graduate certificate from AFIT.
Chief Technology Officer at SIMS-iLink Digital
Vice Chair of IEEE Houston Section
Date: March 24, 2026
Abstract: As AI continues to reshape how businesses operate, it is also transforming the threat landscape just as rapidly. The future of cybersecurity lies in anticipation, automation, and resilience, where human judgment and AI-driven intelligence work together to stay ahead of evolving risks.
Bio: Thangaraj Petchiappan, Chief Technology Officer at SIMS-iLink Digital, has helped many Fortune 500 clients and industries innovate and transform for the digital future. He is dedicated to enhancing infrastructure automation and integrating advanced cybersecurity solutions, continually driving improvements in platforms and processes. He plays a pivotal role in leading the development of innovative solutions that demonstrate the potential of new and emerging technologies such as AI and ML. Thangaraj mentors team members, advises them on how to apply AI in cybersecurity based on real-world experience, and helps them grow professionally by improving their skills. Thangaraj and his team are at the forefront of innovation in cybersecurity, creating cutting-edge solutions that use AI to combat cyber threats. They harness the power of natural language processing, computer vision, deep learning, and graph analytics to build tools for phishing detection, malware analysis, threat intelligence, and network security monitoring.
The 5G Broadband and Connectivity Center
Ohio State University
Date: March 24, 2026
Abstract: This talk will cover the strengths and weaknesses of the 3GPP 5G core, edge, and radio network as real-time functions emerge as AI applications. While US deployment of 5G handsets and FR2 band capability are not new, standalone (SA) core platforms and open radio access networks (Open RAN or ORAN) certainly are. The 5G ecosystem defined by 3GPP has superior handset authentication and key management, as well as effective Quality of Service management; the introduction of xApps and rApps for real-time adaptive control is ideally suited to AI, but training and execution instances introduce new potential threat spaces. Within this talk, the 5G security capability set per 3GPP WG3 is introduced, as is the MITRE FiGHT encyclopedic framework for presumptive tactics and techniques. Finally, the state of experimentation at OSU and peer institutions is discussed.
Bio: Dr. Hoag is a faculty member in Computer Science affiliated with the 5G Broadband and Connectivity Center at The Ohio State University. He brings extensive experience spanning academia, research leadership, and state-level engagement in advanced computing and connectivity systems. Dr. Hoag earned his Bachelor's degree in Computer Science from The University of Akron and completed both his M.S. in Industrial and Systems Engineering and Ph.D. in Integrated Systems Engineering at The Ohio State University. His academic career includes prior service on the faculty at Ohio University, followed by leadership roles as a Professor and Department Chair at a private college in Virginia. Most recently, he served as a Center Director at Ohio State University, contributing to research and workforce initiatives in broadband, connectivity, and systems engineering. Dr. Hoag maintains an active research agenda, serves on multiple State advisory panels, and is a Senior Member of the IEEE. His work reflects a strong commitment to applied research, interdisciplinary education, and workforce development.
Assistant Professor of Mechanical Engineering Technology
Purdue University Fort Wayne
Date: March 24, 2026
Bio: Dr. Osama M. Fakron is an Assistant Professor of Mechanical Engineering Technology at Purdue University Fort Wayne. He earned his Ph.D. in Mechanical Engineering from Washington State University in 2014, where his research focused on SEM-based electron tomography of fiber and nanostructured systems. He also holds an M.S. in Applied Mathematics from Washington State University and graduate degrees in Mechanical Engineering from the University of Benghazi. Dr. Fakron's research spans micro/nanomaterials characterization, additive manufacturing, and mathematical modeling in engineering systems. He has authored multiple peer-reviewed journal publications in materials processing, ultramicroscopy, and advanced manufacturing. His teaching expertise includes measurements and instrumentation, dynamics, materials science, and engineering mechanics. With interdisciplinary training in engineering and applied mathematics, Dr. Fakron integrates modeling, experimentation, and workforce-focused education to prepare students for careers in advanced manufacturing and microelectronics industries.
Abstract: This presentation explores the fundamental role of mathematical modeling in the analysis, design, and optimization of microelectronic devices and systems. As semiconductor technologies continue to scale into nanometer regimes, predictive modeling has become essential for understanding complex physical phenomena that govern device performance, reliability, and manufacturability. The presentation introduces physics-based modeling frameworks including Poisson's equation, drift-diffusion transport equations, and continuity equations that form the foundation of semiconductor device simulation. Advanced MOSFET modeling techniques are discussed, covering threshold voltage prediction, drain current formulation, and short-channel effects in modern nanoscale transistors. The presentation further examines numerical solution techniques such as the Finite Difference Method (FDM) and Finite Element Method (FEM), highlighting their application in Technology Computer-Aided Design (TCAD) tools used in industry. Thermal modeling and reliability analysis, including electromigration and electro-thermal coupling, are addressed to demonstrate how Multiphysics modeling ensures device longevity and performance stability. Emerging approaches such as quantum mechanical modeling and artificial intelligence-driven inverse design are also introduced, emphasizing their importance in next-generation semiconductor research. Overall, this presentation demonstrates how mathematical modeling bridges physics, computation, and engineering practice, enabling innovation in microelectronics manufacturing, advanced materials, and integrated circuit design.
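For reference, the core physics-based equations the abstract names can be written in their standard textbook form (sign conventions vary by reference):

```latex
% Poisson's equation: electrostatic potential \psi from the net charge density
\nabla \cdot \left( \varepsilon \nabla \psi \right)
  = -q \left( p - n + N_D^{+} - N_A^{-} \right)

% Drift-diffusion current densities for electrons and holes
\mathbf{J}_n = q \mu_n n \mathbf{E} + q D_n \nabla n, \qquad
\mathbf{J}_p = q \mu_p p \mathbf{E} - q D_p \nabla p

% Continuity equations coupling transport to generation G and recombination R
\frac{\partial n}{\partial t} = \frac{1}{q} \nabla \cdot \mathbf{J}_n + G - R,
\qquad
\frac{\partial p}{\partial t} = -\frac{1}{q} \nabla \cdot \mathbf{J}_p + G - R
```

Discretizing this coupled system with FDM or FEM on a device mesh is the basis of the TCAD simulation flows discussed in the presentation.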
Director of AI Hardware Research
Parallax Advanced Research, Dayton, OH
Date: March 24, 2026 (Online)
Abstract: As artificial intelligence transitions from cloud-based systems to edge-deployed autonomous platforms, traditional computing architectures face significant limitations in power efficiency, adaptability, and resilience. Neuromorphic hardware, inspired by biological neural systems, offers a transformative approach to enabling real-time cognition, ultra-low-power computation, and mission-resilient AI at the edge. This presentation explores the development of neuromorphic microchips based on Spiking Neural Networks (SNNs) and nanoscale circuit architectures designed to support trusted, secure, and adaptive AI systems. Emphasis is placed on hardware-software co-design, energy-efficient processing, and architectural innovations that enable robust operation in contested and high-consequence environments. Applications across aerospace, ISR, autonomous systems, and human-autonomy teaming are discussed, along with the technical challenges of scaling neuromorphic solutions for defense and commercial deployment. The findings highlight how next-generation AI hardware can enhance system resilience, reduce latency, and enable secure, trustworthy autonomous capabilities in microelectronics-driven environments.
Bio: Dr. Steven D. Harbour is the Director of AI Hardware Research at Parallax Advanced Research, where he leads advanced neuromorphic computing and artificial intelligence hardware initiatives within the Center of Excellence. With more than 30 years of experience in aerospace engineering, microelectronics, and defense research, he brings deep technical leadership across neuromorphic engineering, microchip design, AI/ML hardware acceleration, cybersecurity, human-autonomy teaming, avionics, and autonomous systems. He serves as Principal Investigator of the BRAIN program, a cutting-edge neuromorphic chip effort focused on nanoscale architectures and Spiking Neural Network (SNN)-based microcircuit design for resilient, low-power AI hardware. Dr. Harbour has authored over 50 publications in nanotechnology, nanoscale systems, and AI-enabled hardware design. Previously, he served as Global Hawk Chief of Avionics Engineering, supporting the Air Force Research Laboratory Sensors Directorate and the Air Force Life Cycle Management Center at Wright-Patterson Air Force Base. A former USAF fighter flight test pilot with over 5,000 flight hours in multiple aircraft platforms, he combines operational aviation experience with advanced research leadership. He holds a Ph.D. in Neuroscience (Neuromorphics, AI/ML), an M.S. in Aerospace Engineering & Mathematics, and a B.S. in Electrical Engineering, and also teaches at the University of Dayton and Sinclair College.
Chief Technology Officer, AgriTech Forward Inc.
Editor-in-Chief, Green Technology Resilience and Sustainability, Springer Nature
Date: March 24, 2026
Abstract: As energy systems grow in complexity due to the increasing integration of renewable energy and the electrification of transportation, Artificial Intelligence (AI) and digitalization have emerged as the prime technologies to address the resulting challenges. Machine learning, multi-agent simulations, optimization, and game theory methods are being successfully applied to a number of energy management tasks, with the support of robotics, Internet of Things (IoT), and cloud/edge computing technologies. The goal of this talk is to provide a venue for a deep dive into the AI and digitalization ecosystem for energy management through a review of the state of the art and the identification of future directions.
Bio: With over three decades of experience, Dr. Sanfilippo specializes in advancing sustainable AI solutions in renewable energy, food resilience, homeland security, and healthcare. Currently, he is Chief Technology Officer at AgriTech Forward Inc. and serves as the Editor-in-Chief for Springer Nature's Green Technology Resilience and Sustainability journal. From 2014 through 2025, he was Chief Scientist at the Qatar Environment and Energy Research Institute (QEERI), where he led the energy management program. Under his leadership, the program established renewable energy and smart grid capabilities that have become national benchmarks, including a large network of solar monitoring stations and a 100 kWp microgrid testbed. Prior to QEERI, Dr. Sanfilippo was Chief Scientist at the Pacific Northwest National Laboratory in the US, where he was awarded the Laboratory Director's Award for Exceptional Scientific Achievement. He led multidisciplinary research projects for various government agencies (DHS, NIH, DOE, NSF) and directed an advanced laboratory research initiative on predictive analytics focused on security, energy, and environment applications. Dr. Sanfilippo has also held positions as Research Director in the private sector, Senior Consultant at the European Commission, and Research Supervisor and Group Manager at SHARP Laboratories of Europe. He conducted his early research in Computational Linguistics at the University of Cambridge and the University of Edinburgh, where he completed his PhD in Cognitive Science. He holds 9 patents and has over 200 publications.
Texas A&M University System Regents Professor
AT&T Endowed Professor
Department of Electrical and Computer Engineering
Prairie View A&M University (PVAMU)
Date: March 25, 2026
Abstract: Analog in-memory computing (AIMC) is a promising compute paradigm for improving the speed and power efficiency of neural network inference beyond the limits of conventional von Neumann architectures. However, AIMC introduces fundamental challenges such as noisy computation, and a comprehensive understanding is still lacking of how analog inference generalizes across task domains, precision settings, and architectural scales, and of how core design parameters such as cell bits, ADC resolution, and tile size jointly influence model reliability and efficiency. In this talk, we start by benchmarking the inference performance of pretrained deep learning models on AIMC simulators. We conduct a comprehensive evaluation of analog inference across both vision and language tasks using three state-of-the-art simulators: CrossSim, AIHWKIT, and MemTorch. We then systematically quantify how cell precision, ADC resolution, and crossbar tile size influence model accuracy, stability, and efficiency under realistic non-idealities. Results demonstrate that analog inference can come within 2-5% of digital baselines when parameters are tuned according to layer sensitivity and workload structure. Finally, we derive task-aware design recommendations for vision models and transformer-based NLP tasks based on our findings.
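As a rough illustration of how the design parameters studied in the abstract interact, the toy Python model below injects finite cell precision, per-cell programming noise, and tile-wise ADC quantization into a matrix-vector product. All names, defaults, and noise values are illustrative assumptions for this sketch, not the APIs of CrossSim, AIHWKIT, or MemTorch, which model the device physics in far greater detail.

```python
import numpy as np

def analog_matmul(x, W, cell_bits=4, adc_bits=8, tile_size=64,
                  noise_sigma=0.02, rng=None):
    """Toy analog in-memory matrix-vector product y = x @ W.

    Models three non-idealities named in the abstract: finite cell
    precision, ADC resolution, and tiled crossbar accumulation.
    Illustrative sketch only; parameter names are assumptions.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # 1) Quantize weights to the conductance levels a memory cell can store.
    levels = 2 ** cell_bits - 1
    w_max = float(np.abs(W).max()) or 1.0
    Wq = np.round(W / w_max * levels) / levels * w_max
    # 2) Add per-cell programming noise (Gaussian, relative to full scale).
    Wn = Wq + noise_sigma * w_max * rng.standard_normal(W.shape)
    # 3) Accumulate tile-wise partial sums, each digitized by a signed ADC.
    out = np.zeros(W.shape[1])
    adc_levels = 2 ** (adc_bits - 1) - 1
    for start in range(0, W.shape[0], tile_size):
        partial = x[start:start + tile_size] @ Wn[start:start + tile_size]
        p_max = float(np.abs(partial).max()) or 1.0
        out += np.round(partial / p_max * adc_levels) / adc_levels * p_max
    return out
```

Sweeping `cell_bits`, `adc_bits`, and `tile_size` in a model like this makes the accuracy-efficiency trade-offs the talk quantifies directly visible, since each parameter contributes its own error term to the accumulated output.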
Bio: Dr. Lijun Qian is Texas A&M University System Regents Professor and AT&T Endowed Professor in the Department of Electrical and Computer Engineering at Prairie View A&M University (PVAMU). He is also the Director of the Center of Excellence in Research and Education for Big Military Data Intelligence (CREDIT Center). He received his BE from Tsinghua University, his MS from the Technion-Israel Institute of Technology, and his PhD from Rutgers University. Before joining PVAMU, he was with Bell Labs Research in Murray Hill, New Jersey. He was a visiting professor at Aalto University, Finland. He led the CREDIT Center to first place in the AI Tracks at Sea challenge organized by the US Navy and to first place in the IEEE CyberC Big Data Competition organized by the IEEE Big Data Initiative. He received Best Paper Awards at IEEE CAMAD 2025, AIxHEART 2025, IEEE RoboCom 2023, and IEEE Globecom 2017. His research interests are in the areas of big data processing, artificial intelligence, quantum information science and quantum machine learning, wireless communications and mobile networks, network security and intrusion detection, and computational and systems biology.
Professor of Computer Science and Director, Center for Intelligent Innovation (CII)
Taylor's University, Malaysia
Date: March 25, 2026
Bio: Dr. Noor Zaman Jhanjhi is a distinguished Senior Professor of Computer Science at Taylor's University, Malaysia, where he specializes in Artificial Intelligence and Cybersecurity. As Director of the Center for Intelligent Innovation (CII) and Program Director for Postgraduate Research Degree Programmes, he plays a pivotal role in shaping academic excellence and driving cutting-edge research initiatives. Globally acclaimed for his scholarly contributions, Prof. Jhanjhi has been consistently ranked among the world's top 2% of research scientists (2022, 2023, 2024, and 2025) and stands as one of Malaysia's top computer science researchers. He has been named among the top 0.05% of all scholars worldwide according to the 2025 ScholarGPS rankings. His exceptional work has earned him prestigious accolades, including the Outstanding Faculty Member Award (MDEC Malaysia, 2022) and the Vice Chancellor's Best Research Citations Award (Taylor's University, 2023). A prolific author and editor, Prof. Jhanjhi has published over 80 research books with leading publishers such as Springer, Elsevier, IGI Global, Bentham, IET, and Wiley, amassing 1,000+ impact factor points. His mentorship spans 45 postgraduate completions, and he has examined 70+ Ph.D. and Master's theses worldwide. As an Editor-in-Chief, Associate Editor, and Editorial Board member for top-tier journals (PeerJ Computer Science, IEEE Access, CMC Computers), he advances scholarly discourse. His leadership extends to securing 40+ international research grants, underscoring his influence in innovation. A dynamic keynote speaker, Prof. Jhanjhi has delivered 100+ invited talks and chaired major conferences. His decade-long engagement with ABET, NCAAA, and NCEAC accreditation bodies highlights his dedication to global academic standards. Combining research brilliance, academic leadership, and a passion for mentorship, Prof. Jhanjhi continues to inspire the next generation of computer scientists while shaping the future of AI and cybersecurity.
Moon Shepherd Baker
MSB Insurance Agency, Inc
Date: March 25, 2026
Abstract: Artificial intelligence is increasingly embedded in cybersecurity and risk assessment workflows, influencing how organizations detect, prioritize, and respond to cyber threats. As AI-driven systems become more operationally significant, challenges related to trust, explainability, governance, and auditability have emerged as critical concerns, particularly in regulated and high-assurance environments. This invited talk examines the intersection of artificial intelligence and cybersecurity from an industry perspective, focusing on how AI-enabled risk assessment systems can be designed to support secure, accountable, and auditable operations. Drawing on real-world experience in regulated enterprise environments, including property and casualty insurance, the talk highlights where AI adds value in identifying and prioritizing cyber risk signals while preserving human oversight and control mechanisms. Common failure modes, governance gaps, and assurance challenges are discussed to illustrate why accuracy alone is insufficient for building trustworthy AI in cybersecurity contexts.
Bio: Mr. Afroz Mohammed is an analytics and risk professional specializing in the application of artificial intelligence to cybersecurity and risk assessment challenges in regulated industries. His work focuses on designing AI-enabled risk frameworks that integrate machine learning insights with governance, explainability, and auditability requirements. Afroz has extensive industry experience in property and casualty insurance, where AI systems are increasingly used to support cyber risk evaluation, operational decision-making, and regulatory compliance. His research on AI-driven risk assessment frameworks has been submitted for peer review with IEEE, and he regularly presents to professional audiences on the practical implications of trustworthy AI, governance, and assurance. Afroz's perspective bridges applied AI research with real-world cybersecurity and enterprise risk operations.
DoD SMART Scholar
Lead Computer Scientist
United States Air Force (Wright-Patterson AFB)
Date: March 25, 2026
Bio: Al Amin Hossain is a DoD SMART Scholar and Computer Scientist with the United States Air Force at Wright-Patterson Air Force Base in Dayton, Ohio. He currently serves as a Lead Computer Scientist supporting the Air Force Life Cycle Management Center (AFLCMC), where he leads technical strategy, planning, and execution for mission-critical modernization efforts, including enterprise logistics and sustainment systems. His work spans Zero Trust and secure AI, low-code/no-code application development (Appian), robotic process automation (RPA) (UiPath, Automation Anywhere), and modernization of enterprise platforms to improve interoperability, scalability, and compliance with DoD standards. He has led multiple RPA and Generative AI initiatives that reduce manual workload and improve operational efficiency, and has contributed to major acquisition and engineering documentation aligned with complex program requirements. He is also pursuing a Ph.D. in Computer Engineering at Wright State University (2023-2027), with research interests at the intersection of AI and cybersecurity. He has received recognition for his research contributions, including a Best Paper Award at IEEE ICAMAC 2025 (Dubai, UAE) for work exploring Zero Trust approaches in modern AI.
Abstract: The widespread adoption of artificial intelligence (AI) across mission-critical systems, Internet of Things (IoT) environments, and microelectronics-enabled infrastructures has substantially expanded the cyber-attack surface. Traditional perimeter-based security models are no longer adequate to protect modern AI systems that are highly distributed, data-centric, and continuously evolving. Zero Trust Architecture (ZTA), grounded in the principle of "never trust, always verify," offers a robust, adaptive security paradigm to address these challenges. Accordingly, it is essential to examine the role of Zero Trust as a foundational enabler of secure and trustworthy AI technologies. Our study analyzes the application of Zero Trust principles to AI-enabled enterprise systems, cloud-native platforms, and large language models (LLMs) across the AI lifecycle, including data ingestion, model training, inference, and deployment. Emphasis is placed on least-privilege access, continuous verification, micro-segmentation, and data integrity. The findings demonstrate that integrating Zero Trust into AI architectures enhances system resilience, mitigates adversarial threats, and enables secure AI deployment within IoT and microelectronics-driven environments.
Program Director, Workforce Development
SEMI Foundation
Date: March 25, 2026
Abstract: As the semiconductor industry continues to grow, the challenge is no longer simply raising awareness of careers in the field. The real challenge is helping individuals successfully progress from initial exposure to meaningful participation in the workforce. Too often, workforce efforts operate as isolated programs rather than as part of a coordinated pathway. In this session, the SEMI Foundation will highlight how its portfolio of student engagement, career exploration, training, and apprenticeship initiatives works together to support the semiconductor talent pipeline. Underpinning these efforts is the Beyond Awareness framework, an operating model that helps map how individuals move from discovery and curiosity to exploration, training, and ultimately workforce entry. Participants will see how this framework helps connect SEMI Foundation programs across the talent journey, strengthen handoffs between experiences, and provide a clearer way to understand workforce impact. By aligning industry, education, and workforce partners around a shared structure, the SEMI Foundation is helping the semiconductor ecosystem build a more visible, coordinated, and scalable pathway from awareness to careers.
Bio: Mike Glavin is Program Director of Workforce Development at the SEMI Foundation, where he leads global, employer-driven strategies to strengthen the semiconductor talent pipeline. He oversees SEMI's designation as a U.S. Department of Labor National Registered Apprenticeship Group Sponsor and works with member companies, education systems, and public partners across regions to scale apprenticeship models, build career pathways, and align workforce systems with industry demand. His work supports long-term sector growth in a globally strategic industry. Previously, Mike served as Vice President of Talent at the Greater Cleveland Partnership, leading regional workforce initiatives to expand apprenticeships, internships, and work-based learning across Northeast Ohio, supported by U.S. Department of Labor funding. Earlier, he held leadership roles at Associated Builders and Contractors, advancing apprenticeship policy and industry-led training models. He holds a Bachelor of Science from the University of Richmond and has built his career at the intersection of business, education, and public policy, focused on scalable workforce solutions that enhance economic competitiveness.
ASIC/SoC Micro-architectural and Security Engineer with Meta
Date: March 25, 2026 (Online)
Abstract: The integration of diverse, specialized processing units within high-performance Heterogeneous System-on-Chip (HSoC) architectures has significantly expanded the micro-architectural attack surface. As performance requirements necessitate the extensive sharing of hardware resources, components such as Network-on-Chip (NoC) fabrics, PCIe interconnects, and complex multi-level cache hierarchies have emerged as primary vectors for side-channel leakage and cross-domain exploitation. Centering on the inherent "security-performance trade-off" in modern silicon, this talk examines how resource contention and interconnect congestion can be leveraged to bypass traditional isolation boundaries, and presents countermeasures for securing the next generation of energy-efficient, high-performance computing platforms.
Bio: Usman Ali, Ph.D. is a researcher and engineer in high-performance computer architectures and hardware security, with a focus on micro-architectural security in heterogeneous systems-on-chip (SoCs). His work investigates vulnerabilities in Network-on-Chip (NoC), interconnects (PCIe), memory subsystems, and cache hierarchies, and develops innovative defenses to secure high-performance computing platforms. Dr. Ali holds a Ph.D. in Electrical Engineering from the University of Connecticut and has published in top-tier hardware security and computer architecture venues, including IEEE HOST, SEED, and ICCD. His research advances the state of the art in secure, energy-efficient heterogeneous SoC designs, providing insights into both attacks and countermeasures. Dr. Ali's work bridges academia and industry, shaping secure and high-performance computing architectures that influence both research and practical systems design.
Device Engineer at Intel
Date: March 25, 2026 (Online)
Abstract: Advances in microelectronics, heterogeneous integration, and AI-enabled hardware have intensified the need for robust mechanisms that ensure assurance, reliability, and trust across device lifecycles. As fabrication processes grow more complex and supply chains expand, semiconductor technologies face heightened risks related to performance variability, reliability degradation, and potential hardware manipulation, concerns that underscore the need for secure, trustworthy microelectronics infrastructures. This talk outlines how combining rigorous device-level reliability analysis with assurance practices strengthens the foundation of trusted microelectronics. Drawing from real examples of compromised hardware in communication systems and demonstrated hardware Trojan insertions, the talk illustrates vulnerabilities that can emerge without strong assurance principles. By integrating reliability engineering, electrical characterization, and supply chain trust concepts, this session offers a unified perspective on building predictable, secure, and resilient microelectronic systems that support next-generation IoT, edge, and critical infrastructure technologies.
Bio: Dr. Darpan Verma is a device engineer at Intel, specializing in device characterization, electrical reliability analysis, and the interaction between fabrication processes and device performance. His work focuses on translating experimental electrical and structural measurements into engineering insights that guide the development of advanced logic and memory technologies. He earned his Ph.D. in Materials Science and Engineering from The Ohio State University, where he investigated wide bandgap (GaN, beta-Ga2O3) and 2D semiconductor devices. His doctoral research included developing a photocurrent-based electric field mapping methodology that enabled direct visualization of electric field distributions using the Franz-Keldysh effect, opening pathways to analyze reliability limits and breakdown mechanisms in power and optoelectronic devices without damaging them in the process. Before his Ph.D., Dr. Verma taught undergraduate electronics courses and mentored students, and he remains passionate about supporting early-career researchers and engineers as they navigate technical and professional development paths.
Technical Manager, Fazeal
Ph.D. Candidate, Computer Science and Engineering
Wright State University, Dayton, OH
Date: March 25, 2026 (Online)
Abstract: Malware obfuscation techniques such as encryption, polymorphism, metamorphism, packing, and control-flow transformations significantly challenge traditional signature- and heuristic-based detection systems. These techniques enable adversaries to evade both static and dynamic analysis tools, particularly in resource-constrained IoT and embedded environments. This talk presents emerging AI-assisted approaches for detecting and mitigating obfuscated malware by integrating deep learning, behavioral analytics, and explainable AI (XAI). It will demonstrate how large-scale, synthetically generated obfuscated malware datasets can be leveraged to train models that identify structural, semantic, and behavioral anomalies rather than relying solely on surface-level signatures. Additionally, the talk will explore AI-driven deobfuscation techniques aimed at reverse-engineering transformed code patterns and uncovering malicious intent. Finally, practical examples will be presented to illustrate how AI-enhanced detection frameworks improve resilience against evolving obfuscation strategies while providing interpretable insights into malware behavior, ultimately supporting trusted and secure IoT system deployment.
Bio: Khaled Saleh is a technology leader with expertise in software development, enterprise architecture, and secure payment systems. He currently serves as Technical Manager at Fazeal, where he leads platform development, product strategy, and infrastructure decisions to build scalable, secure, and user-friendly systems. He previously worked as a Senior Software Engineer at Abercrombie & Fitch and has extensive experience in payment processing, tokenization, system integration, and backend architecture. He is a Ph.D. candidate in Computer Science and Engineering at Wright State University and holds a Master's degree in Computer Science from Franklin University. His technical expertise includes Java, REST APIs, databases, DevOps, and enterprise commerce platforms.