R&D Lab »Data Analysis and Artificial Intelligence«

In the research and development lab »Data Analysis and Artificial Intelligence«, the research team is dedicated to the application of artificial intelligence and to the challenges that arise from it.

We specifically address the most important questions in five focal areas:

Focus »HPC Support for AI«

The central challenge of current computing infrastructures for training large AI models is GPU and cross-node parallelization as well as the efficient provision of data. At the intersection of computer science and mathematics, we intensify this research through closer networking with other working groups at TU Kaiserslautern and strengthen its industrial implementation.

The focus here is on increasing the performance of current deep learning methods by combining innovative HPC methods, file systems and new algorithmic approaches.
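
As a simplified illustration of this kind of multi-GPU, multi-node parallelization, the following sketch shows data-parallel training with PyTorch's DistributedDataParallel. The model, synthetic dataset, and hyperparameters are placeholders, not a specific project setup.

```python
# Minimal sketch of data-parallel training across GPUs/nodes with PyTorch DDP.
# Model, dataset and hyperparameters are illustrative placeholders.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group(backend="nccl")             # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])          # gradients are synchronized across ranks
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = torch.nn.CrossEntropyLoss()

    # Synthetic data; the DistributedSampler gives each rank a distinct shard.
    data = TensorDataset(torch.randn(4096, 128), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)                          # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()                               # gradient all-reduce happens here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```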

Focus »Explainability / Reliability of AI Systems«

Trustworthiness is a prerequisite for people and societies to develop, deploy, and use AI systems. AI systems are developed fundamentally differently than classical software-based systems. This is where the established methods and techniques of classical systems and software engineering reach their limits. We are working on innovative approaches to make AI systems more explainable and reliable.
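
One building block of such reliability work is the evaluation of model uncertainty: disagreement between independently trained models can serve as a rough uncertainty signal. The sketch below is a minimal example of this idea; the data and models are synthetic placeholders.

```python
# Minimal sketch: estimating predictive uncertainty with a small model ensemble.
# Data and models are synthetic placeholders, not a production setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test = X[:800], X[800:]
y_train = y[:800]

# Train several identical models with different random seeds.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed).fit(X_train, y_train)
    for seed in range(5)
]

# Mean probability = prediction; spread across members = uncertainty signal.
probs = np.stack([m.predict_proba(X_test)[:, 1] for m in ensemble])
prediction = probs.mean(axis=0)
uncertainty = probs.std(axis=0)

print("most uncertain test samples:", np.argsort(uncertainty)[-5:])
```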

Focus »Data-driven Modeling & Analysis of Big Data«

From the initial business idea to the deployment of AI systems in a production environment, many challenges await companies. Such projects require the interplay of various competencies, such as Big Data and high-performance computing infrastructures, data analytics, machine learning, and software development. We bundle our expertise in these topics and support companies on their way to AI-based production.

Focus »Simulation and Machine Learning«

In order to predict and understand systems, they need to be modeled. Machine learning and simulation are fundamental technologies in this context. The interplay of the two approaches enables novel simulation capabilities, which we develop further in this focus area.
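
A typical instance of this interaction is surrogate modeling: an expensive simulation is approximated by a fast learned model that can then be queried many times. The sketch below illustrates the idea; the "simulation" is only a toy analytic function.

```python
# Minimal sketch of surrogate modeling: replace an expensive simulation with a
# fast learned approximation. The "simulation" here is a toy analytic function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # Placeholder for a costly solver call (e.g., a PDE or process simulation).
    return np.sin(3 * x) + 0.5 * x**2

# 1) Run the real simulation on a small set of design points.
x_train = np.linspace(-2, 2, 15).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()

# 2) Fit a surrogate (here: a Gaussian process) on those samples.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
surrogate.fit(x_train, y_train)

# 3) Query the surrogate instead of the simulation, with an uncertainty estimate.
x_query = np.linspace(-2, 2, 200).reshape(-1, 1)
y_pred, y_std = surrogate.predict(x_query, return_std=True)
print("max predictive std dev:", y_std.max())
```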

Focus »Embedded AI & Neuromorphic Computing«

Embedded systems must meet a wide range of requirements in terms of latency, energy efficiency, and performance. In order to find a suitable AI model, we are conducting research into Neural Architecture Search (NAS) to generate optimal models for both classic hardware and neuromorphic systems. 
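
In a very reduced form, such a multi-criteria search can be sketched as scoring candidate architectures on both predictive quality and a hardware-related cost such as parameter count. The search space, data, and weighting below are illustrative assumptions.

```python
# Minimal sketch of multi-criteria neural architecture search (NAS):
# random search over a tiny MLP space, trading off accuracy against model size.
# Search space, data and weighting are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
best = None
for _ in range(10):
    # Sample a candidate architecture: 1-3 hidden layers, width 8-64.
    layers = tuple(int(rng.integers(8, 65)) for _ in range(int(rng.integers(1, 4))))
    model = MLPClassifier(hidden_layer_sizes=layers, max_iter=300, random_state=0)
    model.fit(X_tr, y_tr)

    accuracy = model.score(X_val, y_val)
    n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
    score = accuracy - 1e-5 * n_params   # multi-criteria: quality vs. size (a rough latency proxy)

    if best is None or score > best[0]:
        best = (score, layers, accuracy, n_params)

print("best architecture:", best[1], "accuracy:", round(best[2], 3), "params:", best[3])
```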

In the research lab »Data Analysis and Artificial Intelligence«, we bundle our competencies in Big Data, High Performance Computing, Machine Learning and AI.

Our Research on Data Analysis and Artificial Intelligence at a Glance:

  • HPC support for AI / machine learning / deep learning
    • Scalable methods (e.g., multi-GPU, multi-node)
    • Efficient training / inference
    • Support for heterogeneous hardware (HPC to edge computing)
  • Explainable / reliable AI
    • Robust methods
    • Definitions and evaluation of specific quality aspects of AI and big data systems
    • Evaluation of model uncertainty
    • Validation of AI-based systems
    • Generation of test data
  • Efficiently learning AI
    • Use of continual learning to reduce training times and combat catastrophic forgetting
  • ML and simulation 
    • Simulation-based training
    • Surrogate modeling
    • Augmented vision 
  • Data-driven modeling & analysis of large data sets
    • Analysis of the potential of applied AI and big data projects
    • Data preparation (preprocessing and feature engineering)
    • Supervised and unsupervised anomaly detection
  • Embedded AI & neuromorphic computing
    • Multi-criteria neural architecture search (NAS)
    • Neuromorphic computing systems and algorithms 
    • Embedded AI systems 

Cooperations

We work together with BioNTech, Governikus, proALPHA and TRON, among others.