Workshop: Trusted Artificial Intelligence
Background and goals
Artificial Intelligence is reshaping research, industry, and public services at an unprecedented pace, yet its widespread adoption raises fundamental questions about safety, reliability, transparency, and compliance with emerging regulatory frameworks. The European AI Act and related initiatives place strong emphasis on trustworthiness, requiring systems that are robust, explainable, and aligned with human values, while also mitigating bias and unintended risks across the entire AI lifecycle.
This workshop aims to bring together researchers and practitioners developing AI technologies that are safe, reliable, and compliant with EU regulations on AI usage. It seeks to foster discussion on advanced learning paradigms, methods for interpretability and transparency, and techniques for the responsible customization and deployment of large-scale models. Contributions addressing both foundational research and real-world applications are welcome. The workshop is supported by the Romanian Hub for Artificial Intelligence (HRIA) project, Smart Growth, Digitization and Financial Instruments Program (MySMIS no. 351416).
Relevant topics
Relevant topics include, but are not limited to:
1. Advanced learning techniques
Description: Supervised, self-supervised, unsupervised, and continual learning approaches for processing diverse data types, including images, video, text, voice, and unstructured and numerical data.
Research focus: Learning paradigms that generalize across modalities, adapt to evolving data distributions, and support robust knowledge acquisition in data-rich and data-scarce scenarios.
2. Explainable AI
Description: Creation of interpretable systems that provide transparency in decision-making, enhance understanding of deep learning models, and ensure non-discriminatory outcomes.
Research focus: Interpretability methods for deep architectures, fairness-aware learning, auditability, and techniques that foster trust between AI systems and their users.
3. Large language models
Description: Customization and extension of LLMs with improved safety, domain-specific functionality, bias mitigation, and risk assessment capabilities.
Research focus: Fine-tuning and alignment strategies, domain adaptation, bias detection and mitigation, and methods for systematic evaluation of risks in deployed language models.
Workshop: Intelligent and Autonomous Systems
Background and goals
Intelligent and autonomous systems, ranging from mobile robots and self-driving vehicles to aerial and underwater platforms, are increasingly deployed in complex, dynamic, and human-populated environments. Their effective operation requires tight integration of perception, reasoning, and control, as well as the ability to cooperate with humans and with other autonomous agents. Recent advances in machine learning, computer vision, and simulation have opened new opportunities to design systems that are more capable, adaptive, and transparent.
This workshop aims to bring together researchers working on the perception, learning, and control aspects of intelligent and autonomous systems. It provides a forum for discussing methods that enable autonomous agents to understand their environment, make informed decisions, collaborate with humans, and operate reliably across aerial, terrestrial, and underwater domains. Contributions addressing data generation and validation methodologies that support the development of such systems are also encouraged. The workshop is supported by the Romanian Hub for Artificial Intelligence (HRIA) project, Smart Growth, Digitization and Financial Instruments Program (MySMIS no. 351416).
Relevant topics
Relevant topics include, but are not limited to:
1. Semantic visual perception in autonomous systems
Description: Perception pipelines for robots, vehicles, and drones that extract high-level semantic information from visual data.
Research focus: Scene parsing, object detection and tracking, semantic and instance segmentation, and multi-modal perception for situational awareness in dynamic environments.
2. Machine learning for scene understanding and autonomous control
Description: Learning-based methods that link perception to decision-making and control in autonomous platforms.
Research focus: End-to-end and modular learning architectures, representation learning for scene understanding, and data-driven control policies for autonomous navigation and manipulation.
3. Explainable robot behavior in human-robot collaboration
Description: Techniques that make the behavior of autonomous robots understandable and predictable during collaboration with humans.
Research focus: Interpretable decision-making, transparent motion and task planning, intent communication, and trust-aware interaction in shared human-robot workspaces.
4. Intelligent control in aerial, terrestrial, and underwater systems
Description: Advanced control strategies for autonomous platforms operating in diverse physical environments.
Research focus: Adaptive, learning-based, and model-predictive control; navigation and coordination of aerial, ground, and underwater vehicles; robustness to uncertainty and disturbances.
5. Synthetic dataset generation and validation
Description: Methodologies for creating and validating synthetic datasets to support the development of intelligent and autonomous systems.
Research focus: Simulation-based data generation, domain randomization, sim-to-real transfer, and validation protocols that ensure the quality and representativeness of synthetic data.
Workshop: AI for Medical Imaging and Diagnosis
Background and goals
Medical imaging is undergoing a profound transformation driven by advances in artificial intelligence. Modern clinical practice relies on a heterogeneous ecosystem of imaging modalities — including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound (US), X-ray, digital histopathology, dermatoscopy, retinal imaging, and endoscopy — each capturing complementary anatomical, functional, or molecular information. The ability of AI systems to jointly exploit these modalities, or to generalize robustly across them, has become a central research frontier with direct clinical impact on diagnosis, treatment planning, prognosis, and image-guided interventions.
Recent progress in deep learning, self-supervised learning, vision-language models, and generative AI has opened new opportunities for building medical imaging systems that are more accurate, data-efficient, and clinically trustworthy. Foundation models pretrained on large multi-modal corpora are beginning to provide strong priors that transfer across organs, diseases, and acquisition protocols. At the same time, multi-modal architectures that fuse imaging with electronic health records, genomic data, and clinical text enable integrative patient-level reasoning. Generative models — including diffusion and latent diffusion models — support image synthesis, cross-modality translation, data augmentation, and privacy-preserving data sharing. Longitudinal and intraoperative imaging, federated learning across hospitals, and explainable AI further extend the reach of machine learning into real clinical workflows.
Despite this momentum, significant challenges remain: scarcity and heterogeneity of labeled data, domain shifts between scanners and institutions, regulatory constraints under the EU AI Act and MDR, the need for clinically meaningful uncertainty quantification, and the integration of AI tools into existing radiology, pathology, and surgical pipelines. Addressing these challenges requires interdisciplinary collaboration between computer scientists, radiologists, pathologists, clinicians, medical physicists, and industry partners.
This workshop aims to bring together researchers, engineers, and clinical practitioners working at the intersection of AI and medical imaging. The session seeks to promote interdisciplinary dialogue on how AI models — whether operating on a single imaging modality or on multi-modal combinations — can be developed, validated, and translated into clinical practice. We welcome contributions from both academia and industry, covering methodological advances, AI-based clinical applications, benchmarks, and deployment experiences. The workshop is supported by the Romanian Hub for Artificial Intelligence (HRIA) project, Smart Growth, Digitization and Financial Instruments Program (MySMIS no. 351416).
Relevant topics
Relevant topics include, but are not limited to:
1. Foundation models for medical imaging
Description: Large-scale pretrained models spanning multiple imaging modalities (CT, MRI, PET, X-ray, US, histopathology, ophthalmic imaging, dental data, dermatology, etc.) that serve as general-purpose backbones for downstream clinical tasks.
Research focus: Self-supervised and masked image modeling on heterogeneous medical corpora, scaling laws for medical data, domain adaptation across scanners and protocols, parameter-efficient fine-tuning, medical-specific tokenization and patch encoding.
2. Vision-language models for radiology and pathology
Description: Models that jointly understand medical images and clinical text, enabling automatic report generation, visual question answering, cross-modal retrieval, and image–report alignment.
Research focus: CLIP-style contrastive pretraining on medical image–report pairs, radiology-specific large multimodal models, ophthalmology-specific VLMs such as FLAIR on fundus image–report pairs, report generation for OCT and panoramic dental radiographs, mitigation of clinical hallucinations, grounded report generation, alignment with structured reporting standards.
3. Multi-modal fusion of imaging with clinical, genomic, and laboratory data
Description: Integrative patient-level models that combine medical images with electronic health records, genomic and transcriptomic data, and laboratory biomarkers for diagnosis, prognosis, and treatment response prediction.
Research focus: Radio-genomics, radio-pathomics, heterogeneous token architectures, early vs. late fusion strategies, missing-modality robustness, interpretability of multi-modal attributions.
4. Cross-modality image translation and synthesis
Description: Generative models that translate between imaging modalities (e.g., CT↔MRI, low-dose↔full-dose CT, MRI↔PET) to enable dose reduction, data augmentation, and protocol harmonization.
Research focus: Diffusion and latent diffusion models, conditional GANs, paired and unpaired translation, anatomical consistency constraints, validation against clinical endpoints.
5. Multi-organ and multi-modal 3D and 2D segmentation
Description: Universal segmentation models that operate across organs, pathologies, and imaging modalities, including promptable and interactive segmentation systems.
Research focus: 2D and 3D segmentation models, promptable and text-driven segmentation, generalization to unseen anatomical structures, few-shot and zero-shot segmentation.
6. Longitudinal and multi-timepoint medical imaging
Description: Models that process temporal sequences of medical images to monitor disease progression, treatment response, and post-surgical follow-up.
Research focus: Change detection, learned image co-registration, temporal transformers and recurrent architectures, tumor growth modeling, neurodegenerative disease trajectories, survival analysis from imaging.
7. Generative models for synthetic data, augmentation, and anonymization
Description: Generative AI as a tool for addressing data scarcity, class imbalance, and privacy in medical imaging.
Research focus: Diffusion and latent diffusion models for medical image synthesis, controllable generation conditioned on clinical attributes, rare-disease augmentation, cross-device harmonization, synthetic data for benchmarking, anonymization via synthesis, evaluation of clinical utility of synthetic data.
8. AI for intraoperative imaging and image-guided interventions
Description: Real-time AI models applied to ultrasound, endoscopy, fluoroscopy, optical imaging (and others) for surgical guidance and interventional procedures.
Research focus: Low-latency inference, real-time segmentation and tracking, registration of pre-operative CT/MRI with intra-operative imaging, workflow analysis, AI for robotic and minimally invasive surgery.
9. Integrated radiology and digital pathology (radio-pathomics)
Description: Models that jointly exploit whole-slide histopathology images and radiological imaging (CT, MRI, PET) for tumor characterization, subtyping, and treatment response prediction.
Research focus: Multi-scale learning, case-level alignment between radiology and pathology, multiple-instance learning, graph-based patient representations, prediction of molecular and clinical endpoints.
10. Benchmarks, datasets, and evaluation for medical imaging AI
Description: Development and critical analysis of datasets, challenges, and evaluation protocols for clinically meaningful assessment of medical imaging AI.
Research focus: Multi-center and multi-modal benchmarks, evaluation beyond pixel-level metrics, clinical utility and reader studies, robustness to distribution shift, reproducibility, data cards and model cards for medical AI.
Important dates
| Event | Date |
|---|---|
| Workshop proposal deadline | 10th May 2026 |
| Paper submission deadline | 15th July 2026 |
| Notification of acceptance | 8th September 2026 |
| Camera-ready submission | 22nd September 2026 |
| Conference dates | 22nd–24th October 2026 |
Submission guidelines
Papers must be submitted through the conference submission system, specifying the corresponding workshop title (“Trusted Artificial Intelligence”, “Intelligent and Autonomous Systems”, or “AI for Medical Imaging and Diagnosis”). Official submission link.