Health Systems Initiative
Decision-grade intelligence for safe, scalable AI in healthcare
Evidence-based research translating advances in AI and next-generation health technologies into actionable guidance for healthcare leaders and policymakers across the Euro-Mediterranean and the MENA regions
UPM Innovation is a research think tank focused on the safe, scalable integration of AI and next-generation technologies across health systems.
We move beyond model performance to assess deployment readiness, governance, interoperability, infrastructure, and measurable clinical value.
Our work supports healthcare leaders, policymakers, research partners, and technology stakeholders across Europe, the Mediterranean, and MENA.
AI and Robotics in Cancer Diagnostic Imaging
A flagship publication on the infrastructure, governance, and deployment conditions required for trustworthy AI in cancer diagnostics.
Medical imaging · Clinical AI infrastructure · Governance & deployment
AI in Diagnostics & Medical Imaging
We assess the clinical, technical, and organisational conditions required to deploy AI safely and effectively across diagnostic pathways. Our work moves beyond model performance to workflow integration, reproducibility, auditability, and measurable value in real imaging environments.
Why it matters: medical imaging is one of the clearest paths from AI development to high-impact clinical deployment.
Assistive Robotics & Human-Robot Interaction
We examine how assistive robotics can be integrated responsibly into care settings, with close attention to safety, human oversight, operational workflows, and professional practice. The focus is on systems that are usable, trusted, and aligned with real clinical and institutional needs.
Why it matters: robotics in healthcare will scale only when technical capability and human trust are designed together.
Clinical AI Infrastructure & Compute
We analyse the infrastructure, compute, and deployment choices required to support resilient digital health systems and production-grade clinical AI. This includes scalability, interoperability, privacy-preserving architectures, and the operational foundations needed to move from pilots to dependable implementation.
Why it matters: production AI in healthcare depends on secure, scalable infrastructure as much as on model quality.
Digital Trust & Health Data Systems
We study the trust architectures required for secure, interoperable, and accountable health data ecosystems. Our research focuses on data governance, consent, digital identity, secure exchange, and institutional trust as core conditions for scalable innovation across health systems.
Why it matters: trusted data systems are the foundation for cross-institutional AI and long-term adoption.
UNIVERSITÉ POUR LA MÉDITERRANÉE
A platform for scientific cooperation and innovation capacity across Euro-Med
UPM Innovation is anchored within the Université pour la Méditerranée (UPM), an independent academic consortium that provides its institutional foundation. Established in 2015, UPM convenes researchers, clinicians, public institutions, and strategic partners to strengthen cross-border scientific cooperation and translate evidence into real-world impact.
Working across Europe and the Mediterranean, UPM advances applied, policy-relevant collaboration in health, science and technology, and ethics, building the partnerships and implementation pathways needed to scale innovation responsibly.

At a glance
Founded in 2015 as an independent academic consortium
Focus areas: health, science & technology, ethics, and innovation capacity
Active across the Euro-Mediterranean region and neighbouring systems
Host institution of UPM Innovation (research think tank)
Governance & Legal Documents
View UPM’s core legal and governance documents.
Mission
To strengthen scientific cooperation across the Euro-Mediterranean region by building durable research partnerships and translating evidence into guidance that improves health outcomes, innovation capacity, and quality of life.
We focus on practical pathways from research to adoption: governance, implementation, and institutional readiness.
We aim to support institutions with research, convening, and decision-grade analysis that strengthen responsible adoption in practice.
Areas of Focus
UPM advances interdisciplinary research and cooperation at the intersection of healthcare, science and technology, and ethics.
Priority themes include responsible innovation, the social and governance dimensions of scientific progress, and the conditions required for trustworthy adoption: patient safety, interoperability, and public trust.
How we work
UPM delivers its mission through convenings and expert working groups, collaborative programmes, and evidence-based outputs. We partner with universities, research centres, healthcare institutions, and public authorities to support rigorous, policy-relevant work—linking research excellence to implementation pathways and measurable impact.
Outputs: Research reports • Policy briefs • Expert dialogues • Partnership briefs
OUR TEAM
Leadership, Research Team, and Scientific Advisors
We bring together clinical leaders, researchers, and technical advisers to support evidence-based research and practical guidance on AI-enabled healthcare innovation. The Scientific Committee provides academic and clinical oversight, ensuring methodological rigour and decision-grade outputs.
LEADERSHIP

Véronique Bouté, MD
First Secretary General, UPM
Hospital radiologist and breast imaging specialist based in a cancer treatment centre, with a focus on diagnostic pathways and clinical quality. President of Astarté (Trans-Mediterranean Association Women and Breast Cancer). She supports programme execution at the intersection of clinical adoption, patient communication, and the practical governance requirements needed for real-world deployment.

Khaled Meflah, MD
President, UPM
Professor of biochemistry and cancer researcher specialising in resistance to programmed cell death. Former Chief Executive Officer of the Centre François Baclesse, he has led major oncology research networks, including the Cancéropôle du Grand Ouest. He also chairs the ARCADE Association and was appointed Knight of the National Order of the Legion of Honour (2021).

Yannis Constantinidès, PhD
Second Secretary General, UPM
Philosopher, agrégé, and PhD in philosophy, trained at the École Normale Supérieure. Trainer in medical ethics and member of GREM. He supports structured ethical analysis that can be operationalised in clinical and institutional settings, focusing on professional responsibility, decision frameworks under uncertainty, and governance principles that remain robust as technologies evolve.

Kinda Chebib, MSc
Head of Research, UPM Innovation
Kinda Chebib founded UPM Innovation and leads its research agenda on clinical AI, diagnostics, and health data systems, integrating bioethics through GREM’s ethics reflection work. She produces decision-grade analyses and guidance for healthcare leaders and partners, translating emerging technologies into practical deployment, governance, and integration requirements. She has extensive experience researching emerging technologies and previously served the French Government in economic and defence-related roles, bringing a policy, security, and implementation lens to health-system innovation.
Scientific & Technical Advisors

Michel Rastbeen, PhD
Head of Academic Relations
Michel Rastbeen is the Founder and President of the Paris Academy of Geopolitics and serves as Director of the Scientific Council of Géostratégique. He leads UPM Innovation’s academic relations by strengthening partnerships with universities, research centres, and scholarly networks, supporting academic collaborations, convenings, and publication pathways. His work helps ensure UPM Innovation’s research programmes are connected to high-quality academic ecosystems across the Euro-Mediterranean region and beyond.

Alexis Chebib, MD
Vice President, UPM
Anas Alexis Chebib is Chief of Radiology, recognised for expertise in breast imaging and cancer diagnosis. He founded the Université pour la Méditerranée (UPM) consortium, strengthening cross-border scientific cooperation and the translation of research into societal benefit across the Euro-Mediterranean region. Mandated by UNESCO to contribute to research and reflection in bioethics, he focuses on clinical governance and the ethical integration of emerging technologies in healthcare practice.
Researchers

Hazem Wannous, PhD
Researcher, Robotics & AI
Hazem Wannous is a Full Professor of Computer Science at IMT Nord Europe and a researcher at CRIStAL Laboratory (UMR CNRS 9189). His work focuses on robotics and AI systems for real-time perception and decision-making, including computational architectures and high-performance methods relevant to medical imaging, human–robot interaction, and large-scale data infrastructures. He contributes technical expertise on deployable AI—model efficiency, latency, and system integration—supporting research that moves from algorithmic performance to operational clinical environments.

Dominique Gros
Clinical Adviser (Oncology & Senology)
Dominique Gros is an oncologist and senologist, and an author whose work explores the relationship between cancer, the body, and lived experience. His publications include Cancer du sein : Entre raison et sentiments (French edition, 2009), reflecting on how breast cancer extends beyond clinical care into questions of meaning, truth, and resilience. He contributes a clinically grounded, human-centred perspective that strengthens UPM Innovation’s ability to translate complex scientific and ethical issues into clear, patient-relevant communication.

Carine Segura, MD
Oncology, clinical research, and patient pathways
Physician in a cancer treatment centre, specialising in breast oncology and clinical research. Secretary General of Astarté and board member of the French Society of Psycho-Oncology. She contributes an on-the-ground view of care delivery and evidence generation, helping align research with clinical workflows, patient experience, and the practical requirements for adoption in oncology services.
Scientific & Technical Advisors
UPM Innovation’s Scientific & Technical Advisors provide independent guidance on research standards and technical assumptions, helping ensure outputs remain rigorous, reproducible, and usable in real-world health-system settings. Their oversight spans programme framing, evidence thresholds, deployment constraints, data governance, workflow integration, and auditability, with a focus on responsible clinical adoption across institutions and partners.

Dominique Goutte
Scientific & Technical Advisor
Dominique Goutte serves as Vice-President for Economic Development, Research, and Higher Education for Caen la Mer Urban Community, supporting the translation of research into implementation through regional innovation and institutional coordination. Associated with CEA, he contributes an adoption and delivery lens across partnerships, governance, and scale-up constraints. His work helps align research outputs with the practical conditions required for operational viability, sustainability, and cross-sector deployment.

Pascal Buleon
Scientific & Technical Advisor
Pascal Buleon is a CNRS Research Director and Director of the CNRS MRSH at the University of Caen Normandy. He brings expertise in research governance, academic structuring, and methodological standards, supporting programme design and evidence thresholds across interdisciplinary workstreams. He strengthens the rigour and credibility of outputs, ensuring they are clearly framed, reproducible, and suitable for publication and institutional decision-making.
RESEARCH PROGRAMMES
Research programmes for scalable, trusted clinical AI
Research Approach
From model performance to system performance
UPM Innovation structures its research around the conditions required for safe, scalable integration of AI and advanced computational systems into clinical environments. Our work focuses not only on technical capability, but on the governance, infrastructure, interoperability, and institutional conditions required for safe and sustainable deployment.
Rather than assessing algorithms in isolation, we analyse how technologies perform within real-world health systems, where regulatory frameworks, workflow integration, data constraints, compute capacity, and organisational readiness determine whether innovation can scale responsibly and deliver measurable impact.
Through evidence-based publications and strategic analysis, we provide decision-grade guidance for healthcare leaders, regulators, and institutional partners across Europe, the Mediterranean, and MENA.
Clinical AI & Deployment Architecture
From validation to system integration
We examine the full lifecycle of clinical AI deployment, from model validation and bias mitigation to workflow integration, auditability, and continuous monitoring in hospital environments. Our research identifies the technical, organisational, and regulatory conditions required for systems to scale safely across institutions.
Beyond algorithmic performance, we analyse infrastructure capacity, cybersecurity resilience, human oversight mechanisms, procurement models, and alignment with regulatory standards. The objective is to move from isolated pilot projects toward structured, durable implementation at system level.
This approach underpins our flagship research on artificial intelligence and robotics in cancer diagnostic imaging, which explores how image-guided diagnostic infrastructures must evolve to support accountable and reproducible clinical performance. Read further →
The goal is dependable clinical use: clearer accountability, safer workflows, and measurable impact on care pathways. We translate evaluation into deployment requirements—monitoring, audit trails, and workflow integration—so AI supports clinicians under real constraints of time, workload, and risk.
Digital Trust, Compute & Health System Resilience
Secure foundations for scalable clinical AI
Sustained deployment of clinical AI depends on trusted digital foundations. We study secure data-sharing architectures, digital identity and consent frameworks, interoperability standards, and governance models that enable cross-institutional collaboration while safeguarding patient rights and public trust.
Our work also examines the computational foundations of modern clinical AI, including efficient deep learning, privacy-preserving methods, distributed and edge deployment strategies, and infrastructure choices affecting reliability, cost, sustainability, and scalability.
By analysing how health ecosystems transition from fragmented digital initiatives to coordinated, interoperable networks, we identify the institutional capabilities required to maintain performance, resilience, and accountability over time.
We connect governance to the realities of performance at scale: secure data movement, predictable inference, and continuous monitoring across heterogeneous compute. This makes clinical AI resilient in production, whether deployed in data centres, private cloud, or hospital edge environments.
BIOETHICS
GREM: Bioethics and governance for clinical AI and emerging technologies
Clinical bioethics
Ethics for complex care and clinical decision-making
UPM Innovation’s bioethics research and advisory capability is led through GREM (Groupe de Réflexion sur l’Éthique en Méditerranée), hosted by the Université pour la Méditerranée (UPM). GREM links ethical analysis to operational realities, including accountability, transparency, and patient trust, so that governance supports real-world deployment.
We examine how clinical decisions remain safe and consistent in high-uncertainty contexts, including end-of-life care, consent in complex therapeutic pathways, vulnerability and autonomy, and emerging interventions in reproductive medicine. The focus is practical: decision frameworks that clinicians and institutions can apply as technology and care pathways evolve.
Responsible AI in diagnostics
Governance for AI and robotics in medical imaging
In close collaboration with UPM Innovation, GREM examines the governance questions raised by AI and robotics in diagnostic imaging, especially in oncology—where models influence triage, detection, reporting, and downstream clinical decisions.
We focus on what makes clinical AI deployable at scale: representativeness of training data, bias and performance drift, transparency appropriate for clinical use, auditability, and clear allocation of responsibility across clinicians, institutions, and developers. We also study oversight in real settings, including workflow integration and monitoring that maintains safety as protocols, populations, and imaging systems evolve.
Digital health systems and data governance
Ethics of digital health systems, data infrastructures, and scale
Beyond imaging, GREM assesses the ethical implications of the digital foundations that clinical AI depends on: data-sharing architectures, identity and consent models, interoperability, cybersecurity boundaries, and the governance of cross-institutional collaboration.
We connect infrastructure choices to clinical and operational outcomes—how privacy, security, reliability, and accountability change across edge-to-cloud deployments, distributed processing, and accelerated clinical AI pipelines in high-performance compute environments. Where relevant, GREM contributes to institutional deliberations and partner requests, including workstreams supported by UNESCO.
