Submission Information

For decades, experts in computer science and medical informatics have explored machine learning techniques to harness data in ways that could propel clinical medicine forward. Recent breakthroughs in machine learning (including advances in theory, methods, and tools), together with the growth of digital health technologies such as electronic health records (EHRs), wearable devices, mobile health apps, and public datasets, and the involvement of tech-savvy clinicians, have paved the way for significant progress in applying machine learning to healthcare.

Achieving this vision, however, requires overcoming a number of challenges:

(i) handling complex data types (such as images, sensor data, and patient records that include both structured and unstructured data collected at irregular intervals);

(ii) delivering actionable insights (including support for decision-making and robust causal analysis of intervention effects); and

(iii) examining the clinical, social, and technical interactions of ML models with healthcare workflows, which is essential for understanding the broader impact of ML and AI in healthcare.

For machine learning to truly fulfill its potential in healthcare, collaboration among technical researchers, clinicians, and social scientists is crucial. This collaboration is necessary to identify key problems, curate relevant datasets, and validate findings to ensure solutions work effectively in practice. While machine learning has made progress in handling complex data, much more remains to be done, especially in transitioning from predictive and generative models to practical tools that positively impact clinical decision-making.

The Machine Learning for Healthcare Conference (MLHC) serves as a leading venue dedicated to this dynamic intersection. Since its inception, MLHC has brought together thousands of researchers in machine learning and clinical fields to share pioneering work (archived in the Proceedings of Machine Learning Research) and foster new partnerships.

While it is impossible to enumerate every conceivable problem of interest to MLHC, MLHC’s guiding principle is that accepted papers should provide important new generalizable insights about machine learning in the context of healthcare.

MLHC invites submissions to a full, archival Research Track and a non-archival Clinical Abstracts Track. Accepted submissions in either track will be presented at the conference, and for both tracks at least one author is required to attend should the work be accepted.

Submissions to both the Research Track and the Clinical Abstracts Track are received through OpenReview:
https://openreview.net/group?id=mlforhc.org/MLHC/2025/Conference

Hence, all submitting authors are required to have active OpenReview profiles by the submission deadlines.

MLHC 2025 TIMELINE

  • OpenReview account creation deadline: March 20th, 2025 (If you do not already have an OpenReview account, please register by this date; otherwise, it cannot be guaranteed that your account will be activated in time.)

  • Pre-submission intent deadline (Research and Clinical Abstracts Tracks): April 4th, 2025

  • Full submission deadline (Research and Clinical Abstracts Tracks): April 11th, 2025

  • Review period: April 11th – May 14th, 2025

  • Author Rebuttal period: May 26th – June 9th, 2025

  • Reviewer - AC discussion: June 9th – June 23rd, 2025

  • Paper decision notifications: July 3rd, 2025

  • Conference dates: August 15th–16th, 2025

RESEARCH TRACK

The Research Track is organized around three main research themes:

(i) Novel methods that tackle fundamental problems arising in healthcare data, including predictive modeling, generative AI, sparsity, multimodal data, class imbalance, temporal dynamics, distribution shift across populations, fairness, and causal inference.

(ii) Experimental design, validation studies, and pilot evaluations of machine learning solutions integrated into clinical practice or workflows, including work assessing socio-technical challenges. This theme covers studies that explore new ML solutions and measure their clinical and operational impact and their effects on equity in the community; insightful evaluations of existing ML methods with results of interest to the community; in-vivo analyses of systems deployed in the wild; and research that examines the social and technical interactions of ML models with healthcare stakeholders, including patients, clinicians, and organizational leaders. Note that MLHC emphasizes contributions that provide generalizable insights about machine learning in health: tool-specific analyses without such insights are discouraged.

(iii) Benchmark and reproducibility studies, including new datasets or replication studies, i.e., evaluation studies that apply previously proposed methods to assess whether results consistent with the original work can be obtained. Survey papers that simply summarize existing methods will not be accepted. Please contact the organizers prior to submitting within this theme to ensure that your paper is within scope and is reviewed under the appropriate track.

MLHC is not tailored to evaluating machine learning for purely biological problems, though submissions with translational impact will still be considered; feel free to contact the program chairs at organizers@mlforhc.org if you are unsure whether your submission qualifies.

Additional Context for Clinicians: We realize that conferences in medicine tend to be abstract-only, non-archival events. This is not the case for MLHC: to be a premier health and machine learning venue, all Research Track papers submitted to MLHC will be rigorously peer-reviewed for scientific quality, which requires a suitably complete description of the work. We call for submissions that describe the problem, cohort, features used, methods, results, etc. Multiple reviewers will provide feedback on each submission. If accepted, you will have the opportunity to revise the paper before submitting the final version. If you wish to submit a shorter, non-archival paper, see the Clinical Abstracts Track below.

Additional Context for Computer Scientists: MLHC is a machine learning conference, and we expect submissions of the same level of quality as those that would be sent to a conference, rather than a workshop.

Research Track Review Process

All Research Track submissions will be rigorously peer-reviewed by both clinicians and ML researchers, with an emphasis on what generalizable insights the work provides about machine learning in the context of healthcare.

At least one author from each submission will be required to review, similar to policies from other machine learning conferences. Reviewing for MLHC is double-blind: the reviewers will not know the authors’ identity and the authors will not know the reviewers’ identity.

Research Track submissions undergo double-blind peer review following an initial editorial screening. Preliminary desk rejections will be based on severe formatting violations (including the use of LLM-generated content), irrelevance to MLHC topics of interest, or a determination by the program committee that the quality of the contribution is not on par with MLHC.

To facilitate the review process, there is a pre-submission deadline one week prior to the full submission deadline, at which the title, author list, and a one-paragraph abstract are due. The text of the abstract can change between the pre-submission and full submission deadlines, as long as the major intent remains the same.

Research Track Format

  • Please use the full paper LaTeX files available [here]. The example paper in the file pack contains sample sections. The margins and author block must remain the same, and all papers must be in 11-point Times font. Further, you must include the Generalizable Insights section in the introduction (a hypothetical skeleton is sketched below, after these formatting requirements).

  • Papers should be between 10 and 15 pages (excluding references and appendix); 15 pages is a hard upper limit.

  • MLHC does not allow the use of generative AI, such as large language models (LLMs), to write manuscripts; use of LLMs for copy editing alone is acceptable.

  • Please refer to the submission instructions on our website, including mandatory content and tips on what makes a great MLHC paper.

  • Papers must be submitted blinded and completely anonymized. Do not include your names, your institution’s name, or identifying information in the initial submission. While you should make every effort to anonymize your work — e.g., write “In Doe et al. (2011), the authors…” rather than “In our previous work (Doe et al., 2011), we…” — we realize that a reviewer may be able to deduce the authors’ identities based on the previous publications or technical reports on the web. This will not be considered a violation of the double-blind reviewing policy on the author’s part.

Violations of these policies are grounds for desk rejection.
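
As a rough, hypothetical illustration only (the class name, packages, and section headings below are placeholders rather than the official commands; always start from the LaTeX files in the template pack linked above), a blinded Research Track submission might be organized along these lines:

    % Hypothetical skeleton; the real class and style files come from the
    % MLHC template pack, which fixes margins, fonts, and the author block.
    \documentclass[11pt]{article}   % stand-in for the official class
    \usepackage{times}              % papers must be in 11-point Times

    \title{Your Anonymized MLHC 2025 Submission}
    \author{Anonymous Authors}      % keep the initial submission fully blinded

    \begin{document}
    \maketitle

    \begin{abstract}
    One-paragraph abstract (also due at the pre-submission intent deadline).
    \end{abstract}

    \section{Introduction}
    % The required Generalizable Insights discussion belongs in the introduction.
    \paragraph{Generalizable Insights about Machine Learning in the Context of Healthcare}
    Summarize what the community learns about ML in healthcare beyond this specific study.

    \section{Methods}
    \section{Cohort}
    \section{Results}
    % References and appendix do not count toward the 10--15 page limit.

    \end{document}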

Research Track Proceedings and Presentations

Accepted submissions will be published through the Proceedings of Machine Learning Research (formerly the JMLR Workshop and Proceedings Track).

Authors of accepted papers will be invited to present a poster on their work at the conference. 

At least one author of each accepted Research Track paper is required to register for and present at MLHC to confirm publication in PMLR.

Publications through PMLR are made open access without an article processing fee.

Research Track Dual Submission Policy

Research that has previously been published in, or is currently under review at, an archival venue may not be submitted. This prohibition concerns only archival publications and submissions; it does not preclude papers accepted at or submitted to non-archival workshops, or preprints (e.g., on arXiv). It is a violation of the dual-submission policy to submit an MLHC Research Track submission to another journal or conference while it is under review at MLHC, or after its acceptance into the MLHC proceedings.

CLINICAL ABSTRACT TRACK

In addition to our main Research Track proceedings, we welcome the submission of clinical abstracts (up to 2 pages) to be presented in a non-archival, abstract track.

Clinical abstracts typically pitch clinical problems ripe for machine learning advances or describe translational achievements.  The first or senior author and presenter of a clinical abstract track submission must be a clinician (often an MD or RN).

The clinical abstract may consist of:

  • Preliminary computational results: we encourage submissions from clinical researchers working with digital health data using modern computational methods; MLHC is a great venue for clinical researchers to brainstorm further analyses with an engaged computational community.

  • Clinical/translational successes: we seek abstracts about data and data analysis that resulted in new understanding and/or changes in clinical practice.

  • Open clinical questions or interesting data sets: we encourage submissions from clinicians and clinical researchers on important directions the MLHC community should tackle together, as well as abstracts describing interesting data sources.

  • Demonstrations: we seek exciting end-to-end tools that bring data and data analysis to the clinician/bedside.

  • Software: abstracts describing processing tools/pipelines tailored to health data. Software demos typically introduce a tool that machine learning researchers and/or clinicians in the community can use. These are often (but not necessarily) open source tools.

Abstracts will not be archived or indexed, but will have the opportunity to be presented as a poster and/or spotlight talk at MLHC. The clinical abstract track is not intended for work-in-progress by primarily computational researchers. Given that this track is designed to engage clinicians, the first or senior author of a clinical abstract must be a clinician (MD, RN, etc.; i.e., your job involves working with patients) or a clinician-in-training (i.e., currently enrolled in an MD or MD/PhD program).

Clinical Abstract Track Format

Clinical Abstract Track submissions should be two pages or less, using the abstract template [linked here].

At least one author from each submission will be required to review, similar to policies from other machine learning conferences.

Clinical Abstract Track submissions must be submitted blinded and completely anonymized. Do not include your names, your institution’s name, or identifying information in the initial submission. While you should make every effort to anonymize your work — e.g., write “In Doe et al. (2011), the authors…” rather than “In our previous work (Doe et al., 2011), we…” — we realize that a reviewer may be able to deduce the authors’ identities based on the previous publications or technical reports on the web. This will not be considered a violation of the double-blind reviewing policy on the author’s part.

Clinical Abstract Track Review Process

Reviewing for MLHC is double-blind: the reviewers will not know the authors’ identity and the authors will not know the reviewers’ identity. All clinical abstracts will be peer-reviewed by one clinician and one computational reviewer. 

Please include sufficient detail to assess technical correctness for a computational review, and fully describe the significance of the submission in healthcare.

Clinical Abstract Track Proceedings and Presentations

Abstracts are non-archival.

Authors of accepted abstracts will be invited to present a poster on their work at the conference.

We expect one of the presenting authors to be a clinician. Please reach out to the organizers in case that is not possible.

Clinical Abstract Track Dual Submission Policy

Work in progress, work in submission, and recently published work are all welcome (as long as you follow the other publication’s rules).

Pilot ‘Abstract to Proceedings’ Program 

This year we are piloting a program to provide the top 25% of accepted Clinical Abstracts with an optional opportunity to submit an enhanced version of the work to the Mayo Clinic Proceedings (MCP).

The MLHC 2025 submission and reviewing process is the same for all Clinical Abstracts; the opportunity to pursue a Mayo Clinic Proceedings (MCP) Digital Health publication will be offered, after the fact, to the top accepted Clinical Abstract submissions.


Please note that, if you take up this opportunity, the abstract will undergo an independent review process with MCP, and acceptance is at the discretion of the MCP Digital Health editorial and peer review team. Manuscripts submitted through this route will need to enhance the content with additional results and insights, and appropriate details will be provided as applicable. Note that the MCP Digital Health editorial and review team will not be involved in the initial stage of the abstract peer review process as part of MLHC 2025. Reviews from the abstract track will not be shared with the MCP Digital Health team.