Developing artificially intelligent agents to support earth independent medical capabilities during human exploration-class space missions

Exploration-class missions to the Moon, Mars, asteroids, and beyond present unique medical challenges that mission designers and astronauts do not encounter during low Earth orbit (LEO) spaceflights. During LEO missions to the International Space Station (ISS), for example, astronauts are in constant, instantaneous contact with the Mission Control Center (MCC). Any medical conditions that arise can be discussed in real-time with specialists on the ground, and diagnosis and treatment can be directed or even controlled remotely by MCC. If a serious condition arises that cannot be treated onboard, crewmembers can be evacuated to Earth within 24 h. When medical consumables run low, resupply is possible via regularly scheduled flights, and the mass and volume of medical equipment are generally not issues on the ISS. Once astronauts embark on exploration-class missions, however, many of these capabilities disappear, and the crew must become increasingly independent, relying on available onboard resources. Round-trip communication delays on the way to Mars can be as long as 45 min, negating the possibility of real-time diagnosis and treatment of emergent medical conditions. The distance from Earth will make routine resupply of medical equipment and evacuation for serious medical conditions nearly impossible. Vehicle performance issues will limit mass and volume, and the crew will need to treat medical problems that arise with only the equipment and supplies that have been manifested. This paradigm change in medical care has been labeled “Earth Independent Medical Operations,” or EIMO1.

To enable EIMO, the crew will need to substitute on-board expertise for that currently provided by MCC to respond to urgent medical conditions. Having adequate knowledge, skills, and abilities (KSA) available on-board and in real-time is an essential component of EIMO. Crews will have limited time available for medical training pre-flight, and skills, even for an onboard physician/astronaut, will degrade as the mission progresses. Dating back to the earliest space missions, astronauts have reported needing more time to complete tasks during spaceflight than they required during pre-flight simulations2, as well as a “space fog” that can cloud their judgment3.

Given the challenges described above, new tools will need to be developed to meet them. Advances in artificial intelligence (AI) can augment crew capacity in training, diagnosis, treatment, and medical supply inventory management. These tools may include mixed reality for just-in-time training (JITT) and procedure guidance; AI-driven “chatbots” using large language models that take the place of MCC-provided expertise; and on-board reference materials and manuals. This paper describes advances made by our team in developing such tools.

Imagine you are the crew medical officer (CMO) on-board the first mission to take humans to Mars. You have had some preflight medical training, but you are not a physician. Another crewmember tells you that he is experiencing chest pain. From your training you know that this could be something serious like a heart attack, aortic dissection, pulmonary embolism, or pneumothorax, or something as simple as a muscle strain. You know that it will take 45 min to send a message to mission control and receive a response. How can an artificial intelligence-based system help the CMO diagnose and best manage this condition with the limited resources available? Some of the important qualities for an AI-based system assisting the crew to manage medical problems include4,5,6:

  1. Asks questions, responds, and makes recommendations like a ground-based medical consultant.

  2. Works without external assistance. Communications are not real-time, and internet is not available.

  3. Meets the strict mass, volume, power, and data constraints of exploration missions.

  4. Has a comprehensive database of medical information. The system must be able to guide the crew through common and rare conditions.

  5. Conducts symptom-based dialog. A patient will present with a complaint (e.g., chest pain) rather than a diagnosis (pneumothorax). The AI tools must be able to obtain a history much like a physician would do when first seeing a patient, guide the CMO through physical examination and testing, and provide a management plan.

  6. Optimizes the questions asked and quickly responds to pose the next pertinent question or advise obtaining imaging/lab tests. A history should consist of a series of succinct, logically progressive questions (e.g., “when did the pain start?”, “where is it located?”, “what were you doing when it started?”). A minimal illustrative sketch of this kind of question selection follows this list.

  7. Recognizes emergent conditions that may require treatment before definitive diagnosis. It is critical for the system to know when to go into an “emergency” mode; in cases such as cardiac arrest, resuscitation protocols must be started before the diagnostic process is complete.

  8. Provides procedural guidance (such as echocardiography for chest pain) and just-in-time refresher training when needed.

  9. Speaks at the level of training of the operator. If the CMO is a physician, fewer prompts are needed and the dialog is more technical. The CMO may be the patient, so the system must also be able to communicate with novice operators.

  10. Is always on, with continuous monitoring in the background. The tools must react immediately when a prompt is given. The system should monitor vital signs and other information that may be available from “wearables” and alert the crew to a worrisome trend or an impending medical event.

  11. Provides responses to questions or requests for information. The system should be able to respond instantly to requests for basic medical information (e.g., how often can I take ibuprofen?).

  12. Ideally, works hands-free with voice interaction. A CMO may be busy doing a procedure, drawing blood, examining the patient, etc., and it is highly desirable to be able to communicate with the system hands-free.

  13. Has detailed knowledge of each crewmember’s medical history, both preflight and in-flight. It must also know the available on-board resources with which to treat a medical condition, and it must be able to project the future need for treatment modalities for the remainder of the mission.
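To make needs #5–#7 more concrete, the following is a minimal, illustrative sketch of how a symptom-dialog agent might select its next question by expected information gain over a small differential diagnosis and flag a likely emergent condition. The diagnoses, questions, and probabilities are invented placeholders for illustration only; they are not clinical guidance and do not represent the implementation of the tools described below.

```python
# Illustrative sketch only: a toy symptom-dialog loop for needs 5-7.
# Diagnoses, questions, and probabilities are hypothetical placeholders,
# not clinical guidance and not the authors' actual system.
import math

# Prior probability of each candidate diagnosis for "chest pain".
priors = {"muscle strain": 0.40, "acute coronary syndrome": 0.20,
          "pneumothorax": 0.15, "pulmonary embolism": 0.15,
          "aortic dissection": 0.10}
EMERGENT = {"acute coronary syndrome", "pneumothorax",
            "pulmonary embolism", "aortic dissection"}

# P(answer "yes" | diagnosis) for each candidate question (made-up values).
questions = {
    "Is the pain sharp and worse on inspiration?":
        {"muscle strain": 0.5, "acute coronary syndrome": 0.1,
         "pneumothorax": 0.8, "pulmonary embolism": 0.7,
         "aortic dissection": 0.2},
    "Does the pain radiate to the arm or jaw?":
        {"muscle strain": 0.05, "acute coronary syndrome": 0.7,
         "pneumothorax": 0.05, "pulmonary embolism": 0.1,
         "aortic dissection": 0.3},
}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior, likelihood_yes, answer_yes):
    # Bayes rule: P(dx | answer) is proportional to P(answer | dx) * P(dx).
    unnorm = {dx: prior[dx] * (likelihood_yes[dx] if answer_yes
                               else 1 - likelihood_yes[dx]) for dx in prior}
    z = sum(unnorm.values())
    return {dx: p / z for dx, p in unnorm.items()}

def expected_entropy(prior, likelihood_yes):
    # Average posterior entropy over the two possible answers.
    p_yes = sum(prior[dx] * likelihood_yes[dx] for dx in prior)
    h_yes = entropy(posterior(prior, likelihood_yes, True))
    h_no = entropy(posterior(prior, likelihood_yes, False))
    return p_yes * h_yes + (1 - p_yes) * h_no

def next_question(prior):
    # Greedy choice: the question whose answer most reduces uncertainty.
    return min(questions, key=lambda q: expected_entropy(prior, questions[q]))

def emergent_probability(prior):
    return sum(p for dx, p in prior.items() if dx in EMERGENT)

# One simulated dialog turn.
belief = dict(priors)
q = next_question(belief)
print("CMO, please ask:", q)
belief = posterior(belief, questions[q], answer_yes=True)  # crew answers "yes"
if emergent_probability(belief) > 0.7:
    print("Possible emergent condition; begin stabilization protocol now.")
```

In practice such a loop would cover hundreds of conditions and would interleave question selection with requests for examination findings and tests, but the same ask-update-reassess cycle applies.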

Methods

Agents in Development

Our team has been developing tools for NASA to meet the needs of EIMO and help astronauts independently manage the inevitable medical problems arising on exploration-class missions. The technology behind AI-based Large Language Models (LLMs) holds the potential for true onboard medical expertise emulating the current ground-based capabilities. To this end, we are in the early stages of development of Space Medicine GPT (SGPT), a localized LLM, or Generative Pretrained Transformer (GPT), designed to meet needs #1–7 and #9–13 above. Unlike other chatbots that provide comprehensive answers to specific questions, SGPT employs novel prompt engineering to provide an interactive, question-and-answer environment for reaching a medical diagnosis. To function without internet connectivity, the system utilizes a smaller LLM “distilled” from a larger model. By concentrating only on medical information, its targeted size of less than 50 billion parameters is suitable for deployment on laptops, smartphones, and tablets, which should help meet the mass, volume, power, and data constraints described in need #3. “Vector databases”7 that provide rapid search and retrieval of evidence-based medical resources are built on top of the foundational LLM. Actual conversations recorded during physician-patient encounters can be used to train the model to respond as an expert medical consultant. Once fully developed, the dialog directed by SGPT will hopefully mimic the thought process and data gathering that a clinician would use to manage a crewmember who presents with a symptom such as chest pain (Fig. 1). Through a series of focused questions, SGPT will incorporate well-established Bayesian logic techniques8 to arrive at the most appropriate follow-up question, test, or procedure. When sufficient information is gleaned from the symptom history, the CMO will be asked to perform a physical exam and further tests, such as imaging and laboratory examinations, as needed. The highest priority will be to rule out an emergent condition, such as acute coronary syndrome or tension pneumothorax (need #7), requiring immediate action.
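As a rough illustration of the retrieval layer described above, the sketch below embeds a handful of placeholder reference snippets, stores them as vectors, and returns the passages most similar to a query so that they could be injected into a local model’s prompt. The snippets and the toy bag-of-words “embedding” are assumptions made purely for illustration; an operational system would use a medical-domain embedding model and a proper vector database7.

```python
# Minimal sketch of offline retrieval over an embedded medical reference
# corpus (a stand-in for the vector-database layer; the snippets and the toy
# bag-of-words "embedding" are placeholders, not SGPT's actual stack).
import numpy as np

corpus = [
    "Tension pneumothorax: unilateral absent breath sounds, tracheal shift, "
    "hypotension; treat with immediate needle decompression.",
    "Acute coronary syndrome: substernal pressure radiating to arm or jaw, "
    "diaphoresis; obtain ECG, give aspirin if not contraindicated.",
    "Musculoskeletal chest pain: reproducible with palpation or movement; "
    "manage with rest and analgesics.",
]

def embed(texts, vocab):
    # Toy bag-of-words embedding; a flight system would use a learned
    # embedding model stored onboard.
    vecs = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for j, w in enumerate(vocab):
            vecs[i, j] = t.lower().count(w)
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.clip(norms, 1e-9, None)

vocab = sorted({w.strip(".,:;").lower() for d in corpus for w in d.split()})
index = embed(corpus, vocab)                      # offline "vector database"

def retrieve(query, k=2):
    qv = embed([query], vocab)[0]
    scores = index @ qv                           # cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [(corpus[i], float(scores[i])) for i in top]

for passage, score in retrieve("chest pain with absent breath sounds"):
    print(f"{score:.2f}  {passage[:60]}...")
# Retrieved passages would then be injected into the local LLM's prompt.
```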

Fig. 1: Current Framework. Conceptual framework using AI tools to assist onboard diagnosis and treatment of medical problems arising during Earth Independent Medical Operations.

Once the need for urgent intervention is recognized, SGPT will direct whatever lifesaving measures are required, even if the diagnostic process is incomplete. Per need #9, SGPT will know the knowledge and skills base of the CMO or non-CMO crewmember and speak at a level commensurate with his/her expertise. Likewise, the complete medical history of each crewmember will be stored in memory and available as part of the diagnostic process, as will automatic readings of vital signs obtained through wearables. SGPT will be an always-on, instantly available resource that can also continuously process and monitor data from crew-worn sensors and alert the crew when dangerous readings or trends appear (need #10). Finally, it can act as a conventional reference for routine medical information requests or questions that the crew may have (need #11).
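In its simplest form, the background monitoring role described for need #10 might look like the sketch below: a rolling window per vital-sign channel with limit and trend checks. The channels, thresholds, and alerting rules here are arbitrary examples chosen for illustration, not the flight system’s actual logic.

```python
# Illustrative sketch of background vital-sign monitoring (need #10).
# Thresholds, window sizes, and the alerting rule are arbitrary examples.
from collections import deque

class VitalsMonitor:
    """Keeps a rolling window per channel and flags limit or trend breaches."""

    def __init__(self, window=60):
        self.window = window
        self.history = {}          # channel name -> deque of recent samples
        self.limits = {"heart_rate": (40, 130), "spo2": (90, 100)}

    def ingest(self, channel, value):
        buf = self.history.setdefault(channel, deque(maxlen=self.window))
        buf.append(value)
        return self._check(channel, buf)

    def _check(self, channel, buf):
        lo, hi = self.limits.get(channel, (float("-inf"), float("inf")))
        latest = buf[-1]
        if not lo <= latest <= hi:
            return f"ALERT: {channel}={latest} outside [{lo}, {hi}]"
        # Simple trend rule: mean of the last 10 samples vs the window mean.
        if len(buf) >= 20:
            recent = sum(list(buf)[-10:]) / 10
            baseline = sum(buf) / len(buf)
            if channel == "spo2" and recent < baseline - 3:
                return f"WARNING: downward {channel} trend ({recent:.1f})"
        return None

monitor = VitalsMonitor()
for hr in [72, 74, 71, 135]:           # simulated heart-rate stream
    msg = monitor.ingest("heart_rate", hr)
    if msg:
        print(msg)                      # -> ALERT: heart_rate=135 outside ...
```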

How can AI be used if a CMO needs additional guidance for a recommended medical procedure (e.g., placing an IV or performing an ultrasound examination) or wants refresher training on a particular medical topic (need #8)? Another AI-based tool, the Intelligent Medical Crew Agent (IMCA), has been developed for this purpose. IMCA uses augmented reality (AR) to provide step-by-step guidance for the performance of medical procedures. Content such as imaging, diagrams, checklists, and videos can be projected through crew-worn AR devices, alongside views of the real-world environment. AR scripts for procedures such as pericardiocentesis, chest tube placement, and deep abscess drainage have been developed and can be tailored to the level of an expert or novice. New scripts for additional procedures or for just-in-time training can be rapidly developed using systems modeling language tools9 and then uplinked to the crew.
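One plausible, purely illustrative representation of such a procedure script is a structured step list with expert and novice variants, as sketched below. The field names, steps, and content are hypothetical and do not reflect IMCA’s actual schema, authoring pipeline, or clinical content.

```python
# Sketch of one plausible representation for an AR procedure "script"
# (step list with expert/novice detail); field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Step:
    instruction: str                 # text spoken/overlaid by the AR headset
    novice_detail: str = ""          # extra guidance shown only to novices
    media: list[str] = field(default_factory=list)   # diagrams, video clips
    confirm: str = ""                # what the operator confirms before moving on

@dataclass
class ProcedureScript:
    name: str
    steps: list[Step]

    def render(self, expert: bool):
        for i, s in enumerate(self.steps, 1):
            line = f"{i}. {s.instruction}"
            if not expert and s.novice_detail:
                line += f" ({s.novice_detail})"
            yield line

chest_tube = ProcedureScript(
    name="Chest tube placement (illustrative outline only)",
    steps=[
        Step("Identify the insertion site at the safe triangle.",
             novice_detail="5th intercostal space, anterior to mid-axillary line",
             media=["safe_triangle_diagram.png"]),
        Step("Prep and anesthetize the site.", confirm="site prepped"),
    ],
)
for line in chest_tube.render(expert=False):
    print(line)
```

Because such scripts are plain data, new or revised procedures could be authored on the ground and uplinked without modifying the onboard software.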

A key component of IMCA is the Visual Ultrasound Learning, Control and Analysis Network (VULCAN). Currently, the primary imaging tool available to crews in LEO is ultrasound. VULCAN is designed to monitor operator execution of ultrasound scans and ultrasound-guided procedures and to provide real-time corrective feedback that ensures proper execution and outcomes in situ. Acting as a 3-dimensional mentor, it uses AI/machine learning to guide operators through complex medical procedures in near real-time. The system fuses computer vision and telemetry (e.g., magnetic GPS for spatial precision) to create “closed-loop” procedural guidance with corrective feedback delivered within 300–400 msec. VULCAN thus not only shows operators what to do but also monitors their execution and corrects it as they work.
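The closed-loop behavior described above can be illustrated with a toy control loop that compares an estimated probe pose against a target pose and issues corrective cues within a latency budget. Pose estimation (the fusion of computer vision and magnetic tracking) is stubbed out with random values, and the tolerances and latency budget are assumptions for the sake of the example, not VULCAN’s actual parameters.

```python
# Toy illustration of closed-loop corrective feedback for ultrasound probe
# guidance. Pose estimation is stubbed out with random values; tolerances
# and the 400 ms budget are assumptions, not the real system's parameters.
import random, time

TARGET = {"tilt_deg": 15.0, "rotation_deg": 90.0, "depth_cm": 4.0}
TOLERANCE = {"tilt_deg": 5.0, "rotation_deg": 10.0, "depth_cm": 0.5}
LATENCY_BUDGET_S = 0.4            # feedback must arrive within ~300-400 ms

def estimate_pose():
    # Stand-in for a fused computer-vision / magnetic-tracker pose estimate.
    return {k: v + random.uniform(-8, 8) for k, v in TARGET.items()}

def corrective_cues(pose):
    cues = []
    for axis, target in TARGET.items():
        error = pose[axis] - target
        if abs(error) > TOLERANCE[axis]:
            direction = "decrease" if error > 0 else "increase"
            cues.append(f"{direction} {axis} by {abs(error):.1f}")
    return cues or ["hold position - view acquired"]

start = time.monotonic()
pose = estimate_pose()
cues = corrective_cues(pose)
elapsed = time.monotonic() - start
assert elapsed < LATENCY_BUDGET_S, "feedback missed its latency budget"
for cue in cues:
    print(cue)                     # e.g. "decrease tilt_deg by 6.2"
```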

Challenges and Aspirations

With the assistance of AI-based tools such as SGPT, IMCA, and VULCAN, our CMO on the way to Mars can expertly manage his crewmate’s condition. Whether the etiology of the chest pain is as serious as a myocardial infarction or as benign as a muscle strain, these tools act as omnipresent, well-informed “experts” that take the place of ground-based medical consultants. This is the goal, but significant challenges lie ahead before all the needs listed above can be met. Not least of these is the immaturity of the hardware and software required to make these tools a practical reality during exploration missions. The deductive process that physicians use to arrive at a diagnosis from a patient complaint is extraordinarily complex; no two clinicians have precisely the same style or approach, and determining the “optimal” method to emulate experts is difficult. It is imperative to train our model on generally recognized “best medical practices” to prevent it from going rogue. Almost all large LLMs in use today are designed to be internet-based, while our tools must work off-line. Large medical databases must be distilled into a usable size that conforms to spaceflight constraints while still providing a comprehensive knowledge base for both rare and common conditions. Systems such as the ones we describe will need to be rigorously tested in Earth-based analogs such as Antarctica, wilderness medicine, military deployments, and even routine medical practice. Finally, it is important to note that these tools are only one component of a comprehensive space-based clinical decision support system. Other features, such as pharmacy and supply management, health maintenance, disease prevention, and environmental control, are also needed, but AI will certainly be an integral part of these.

We are hopeful that, despite the challenges, a model such as the one described will be developed for exploration-class missions. Hardware advances in storage and speed will allow AI-based systems such as SGPT to reside on a tablet or cellphone. Imagine the possibilities if the same kind of on-board expertise being developed for exploration of our solar system were available at low cost to local healthcare paraprofessionals in very remote terrestrial settings, in disaster situations, or in war zones where the internet, or any communication at all, is non-existent. Artificial intelligence-based tools such as those described have the potential to save lives, reduce human suffering, and improve the health of all, not only in outer space but, importantly, here on Earth.

Data availability

All data presented in this manuscript are available in the figures shown.

Code availability

The code used in this study is proprietary and cannot be publicly shared due to confidentiality agreements. Researchers interested in accessing the code should contact william.buras@tietronix.com to discuss potential collaborations under a non-disclosure agreement (NDA).

References

  1. Levin, D. R. et al. Enabling human space exploration missions through progressively Earth Independent Medical Operations (EIMO). IEEE Open J. Eng. Med. Biol. 4, 162–167 (2023).

  2. Manzey, D. & Lorenz, B. Mental performance during short-term and long-term spaceflight. Brain Res. Rev. 28, 215–221 (1998).

  3. Welch, R., Hoover, M. & Southward, E. F. Cognitive performance during prismatic displacement as a partial analogue of ‘space fog’. Aviat. Space Environ. Med. 80, 771–780 (2009).

  4. Waisberg, E. et al. Challenges of artificial intelligence in space medicine. Space Sci. Technol. 2022, 9852972 (2022).

  5. Russell, B. K. et al. The value of a spaceflight clinical decision support system for earth-independent medical operations. npj Microgravity 9, 46 (2023).

  6. Garcia-Gomez, J. M. Basic principles and concept design of a real-time clinical decision support system for managing medical emergencies on missions to Mars. Preprint at arXiv:2010.07029v2 (2021).

  7. Taipalus, T. Vector database management systems: fundamental concepts, use-cases, and current challenges. Cogn. Syst. Res. 85, 101216 (2024).

  8. Gill, C. J., Sabin, L. & Schmid, C. H. Why clinicians are natural Bayesians. BMJ 330, 1080–1083 (2005). Erratum in: BMJ 330, 1369 (2005).

  9. Amador, A. R. et al. Enabling space exploration medical system development using a tool ecosystem. In 2020 IEEE Aerospace Conference, Big Sky, MT, USA, 1–16 (2020). https://doi.org/10.1109/AERO47225.2020.9172751.

Acknowledgements

We would like to thank NASA and TRISH for their past and ongoing research support of these technologies. This work was supported by NASA SBIR funding under grant numbers NNX16CC522P, NNX17CC12C, 80NSSC120C0541, 80NSSC21C0578, 80NSSC23PB612.

Author information

Authors and Affiliations

  1. Tietronix Software Inc., Houston, TX, 77058, USA

    William R. Buras

  2. Baylor College of Medicine, Translational Research Institute for Space Health (TRISH), Baylor Center for Space Medicine, Houston, TX, 77030, USA

    David C. Hilmers

Authors

  1. William R. Buras
  2. David C. Hilmers

Contributions

W.R.B.—Conceptualization, Project Administration, Funding acquisition. Writing—original draft, Writing—review and editing. D.C.H.—Conceptualization and construction of figures. Writing—original draft. Writing—revisions and editing. All authors have read and approved the manuscript.

Corresponding author

Correspondence to William R. Buras.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Buras, W.R., Hilmers, D.C. Developing artificially intelligent agents to support earth independent medical capabilities during human exploration-class space missions. npj Microgravity 11, 51 (2025). https://doi.org/10.1038/s41526-025-00503-x
