Johns Hopkins spinoff building risk prediction tools emerges with $15M


A startup created by a machine learning researcher at Johns Hopkins is emerging with a new sepsis model and $15 million in funding. In a field crowded with AI models, Bayesian Health is looking to set itself apart with data showing the tool’s efficacy when used by physicians, a rarity in machine learning.

Founder and CEO Suchi Saria has been working in machine learning for almost two decades. She published some of the first research showing machine learning could be used to identify sepsis early, and built a model in use at Johns Hopkins.

Now that other models have been developed, Saria hopes that the company’s research-based approach will help it stand out from the rest.

“Today, there’s so much hype in health AI. Doing it well has been really hard. If we can really crack the nut, there’s so much opportunity in reducing preventable deaths,” she said.

Part of the problem is that while a growing number of digital health solutions are released to the market, there’s often little to no data backing them up, making it difficult to win providers’ trust. Saria hopes to change that with the results of a recent study and previously published data on the company’s sepsis model.

Saria launched Bayesian Health in 2018 and has since raised $15 million in a funding round led by Andreessen Horowitz. The company is looking to commercialize its machine learning algorithms, starting with its tool to detect sepsis.

The startup recently shared the results of a prospective study evaluating the tool’s use by physicians. Though it’s just a preprint that hasn’t yet gone through peer review, the prospective design sets it apart: most models are evaluated only on data collected before they were put into practice.

The model was tested at five Johns Hopkins hospitals between 2018 and 2020. Of about 9,800 patients later diagnosed with sepsis, the model flagged 82% of them.

Of those flagged, 3,775 patients did not have antibiotic orders prior to the alert but received them within 24 hours. Importantly, about 89% of doctors and nurses actually used the alert.

This is an important measure, Saria said, because it shows whether a tool is timely and useful for clinicians.

“If you use something that just doesn’t have timeliness, it’s alerting but often after the providers have treated the patient, that’s not very productive,” she said. “Or, if it’s alerting, but there’s a high number of false alerts. … if that number’s really high, providers are really busy. They don’t have time for that.”
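For readers who want to sanity-check the study’s headline figures, here is a rough back-of-the-envelope sketch in Python using only the counts reported above. It is purely illustrative, not Bayesian Health’s evaluation code, and the derived share of flagged patients is simple arithmetic on the article’s numbers rather than a figure from the study itself.

```python
# Illustrative arithmetic only; these counts come from the article,
# and this is not Bayesian Health's actual evaluation code.

sepsis_patients = 9_800          # patients later diagnosed with sepsis
flagged_rate = 0.82              # share of those patients the model flagged
flagged_patients = round(sepsis_patients * flagged_rate)

new_antibiotics_after_alert = 3_775   # flagged patients with no prior antibiotic order
clinician_adoption = 0.89             # share of clinicians who engaged with the alert

print(f"Patients flagged by the model: ~{flagged_patients:,}")
print(f"Flagged patients who got antibiotics within 24h of the alert "
      f"(no prior order): {new_antibiotics_after_alert:,} "
      f"(~{new_antibiotics_after_alert / flagged_patients:.0%} of flagged)")
print(f"Reported clinician adoption of the alert: {clinician_adoption:.0%}")
```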

Currently, most decision support tools used at hospitals, including sepsis alerts, haven’t been cleared by the Food and Drug Administration. This leaves hospitals reliant on developers’ claims about how accurate a model is, ideally validated by their own evaluations.

Some of the algorithms that have been put to the test are proving less helpful than advertised. A recent study of Epic Systems’ sepsis model found that it performed “substantially worse” than claimed, identifying only a small percentage of sepsis cases that hadn’t already been caught by clinicians while generating a large number of alerts.

In addition to its work on sepsis, Bayesian Health is developing models for clinical deterioration, transitions of care, and pressure injuries. Not only are these important quality measures for hospitals, but they can change patients’ lives. Saria understands this intimately after losing her nephew to sepsis.

“The difference between correct and not correct can mean a person’s life,” she said.
