“No other platform comes close”

This article is part of a series: The Past, Present and Future of OpenSAFELY

Written by Dr Mark Russell

Our team used OpenSAFELY to research how the COVID-19 pandemic affected routine healthcare for people with inflammatory arthritis conditions, such as rheumatoid arthritis.

The standard way of studying – and therefore improving – routine healthcare in this and other areas of medicine is via national audits, in which clinicians are asked to fill out forms manually. That’s a time-consuming task, often seen by clinicians as an unwelcome burden. We wondered: could we use OpenSAFELY to change how some of that monitoring is done?

We used OpenSAFELY to replicate some of the quality-of-care metrics that are typically generated by the National Early Inflammatory Arthritis Audit. We looked at data points such as:

  • number of new diagnoses
  • time to assessment by a hospital specialist
  • time to prescription of a disease-modifying treatment

We then compared the standards before and after the pandemic, nationally and regionally.
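To give a flavour of what this looks like in practice, here is a minimal sketch – not our actual OpenSAFELY study code – of how one such metric might be computed once patient-level records have been extracted. The dataframe, its column names and the 1 March 2020 cut-off date are all illustrative assumptions.

```python
import pandas as pd

# Hypothetical extract: one row per newly diagnosed patient.
# In a real OpenSAFELY study this would come from the platform's
# secure extraction pipeline, not be typed in by hand.
df = pd.DataFrame({
    "region": ["London", "London", "North West", "North West"],
    "diagnosis_date": pd.to_datetime(
        ["2019-06-01", "2021-03-15", "2019-09-10", "2020-11-02"]),
    "assessment_date": pd.to_datetime(
        ["2019-06-20", "2021-04-01", "2019-10-05", "2020-12-01"]),
})

# Label each diagnosis as pre-pandemic or pandemic, using an assumed
# cut-off of 1 March 2020.
df["period"] = (df["diagnosis_date"] < "2020-03-01").map(
    {True: "pre-pandemic", False: "pandemic"})

# Metric: days from diagnosis to first assessment by a hospital specialist.
df["days_to_assessment"] = (
    df["assessment_date"] - df["diagnosis_date"]).dt.days

# Compare the metric before and during the pandemic, nationally and by region.
print(df.groupby("period")["days_to_assessment"].median())
print(df.groupby(["period", "region"])["days_to_assessment"].median())
```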

What we found was really interesting. New diagnoses of inflammatory arthritis dropped in the first year of the pandemic, then went back up, but they still haven't rebounded above pre-pandemic levels, which suggests there may be many undiagnosed patients out there.

We also found that diagnosed patients got good care, despite the pandemic. The time to see a hospital specialist continued to improve, and the proportion of patients who were prescribed disease-modifying drugs remained stable.

The health service did an impressive job of adapting to the pressures and restrictions of working through the pandemic and all the lockdowns.

What was most striking to us was that we were able to gain all these insights using data already present in OpenSAFELY – no additional, time-consuming data collection was needed. We think this research demonstrated the potential for using OpenSAFELY and other platforms alongside audits, and perhaps, in the long term, replacing the need for manual data collection entirely.

Equally striking was how using OpenSAFELY introduced us to new ways of doing research. While many of us had experience of writing code, it was the first time I'd used GitHub or Python. So our whole team was very grateful for the resources made available to help us learn: OpenSAFELY's documentation is detailed, well written and very helpful, better than anything I've seen on any other platform. What's more, there's a huge archive of other people's code that you can reuse when writing your own. The dummy data OpenSAFELY provides makes the whole process safer – not just from a privacy perspective, but also because it helps improve the integrity of the research.

We also really liked working with our co-pilot. At least once a week, we had a chance to ask questions, get our code reviewed, and get help working our way through the process. Once our co-pilot had walked us through some of the trickier tasks, it was easier to do them solo next time round.

We enjoyed working with OpenSAFELY. It’s actually not as hard to pick up as you might think. There’s a proactive team behind it, and a supportive community around it. That makes writing code easier, especially when you’re actively encouraged to reuse code that’s already been written. No other platform comes close in terms of data coverage, privacy and safety.