As artificial intelligence rapidly reshapes healthcare, a fundamental question emerges: who is this technology actually built for? According to Payal Arora, professor of Inclusive AI Cultures at Utrecht University and co-founder of the Inclusive AI Lab, the answer is still under construction.

‘There is no such thing as a local healthcare system in a global data economy. So why are we still building AI as if there is?’
Payal Arora

The rise of AI has sparked both excitement and fear, including in the field of healthcare. For Arora, that tension was precisely the reason to act. “There is so much fear and polarization right now. It’s tearing our society apart,” she explains. “If we want to contribute meaningfully to democratizing healthcare, we need to ask ourselves what drives us and gets us out of bed every day.”

From fear to action
That reflection led to the creation of the Inclusive AI Lab, an initiative that builds inclusive and responsible AI policies and technologies by bringing people from the business, civic, creative, and academic sectors around the world into conversation with each other. By attending to how diverse stakeholders in India, Latin America, and China apply and think about AI systems, the Lab hopes to combine forces against the formidable problems we face today.

The Lab is both Global South-led and women-led, something Arora considers essential in a field still dominated by Silicon Valley bros. The goal is to establish a fair future for everyone. “We need to snap out of local thinking and understand the interconnectedness of people, planet, and power. These data economies affect all of us. There’s no such thing as a local healthcare system in a global data economy. So why are we still building AI as if there is?”

The risk of bias
A major barrier to inclusive AI lies in the data it’s trained on. “The majority of clinical trials are based on very narrow populations,” Arora explains. “Western, educated, industrialized, rich, and democratic: the WEIRD demographics, who are often male and white.”

The consequences are significant. Treatments developed on this data do not always work equally well for women or people from different ethnic backgrounds. “If we don’t have the right data, we are building systems that reproduce the same old, same old, which basically translates to these tools being less effective for the majority.”

To change this, Arora calls for investment in diverse, high-quality datasets. “If we want truly democratic healthcare, we need cross-cultural and gender-inclusive data.”

Shared responsibility
Building inclusive AI is not the responsibility of a single group. “It’s a collective effort,” Arora emphasizes. “Policymakers, tech developers, healthcare professionals, and the public all have a role to play.”

However, each group faces its own challenges. Policymakers tend to be risk-averse, while entrepreneurs often focus heavily on efficiency. “Efficiency is not a dirty word,” Arora says, “but it cannot come at the expense of safety, security and equality.”

A UX blind spot
While countries like the Netherlands are at the forefront of healthcare innovation, Arora sees a persistent blind spot. “Dutch healthcare is building impressive tools but often neglects the user perspective.”

That becomes problematic in a world where patients, data, and care increasingly cross borders. “If a patient moves across European borders, they often have to rely on proprietary tools just to access their own data,” Arora explains. “We are unintentionally pushing people into systems we don’t fully control.”

Even well-intentioned policies around data protection can have unintended consequences. “We can build lofty AI regulations and frameworks,” she says, “but if they don’t align with everyday realities, patients and caregivers will find workarounds, often by turning to big tech platforms such as ChatGPT.”

Back to basics
For Arora, the solution is not more complexity, but more humility. “AI needs to be grounded in the everyday challenges of healthcare workers and patients,” she argues.

That includes seemingly simple but crucial applications: translation, education, and accessibility. “It’s not rocket science. We can really use AI to help people, whether it’s caregivers who are working within the system or patients who are receiving care from that system.”

From platform to people
Looking ahead, Arora remains cautiously optimistic. “Yes, there is a concentration of raw power in the hands of a few,” she acknowledges. “But it has also sparked a global movement. People and states are coming together and are already building public-driven alternatives, for instance digital public infrastructures.”

That shift requires a different mindset. “We need to move from platform to people,” she says. “From large-scale infrastructures to real user experiences. And put those experiences at the heart of design and deployment, not as an afterthought.”

Quality is a key word here. “If the tools are not well trained, make mistakes, and miss cultural contexts and conditions, they can do more harm than good. If we want AI tools to work for more people, we need to do better at engaging with those people.”

A global perspective on healthcare
Ultimately, Arora’s message is about perspective. “We need to end paternalistic approaches to technology design.” Besides, healthcare does not exist in isolation. “There’s no such thing as a small, self-contained healthcare ecosystem. Covid should have taught us that by now,” she says. “We are part of a larger global story about what well-being is. If we continue to question whether we can learn from the rest of the world, we have a problem.”

Text: Matthijs van Els
