KartavyaDesk

Medical AI should be as much about equity as algorithms

Kartavya Desk Staff

New technologies in medicine are often described in terms of innovation: faster diagnoses, smarter predictions, streamlined workflows. Yet history reminds us that technology alone does not transform health systems. That requires governance, design, and intent.

India's recently released national strategy for the use of advanced computational systems in healthcare invites a deeper conversation. Rather than presenting these systems as tools to be added onto clinical practice, the strategy treats them as part of the architecture of the health system itself. This distinction is not semantic but structural.

Most countries have introduced digital decision systems incrementally, through pilots, vendor contracts, or isolated hospital-level initiatives. Regulation has frequently followed innovation, not guided it. The result has been fragmentation: uneven standards, unclear accountability, and uncertainty about responsibility when harm occurs.

India's approach signals a different philosophy. It begins not with algorithms but with infrastructure: interoperable health records, consent-based data exchange, and nationally aligned standards. In doing so, it acknowledges a simple but often overlooked truth: computational systems reflect the data and institutions that sustain them. If those foundations are fragmented or inequitable, the technology will reproduce those weaknesses at scale.

Equally significant is the strategy's insistence that oversight cannot be a one-time approval. These systems evolve. Their performance can shift as populations, practices, and contexts change. A model that works in a tertiary urban hospital may falter in a rural clinic. A prediction that appears accurate at launch may degrade silently over time. Governance, therefore, must extend across the full lifespan of deployment, through monitoring, reassessment, and, when necessary, withdrawal.
Perhaps the most compelling feature of the strategy is its treatment of fairness not as an aspiration but as a design element. In diverse societies, data rarely represent all communities equally. Urban centres, insured populations, and well-resourced facilities generate more complete records than marginalised or rural groups. Without deliberate safeguards, systems trained on such data risk reinforcing structural inequities. By emphasising representativeness and equity impact assessment, the strategy confronts this risk directly.

It also recognises that safe use depends on human capacity. No matter how sophisticated the system, clinicians must understand its limitations, administrators must interpret its outputs responsibly, and regulators must grasp its risks. The call for structured training, institutional units dedicated to oversight, and integration of digital literacy into professional education reflects an understanding that governance is as much about people as it is about technology.

There is also an implicit political economy argument. Public procurement, interoperability requirements, and clear pathways from pilot to scale are treated as instruments of stewardship. In many health systems, fragmented purchasing has locked institutions into proprietary platforms that hinder integration and accountability. By positioning the state as an active shaper of standards and incentives, the strategy suggests that public value must guide adoption.

Whether this vision succeeds will depend on implementation: transparent classification of risk, meaningful audit mechanisms, sustained investment in data quality, and coordination across federal structures. Policies alone do not guarantee outcomes. Yet the conceptual shift is notable. Medicine has long regulated medicines and devices. It now faces the task of governing decision-support architectures that influence diagnosis, treatment pathways, resource allocation, and public health response.
When computational systems begin to shape who receives care, how quickly, and on what basis, they cease to be peripheral tools. They become part of the care continuum itself.

The global medical community stands at a pivotal moment. We can treat intelligent technologies as products to be purchased and patched, or as infrastructure to be stewarded. The latter demands humility, foresight, and institutional commitment. It requires acknowledging that code carries consequences.

If the future of healthcare will be increasingly mediated by digital systems, then governance must mature alongside innovation. The true measure of progress will not be technological sophistication alone, but whether these systems strengthen trust, widen access, and protect those most vulnerable. In that sense, the debate is not about technology. It is about the kind of health systems we choose to build.

Joshi is a Mumbai-based endocrinologist. Samajdar is a clinical pharmacologist and diabetes and allergy-asthma therapeutics specialist in Kolkata.

AI-assisted content, editorially reviewed by Kartavya Desk Staff.
