Journalism that records events, examines conduct, and notes consequences that rarely surprise.

Category: Business


Artificial Intelligence in Indian Workplaces: Surveillance, Control, and the Emerging Labor Divide

In recent months, a growing cohort of Indian enterprises, ranging from multinational information‑technology service houses to domestic manufacturing conglomerates, has accelerated the deployment of algorithmic decision‑making platforms that promise to optimise workflow while embedding pervasive digital monitoring in the daily work of salaried personnel. The promise of greater efficiency, amplified by corporate press releases and ministerial pronouncements, is being eclipsed by evidence that the same systems are wielded as instruments of behavioural governance: they restrict autonomous judgement and foster a climate of constant surveillance reminiscent of the watch‑towers of an earlier industrial paternalism.

A bifurcation has consequently become discernible in the Indian labour market. A minority of technocratic cadres, equipped with quantitative skills and privileged access to proprietary model‑training datasets, can harness artificial intelligence as an augmentative ally; the overwhelming majority of frontline operatives, shop‑floor assemblers, and service agents instead find their daily routines dictated by opaque algorithmic scores that determine task allocation, performance appraisal, and even the continuity of their contractual engagement.

The existing statutory architecture, comprising the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023 (whose substantive rules are still being operationalised), offers scant guidance on the accountability of private algorithms. Corporate boards can therefore assert that algorithmic outcomes are merely "business judgments" insulated from conventional fiduciary scrutiny. This regulatory lacuna has prompted the Ministry of Labour and Employment to issue advisory notes that acknowledge the potential for discriminatory bias but stop short of mandating independent algorithmic audits or a public registry of model provenance, leaving workers without meaningful recourse against inexplicable disciplinary actions triggered by black‑box determinations.

Analysts of the Indian equity sector observe that the headline narrative of productivity gains, often cited by the Confederation of Indian Industry and echoed on quarterly earnings calls, masks a subtler erosion of employee morale and a rise in attrition that, if unaddressed, may impose hidden costs exceeding any marginal increase in output from mechanised decision support. For the ordinary citizen, the spread of algorithmic surveillance into public service delivery, manifested in the automated adjudication of welfare benefits, the allocation of railway reservations, and the triage of telemedicine consultations, raises profound questions about the transparency of state‑run digital infrastructure and the capacity of democratic oversight to interrogate the fairness of outcomes that affect livelihoods at scale.

If the present legislative framework permits enterprises to deploy predictive scoring engines without disclosing the criteria or data provenance underlying those scores, does this not contravene the principle of procedural fairness that undergirds Indian labour law? Should Parliament not institute a statutory requirement for algorithmic transparency that parallels the disclosure obligations imposed on financial statements? When a corporation pleads that algorithmic outputs are the product of autonomous machine‑learning processes beyond human control, a legal vacuum opens in which accountability is diffused; this calls for a reevaluation of the doctrine of corporate responsibility to encompass not only the actions of directors but also the unintended consequences of proprietary code deployed across millions of employee interfaces. Finally, given the observed correlation between intensified algorithmic monitoring and rising employee burnout, does the State not bear a duty to commission independent assessments of psychosocial harm, and should such assessments be a prerequisite for any corporate licence to operate algorithmic workforce‑management systems within the national territory?

Given that data‑protection law obliges data fiduciaries to obtain explicit consent before processing personal information, should that protection be extended expressly to algorithm‑generated behavioural profiles that influence employment conditions, so that corporations cannot sidestep consent requirements by classifying such profiles as merely ancillary operational data? If the National Financial Reporting Authority extended its auditing remit to the financial ramifications of algorithmic decision‑making, such as quantifying losses attributable to erroneous automated terminations, shareholders and the broader investing public would gain a more accurate appraisal of corporate risk, and boards would be compelled to institute governance frameworks that align AI deployment with fiduciary duty. Lastly, were the Competition Commission of India to deem algorithmic scheduling platforms essential facilities whose access must be non‑discriminatory, sector‑wide licensing conditions obliging providers to disclose algorithmic criteria could shield smaller enterprises from opaque optimisation engines that otherwise entrench market concentration.

Published: May 11, 2026