Journalism that records events, examines conduct, and notes consequences that rarely surprise.

Category: Society


Survey Unveils Widespread Inability Among Indian Employees to Detect AI‑Generated Correspondence, Sparking Questions of Institutional Transparency

A recently published nationwide survey by the consultancy firm Resume Now indicates that while a majority of Indian employees profess confidence in their ability to spot artificial‑intelligence‑generated writing, testing showed that nearly half of participants could not correctly identify such content.

The survey, which drew respondents from diverse sectors including information‑technology services, public‑sector undertakings, and private manufacturing enterprises, found that algorithmically produced emails, reports, and chat messages have become common enough to obscure the traditional cues of human authorship.

Within the public‑health domain, for instance, junior administrative officers reported receiving briefing notes whose stylistic uniformity suggested machine assistance; unable to verify their origin, they came to doubt the reliability of data presented for policy formulation.

Similarly, educators at municipal schools said that curriculum memoranda circulated through digital platforms often bore the hallmarks of generative text, prompting concerns that pedagogical directives might originate from non‑human sources insensitive to regional linguistic variation.

The broader civic infrastructure appears similarly affected: municipal ward officers recounted instances where citizen grievance letters appeared to have been drafted by automated systems, obscuring the authenticity of the grievances and potentially weakening the efficacy of redress mechanisms.

These observations collectively point to a systemic gap: administrative protocols do not yet mandate transparent labelling of AI‑assisted communication, a lacuna that may deepen existing social inequalities by privileging the technically literate while marginalising vulnerable populations who depend on clear, human‑authored guidance.

In light of these findings, one must ask whether the prevailing welfare design adequately anticipates the epistemic risks of covert algorithmic authorship; whether statutory frameworks impose sufficient evidentiary duties on agencies to disclose the provenance of official correspondence; and whether the ordinary citizen retains any meaningful capacity to demand verifiable explanations rather than accept unqualified assurances of authenticity.

Furthermore, does the existing administrative accountability structure possess the mechanisms to audit and correct the unchecked proliferation of AI‑generated documentation within health, education, and civic services? And might the apparent policy vacuum betray a deeper failure to align technological adoption with the constitutional mandate of equal access to transparent governance?

Published: May 9, 2026