A recent arXiv paper (Sep 11, 2025) titled “Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India” by Avinash Agarwal and Manisha J. Nene examines gaps in India’s regulatory framework for AI systems used in telecommunications. It argues that existing laws such as the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023 cover cybersecurity and data privacy but do not adequately address operational AI incidents (bias, performance drift, algorithmic failure). The paper proposes mandatory reporting of high-risk AI failures, the creation of a nodal agency, and standardized reporting frameworks.
For telecom engineers, AI model trainers, and operations teams, this kind of gap is anxiety-inducing: what happens if an AI system misroutes emergency requests, biases outcomes for certain callers, or drops service unpredictably? Who owns the responsibility? For managers, being purely reactive to AI incidents feels risky. The academic proposal offers hope: structured accountability, clear protocols, less ambiguity. Employees working on AI hope they will have clarity on incident thresholds and safety nets rather than being judged by rumor after things go wrong.
Though the paper is not yet law, it signals a fast-approaching compliance domain. HR, legal, and risk teams should monitor this regulatory trend, start logging AI-related failures internally, define thresholds for reporting, and build internal incident response teams. Employment contracts should reflect AI risks and responsibilities where relevant. Proactively building traceability, bias detection, fallback plans (human override), and a communication plan will position firms well when law follows these policy suggestions. Globally, AI governance frameworks such as the EU AI Act already embed incident reporting; India’s telecom-AI sector appears poised to follow.
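To make the traceability and bias-detection point concrete, below is a minimal, illustrative Python sketch of how an operations team might flag group-level error-rate disparities and accuracy drift so that a human override can be triggered. All thresholds, field names, and the record schema are assumptions for illustration, not anything prescribed by the paper:

```python
from collections import defaultdict

# Illustrative thresholds only; real values belong in your own risk policy.
MAX_GROUP_ERROR_GAP = 0.05   # flag if error rates across caller groups differ by more than 5 points
MAX_ACCURACY_DRIFT = 0.03    # flag if accuracy drops more than 3 points below the validation baseline

def group_error_rates(records):
    """records: list of dicts with 'group' and 'correct' (bool) keys (hypothetical schema)."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if not r["correct"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def check_for_incident(records, baseline_accuracy):
    """Return human-readable flags; an empty list means nothing to log."""
    flags = []
    rates = group_error_rates(records)
    if rates and max(rates.values()) - min(rates.values()) > MAX_GROUP_ERROR_GAP:
        flags.append(f"Bias flag: per-group error rates diverge: {rates}")
    accuracy = sum(1 for r in records if r["correct"]) / len(records)
    if baseline_accuracy - accuracy > MAX_ACCURACY_DRIFT:
        flags.append(f"Drift flag: accuracy {accuracy:.2%} vs baseline {baseline_accuracy:.2%}")
    return flags

# Hypothetical usage: any flag would feed the internal incident log and trigger human review (the fallback).
sample = [
    {"group": "region_a", "correct": True},
    {"group": "region_a", "correct": True},
    {"group": "region_b", "correct": False},
    {"group": "region_b", "correct": True},
]
for flag in check_for_incident(sample, baseline_accuracy=0.95):
    print(flag)
```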
Should employees have the right to know when AI tools evaluating them fail?
What internal policy would you put in place to track AI incidents in operations?
Yes, employees should have the right to know when AI tools evaluating them fail. Transparency in AI operations is crucial for maintaining trust and accountability in the workplace.
As for the internal policy to track AI incidents in operations, here are some steps that could be taken:
1. Establish an AI Incident Response Team: This team would be responsible for monitoring, logging, and addressing AI-related failures. It should include members from different departments such as HR, IT, and operations to ensure a comprehensive approach.
2. Define Incident Thresholds: Clearly define what constitutes an AI incident. This could range from minor performance drifts to major algorithmic failures. The thresholds for reporting these incidents should also be clearly defined.
3. Implement a Reporting System: Develop a standardized reporting system for AI incidents. This could be an internal tool or a dedicated channel where incidents can be logged and tracked (a minimal logging sketch appears after this list).
4. Regular Training and Updates: Conduct regular training sessions for employees on how to report AI incidents, and keep them updated on changes in AI governance frameworks and how those changes affect their roles and responsibilities.
5. Review and Update Employment Contracts: Employment contracts should reflect AI risk and responsibilities. They should clearly state the employee's role in case of an AI incident and the protocols they need to follow.
6. Develop a Communication Plan: In case of AI failures, a clear communication plan should be in place. This includes informing the affected parties, explaining the incident, and outlining the steps taken to resolve the issue.
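As a concrete starting point for items 2 and 3 above, here is a minimal, illustrative Python sketch of how incidents could be classified against defined severity thresholds and appended to an internal log. The severity cut-offs, field names, and JSON Lines file are assumptions for illustration, not a prescribed standard:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_incident_log.jsonl")  # append-only JSON Lines file (illustrative choice)

def classify_incident(impact_pct, affected_users):
    """Map an incident's measured impact to a severity level using assumed thresholds."""
    if impact_pct >= 10 or affected_users >= 10_000:
        return "critical"
    if impact_pct >= 2 or affected_users >= 500:
        return "major"
    return "minor"

def log_incident(system, description, impact_pct, affected_users, reporter):
    """Append a structured incident record; 'major' and 'critical' entries would be escalated."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "description": description,
        "impact_pct": impact_pct,
        "affected_users": affected_users,
        "severity": classify_incident(impact_pct, affected_users),
        "reporter": reporter,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: an operations engineer logs a routing-model failure (hypothetical values).
entry = log_incident(
    system="call-routing-model-v3",
    description="Emergency calls misrouted in two circles for 12 minutes",
    impact_pct=4.5,
    affected_users=1_200,
    reporter="ops.oncall@example.com",
)
print(entry["severity"])  # -> "major": would trigger the incident response team per policy
```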
Remember, the goal of these policies is not just to comply with future regulations, but also to build a culture of transparency and accountability in the use of AI systems.
From India, Gurugram