Business decision making in the AI era: Is the governance paradigm changing?
Corporate decision-making may well be gearing up for a generational shift with the advent of autonomous Artificial Intelligence (AI) platforms. A recent report by Deloitte finds that over 80% of Indian organisations are exploring the development of autonomous agents, marking a shift towards Agentic AI.
Agentic AI operates autonomously — making decisions, taking actions and even learning on its own, without continual human supervision. This is different from traditional AI, which performs specific tasks based on predefined rules or algorithms. With a goal-oriented approach, Agentic AI can analyse events, forecast outcomes and optimise its behaviours to accomplish desired objectives.
Advanced AI platforms make adoption an interesting proposition, given the speed, convenience and depth of analysis they are likely to offer. Recently, Tata Consultancy Services (TCS) announced a technology collaboration aimed at bringing enterprise-grade GenAI tools for decision intelligence with conversational AI to the Indian market. It hopes that such a platform will enable C-suite executives to ask questions and gain real insights from their data repository.
Such advanced AI applications come with their share of positives: speed, agility, and depth and breadth of analysis.
Despite the advantages, adoption of AI by boards is not an easy choice. Concerns around errors with real-world consequences, bias, hallucinations and data quality are among the factors that stand in the way of AI adoption.
Indian companies have thus far not been very aggressive in AI adoption. According to NASSCOM’s 2024 AI Adoption Index 2.0, 87% of Indian companies are in the middle stages of adoption, as ‘Enthusiast’ and ‘Expert’ adopters. Given the opportunity AI seems to offer, it may not be too late for boards, the real decision-leaders in any company, to bring AI into their decision-making work.
In a board setting, the use of advanced AI platforms can bring a mix of opportunities and risks. It also challenges the very manner in which companies are governed. This article looks at some key factors that may need considerable thought.
Board’s decision-making process
Corporate governance frameworks function on the principle of separation of management and the board: management runs the corporation, and the board oversees management’s actions in the interest of the shareholders. While traditionally boards rely on information provided by the management, with AI the information dynamic could change significantly.
1. Deloitte, Fourth Wave of the State of GenAI Report (2025). https://www.deloitte.com/in/en/about/pressroom/india-rides-the-agentic-ai-wave.html
2. ‘TCS and Vianai Reimagine the Future of Decision Intelligence with Conversational AI’, TCS Press Release, April 17, 2025.
3. Deloitte, Fourth Wave of the State of GenAI Report (2025).
Larcker et al., in their recently published paper ‘The Artificially Intelligent Boardroom’, state:
In theory, granting directors access to an AI interface that itself has full access to all data in the corporate data repository means that directors will have practically no limit (relative to management) to the information they can access. From a legal perspective, however, boards might not want unrestricted access. Where and how to draw the line (and what information is ring-fenced) will require careful thinking.
The decision-making process within boards often involves dealing with many unknowns. Well-run boards benefit from the collective wisdom of human beings who are responsible, prudent and intelligent. AI, though, may not understand everything. For instance, it may not grasp the importance of privacy or of disclosures. Similarly, it may not appreciate the value of sustainability or employee welfare.
Researchers from Warwick Business School and the BCG Henderson Institute teamed up to understand what happens when AI is taken out of the lab and into a real company. They engaged with an Austrian company to see what AI (they used the ChatGPT platform) does in a live board environment, and how.
The results were mixed. The experiment showed that while AI could be valuable in guiding and enriching discussions, it performed satisfactorily only with an actively engaged management. It also offered significant time and cost savings.
The downside is what the authors called the “Completeness illusion.” In one of the meetings, an upcoming announcement was discussed. ChatGPT offered a wide set of factors that needed to be addressed prior to the announcement, but it omitted potential legal implications that needed to be considered. Under usual circumstances, the executives might have brought up the legal questions themselves, but AI had created an illusion of infallible completeness, which they had begun to rely on.
Source: Christian Stadler and Martin Reeves, “When AI Gets a Board Seat,” Harvard Business Review, March 12, 2025
Even if machines surpass humans in intelligence, the same cannot be said about their level of prudence and sense of responsibility. It is hard to train AI models in values such as integrity, empathy and respect for the law. How, then, do we ensure that the options they come up with, for instance when dealing with a topic involving substantial public interest, will follow a balanced approach?
The information asymmetry versus information overload conundrum
One of the risks that independent directors have to deal with is information asymmetry. Their source of information is the management, and the information they receive is part of the board process. Now, if an AI platform linked to a company’s entire data repository is made available to the board, it creates the opposite risk: information overload.
Ask any independent director if they would want an information overload; the answer may not always be in the affirmative. The law itself expects independent directors to act only with respect to the information that they are provided as part of the board process. If they know far more than that, they will be on par with full-time directors, and their personal risk increases significantly.
4. Larcker, David F., Seru, Amit, Tayan, Brian, and Yoler, Laurie, ‘The Artificially Intelligent Boardroom’ (March 17, 2025). Rock Center for Corporate Governance at Stanford University Working Paper No. CL110.
What about Accountability?
Who is ultimately accountable if AI fails in its analysis and the board relies on it? Who is to be held responsible if AI-generated insights are manipulative in nature?
A decision-maker in a company has to act responsibly, fairly and diligently. Directors are somewhat immune from legal liability for their wrong actions by virtue of what is called the Business Judgment Rule; however, this privilege is available only when they have acted with diligence and demonstrated good faith.
AI models are vulnerable to failure. Algorithmic harm can be introduced via poor training-data quality, technical flaws in the algorithm, human error in development, or interaction with the algorithm outside intended use cases.5 Failure to address AI risks can thus expose a company to regulatory, reputational, or legal consequences.
AI certainly cannot be elevated to the position of a director. In India, only an individual can be appointed as a director of a company. Accountability for failure therefore rests with the director. At best, AI can play the role of an advisor whom the board relies upon. Even in such a case, what happens if the AI the board relies upon is found to have been biased in its analysis?
SEBI’s proposed new amendments on the usage of AI tools by Regulated Entities (which include stock exchanges, stock brokers, mutual funds, AMCs, etc.) address situations where AI-assisted decisions cause damage. They say that every Regulated Entity concerned shall be held responsible, and also be liable to investors, for all consequences arising from outputs of Artificial Intelligence (AI) or Machine Learning (ML) technologies that are relied upon.
While SEBI’s proposal concerns Regulated Entities, it makes the regulatory intent quite clear: it is the one who relies on AI-generated insights who is responsible. Going by this standard, it may well follow that if AI relied upon for critical business decisions eventually fails the test of due diligence, it is the company that will be held accountable.
Conclusion
For all its capabilities, AI does not yet seem adept at work that involves putting into practice human virtues such as integrity, prudence and empathy. For AI to engage with boards, actively or passively, it has to be trained in some of these virtues so as to limit the risk of bias.
In addition, there may be a need to train AI to obey certain legal standards. It is equally important for the law to recognise the growing influence of AI in corporate decision-making and to set standards dealing with risks of information bias and privacy.
AI truly can put to the test the way companies are governed in the future. But not yet.
5. The Data & Trust Alliance and IBM, Perspectives from the Front: Data Quality, AI, and Trust (New York: Data & Trust Alliance, September 2023).