Tata Consultancy Services Launches Responsible AI Framework

‘The orchestration will happen so that decision-making action will flow as the agents interact. But most importantly, the human in the loop continues to stay relevant as the technology evolves. And in this whole context, one of the things our clients keep reinforcing is the need for AI to be safe, secure, and used in an ethical manner,’ says Nidhi Srivastava, vice president and global head of TCS’ AI.Cloud offerings.

Tata Consultancy Services this month introduced a comprehensive AI lifecycle platform it said will help it and its customers deploy responsible AI.

The move comes as customers increasingly look to deploy AI solutions while ensuring that such solutions meet business requirements, said Nidhi Srivastava, vice president and global head of TCS’ AI.Cloud offerings.

It has been an exciting year for TCS and its AI.Cloud business unit, which had several launches including the release of its AI for Business study, which found that businesses are ready to scale enterprise AI adoption in 2025, Srivastava told CRN.

“What it really uncovered was the fact that AI as a strategic driver for change will require a whole lot of focus on AI strategy, change management approaches, and last but not the least, talent strategy in terms of how you would build talent,” she said. “Would you hire? Would you upskill people? So I think these considerations became fairly clear.”


As a result, TCS expects 2025 to be the year that AI goes mainstream, Srivastava said. This includes a shift from chatbots and agents to a focus on agentic AI, where multiple agents will work together to determine an action, she said.

“The orchestration will happen so that decision-making action will flow as the agents interact,” she said. “But most importantly, the human in the loop continues to stay relevant as the technology evolves. And in this whole context, one of the things our clients keep reinforcing is the need for AI to be safe, secure, and used in an ethical manner.”

For that reason, TCS used the recent AWS re:Invent conference to launch its responsible AI framework solution on AWS using Amazon Bedrock Guardrails, Srivastava said.

TCS defines responsible AI as a practice for designing and building AI solutions that are safe and ethical, guided by the five tenets of what the company calls its SAFTI framework: security, accountability, fairness, transparency, and identity protection, Srivastava said.

Using that framework, TCS evaluates the outputs generated by AI models, Srivastava said.

This includes ensuring that the output is secure, that someone is accountable for the results of the model, that the outcomes are fair and transparent, and that any identities are protected, she said.

TCS then uses that framework to build and deploy via what it calls its “5 A’s” framework, Srivastava said. This includes assessing what AI solutions, techniques, technologies, guardrails, regulations, and governance frameworks a client already has in place; analyzing the environment to generate a readiness score and build a plan for responsible AI; aligning the client’s policies against a repository of configurable policy templates; acting to roll out guardrails and the responsible AI framework; and auditing by monitoring results on a regular basis to ensure the outputs comply with the regulations and standards that were set, she said.
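
The “analyze” step above culminates in a readiness score. As a rough illustration of how such a score might be derived, here is a minimal sketch in Python; the checklist items and weights are invented for this example and are not TCS’ actual criteria.

```python
# Hypothetical sketch of the "analyze" step: scoring a client's
# responsible-AI readiness from an assessment checklist.
# Items and weights are illustrative, not TCS's actual criteria.

ASSESSMENT_ITEMS = {
    "existing_guardrails": 0.25,
    "governance_framework": 0.25,
    "regulatory_mapping": 0.20,
    "policy_templates": 0.15,
    "monitoring_in_place": 0.15,
}

def readiness_score(findings: dict) -> float:
    """Weighted fraction of assessment items the client already satisfies."""
    return sum(w for item, w in ASSESSMENT_ITEMS.items() if findings.get(item))

client = {"existing_guardrails": True, "governance_framework": False,
          "regulatory_mapping": True, "policy_templates": False,
          "monitoring_in_place": True}
print(round(readiness_score(client), 2))  # 0.25 + 0.20 + 0.15 -> prints 0.6
```

A low score would then feed the “align” and “act” steps, indicating which guardrails and policies still need to be rolled out.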

TCS uses a variety of tools to ensure responsible AI, Srivastava said.

“For example, we first and foremost look at the data being used to train the model, and then check for different things like bias,” she said. “You'll hear a lot about bias using AI for doing recruitment. And so you look for all kinds of biases like gender or ethnicity. You look at the data and see, as you run the model, what kind of outputs are getting generated. And then if you see some biases—because in real life, there are biases—you could retrain the model and address the bias which is creeping in from the data to correct it.”
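
One common way to run the kind of bias check Srivastava describes for recruitment models is to compare selection rates across groups. The sketch below is a generic illustration of that technique (demographic parity and the “four-fifths” rule), not TCS’ tooling; the data and threshold are invented.

```python
# Minimal sketch of a group-fairness check on a hiring model's outputs:
# compare selection rates per group (demographic parity). Illustrative only.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, picked in outcomes:
        total[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate (the 'four-fifths' test)."""
    return min(rates.values()) / max(rates.values())

outcomes = [("A", True), ("A", True), ("A", False),   # group A: 2/3 selected
            ("B", True), ("B", False), ("B", False)]  # group B: 1/3 selected
rates = selection_rates(outcomes)
print(disparate_impact(rates) < 0.8)  # ratio is 0.5, fails 80% rule -> True
```

A failed check like this would trigger the retraining step Srivastava mentions, with the training data rebalanced to address the bias creeping in.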

TCS also tests for issues such as toxic content and malicious outputs to help clients build their policies and then build the guardrails, Srivastava said.

“And now the hyperscalers are also coming in with their own implementation of guardrails on their respective platforms so you can address these things such as multimodal toxicity,” she said. “These are things that come in as a service with Amazon Bedrock, and that's how we did our launch. It's a process of continuous testing, and also continuously checking that the model is not drifting from the objective of the problem that was set out to resolve.”
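
The Bedrock Guardrails service Srivastava references is configured through the AWS SDK. The sketch below shows what such a content-filter configuration might look like with boto3; the guardrail name, filter choices, and messages are illustrative, and the API call itself requires AWS credentials, so it is shown but not executed.

```python
# Hedged sketch of configuring a toxicity content filter with Amazon
# Bedrock Guardrails via boto3. Names and filter strengths are
# illustrative; this is not TCS's actual configuration.

guardrail_request = {
    "name": "responsible-ai-demo",  # hypothetical guardrail name
    "description": "Blocks toxic content in prompts and responses.",
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    "blockedInputMessaging": "Your request was blocked by policy.",
    "blockedOutputsMessaging": "The response was blocked by policy.",
}

# With AWS credentials in place, the guardrail would be created like this:
# import boto3
# bedrock = boto3.client("bedrock")
# response = bedrock.create_guardrail(**guardrail_request)

print(sorted(f["type"] for f in
             guardrail_request["contentPolicyConfig"]["filtersConfig"]))
```

Because the hyperscaler enforces these filters at the platform level, the continuous testing Srivastava describes can focus on whether the configured policies still match the problem the model was set out to solve.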

The biases of the people developing the models are addressed through checks and balances built into the governance and review mechanisms set up for the client, Srivastava said.

“So while the team that is doing the testing is looking for biases, you will have the sponsor, something equivalent to an ethics or a compliance office, set up in your organization when you are driving AI adoption at scale,” she said. “It would be their job to do regular reviews, checks and audits. And that is where the fifth ‘A’ of our framework, audit, comes in, wherein you will continuously look for metrics to ensure that the guardrails and policies are being implemented correctly. Your ethics office, the compliance office, would do regular reviews and audits to ensure the policies are implemented in a fair and a correct manner.”

Any client organization rolling out an AI-based solution inherently takes on the responsibility of ensuring the system being built and deployed aligns with responsible AI practices, Srivastava said.

“It is the customer’s responsibility, especially for anything external facing such as sales and marketing where they're doing outreach to prospective clients,” she said. “However, when TCS is building a system, it depends on the engagement model. Whether you are building that solution or system as a service or doing the work for hire determines what level of accountability TCS will have looking at the engagement model. If it is essentially something we are designing and something we are taking for responsibility for in terms of delivering that application as a service, then TCS would also be responsible, or TCS would ensure that the output being generated meets the responsible AI expectations or framework.”

TCS’ responsible AI framework is currently available for AWS environments, but the company plans to expand it to other hyperscalers going forward, Srivastava said.