When AI is implemented at scale, how does responsible AI contribute to the business?

As companies embrace artificial intelligence to drive business strategy, the topic of responsible AI implementation is gaining traction.

A new global research study defines responsible AI as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.”

The study, conducted by MIT Sloan Management Review and Boston Consulting Group, found that while AI initiatives are surging, responsible AI is lagging.

While most firms surveyed said they view responsible AI as instrumental to mitigating technology’s risks — including issues of safety, bias, fairness, and privacy — they acknowledged a failure to prioritize RAI. The gap increases the possibility of failure and exposes companies to regulatory, financial, and customer satisfaction risks.

The MIT Sloan/BCG report, which includes interviews with C-level executives and AI experts alongside survey results, found significant gaps between companies’ interest in RAI and their ability to execute practices across the enterprise.

Conducted during the spring of 2022, the survey analyzed responses from 1,093 participants representing organizations from 96 countries and reporting at least $100 million in annual revenue across 22 industries.

The majority of survey respondents (84%) believe RAI should be a top management priority, yet only slightly more than half (56%) confirm that RAI has achieved that status, and only a quarter said they have a fully mature RAI program in place.

Just over half of respondents (52%) said their firms conduct some level of RAI practices, but 79% of those admitted their implementations were limited in scale and scope.

Why are companies having so much trouble walking the talk when it comes to RAI? Part of the problem is confusion over the term itself, which overlaps with ethical AI. This hurdle was cited by 36% of survey respondents, who noted there is little consistency in how the term is understood because the practice is still evolving.

Other factors contributing to limited RAI implementations fall into the bucket of general organizational challenges:

  • 54% of survey respondents struggle to find RAI expertise and talent.
  • 53% report a lack of training or knowledge among staff members.
  • 43% report limited prioritization and attention from senior leaders.
  • Insufficient funding (43%) and low awareness of RAI initiatives (42%) also hamper the maturity of RAI programs.

With AI becoming further entrenched in business, there’s mounting pressure on companies to bridge these gaps and prioritize and execute on RAI successfully, the report stated.

“As we navigate increasing complexity and the unknowns of an AI-powered future, establishing a clear ethical framework isn’t optional — it’s vital for its future,” said Riyanka Roy Choudhury, a CodeX fellow at Stanford Law School’s Computational Law Center and one of the AI experts interviewed for the report.

Getting RAI done right

Those companies with the most mature RAI programs — in the case of the MIT Sloan/BCG survey, about 16% of respondents, which the report called “RAI leaders” — have a number of things in common. They view RAI as an organizational issue, not a technical one, and they are investing time and resources to create comprehensive RAI programs.

These firms are also taking a more strategic approach to RAI, led by corporate values and an expansive view of responsibility towards myriad stakeholders along with society as a whole.

Taking a leadership role in RAI translates into measurable business benefits such as better products and services, improved long-term profitability, even enhanced recruiting and retention. Forty-one percent of RAI leaders confirmed they have realized some measurable business benefit compared to only 14% of companies less invested in RAI.

RAI leaders are also better equipped to deal with an increasingly active AI regulatory climate — more than half (51%) of RAI leaders feel ready to meet the requirements of emerging AI regulations compared to less than a third of organizations with nascent RAI initiatives, the survey found.

Companies with mature RAI programs adhere to some common best practices. Among them:

Make RAI part of the executive agenda. RAI is not merely a “check the box” exercise, but rather part of the organization’s top management agenda. For example, some 77% of RAI leader firms are investing material resources (training, talent, budget) in RAI efforts, compared to 39% of respondents overall.

Instead of product managers or software developers directing RAI decisions, there is clear messaging from the top that implementing AI responsibly is a top organizational priority.

“Without leadership support, practitioners may lack the necessary incentives, time, and resources to prioritize RAI,” said Steven Vosloo, digital policy specialist in UNICEF’s Office of Global Insight and Policy, and one of the experts interviewed for the MIT Sloan/BCG survey.

In fact, nearly half (47%) of RAI leaders said they involve the CEO in their RAI efforts, more than double those of their counterparts.

Take an expansive view. Beyond top management involvement, mature RAI programs also include a broad range of participants in these efforts — an average of 5.8 roles in leading companies versus only 3.9 roles from non-leaders, the survey found.

The majority of leading companies (73%) are approaching RAI as part of their corporate social responsibility efforts, even considering society as a key stakeholder. For these companies, the values and principles that shape their approach to responsible behavior apply across their entire portfolio of technologies, systems, and processes, RAI included.

“Many of the core ideas behind responsible AI, such as bias prevention, transparency, and fairness, are already aligned with the fundamental principles of corporate social responsibility,” said Nitzan Mekel-Bobrov, chief AI officer at eBay and one of the experts interviewed for the survey. “So it should already feel natural for an organization to tie in its AI efforts.”

Start early, not after the fact. The survey shows that it takes three years on average to begin realizing business benefits from RAI. Therefore, companies should launch RAI initiatives as soon as possible, nurturing the requisite expertise and providing training. AI experts interviewed for the survey also suggest increasing RAI maturity ahead of AI maturity to prevent failures and significantly reduce the ethical and business risks associated with scaling AI efforts.

Given the high stakes surrounding artificial intelligence, RAI needs to be prioritized as an organizational mandate, not just a technology issue. Companies able to connect RAI to their mission to be a responsible corporate citizen are the ones with the best outcomes.

When AI was first introduced into the world of business, many of us got excited about the new possibilities it opened up. However, as time passed, more and more concerns arose about what artificial intelligence could do if used for the wrong purposes.

That’s when the question of what responsible AI means first came to the forefront. 

So, how do we ensure these powerful tools are used for good and not for harm? In this article, we’ll explore what responsible AI is, give some examples from business, and discuss why it’s so important for the future.

What is responsible AI?

Responsible AI can be defined in several ways, but at its heart, it’s about ensuring that artificial intelligence (AI) technologies, AI development services, and processes are ethically sound and that their use does not harm individuals or society.

It involves creating systems that are explainable, transparent, and accountable, and that protect user privacy while producing fair, unbiased, and inclusive results.

In other words, it is about making sure that modern and future AI technologies, many of which we don’t fully understand just yet, will be used in a way that’s responsible and ethical.

Given how powerful AI systems have become in recent years, many worry that – in the wrong hands – they could cause serious harm to both individuals and entire societies. There are plenty of ethical concerns, from the impact of AI on jobs to the use of AI in warfare.

In fact, we’ve seen how the lack of AI responsibility can be detrimental to the world in recent years.

The Cambridge Analytica scandal, first reported in 2015, shocked the world by revealing how harvested personal data was used to try to sway elections, including Donald Trump’s 2016 presidential campaign and the Brexit referendum in the UK. In these instances, AI-driven targeting was used to push tailored content to undecided voters, with the intention of swinging their votes toward whichever side was paying for the content to be displayed.

This is a clear example of how AI can be used unethically. A similar misstep can also be found in the business world, coming from none other than tech giant Amazon.

Back in 2014, Amazon introduced what was then pioneering technology: an AI-powered recruitment tool intended to reduce the human screening of early-stage candidates. Unfortunately, because the tool was trained primarily on applications from male candidates, it came with an unintended bias against women. As a result, female candidates were eliminated from the recruitment process on the basis of gender, a form of discrimination prohibited in the U.S. It took Amazon a whole year to spot the major flaw in its system, and by then the damage had already been done.

In short, from a technical standpoint, responsible AI is about creating explainable, transparent, and accountable systems. It should also protect user privacy and provide fair, unbiased, and inclusive results.

What are the advantages of adopting responsible AI in an organization?

Here are some of the most important advantages of introducing AI responsibility:

Avoiding unconscious bias

The ability to explain the outputs of machine learning models is crucial if we want to build trust in AI. If a model is trained on data that contains bias, that bias will be reflected in the model’s output.

This happened in 2019, when researchers discovered that US hospitals were effectively relying on an algorithm with a racial bias. The algorithm was used to help identify which patients would benefit from ‘high-risk care management services’.

This allowed hospitals and insurance companies to quickly find patients who could be given access to specially trained nurses, extra primary care visits, and additional monitoring services. The research paper analyzing the algorithm found that it heavily favored white patients for this care.

Of the roughly 50,000 randomly selected patients in the study, over 43,500 were white, while only around 6,000 were Black.

This kind of bias needs to be kept in check and ultimately eliminated, because it causes real harm to people who need care but are deprived of it by a flaw in a technical system.
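
As a minimal illustration of what such a check can look like in practice (the field names and data are hypothetical, not taken from the study above), the sketch below compares how often a model recommends extra care for patients in different demographic groups; a large gap between the rates is a signal that the data or the model needs a closer look.

    from collections import defaultdict

    def selection_rates(records, group_key="race", selected_key="recommended_for_care"):
        """Return the share of records the model flagged for extra care, per group."""
        flagged = defaultdict(int)
        totals = defaultdict(int)
        for record in records:
            group = record[group_key]
            totals[group] += 1
            if record[selected_key]:
                flagged[group] += 1
        return {group: flagged[group] / totals[group] for group in totals}

    # Toy data standing in for model outputs joined with patient demographics.
    predictions = [
        {"race": "white", "recommended_for_care": True},
        {"race": "white", "recommended_for_care": True},
        {"race": "black", "recommended_for_care": False},
        {"race": "black", "recommended_for_care": True},
    ]

    print(selection_rates(predictions))  # e.g. {'white': 1.0, 'black': 0.5} -- a gap worth investigating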

Verifiable AI results

Verifying the results of AI systems is becoming harder. This is because AI has the potential to calculate results using datasets of millions if not billions of data points, identifying patterns and connections that human minds could never comprehend. The scale is just staggering.

However, if we cannot verify the results, we can’t ensure that the AI is doing its job correctly. For example, we can’t get objective and inclusive results if we don’t feed the AI with unbiased and inclusive data. And we can’t check if this is the case without verifying the results.

Failure to do so can lead to problems, such as false positives in medical diagnoses or errors in financial predictions.
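
As a rough sketch of what such verification can look like (the data and threshold here are invented for illustration), the snippet below compares a model's predictions against a small labelled holdout set and computes the false positive rate, the kind of error mentioned above in medical or financial settings.

    def false_positive_rate(predictions, labels):
        """Share of truly negative cases that the model incorrectly flagged as positive."""
        false_positives = sum(1 for p, y in zip(predictions, labels) if p and not y)
        negatives = sum(1 for y in labels if not y)
        return false_positives / negatives if negatives else 0.0

    # Toy holdout set: model predictions vs. verified ground-truth labels.
    preds = [True, False, True, True, False, False]
    truth = [True, False, False, True, False, True]

    fpr = false_positive_rate(preds, truth)
    print(f"False positive rate on holdout: {fpr:.2f}")
    if fpr > 0.2:  # assumed acceptance threshold
        print("Verification failed: too many false positives for this model to ship")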

Protecting security & privacy of data

There is a considerable emphasis in the world of Responsible AI on security and privacy of data. 

In 2021, Clearview AI was found to have breached privacy rules by Australia’s privacy regulator, the UK ICO, three Canadian privacy authorities, and France’s CNIL for collecting people’s biometric data and images without their consent.

This goes to show that AI responsibility doesn’t come down to ethics only. It’s also about being compliant with national and international data protection laws like GDPR, which have been put in place to prevent data abuse online. 
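
As a minimal sketch of how this shows up in day-to-day engineering (the field names are illustrative and not tied to any particular law's wording), the snippet below filters a dataset so that only records with explicit, unwithdrawn consent for AI processing are used for training.

    def usable_for_training(record):
        """Keep only records whose owners consented to AI processing and have not withdrawn it."""
        return record.get("consented_to_ai_processing", False) and not record.get("consent_withdrawn", False)

    # Toy user records standing in for rows pulled from a production database.
    users = [
        {"id": 1, "consented_to_ai_processing": True, "consent_withdrawn": False},
        {"id": 2, "consented_to_ai_processing": False, "consent_withdrawn": False},
        {"id": 3, "consented_to_ai_processing": True, "consent_withdrawn": True},
    ]

    training_set = [u for u in users if usable_for_training(u)]
    print([u["id"] for u in training_set])  # -> [1]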

Contributing to organizational transparency

Organizations using AI-powered systems should be open about how these technologies are implemented. This includes disclosing that AI is being used and providing information about its purpose, expected outcomes, and associated risks. By being transparent about the use of AI, organizations can build trust among their stakeholders.

On this note, it’s important to mention that creating a model that follows responsible AI guidelines is only part of the job. The other part is making sure people within your organization know how to use the data and derive insights from it while staying ethical.
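
One common way to make this disclosure concrete is a short, "model card" style summary published alongside the system. The sketch below is purely illustrative (the system name and every field value are invented) and shows what a minimal, machine-readable version of such a disclosure could look like.

    import json

    # Illustrative disclosure for a hypothetical AI feature, kept alongside the system
    # so stakeholders can see what it is for, what to expect, and what can go wrong.
    model_card = {
        "system": "customer-query-routing-model",  # hypothetical name
        "uses_ai": True,
        "purpose": "Route incoming support tickets to the right team",
        "expected_outcome": "Faster first response; humans review low-confidence cases",
        "known_risks": [
            "May misroute tickets written in less common languages",
            "Training data under-represents the newest product lines",
        ],
        "human_oversight": "Support leads audit a weekly sample of routed tickets",
        "last_reviewed": "2024-01-15",
    }

    print(json.dumps(model_card, indent=2))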

Finally, let’s not forget about the positive impact that adopting the right approach to AI has on your business. Not only is the data you collect and analyze safe; it’s also representative of your entire customer and user base. This means that you can base your business decisions on reliable data. But it’s not just that – it also helps you with automating mundane tasks like replying to customer queries, categorizing documents, or prioritizing tasks.

Responsible AI examples: how the biggest IT organizations handle the challenge of leveraging responsible AI

When considering responsible AI examples, it’s worth looking at Google and Microsoft. These two organizations approach the subject of AI responsibility seriously enough to have created a set of values their staff are expected to follow in their work.

What are the key principles of responsible AI (according to Microsoft)?

The company recognizes six core principles as the pillars of AI responsibility:

  • Fairness: AI systems should treat all people fairly, without building in systemic or societal biases or making existing inequities worse.
  • Reliability & Safety: AI systems should be reliable in their operations and output. Microsoft also ensures that the models it uses do not cause harm to the world or amplify existing problems.
  • Privacy & Security: The company believes that responsible use of AI should never entail taking advantage of people. AI should also respect the confidentiality of the data it handles and ensure that data is not used with malicious intent.
  • Inclusiveness: Rather than only minimizing the risk of AI being used maliciously, Microsoft believes AI should proactively lift people up and empower humanity; in its view, AI must serve only positive engagement with the world.
  • Transparency: The most responsible AI systems are those that can be easily understood. AI exists to handle large quantities of data, and while we cannot follow every step of that process, we should be able to verify the steps that produced the final results.
  • Accountability: At the end of the day, human beings always need to be held accountable for their AI systems. Because there is so much potential for malicious action and unconscious bias, Microsoft has processes in place to ensure its staff are accountable for their actions.

Microsoft also acknowledges that everybody using or interacting with AI, be it an individual, business, development team, or even country, should take time to develop their own standards and beliefs for responsible AI. 

Google’s best practices for responsible AI

Similar to Microsoft, Google has also released a set of best practices that they believe will promote responsible use of AI. Google acknowledges that we as a species have a long way to go when it comes to understanding AI, what it’s capable of, and how to use it in today’s world safely. They also mention that we need to be proactive about the steps we take to ensure a safe future.

Google’s principles are as follows:

  • Use a human-centered design approach: AI systems should always be designed and used to benefit people and the greater good, with particular attention to how people interact with these systems and technologies.
  • Identify multiple metrics to assess training and monitoring: To keep errors, false positives, and unconscious biases to a minimum, multiple metrics must be used to monitor all aspects of the data management process.
  • When possible, directly examine your raw data: Machine learning models only ever give results based on the data they are fed, so that data should always be examined to catch mistakes, errors, and missing values, and to confirm that it fairly represents the user base.
  • Understand the limitations of your dataset and model: The scope and vision of the machine learning system should always be communicated as clearly as possible, as should its limitations. This is because AI models work strictly on patterns in the data they are fed and cannot, and will not, account for every variable.
  • Test, Test, Test: To ensure an AI model can be trusted and its results verified, every model should be rigorously tested, both for clean, clear results and to make sure the system does not change unexpectedly.
  • Continue to monitor and update the system after deployment: Even once an AI system is released into real-world use, it should continue to be monitored to ensure it remains the best way of processing data and providing the required experience; a minimal monitoring sketch follows this list.
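
As a hedged, minimal sketch of such post-deployment monitoring (the feature, data, and alert threshold are assumptions, not part of Google's published guidance), the snippet below compares the distribution of a live input feature against the distribution seen at training time and flags a warning when they drift apart.

    from statistics import mean, stdev

    def drift_score(training_values, live_values):
        """Crude drift signal: distance of the live mean from the training mean,
        measured in training standard deviations."""
        baseline_mean = mean(training_values)
        baseline_std = stdev(training_values) or 1.0
        return abs(mean(live_values) - baseline_mean) / baseline_std

    # Toy example: the 'age' feature the model was trained on vs. what it sees in production.
    training_ages = [34, 41, 29, 52, 47, 38, 45, 31]
    live_ages = [61, 66, 58, 70, 63, 67, 72, 59]

    score = drift_score(training_ages, live_ages)
    if score > 2.0:  # assumed alerting threshold
        print(f"Input drift detected (score={score:.1f}); review or retraining needed")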

By following these fundamental principles, Google believes we can ensure that everyone using AI technologies can do so responsibly, ethically, and with the best intentions for all. 

Function of responsible AI – summary

Responsible AI isn’t ‘just’ about being ethical; it’s the future. There are a few reasons for this – firstly, the tech community is starting to realize that AI should serve the greater good, and that the risks of tampering with data can be severe. Secondly, with data privacy and security being a major concern, companies will be forced to create responsible AI systems, as these need to comply with laws like Europe’s GDPR and the US medical privacy standard HIPAA.

Finally, to end on a positive note, responsible AI will bring plenty of benefits to your company, from minimizing bias and creating faster, more effective recruitment processes to building a better brand image. All these benefits, and many others, will support your business growth for years to come.

