
INNOVATION | 01.29.2025

The importance of good governance in the adoption of AI


Artificial intelligence (AI) has become one of the most transformative innovations of our time. However, while organizations are identifying relevant use cases, they often struggle to achieve maximum returns due to internal limitations in how the technology is applied. AI not only enhances the efficiency and personalization of services but also, under responsible governance, makes it possible to anticipate and minimize risks, aligning with the well-being of clients and society.

Generative AI, exemplified by technologies such as GPT-4, Bard, and DALL-E, has made the power of this technology accessible to everyone. However, generative AI also presents challenges, including the potential introduction of biases, privacy risks, and questions about the veracity of the generated information.

In this context, the new AI regulation approved by the European Parliament establishes a legal framework governing the development and use of AI, ensuring alignment with fundamental rights and ethical values. At the same time, the “Hiroshima Process” of the G7, at its annual meeting in 2023, highlighted the need for international cooperation to establish ethical and legal standards in the adoption of AI.

Given this scenario, good AI governance is essential to protect fundamental rights, foster trust in this emerging field, and ensure its adoption is both responsible and ethical.

Algorithmic governance and transparency

Two concepts play a pivotal role in any AI governance strategy: algorithmic governance and transparency.

Algorithmic governance refers to the set of policies, practices, and structures that guide the design, implementation, and oversight of AI-based algorithms and systems. Beyond abstract ethical principles, the focus is on ethical and human-centered management of algorithms, ensuring they operate transparently, fairly and responsibly.

Transparency is therefore a fundamental pillar. The processes behind algorithmic decisions must be assessable and understandable by everyone, not just experts in the field. In other words, organizations must provide clear information about how AI systems work, the data they use, and how automated decisions are made. Transparency not only fosters public trust but also facilitates the identification and correction of potential biases or errors.
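To make this more concrete, the following is a minimal sketch of what a plain-language transparency record (in the spirit of a “model card”) could look like. The class name, fields, and example entry are hypothetical illustrations, not a description of any MAPFRE system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransparencyRecord:
    """Minimal plain-language summary of an AI system, aimed at non-experts."""
    system_name: str
    purpose: str                       # what the system is used for
    data_sources: List[str]            # categories of data the model relies on
    decision_logic_summary: str        # how automated decisions are reached
    human_oversight: str               # when and how a person reviews or overrides outputs
    known_limitations: List[str] = field(default_factory=list)

# Hypothetical example entry
card = TransparencyRecord(
    system_name="claims-triage-v1",
    purpose="Prioritize incoming home insurance claims for review",
    data_sources=["historical claims (anonymized)", "policy metadata"],
    decision_logic_summary="A scoring model estimates claim complexity; low scores go to a fast track",
    human_oversight="Claims above a payout threshold are always reviewed by an adjuster",
    known_limitations=["Lower accuracy on rare claim types"],
)
```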

“Artificial intelligence encompasses more than just crafting functional algorithms; it also entails ethical and human-centered management. At MAPFRE, we adopt ethical AI management that not only complies with regulations, but also places people's rights at the center and promotes equitable and responsible AI”, comments Diego Bodas, Director of Artificial Intelligence at MAPFRE.

“Therefore, as part of our public commitment to all our stakeholders, at MAPFRE, we use AI models trained with data for which prior consent has been obtained and that undergo an anonymization process, ensuring both the privacy of the information and the ethical behavior of the models we create and use,” he adds.
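A minimal sketch of how such a consent filter and anonymization step could sit in a data-preparation pipeline is shown below. The record layout, field names, and the use of salted hashing (strictly speaking, pseudonymization) are illustrative assumptions, not a description of MAPFRE's actual tooling.

```python
import hashlib
from typing import Dict, Iterable, Iterator

# Fields treated as direct identifiers in this sketch (an assumption, not an exhaustive list)
PII_FIELDS = {"name", "email", "phone", "policy_holder_id"}

def anonymize(record: Dict[str, str], salt: str) -> Dict[str, str]:
    """Replace direct identifiers with salted hashes so records remain usable but not directly identifying."""
    out = dict(record)
    for key in PII_FIELDS & out.keys():
        out[key] = hashlib.sha256((salt + out[key]).encode()).hexdigest()[:16]
    return out

def prepare_training_data(records: Iterable[Dict[str, str]], salt: str) -> Iterator[Dict[str, str]]:
    """Keep only records with explicit prior consent, then strip direct identifiers."""
    for record in records:
        if record.get("consent") == "granted":
            yield anonymize(record, salt)
```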

Different approaches to effective governance

The World Economic Forum has identified two complementary approaches for implementing effective AI governance:

  • Top-down approach: Establishes guidelines and best practices from the highest levels of the organization. Policies are developed and then implemented across the company. This approach ensures a strategic vision and alignment with corporate and regulatory objectives.

  • Bottom-up approach: Encourages the participation of all members of the organization in creating and improving AI-related practices. By involving employees at all levels, a culture of shared responsibility is promoted, and different perspectives are gained, enriching the established policies.

The combination of both results in stronger and more flexible governance, integrating strategic direction and inclusive participation.

Five key aspects for good governance

The International Monetary Fund has highlighted five key aspects to consider in AI governance, a roadmap that other international bodies also emphasize:

  1. Precaution: Adopt a prudent approach to the development and deployment of AI systems, especially when their impacts are not fully understood. Risk management is essential, not only for regulatory reasons but also for the responsibility of minimizing potential harm.
  2. Agility: Maintain the ability to adapt quickly to technological advances and changes in the regulatory environment. AI evolves at a rapid pace, and organizations must be able to adjust their practices and policies accordingly.
  3. Inclusivity: Ensure that all perspectives, including those of minority and vulnerable groups, are considered in decision-making related to artificial intelligence. This ensures that AI-based systems benefit society as a whole and do not perpetuate inequalities.
  4. Security: Protect AI systems from vulnerabilities and threats, ensuring the integrity and reliability of their operations, including data protection and user privacy.
  5. Focus: Direct efforts toward AI applications that generate a positive impact, aligned with the organization's ethical and social objectives.

“Effective governance involves implementing mechanisms to mitigate risks. This means ensuring representation from all areas of the organization in decision-making, making privacy and security integral by default, and establishing controls to identify approved AI systems and models, as well as providing exhaustive details on the use cases developed on them,” stated the Director of AI at MAPFRE.
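One concrete reading of that last point is an internal inventory of approved AI systems and the use cases built on them. The sketch below is a generic illustration of such a register; the class names, statuses, and methods are assumptions for the example, not MAPFRE's actual controls.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class ApprovalStatus(Enum):
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    RETIRED = "retired"

@dataclass
class RegisteredSystem:
    """Entry in an internal register of AI systems and their documented use cases."""
    system_id: str
    owner_area: str                    # business area accountable for the system
    status: ApprovalStatus
    use_cases: List[str] = field(default_factory=list)

class AIRegister:
    """Central register consulted before an AI model or system is deployed."""
    def __init__(self) -> None:
        self._systems: Dict[str, RegisteredSystem] = {}

    def register(self, system: RegisteredSystem) -> None:
        self._systems[system.system_id] = system

    def is_approved(self, system_id: str) -> bool:
        system = self._systems.get(system_id)
        return system is not None and system.status is ApprovalStatus.APPROVED
```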

MAPFRE's commitment to responsible AI

At MAPFRE, we believe that the latest advances in artificial intelligence are likely to be the most transformative developments we will witness in both society and the corporate world in the coming years. AI has the ability to impact almost all business processes, and increasingly, clients demand services in a more immediate and personalized manner.

Elena Mora, Director of Privacy and Data Protection at MAPFRE, emphasizes that “the concept of responsible use of artificial intelligence is vital; it is one of the aspects that should guide our activities and be present in every project and decision-making process. In this way, the responsible use of AI must be embedded, like security and privacy, from the design stage and by default in every new initiative. This is what will ensure truly effective governance.”

This commitment translates into concrete actions: training teams so that employees understand the principles and practices of ethical AI, adapting processes so that these aspects are included in every new initiative, and overseeing the development and use of these technologies through best-practice manuals and dedicated teams.

MAPFRE has identified more than 200 AI use cases and applies AI in over 90 of them, most aimed at improving the customer experience and the efficiency of insurance management. A further 75 use cases are under study for the latest wave of generative AI. Here are a few examples:

  • Claims automation: Processing home insurance claims, allowing direct payment to the customer in a single interaction after validation (a schematic sketch follows this list).
  • Image automation: Use of AI to detect vehicle damages in images, both during contracting and in claims.
  • Voice automation: Development of virtual assistants and call automation systems that enhance customer service.
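As a purely schematic illustration of the claims automation example above, a single-interaction flow can be thought of as a validation step followed by an automatic payout decision. The threshold, field names, and logic below are invented for the sketch and do not describe MAPFRE's production systems.

```python
from dataclasses import dataclass
from typing import Optional

# Invented threshold for the sketch: claims above this amount are routed to a human adjuster
AUTO_PAYOUT_LIMIT_EUR = 1_000.0

@dataclass
class HomeClaim:
    claim_id: str
    policy_active: bool
    damage_covered: bool
    estimated_amount_eur: float

def decide_payout(claim: HomeClaim) -> Optional[float]:
    """Return an immediate payout amount if the claim passes validation, otherwise None (human review)."""
    if not (claim.policy_active and claim.damage_covered):
        return None
    if claim.estimated_amount_eur > AUTO_PAYOUT_LIMIT_EUR:
        return None  # above the automation limit: route to an adjuster
    return claim.estimated_amount_eur
```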

Adaptation to the guidelines framework

The new AI regulation approved by the European Parliament establishes, among other things, prohibitions on certain uses and data treatments, the need for a governance model, risk management methodologies, and requirements for data transparency and quality. In this regard, MAPFRE's Director of Artificial Intelligence emphasizes that “although this regulation establishes specific and stricter requirements for systems classified as high risk, it affects all AI systems, since governance of these systems and their uses is required to carry out the proper classification and risk assessment needed to ensure compliance.”

The insurance industry, especially in the areas of life and health, includes certain use cases that fall under the classification of high risk according to the regulation. Therefore, MAPFRE is adapting its policies and processes to comply with new requirements, ensuring that AI is used responsibly and in alignment with fundamental rights.
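The regulation's risk tiers (prohibited practices, high risk, limited risk with transparency duties, minimal risk) lend themselves to a simple triage step during use-case intake. The sketch below is a schematic illustration under those assumptions; it is not a legal classification tool, and the input flags are deliberately simplified.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"     # banned practices under the regulation
    HIGH = "high"                 # e.g. certain life and health insurance use cases
    LIMITED = "limited"           # mainly transparency obligations
    MINIMAL = "minimal"

def triage_use_case(is_prohibited_practice: bool,
                    in_high_risk_category: bool,
                    interacts_with_people: bool) -> RiskTier:
    """Schematic first-pass classification; real assessments require legal and domain review."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if in_high_risk_category:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```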

MAPFRE anticipates regulations and adopts practices that not only meet regulatory requirements, but also guarantee the highest level of protection and trust in our AI solutions.

“For MAPFRE, the responsible use of AI is not just a matter of compliance. It means practicing AI under a methodology and controls designed to keep people and their objectives at the center of the design process, respect fundamental rights, ensure transparency, security, and accountability, and consider both the benefits and the potential harm that AI systems may cause to society. To achieve this, good governance in the adoption of AI is essential: it allows us to identify the different use cases for these systems, the data being used, and the risks they may carry, so that they can be properly managed,” explains Elena Mora.

And what will happen in the future?

When managed properly, AI can be a catalyst for innovation and sustainable development. The figures support its growth: according to estimates from Precedence Research, the AI market will expand at a compound annual growth rate of 38.1% from 2022, reaching roughly 1.6 trillion dollars by 2030. Studies by Boston Consulting Group and McKinsey also highlight its significant economic impact.
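As a quick consistency check on that projection, the short calculation below back-solves the 2022 market size implied by compounding at 38.1% per year through 2030; the ~1.6 trillion figure is taken from the paragraph above, and everything else is simple arithmetic.

```python
# Back-solve the 2022 market size implied by a 38.1% CAGR and a ~1.6 trillion USD projection for 2030
cagr = 0.381
years = 2030 - 2022                       # eight years of compounding
projected_2030_usd_bn = 1_600             # ~1.6 trillion USD, expressed in billions

growth_factor = (1 + cagr) ** years       # ≈ 13.2x over the period
implied_2022_usd_bn = projected_2030_usd_bn / growth_factor

print(f"Implied 2022 base: ~{implied_2022_usd_bn:.0f} billion USD")  # ≈ 121 billion USD
```

An implied base of just over 100 billion dollars in 2022 is consistent with the trillion-dollar scale of the 2030 projection.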

At MAPFRE we care about what matters to our customers and aim to generate a positive impact on society and the environment. That’s why we invite organizations and authorities to work together to establish common standards and share best practices. In the not-too-distant future, there will be a need to insure systems managed entirely by AI, and it is everyone’s responsibility to ensure they operate ethically and safely.

 
