With the ongoing AI revolution in the workplace, which I discussed in my previous article, compliance and ethics are now in the spotlight. Artificial Intelligence (AI) is transforming organisations and society, bringing immense opportunities as well as serious risks, and as the technology advances, questions of ethics, transparency and compliance have moved to the fore.
The legal and ethical dimensions have become especially pressing with the emergence of generative AI models such as ChatGPT. For financial service providers, compliance is critical, but AI also raises ethical questions that go beyond legal requirements. In his book 'Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz' (Fundamental Issues in the Regulation of Artificial Intelligence), Professor Mario Martini, a leading authority on digital governance and AI ethics and a key member of the German government's Data Ethics Commission, highlights the opacity of many AI systems, which often operate as 'black boxes'. This opacity creates compliance risks such as misuse and loss of control: a lending algorithm, for instance, could discriminate against certain groups because of biases in its training data. Such issues can be mitigated through accountable and transparent systems. Avoiding bias and discrimination matters more broadly as well: AI systems must be developed and used fairly and ethically to avert risks such as hiring bias or medical misdiagnosis. Careful data curation, thoughtful algorithm design and stakeholder engagement are critical to achieving this.
In Germany, the Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin) is already translating ethical AI requirements into supervisory practice, for example by expecting institutions to develop a clear AI strategy and to adapt their risk management procedures. The EU AI Act aims to provide a comprehensive legal framework that sets minimum standards for the safety, transparency and accountability of AI systems across the EU. Major companies such as IBM, Microsoft, SAP, Deutsche Bank and Allianz have also formulated ethical guidelines and principles for AI, reflecting a growing awareness of responsible AI practices.
However, there are still limitations in the governance and implementation of ethical AI that need to be addressed.
The role of explainable AI (XAI) is crucial for creating transparency and trust, but it is not sufficient on its own; comprehensive governance and regulatory compliance are also required. While sectors such as finance are currently subject to stricter oversight, the EU AI Act aims to harmonise requirements across sectors and member states. Voluntary corporate initiatives demonstrate a growing commitment to ethical AI, but practical implementation remains a key challenge.
As advanced models such as GPT-4 drive innovation, their responsible and sustainable integration will be critical. By building trust through transparency, governance and stakeholder dialogue, organisations can realise the promise of AI while mitigating its risks. This requires translating principles into policy, monitoring outcomes and enabling continuous improvement. With careful governance and responsible development, AI can be harnessed for the benefit of business and society, but sustained effort is needed to chart an ethical course amid rapid technological change.
Avoiding bias and discrimination through XAI
A key aspect in the development and implementation of AI systems is the avoidance of bias and discrimination.
Explainable AI (XAI) plays a crucial role here by providing methods that make the behaviour of AI systems transparent and comprehensible. A common example is feature attribution: quantifying how much each input feature contributes to a model's decisions, as sketched below.
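To make this concrete, here is a minimal sketch of permutation feature importance, one common model-agnostic XAI technique, using scikit-learn. The credit-scoring feature names and the synthetic data are purely illustrative assumptions, not taken from any real system.

```python
# Minimal sketch: permutation feature importance as a model-agnostic
# XAI technique. The credit-scoring setup is purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical applicant features (assumed names): income, debt ratio, age.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),  # income
    rng.uniform(0.0, 1.0, n),       # debt_ratio
    rng.integers(18, 75, n),        # age
])
# Synthetic label: approval depends only on income and debt ratio.
y = ((X[:, 0] > 45_000) & (X[:, 1] < 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy: a large
# drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(["income", "debt_ratio", "age"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

If a legally protected attribute such as age showed high importance here, that would be a red flag for exactly the kind of discrimination discussed above.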
Limits of Explainable AI (XAI)
Despite the advances in XAI, there are limitations to consider. One of the biggest challenges is striking the right balance between explainability and performance: the most powerful AI models are often the most complex and therefore the hardest to explain. Over-interpreting explanations can itself be misleading. Another limitation is the complexity of the XAI explanations themselves, which can limit their usefulness even for experts. XAI should therefore not be seen as a silver bullet, but as one of many tools to be used alongside other measures such as data diversification and compliance controls.
To effectively avoid bias and discrimination in AI systems, it is essential to combine XAI methods with careful data curation, algorithmic verification and regulatory compliance. Technologies for monitoring and documenting AI decisions also play an important role in ensuring complete traceability. In addition, regular reviews and validations of AI systems by independent bodies or internal audits are crucial.
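As a small example of what such an algorithmic verification could look like, the sketch below computes the disparate impact ratio between two groups of applicants. The group encoding, the toy decisions and the four-fifths threshold are illustrative assumptions for demonstration only.

```python
# Sketch of a simple fairness check: disparate impact ratio between two
# groups. Group labels and the four-fifths threshold are illustrative.
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Approval rate of group 1 divided by approval rate of group 0."""
    rate_g1 = y_pred[group == 1].mean()
    rate_g0 = y_pred[group == 0].mean()
    return rate_g1 / rate_g0

# Hypothetical model decisions (1 = approved) and a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = disparate_impact(y_pred, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common "four-fifths rule" heuristic
    print("Potential adverse impact: review data and model.")
```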
In addition, the four-eyes (or six-eyes) principle helps to ensure that decisions are reviewed by several people or parties, minimising errors and bias, as shown in the sketch below. All of these practices help to build trust in AI systems and to ensure that they are used fairly, transparently and in accordance with ethical and legal standards.
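Here is a minimal sketch of how a four-eyes rule might be enforced in software; the two-reviewer threshold and the reviewer names are assumptions for illustration, not a prescribed design.

```python
# Sketch: enforcing a four-eyes rule on automated decisions. A decision
# is only final once enough *distinct* reviewers have approved it.
from dataclasses import dataclass, field

@dataclass
class ReviewedDecision:
    proposal: str                    # e.g. "approve loan #123"
    required: int = 2                # four eyes: two distinct reviewers
    approvals: set = field(default_factory=set)

    def approve(self, reviewer_id: str) -> None:
        self.approvals.add(reviewer_id)  # each reviewer counts only once

    @property
    def final(self) -> bool:
        return len(self.approvals) >= self.required

decision = ReviewedDecision("approve loan #123")
decision.approve("officer_a")
decision.approve("officer_a")        # duplicate approval does not count
print(decision.final)                # False: needs a second pair of eyes
decision.approve("officer_b")
print(decision.final)                # True
```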
"In the realm of AI, clarity is king; understanding how and why decisions are made builds the bridge between innovation and trust."
Legal framework: the EU AI Act, foundation models and BaFin requirements
In the world of AI, Explainable AI (XAI) is becoming increasingly important, not only for understanding complex AI systems, but also as a means of meeting compliance requirements. Some AI providers are proposing a voluntary label or "social contract" to create binding minimum standards for AI services in Europe. These standards would establish a baseline of transparency and ethical responsibility, albeit on a voluntary footing and with only minor economic consequences for non-compliance.
The voluntary label could serve as a kind of certification that an AI system meets the legal and ethical requirements defined by the foundation model provider. For financial institutions, this would mean that the AI models they use, for example to make credit decisions, are not only efficient and accurate but also voluntarily comply with ethical and legal standards. This approach is welcome insofar as it appeals to providers' sense of honour, but given the far-reaching social and economic relevance of AI it will not be sufficient to inspire solid trust.
The EU AI Act will create a binding framework with clear conditions, going beyond voluntary self-regulation. The benefits for providers and users of such systems will be substantial, both directly and indirectly. Regulation defines guaranteed minimum standards for AI systems in Europe and creates a standardised basis for evaluating and comparing AI offerings. This pays off in greater transparency and consumer confidence in AI systems, and society's trust will be the foundation of any future AI business in Europe. In Germany, BaFin is already putting these requirements into supervisory practice, and they are directly actionable. For example, BaFin requires management to develop a clear strategy for the use of algorithms; a financial services provider could introduce guidelines for the ethical use of AI in customer service. As part of risk and outsourcing management under the Minimum Requirements for Risk Management (MaRisk), companies must adapt their risk management to the specifics of algorithmic decision-making, for example by setting up dedicated teams and procedures for monitoring and assessing AI risks.
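To make "monitoring and assessing AI risks" concrete, here is a small sketch of input-drift monitoring using a two-sample Kolmogorov-Smirnov test; the feature, the synthetic data and the alert threshold are illustrative assumptions, not regulatory requirements.

```python
# Sketch: simple input-drift monitoring for a deployed model. A two-sample
# Kolmogorov-Smirnov test compares live feature values against the
# training distribution; the alert threshold is an illustrative assumption.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_income = rng.normal(50_000, 15_000, 5_000)  # training distribution
live_income  = rng.normal(55_000, 15_000, 1_000)  # live data has shifted

stat, p_value = ks_2samp(train_income, live_income)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.4f}")
if p_value < 0.01:
    print("Drift detected: escalate to the AI risk team for review.")
```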
In terms of data strategy and governance, companies need to ensure the quality, quantity and data protection compliance of the data used. This could include implementing strict privacy policies for the collection and processing of customer data. Detailed documentation of model selection, calibration and validation is also required.
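One lightweight way to satisfy such a documentation requirement is a structured record maintained for every model version. The field names below are illustrative assumptions, not a schema mandated by BaFin.

```python
# Sketch: a structured documentation record for model governance.
# Field names are illustrative, not a regulator-mandated schema.
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class ModelRecord:
    model_name: str
    version: str
    training_data: str        # dataset identifier and snapshot date
    selection_rationale: str  # why this model class was chosen
    calibration_method: str
    validation_metrics: dict  # held-out performance, fairness checks
    approved_by: str

record = ModelRecord(
    model_name="credit_scoring",
    version="1.4.0",
    training_data="loans_2023q3 (snapshot 2023-10-01)",
    selection_rationale="gradient boosting outperformed logistic baseline",
    calibration_method="Platt scaling on validation split",
    validation_metrics={"auc": 0.87, "disparate_impact": 0.93},
    approved_by="model_risk_committee",
)
print(json.dumps(asdict(record), indent=2))
```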
Finally, "putting the human in the loop", as BaFin calls it, emphasises the importance of human involvement in the decision-making process. This means that technical experts should be appropriately involved in AI-based decision-making processes, for example in the case of credit decisions made by an AI system and reviewed by a human officer.
The combination of the EU AI Act, sector-specific regulation and voluntary trust labels thus forms a comprehensive, trust-building system that ensures transparency and compliance in AI without requiring users to grapple with the technical details.
Similarities in corporate approaches to AI ethics
Examining the public documents these companies have made available online, and comparing them in terms of objectives, approaches and dimensions of action, reveals a high degree of similarity but also notable differences. Let me go into specifics:
Major companies such as IBM, SAP, Microsoft, Deutsche Bank and Allianz emphasise the importance of ethical guidelines and principles for the use of AI. However, their approaches differ in specifics.
Microsoft and IBM have developed concrete standards and tools to implement ethical AI principles, focusing on areas such as transparency, fairness, privacy and inclusion.
Microsoft sets measurable goals in each of these areas, along with detailed requirements. IBM is building on long-standing core principles, now supported by new tools, to ensure fair, robust and accountable AI.
Meanwhile, SAP, Deutsche Bank and Allianz place more emphasis on high-level values and ethical considerations, without delving into detailed AI-specific policies. SAP highlights seven guiding principles that focus on human-centred design, privacy and societal challenges. Deutsche Bank discusses integrating sustainability and ethics into its AI strategy, but does not provide specifics. Allianz focuses on data ethics principles and responsible AI practices.
"Navigating the AI landscape requires not only technological skills, but also a firm commitment to ethical principles; only then can we responsibly unlock its full potential."
These gaps between stated principles and detailed implementation suggest that while formulating ethical AI principles is important, putting them into practice, monitoring outcomes and enabling continuous improvement remain ongoing challenges.
Conclusion and outlook
In summary, the integration of AI into businesses is a complex undertaking. It presents both enormous opportunities and significant challenges. In particular, advanced AI models, such as GPT-4, open up new opportunities for innovation and efficiency, but also raise important questions about compliance, ethics and transparency.
In this context, the role of Explainable AI (XAI) is critical to creating transparency and trust in AI systems. However, XAI alone is not enough to address all the challenges, so comprehensive governance structures and regulatory compliance are needed. While BaFin primarily supervises and regulates the financial sector, its guidelines and resulting legal requirements also have an impact on other industries, particularly with regard to compliance with anti-money laundering laws. This results in an environment where some sectors are more heavily regulated, while others are subject to less stringent requirements.
In contrast, the EU AI Act will provide a more comprehensive legal framework, setting minimum standards for the safety, transparency and accountability of AI systems in the European Union.
As discussed in the previous chapter on business ethics, major companies such as IBM, SAP, Microsoft, Bosch, Deutsche Bank and Allianz have also developed their own ethical guidelines and principles for AI. This reflects a growing awareness of the need for responsible practices in the use of advanced technologies. However, there are still limitations in governance and implementation that need to be addressed.
The future will show how companies integrate these technological developments with ethical considerations. Such an understanding will be crucial to building trust in AI technologies and to promoting their integration into society in a sustainable and responsible way.
References
- Microsoft. (2022). Microsoft Responsible AI Standard v2: General Requirements. Retrieved December 3, 2023, from https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5culu
- Bosch. (2020). Bosch Code of Ethics for AI. Corporate Department Communications & Governmental Affairs, Executive Vice President: Prof. Christof Ehrhart. Retrieved December 3, 2023, from https://www.bosch-ai.com/industrial-ai/code-of-ethics-for-ai/
- SAP SE. (2021). SAP’s Guiding Principles for Artificial Intelligence. AI Ethics Steering Committee and AI Ethics Advisory Panel. Retrieved December 3, 2023, from https://www.sap.com/documents/2018/09/940c6047-1c7d-0010-87a3-c30de2ffd8ff.html
- Federal Financial Supervisory Authority (BaFin). (2021). Principles for the Use of Algorithms in Decision-Making Processes. Retrieved December 3, 2023, from https://www.federalreserve.gov/SECRS/2021/October/20211004/OP-1743/OP-1743_071521_138490_334755314835_1.pdf
- Federal Government of Germany. (2018). The Federal Government's Artificial Intelligence Strategy. Retrieved December 3, 2023, from https://www.ki-strategie-deutschland.de/home.html?file=files/downloads/Nationale_KI-Strategie_engl.pdf
- Kettemann, M.C. (2021). UNESCO Recommendation on the Ethics of Artificial Intelligence. Conditions for Implementation in Germany. Retrieved December 3, 2023, from https://en.unesco.org/artificial-intelligence/ethics
- IBM. (2022). 2022 ESG Report. Retrieved December 3, 2023, from https://www.ibm.com/impact/esg-report
- Deutscher Ethikrat. (2019). Stellungnahme Mensch und Maschine: Ethische Fragen im Bereich der Künstlichen Intelligenz [Statement on human and machine: Ethical questions in the field of artificial intelligence] (Veröffentlichung Nr. 2019-09-11). https://www.ethikrat.org/fileadmin/Publikationen/Stellungnahmen/deutsch/stellungnahme-mensch-und-maschine.pdf
- Bundesministerium für Bildung und Forschung, Bundesministerium für Wirtschaft und Energie und Bundesministerium für Arbeit und Soziales. (2018). Strategie Künstliche Intelligenz der Bundesregierung [Artificial intelligence strategy of the federal government]. https://www.bmbf.de/bmbf/shareddocs/downloads/files/nationale_ki-strategie.pdf?__blob=publicationFile&v=2
- Martini, M. (2019). Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz. Springer.
Dec 5, 2023