As the use of Artificial Intelligence soars, organisations need to ensure it is explainable, transparent and responsible, says Erik Beulen.
Organisations are increasingly becoming data-driven, and business use of Artificial Intelligence (AI) continues to accelerate.
For instance, one forecast by KPMG predicts that by 2025 the market size of AI will grow to $232bn, while another from Gartner says that in just two years’ time some three quarters of all organisations will have operationalised AI, driving a five-fold increase in streaming data and analytics infrastructures.
However, this dizzying growth isn’t without huge challenges, not least in terms of the need to address the risks of AI in areas such as privacy, competition, ethics, and potential breaches of GDPR and anti-trust laws.
As a starting point, and to ensure proper control over the use of AI, it is essential that ownership of organisation-specific AI algorithms remains with the organisation. By contrast, non-specific AI can be transferred to service providers without jeopardising the strategic and business interests of organisations.
This split, which results in a joint intellectual property framework, helps foster innovation between organisations and their service providers, increases levels of trust between them (supported by experience-driven service levels), and improves overall governance around the use of AI algorithms.
Indeed, my own recent research with co-authors has found that well governed client/service provider relationships positively affect the degree of trust between parties in creating AI solutions, and also encourages a bilateral approach to joint innovation. In fact, this mutual commitment and proactive communication is an essential prerequisite when using AI.
Legislation deals with privacy and unfair competition, but it is not sufficient to address ethical considerations. In our paper we also discuss the importance of what we term the ‘psychological contract’ between the two parties.
This contract supplements the service delivery contract and is an essential element when it comes to governance around AI. In particular it balances the client’s interests with the commercial interests of service providers, while it also encourages both sides to address the use of algorithms to avoid ethical issues.
The ethical challenge is to be transparent about the purpose of the data collection and to use the AI algorithm in line with that purpose. Both the organisation and its service provider are therefore bound to this purpose in the psychological contract.
Another key element of a responsible AI policy is to ensure that employees as well as service providers remain neutral and unbiased, and that no personal preconceptions or opinions interfere with the data collection process. This can be challenging, as both the number of employees working with data and algorithms and the volume of data itself continue to grow.
An illustrative example is the Dutch government, which deployed AI to handle childcare benefit applications and disproportionately denied benefits to applicants from ethnic minorities, wrongly accusing them of fraud, a scandal that led to the resignation of the Dutch cabinet in January 2021.
This shows just how crucial oversight from both parties is when adopting AI, and how valuable the psychological contract can be in providing guidance that ensures responsible AI.
Pausing not an option
AI is undoubtedly here to stay and its use will only accelerate. That means pausing AI projects or putting them on hold is simply not a feasible option for most organisations if they wish to remain competitive in their markets.
But as they implement AI strategies they must also have a strong focus on building trust based on ethical principles and on implementing explainable and transparent algorithms.
In this regard, organisations will also be helped by the regulatory environment, which is finally catching up with these issues. For example, the forthcoming European Union Digital Services Act package will be significant, as it provides an infrastructure for building AI systems.
Specifically, it aims to help organisations leverage the full potential of data and insights, help avoid unfair competition, and simultaneously protect the interests of consumers and their fundamental online rights.
This legislation establishes a powerful transparency and accountability framework for online platforms and is the centrepiece of the European digital strategy. It also prevents platforms from treating services and products offered by the platform itself more favourably than those offered by third parties on the platform.
AI should not only be lawful, but also explainable, transparent and responsible. And organisations need to use data purposefully too. As I have argued, this requires (in addition to a service delivery contract) a psychological contract between organisations and their service providers.
The hope is that upcoming legislation, such as the Digital Services Act package, will provide further guide rails for organisations to increase value creation by leveraging data and algorithms, and to help them foster innovation.
You may be interested in our Data Science for Business Decision Making Executive Education course which focuses on applying essential data and analytics tools to your business concerns.