Why do some organisations hit the brakes when it comes to applying AI?

It is clear that Large Language Models offer multiple benefits to organisations. But how do you achieve a sustainable situation, keeping your company data separate from the open domain?
August 1, 2023

Recently, Chief Innovation and Technology Officer Bart Leurs of the major Dutch bank Rabobank said that the bank is calling for a temporary stop on the use of AI, primarily addressing Generative Large Language Models (GLLMs). In March this year, a group of tech leaders including Steve Wozniak (Apple’s co-founder) and Elon Musk, as well as prominent AI researchers, signed an open letter calling for an immediate pause on AI labs’ work beyond GPT-4 (where GPT stands for ‘Generative Pre-trained Transformer’), models trained on data sets drawn from parts of the internet. Late May this year, Tristan Harris and Aza Raskin, founders of the Center for Humane Technology and contributors to the Netflix documentary The Social Dilemma, presented ‘The A.I. Dilemma’.

They essentially rang the alarm bells, arguing that it is high time to establish ethical and legal frameworks before it is too late. The makers of The A.I. Dilemma mainly target the tech giants and draw parallels with how social media, whose technological advancement and social enrichment were initially celebrated, developed into harmful applications that fuel addiction and destabilise democracies. It seems that corporations are now following suit with a pause, buying time while they investigate which interventions are required to mitigate the risks.

GLLMs offer interesting benefits to organisations

It is clear that these GLLMs, and especially the multi-modal models (generating text, images, video and sound), have multiple benefits for societies and organisations. Tedious tasks can be supported or performed by AI: coding an application, writing a letter, summarising a document, creating an image, and combinations thereof. Voice can be transcribed, understood, summarised, rewritten and regenerated. In the hands of criminals, these same capabilities can easily lead to scams: ‘stealing’ someone’s voice to fake authentication, or writing convincing spam emails that ask for credentials or request payments. It becomes very hard to distinguish true from false, and soon we may need detection tools to do so. 2023 will mark the year in which organisations can no longer rely on authentication as they knew it. Financial institutions and governments should be extra alert.

Organisations should be aware of the risks

Within organisations, a number of factors weigh in favour of a pause or a rethink of AI usage. However impressive an application like ChatGPT may seem, there are risks involved:

- Accuracy and credibility: GLLMs calculate their output from their input, i.e. their training data. If that data is inaccurate, the output will be too: garbage in, garbage out. On top of that, models can confidently generate plausible-sounding but false statements, known as hallucination.

- Privacy: all prompts, including the input and documents you share, are sent to the tech companies behind these models, e.g. OpenAI, Microsoft, Google and Meta. If you put company-sensitive information in your prompt, that information may be stored on someone else’s servers and become part of the model’s training data (see the redaction sketch after this list).

- Intellectual property: generated content may reproduce existing work and be presented as original.

- Ethics: output can be toxic, obscene, or otherwise inappropriate.

- Bias and discrimination: output can reflect unfair positive or negative attitudes towards certain individuals or groups, as these models reproduce patterns rather than reason.

- Environmental impact: running a GLLM in a production environment requires many graphics processing units (GPUs), which incurs high computational and environmental costs.
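To make the privacy risk concrete, below is a minimal, hypothetical sketch of one safeguard: redacting obviously sensitive patterns from a prompt before it ever leaves the organisation and reaches an external GLLM provider. The patterns and the `redact` helper are illustrative assumptions, not part of any vendor's API; a real deployment would need far richer detection (names, customer IDs, contract numbers, and so on).

```python
import re

# Illustrative patterns only; a real deployment would need far richer detectors.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the prompt is shared."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise the complaint from jan.jansen@example.com, IBAN NL91ABNA0417164300."
    print(redact(raw))
    # -> "Summarise the complaint from [EMAIL REDACTED], IBAN [IBAN REDACTED]."
```

The point is not the specific patterns, but the principle: sensitive data is filtered inside your own environment, so it never becomes part of someone else's training data.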

How to achieve a sustainable approach?

Most organisations are looking for a framework that leverages AI’s possibilities while mitigating its risks. Such a framework ensures that AI applications are:

- Reliable and secure: apply privacy and security measures, robust testing, production monitoring

- Accountable and governed: clear documentation, roles and ownership

- Fair and human-centred: employ data bias mitigation and keep the human in the loop

- Transparent and explainable: if an AI model is used to approve, for example, a mortgage, it should be understandable and explainable why a request is denied. Too often AI is a ‘black box’: ‘computer says no’, but we don’t understand why. A simple sketch of this principle follows below.
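As an illustration of that last principle, here is a small, entirely hypothetical pre-check for a mortgage request that returns explicit reasons alongside its decision. The thresholds and criteria are made up for demonstration and are not from any real lender; the point is that a denial can always be explained.

```python
from dataclasses import dataclass

@dataclass
class MortgageRequest:
    annual_income: float
    requested_amount: float
    existing_debt: float

def assess(request: MortgageRequest) -> tuple[bool, list[str]]:
    """Return (approved, reasons) so a denial can always be explained."""
    reasons = []
    if request.requested_amount > 5 * request.annual_income:
        reasons.append("Requested amount exceeds 5x annual income.")
    if request.existing_debt > 0.5 * request.annual_income:
        reasons.append("Existing debt exceeds 50% of annual income.")
    return (len(reasons) == 0, reasons)

approved, reasons = assess(MortgageRequest(annual_income=40_000,
                                           requested_amount=250_000,
                                           existing_debt=30_000))
print(approved)   # False
print(reasons)    # Each violated rule is named explicitly, instead of 'computer says no'.
```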

Over the past four years, Y.digital has developed a cognitive AI platform that puts the above principles in place. This cloud-native, enterprise-secure, GDPR-compliant SaaS platform offers the building blocks to deploy language-based AI applications with a short time-to-market, at lower cost. We mitigate the GLLM risks mentioned earlier by integrating a semantic model that collaborates with several AI techniques and provides the much-needed guardrails for a GLLM.
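To give a feel for the general guardrail pattern (this is a generic sketch, not Y.digital’s implementation), the idea is that answers are grounded in a curated, company-owned knowledge source, and the language model is only invoked when supporting facts are found. The `generate` function below is a placeholder for a model call, and the knowledge base is a toy example.

```python
KNOWLEDGE_BASE = {
    "opening hours": "Our branches are open Monday to Friday, 09:00-17:00.",
    "mortgage rate": "The current 10-year fixed mortgage rate is published on the rates page.",
}

def generate(prompt: str) -> str:
    # Placeholder for a call to a language model; assumed, not a real API.
    return f"(model answer based on: {prompt})"

def answer(question: str) -> str:
    # Retrieve supporting facts from the curated knowledge base.
    facts = [text for key, text in KNOWLEDGE_BASE.items() if key in question.lower()]
    if not facts:
        # Guardrail: refuse rather than let the model hallucinate an answer.
        return "I don't have verified information on that topic."
    prompt = "Answer using only these facts:\n" + "\n".join(facts) + f"\nQuestion: {question}"
    return generate(prompt)

print(answer("What are your opening hours?"))
print(answer("What is the meaning of life?"))  # -> refusal, no hallucination
```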

More information

Please reach out to us (result@y.digital) to discuss how your organisation can deploy language-related AI applications in the most reliable, fair, sustainable and transparent way.

Want to know more?
Ready for a full implementation, or would you rather start small with an information session on AI? Y.digital helps you realise your ambitions around AI.
Book a meeting