Art Ligthart
January 21, 2025

Artificial Intelligence has penetrated the capillaries of our digital society, and everyone sees new opportunities and possibilities. At the same time, there is a lot of discussion about the threats and dangers that AI entails. The European Union has therefore taken the lead worldwide and is introducing new regulations, such as the AI Act and the Digital Services Act, to ensure that people’s fundamental rights are also guaranteed in the digital world. In concrete terms, this means, for example, that public and private organisations must be open about the algorithms they use and how they work. The ‘magic black box’ of AI must be replaced by a ‘glass box’ that is transparent for everyone. Increasingly, this movement is called ‘human-centred AI’.

Recently, I have been asked frequently what this ‘human-centred AI’ means exactly, whether it is really important or just another hype, and whether it is technically possible for AI to be transparent. My answer: it is mega important, it is indeed a hype, and at Y.digital we are fully engaged in thinking it through in the design of our AI solutions. Let me elaborate a bit on this.

The architecture of AI


You can look at AI from different perspectives. The European Commission’s definition takes an operational point of view: “all systems that demonstrate intelligent behaviour by analysing their environment and - with a certain degree of independence - taking action to achieve specific goals”. The AI Act focuses more on risk and use; for example, AI applications used for ‘social scoring’ fall into the category of ‘unacceptable risk’. And from a technical perspective, AI is a colourful collection of concepts, methods, techniques, modules and technologies, each with its own typical characteristics and functioning. To realise a specific AI application, you select multiple elements, assemble the application, connect the data sources and start training. In fact, AI is not magic: AI applications are assembled and configured by following solution architecture patterns, although this may not be recognisable to outsiders. The art of architecting, as always, is to design an application that fulfils all desired functionalities but also complies with national and EU regulations. But when exactly is an AI application ‘human-centred’? Publications on this subject often mention different topics, but in our view, it revolves around four characteristics.
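
Before turning to those characteristics, the ‘assembly’ idea above can be made a little more concrete. The following is a minimal sketch using assumed, purely illustrative names (a hypothetical Component and AIApplication, not an actual Y.digital or Ally API): an application is composed from selected building blocks, connected to incoming data and run as a simple pipeline.

```python
# Illustrative sketch only: assembling an AI application from components.
# All names here are hypothetical and do not refer to a real Y.digital API.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Component:
    """One selected building block (e.g. NLP, machine learning, knowledge graph)."""
    name: str
    process: Callable[[str], str]


@dataclass
class AIApplication:
    """An application assembled from components, run over incoming data."""
    components: List[Component] = field(default_factory=list)

    def add(self, component: Component) -> "AIApplication":
        self.components.append(component)
        return self

    def run(self, document: str) -> str:
        # Each component transforms the output of the previous one.
        result = document
        for component in self.components:
            result = component.process(result)
        return result


if __name__ == "__main__":
    app = (
        AIApplication()
        .add(Component("normalise", lambda text: text.lower()))
        .add(Component("classify", lambda text: f"category: {'legal' if 'law' in text else 'other'}"))
    )
    print(app.run("New EU law on AI transparency"))  # -> category: legal
```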

  • Supporting people

    With traditional IT systems, only part of the work can be automated, and what remains is processed manually by humans. AI has the potential to go much further, but this digitisation should not be a goal in itself. Human-centred AI puts a specific focus on supporting customers and employees in such a way that they have more time and opportunity for human contact, self-development and non-trivial things that matter. For example: showing genuine human interest in a personal conversation. Ensuring that nurses’ hands are free to care for patients. Increasing self-reliance without losing the human touch. Having time to deal with complex cases instead of boring repetitive work. Being able to do justice to someone’s specific circumstances when making a government decision. At Y.digital, we call this ‘empowering humans’. For us, human-centred AI means using AI applications to support people, while people themselves remain in control at all times.

  • The human as a blueprint

    A specific current in AI tries to make algorithms that - based on studies of the human brain - mimic how people learn. These algorithms are not given any rules but learn from rewards and punishments. An example is the company DeepMind, which has beaten the world champion in Go with this approach. However, human-centred AI does not necessarily mean going this far. We use humans as a source of inspiration. Our goal is to study the things that people are good at, to find out which elements in the human body play a role, and then to give these elements a place in the architecture of our AI applications, without pretending to be able to replace humans completely. At Y.digital, we call this “the human as a blueprint”. The architecture of our platform Ally contains modules and functions that have a recognisable human counterpart, such as short- and long-term memory, a knack for languages, communication skills, various ways of learning, knowledge libraries and a knowledge processor. And we focus on AI concepts and technology that fit in well with this, such as natural language processing, machine learning and knowledge graphs.

  • Advancing humanity and society

    One step beyond “human AI” is “humane AI”, aimed at helping mankind meet the challenges we all face: improving the quality of life, nature and the environment, climate, healthcare, security and food supply. AI can make a huge contribution to these issues if we manage to find new ways to deal with the main barriers, such as funding, commercial revenue models and access to data. But contributing to society can also be done through small initiatives: at Y.digital, for example, we apply our AI expertise to the social initiative Teach the Future, which aims to make children think about the future and thus actively stimulates them to shape their dreams.

  • Contributing to fundamental rights, the rule of law and democracy

    The most frequently mentioned risk of AI is that it cannot be clearly explained how algorithms work, especially in the case of self-learning systems. AI applications must therefore comply with increasingly strict national and EU regulations. But you can also turn this around: these regulations are intended to guarantee the fundamental rights of people in the digital world, which is what we all should want, and therefore human-centred AI must fulfil these fundamental rights ‘by design’. The EU demands that the functioning of an AI application be transparent, traceable and explainable. The good news is: much more is possible than most people think! At Y.digital, we have been researching architecture designs that meet these fundamental requirements. Our employees have fundamental knowledge of AI concepts, many of them with a PhD in relevant AI fields, and they work closely together with legal experts. We are continuously adding new functions to our AI platform, Ally, which was designed from the start to comply with these regulations and ethical aspects. For example, we developed modules that contribute to traceability in the legal domain, such as knowledge graphs (with legal concepts and references to laws and regulations, based on legal analyses) and audit logs of all processing (see the sketch below).
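
The sketch below shows, purely as an illustration, how such traceability could look in code. It assumes a hypothetical triple structure and audit log (these names do not describe the actual Ally implementation): every supporting fact carries a reference to its legal source, and every processing step is recorded with a timestamp.

```python
# Illustrative sketch only: provenance via knowledge-graph triples plus an
# audit log. Hypothetical names; not the actual Ally implementation.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str
    legal_source: str  # e.g. an article in a law or regulation


class AuditLog:
    """Records every processing step with a UTC timestamp."""

    def __init__(self) -> None:
        self.entries: List[str] = []

    def record(self, step: str, detail: str) -> None:
        timestamp = datetime.now(timezone.utc).isoformat()
        self.entries.append(f"{timestamp} | {step} | {detail}")


def answer_with_provenance(question: str, graph: List[Triple], log: AuditLog) -> List[Triple]:
    """Return the triples that support an answer, logging each lookup."""
    log.record("question_received", question)
    matches = [t for t in graph if t.subject.lower() in question.lower()]
    for t in matches:
        log.record("triple_used", f"{t.subject} {t.predicate} {t.obj} ({t.legal_source})")
    return matches


if __name__ == "__main__":
    graph = [
        Triple("social scoring", "classified_as", "unacceptable risk", "AI Act, Art. 5"),
    ]
    log = AuditLog()
    support = answer_with_provenance("Is social scoring allowed?", graph, log)
    print(support)
    print("\n".join(log.entries))
```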

A fundamental and feasible hype

Ten years ago, most people thought that ‘privacy’ was an outdated concept on the Internet. But look how this has changed in recent years, with our EU leading the way! Given the amount of interest in ‘human-centred AI’, it is safe to say that the hype is there. It is already commercially interesting to be at the forefront of this hype and develop AI applications that are compliant with the new regulations. So, a hype? Definitely. Will it pass? Yes, but that will take some time. This hype is about something very important: about embedding the essential fundamental rights of citizens in the digital world, about the most important norms and values that we share in Europe. This really must be designed correctly in AI applications. Is that feasible? A lot is already possible using existing AI concepts and technologies, and this will certainly develop further in the coming years. Feel free to contact us for more information: we are fully devoted to contributing to the hype of human-centred AI.
