‘Computer says no’: AI-based decisions in public service
The court case surrounding SyRI (System Risk Indication) recently attracted considerable media attention to the use of AI by the Dutch government. SyRI was an AI system intended to detect welfare fraud by combining databases from various agencies, such as municipalities, the immigration agency (IND) and the Tax Administration. The court banned the use of SyRI, ruling it stigmatising and discriminatory because it was to be applied mainly in ‘problematic neighbourhoods’. Clearly this implementation was not a success, but such legal rulings are something to learn from: how should it be done?
This case has made many people aware of the use of AI by government, but it is far from the only AI solution used in the public domain today. Decision support systems can be found at all levels of government: from creating an optimal route for waste collection, to suggesting a ‘healthcare profile’ of a patient on the basis of which that person will or will not receive special subsidies. Decision support itself is not a new phenomenon; traditional IT solutions are also used for this purpose, and those solutions are often just as ‘black box’ in their operation as AI is accused of being.
Big data patterns or expert knowledge?
But what is the effect of automation on the delivery of public services? The digitisation of administrative processes has been developing largely unnoticed for some time, but attention is growing rapidly in both academia and (inter)national politics. Nevertheless, there is still relatively little research on the influence of AI-based decision support on the decision-making process, the civil servant interpreting the system, or the citizen affected by the decision. For exceptions, see for example WRR (2021) or Marlies van Eck (2018). Decision support often concerns administrative decisions that were previously made by so-called ‘street-level bureaucrats’: civil servants who work in direct contact with citizens and who have a high degree of discretionary power in performing their work (Lipsky, 1980). This discretion is a great asset in their profession, and Lipsky believed that the work of street-level bureaucrats ‘requires human judgement that cannot be programmed and cannot be replaced by machines’ (Lipsky, 2010: p.166). For this reason, Y.digital’s catchphrase is empowering humans: automate what is possible, so that employees can spend their valuable time and energy on intelligent work.
This belief is not yet widespread: there is a deep-seated conviction that all problems can be solved with enough data and technology, and that arguments based on data are the best arguments. This ‘dataism’ is a hype prevalent in many sectors. Until recently, ‘relationshipism’ prevailed: trust-based relationships between professionals and clients. To keep these relationships and the valuable ‘tacit knowledge’ of senior civil servants intact, at Y.digital we never start by recognising patterns in ‘big data’. Instead, the in-depth knowledge and experience of professionals forms the basis of our solutions. Our solutions ensure that knowledge which resides mainly in people’s heads can be scaled up, such as knowledge of laws and regulations, case law and years of experience putting those rules into practice.
A new type of digital literacy
A second change that occurs when decision-making is partially automated is the point at which human intervention takes place. Take, for instance, an application for rent subsidy. These are granted (or refused) entirely automatically; only when the applicant objects to the outcome is it reviewed by a civil servant. In this way, the work of the street-level bureaucrat moves to the end of the decision-making process, where they are expected to explain why that particular decision was made. The effect of working with decision support systems ultimately depends on the specific system, the organisation or even individual employees. What is certain is that this way of working requires new and different skills. Employees need to be able to interpret the outcome of a system correctly in order to explain it to clients, and this is considerably more complicated for decisions made by AI than for those made by traditional IT systems. Staff will benefit from training in ‘algorithmic thinking’: learning how a particular system works and what intelligence has been applied to reach a decision. Misinterpreting data-based advice can lead to ill-informed, or even unjust, decision-making. The EU AI Act will also require ‘explainable AI’, which means that the basis of a system’s output must be traceable and explainable. At Y.digital, we use knowledge graphs to model domain knowledge and to record the relationship to the source, so that the grounds for a decision become clear.
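As a rough illustration of this idea of traceable, explainable decisions (a minimal sketch, not Y.digital's actual implementation: all rule names, thresholds and article references below are invented), an automated decision can be paired with a trace of the rules that fired and the legal provision each rule is grounded in, so a civil servant can later explain the outcome:

```python
# Toy sketch: a rule-based decision that returns both the outcome and a
# human-readable trace. Rules, thresholds and source references are fictional.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str          # identifier of the rule that fired
    source: str        # hypothetical reference to the legal provision
    description: str   # explanation a civil servant could give a citizen

def decide_rent_subsidy(income: int, rent: int) -> tuple[bool, list[Rule]]:
    """Return (granted, trace): the decision plus the rules that fired."""
    trace: list[Rule] = []

    if income > 40_000:
        trace.append(Rule("income_cap", "Art. 2 (fictional)",
                          "Household income exceeds the subsidy threshold."))
        return False, trace

    if rent < 300:
        trace.append(Rule("rent_floor", "Art. 3 (fictional)",
                          "Rent is below the minimum eligible amount."))
        return False, trace

    trace.append(Rule("eligible", "Art. 1 (fictional)",
                      "Income and rent fall within the eligible range."))
    return True, trace

granted, trace = decide_rent_subsidy(income=25_000, rent=650)
print("granted" if granted else "refused")
for rule in trace:
    print(f"{rule.source}: {rule.description}")
```

The point of the sketch is not the rules themselves but the shape of the output: the decision never travels without its grounds, which is the property a knowledge-graph-backed system makes scalable across a whole body of regulations.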
A balancing act
Automation therefore affects the delivery of public services in various ways. The day-to-day work of professionals changes as they are expected to base decisions not only on their own insight, but also on advice generated by a system. Professionals will need to be trained to interpret and explain these suggestions properly. However, in a sector where professionalism and human discretion are paramount, technology must only support the decision-making process, not take it over: empowering humans. It is therefore necessary to move beyond the hype of dataism and first determine what improvements are possible in existing processes and how these can contribute to medium-term strategy. When designing solutions, we make sure that operational employees are always closely involved in the development process, so that their expertise forms the basis of the AI solution and it suits their daily practice.