Leveraging knowledge work using Machine Learning (ML), RPA and rule-based systems: how to scale unique human qualities.
A few years ago I was working on a project to deliver a fully automated underwriting process. During this project the term RPA (Robotic Process Automation) came up as ‘a revolutionary way to scale knowledge work’. In all honesty, I had to look up the term as I was a bit intimidated by what it stood for, but it turned out to be rather simple 😉 In the years since, I have seen much confusion around the topic of scaling knowledge and which method (ML, RPA, rule-based or human insight) is best suited to the needs of a specific business process. That’s why we want to share our insights, to help you get ahead of the curve.
Before we go into the characteristics of RPA, ML and rule-based solutions, it is essential to understand how knowledge and expertise build up (a simplified version):
- Data: facts or numbers that can be analyzed or used to gain knowledge or support decision-making;
- Information: facts and know-how that can be captured in books, procedures, systems and programs;
- Knowledge & experience: understanding of information about a subject obtained by experience or study; it covers a domain’s context, terminology, and the logic or rules that apply.
- Cognition: the process of thinking: identifying, understanding and perceiving knowledge. Cognition encompasses processes such as memory, association, concept formation, pattern recognition, language, attention, perception, action, problem solving and mental imagery.
Information is static and useful, but it only has value when it is applied (knowledge), which builds experience. Using knowledge and experience, skilled experts can quickly assess, judge and act on cases presented to them (cognition) by, for example, recognizing patterns and deviations.
Building and attaining knowledge takes dedication, learning and many hours of practice. This is why human experts and knowledge workers are by definition rare and hard to scale.
For reference: the Future Ready Lawyer survey of 700 lawyers found that 72% struggle with the increasing volume and complexity of document processing and want to focus on efficiency and productivity. Other knowledge workers likely have a similar experience.
Every business of any size has to rely on unstructured (and in fact often non-digital) information. As highly regulated entities, banks and insurance companies have case law, policies and contracts to consult. Companies concerned with property rights (e.g. in Oil & Gas) work with detailed and complex agreements. The same applies to the offshore, trade, logistics, transport and maritime industries.
Managing such operations requires lineage (agreements and other documentation may go back decades), compliance (guidance from environmental authorities and other governmental agencies) and operations (processing millions of emails, texts, PDFs, SharePoints, and other random documents).
To be more competitive and in control you need more sophisticated and intelligent processing that actually scales. Without it, ever more labor will be needed to deal with, organize and search through this information, increasing coordination overhead and making the problem worse.
Conventional IT has never succeeded in scaling and leveraging the knowledge and experience of humans. Large databases and search-based systems do not address the essential point: knowing how to apply knowledge and being able to appreciate the value of information. A big design flaw in these systems is that they assume people are disciplined and actually store, label and maintain their information… some do, the majority does not.
So, if we want to scale knowledge, the question is: how can we apply human-like qualities in some of the new technologies out there?
Scaling Knowledge – a practical example
Each organization runs business processes that require knowledge, skills and experience, for example:
- Compliance checks in financial institutions (typically KYC, CDD, AML)
- Review of healthcare applications (medical experts)
- Processing of LCs (Letters of Credit)
- Contract review
- Legal assessment (case law, fiscal, infringement)
- Tax compliance checks (deeds, contracts)
- Loan or credit application (automated and real time underwriting)
As stated earlier, I worked on a project to deliver a fully automated underwriting process. In the existing underwriting process, c. 60% of applications were handled by a rule-based system; the remaining c. 40% were processed via ‘desks’ by skilled experts. Our goal was to decrease that 40%, but this was not possible with the rule-based system alone. The main disadvantages of the existing setup were:
- The rule-based system had reached its limit; it had become too complicated to translate the human expertise into business rules
- The desk-based process only scales linearly and takes too much time (not real time), making it unsuitable for our online apps
- Human rulings are more intelligent, as they factor in more information and logic than the rule-based system can
- The rule-based system’s logic and assessment framework was too narrow, leading to unnecessary rejections. As one of the managers stated: we are not looking for more reasons to reject applications, we are looking for more intelligent rulings to make more (healthy) deals…
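The first disadvantage above can be sketched in a few lines of code. This is a toy illustration, not the actual underwriting logic: all field names and thresholds are hypothetical. The point is that every new insight from the desk adds another branch or exception, until the rule set becomes too tangled to maintain.

```python
# Toy illustration of why hand-written business rules stop scaling:
# every new insight from the desk adds another branch or exception.
# All field names and thresholds are hypothetical.
def rule_based_ruling(app):
    if app["income"] < 20_000:
        return "reject"
    if app["loan"] > 5 * app["income"]:
        # Exception added after desk feedback: long-standing customers
        # with a clean record were being rejected unnecessarily
        if app["customer_years"] >= 10 and app["defaults"] == 0:
            return "refer"  # the rules still cannot decide automatically
        return "reject"
    if app["defaults"] > 0:
        return "reject"
    return "approve"
```

Each such exception narrows what the rules can decide on their own, which is exactly the dynamic the manager’s quote describes.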
We resolved this issue by adding several other methods and technologies:
- 52% rule-based: because part of the original processing led to false rejections, we carved out the tail of the previous 60% to avoid this. A rule-based method is perfect for cases that are simple and unambiguous. This type of processing is 100% auditable and highly scalable.
- 21% RPA: this method automates repetitive, procedure-driven tasks performed by humans on highly structured data. It acts as a sort of workflow layer across different systems. The method is relatively cheap and scalable within its conceptual limits.
- 23% ML and NLP: typically used where information is highly unstructured, with many variables and a wide variety of reference frameworks; in essence, dealing with ambiguity and a complex web of rules, regulations and logic.
- 4% desk: what remains after the cascading set of methods has processed all cases presented: typically cases that are rare (so there is no statistical basis to predict) or that otherwise do not fit the automated methods. This is where human expertise and judgment are required.
Looking at this cascading way of processing, you can see that there is actually a continuum in which we applied different methods: from simple, unambiguous rule-based methods, to automated human workflow (RPA), to more versatile (statistical) models (NLP/ML), ending with the most versatile system of all: the human expert.
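The cascade above can be sketched as a chain of handlers, where each method either returns a ruling or passes the case downstream, and only the leftovers reach the human desk. This is a minimal sketch with hypothetical fields and placeholder logic for each stage, not the production system:

```python
from dataclasses import dataclass

@dataclass
class Application:
    amount: float
    has_complete_data: bool
    free_text_notes: str  # unstructured input for the ML/NLP stage

def rule_based(app):
    # Simple, unambiguous cases only (hypothetical threshold)
    if app.has_complete_data and app.amount < 10_000:
        return "approved"
    return None  # cannot decide -> pass downstream

def rpa_step(app):
    # Placeholder for structured, procedure-driven handling
    if app.has_complete_data and not app.free_text_notes:
        return "approved"
    return None

def ml_nlp(app):
    # Placeholder: a trained model would score the unstructured text
    if "bankruptcy" not in app.free_text_notes.lower():
        return "approved"
    return None

def process(app):
    # Cascade: each method rules or defers; leftovers go to the desk
    for method in (rule_based, rpa_step, ml_nlp):
        ruling = method(app)
        if ruling is not None:
            return ruling
    return "escalate to human desk"
```

The design choice here is that ordering matters: the cheapest, most auditable method gets first refusal, and the expensive, scarce resource (the expert) only sees what nothing else could handle.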
Conclusions and recommendations for scaling knowledge
Build the business case
The business case for scaling knowledge is evident in many dimensions (cost, lead time, full compliance). However, building the business case requires more than crunching the numbers: you need to understand how the actual processes and rules work as a whole and what purpose they serve. A new service design will most likely merge a set of methods and technologies into one concept.
Dealing with ambiguity
Traditional rule-based automation often couldn’t do much without at least some analog help: it still needed assistance when dealing with the many variants and exceptions found in data, since such differences were hard to capture in the rules that governed a bot’s decision-making. RPA is about automating human workflow, not about dealing with ambiguity. This makes RPA interesting as one measure, but it will not resolve the bulk of human processing.
This is why it makes sense to include other methods, such as advanced NLP and machine learning (ML), for more sustainable and better results: the bulk of the remaining manual work still required skill and involved processing loads of unstructured data.
Today, semantics-driven solutions have emerged to automate retrieving information from the unstructured chaos. Natural Language Processing (NLP) methods can extract the important intent, concepts and due diligence items from contracts and other documents, and they can do so without lots of (non-scalable) hand-holding, with a high degree of accuracy, after being trained on a small document set. In addition to training machine learning models on annotations from experts, we build knowledge graphs, which deliver great value in applying knowledge and experience in a very versatile and intelligent manner.
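To make the extraction idea concrete, here is a deliberately simplified sketch. Real NLP pipelines use trained models rather than regular expressions; the patterns, item names and sample contract below are toy stand-ins chosen only to show the shape of the output an extraction step produces:

```python
import re

# Toy patterns standing in for a trained extraction model (illustrative only)
PATTERNS = {
    "termination_clause": re.compile(r"\bterminat(?:e|ion)\b", re.IGNORECASE),
    "governing_law": re.compile(r"\bgoverned by the laws of ([A-Z][\w ]+)", re.IGNORECASE),
    "payment_term_days": re.compile(r"\bwithin (\d+) days\b", re.IGNORECASE),
}

def extract_items(text):
    """Return the due-diligence items found in a contract text."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            # Use the capture group if the pattern has one, else the full match
            found[name] = match.group(match.lastindex or 0)
    return found

contract = ("This Agreement is governed by the laws of England. "
            "Invoices are payable within 30 days. Either party may "
            "terminate with 90 days notice.")
```

Calling `extract_items(contract)` turns free-running legal text into a small structured record, which is the form a downstream rule-based or ML stage can actually consume.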
Feedback loops for integrity & increased performance
By including feedback loops in our design, the system is not only governed by real experts but also refined by additions, corrections and feedback as people use it. This closed-loop principle is of paramount importance for the integrity and sustainability of the system.
By taking a more sophisticated approach that exploits advances in NLP and semantics, and by really understanding how to use a set of methods in a total concept, companies are now finding ways to automate complex work that was impossible just a few years ago. Want to know more? Schedule a demo.