Six Important AI Technology Trends for 2024

Looking forward to 2024, we have lined up some trends to pay attention to after last year's huge wave of large language models.
December 19, 2023

We are wrapping up 2023 and looking back on a year in which Artificial Intelligence (AI) technology became a true disruptor. A year ago, we could not have imagined the effect that the rise of Large Language Models and applications built on them, such as ChatGPT, would have on businesses, organizations, and society. The disruptive nature comes from the fact that this technology is openly accessible, so anyone can experience the power of generative AI. Generative AI creates text, images, code, and all sorts of other material, and it is already disrupting several industries.

Looking forward to 2024, we have lined up some trends to pay attention to.

1. Employee Experience (EX)

Whereas ‘simple’ AI technology was mainly applied to automating customer interaction with chatbots and voicebots, its use to support employees remained very limited. By applying AI within the organization itself, employees become involved in the technology and benefit from it. Especially in the current labor market, where employee experience is paramount, AI can play an important role in supporting and onboarding employees. This will also give a more nuanced view of AI's ability to take over tasks, functions, and entire jobs. AI is a powerful technology when used for specific tasks, yet the human perspective, the ability to feel, weigh, consider, and reach a well-considered decision, will remain decisive. We expect that supporting employees and the employee experience with AI will take a central role in the years to come.

2. Knowledge Graphs as guardrails to steer Large Language Models

Most organizations have experimented with the end-to-end application of Large Language Models in some of their business processes. GenAI and Large Language Models (LLMs) are at the peak of the Gartner hype cycle, and although these experiments whet the appetite for more, the real integration of GenAI and LLMs into business processes depends on the reliability of the generated content. Can it be trusted? The answer is to bring organization-specific content and logic into the way answers are formulated. Techniques like RAG (retrieval-augmented generation) already improve answers by adding context to the prompt before generation. Beyond that, there are powerful options to create guardrails for Large Language Models that minimize the percentage of hallucinations. Manually feeding in business-specific content and process knowledge, however, limits scalability. A knowledge graph is a technology in which you capture organization-specific information, definitions of your vocabulary, and logic. In a way, you build a custom model that can be used in combination with generative AI to steer the outcome of a large language model. You turn information into knowledge while preserving trust in the generated output.
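To make the guardrail idea concrete, here is a minimal sketch that grounds an answer in organization-specific facts. It uses a toy in-memory "knowledge graph" and a hypothetical call_llm placeholder rather than a real graph store or LLM API; a production setup would retrieve facts from an actual knowledge graph and pass the grounded prompt to your model of choice.

```python
# A minimal sketch of grounding an LLM answer with organization-specific facts.
# The knowledge "graph" here is a toy in-memory dictionary, and call_llm is a
# hypothetical placeholder for whatever LLM API you use.

# Toy knowledge graph: (subject, relation) -> object
KNOWLEDGE_GRAPH = {
    ("premium plan", "cancellation period"): "30 days",
    ("premium plan", "support channel"): "dedicated phone line",
}

def retrieve_facts(question: str) -> list[str]:
    """Return facts whose subject appears in the question (naive keyword match)."""
    facts = []
    for (subject, relation), obj in KNOWLEDGE_GRAPH.items():
        if subject in question.lower():
            facts.append(f"{subject} {relation}: {obj}")
    return facts

def build_grounded_prompt(question: str) -> str:
    """Compose a prompt that instructs the model to answer only from retrieved facts."""
    facts = retrieve_facts(question)
    context = "\n".join(f"- {fact}" for fact in facts) or "- (no relevant facts found)"
    return (
        "Answer the question using only the facts below. "
        "If the facts are insufficient, say you do not know.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    prompt = build_grounded_prompt("What is the cancellation period for the Premium plan?")
    print(prompt)
    # answer = call_llm(prompt)  # hypothetical: send the grounded prompt to your LLM
```

The design choice is simple: the model is asked to stay within facts your organization controls, which is what keeps the trust in the output.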

3. Personal Assistants for consumers

The rise of ChatGPT is the predecessor of what is coming next in the field of tailored (voice) assistants. What looked impossible with the Google Assistant and Amazon Alexa is now around the corner: a personal assistant tailored to your needs, handling restaurant suggestions, reservations, recipes, groceries, and possibly even lifestyle tips and tricks. The bottleneck of the previous generation of would-be personal assistants was the way a conversation was steered: the assistant could only react to specific queries, and you could not switch from one subject to another. Entering a new era of human-computer interaction, the personal assistant is set to rise from the ashes and might also influence the way we interact with companies on a daily basis. In the U.S., there was a case where an automated personal assistant took up over 30 minutes of a human call center agent's time for a customer question, which means companies should also prepare for incoming AI-generated contact.

4. Use of local and smaller large language models

Large language models are offered by big tech companies and are trained on essentially all information and content from the internet. More and more organizations limit the use of these models within their business processes for fear of sharing privacy-sensitive information, so the need for localized and locally run models keeps growing. Combined with the discussion about using large language models efficiently, and the fact that you do not need all of the internet's content to generate content specific to your organization, we see two trends. First, the use of models such as Llama that can run locally. Second, the initiative from TNO (in the Netherlands) to investigate whether the Dutch government can create a localized Dutch Large Language Model that is free of bias and other offensive content.
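As an illustration of the first trend, here is a minimal sketch of running a model locally with the Hugging Face transformers library. The model name is only an example (Llama checkpoints are gated and require access approval); any open model that fits your hardware, or a local directory with already downloaded weights, could be substituted.

```python
# A minimal sketch of running a language model locally instead of calling a
# hosted API, using the Hugging Face transformers library. The model name is
# illustrative; substitute any open model that fits your hardware, or point to
# a local path with downloaded weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # or a local path such as "./models/llama"
)

prompt = "Summarize our internal travel expense policy in two sentences."
output = generator(prompt, max_new_tokens=120, do_sample=False)

# Nothing leaves your own infrastructure: the prompt and the generated text
# stay on the machine that runs the model.
print(output[0]["generated_text"])
```

The point of the sketch is the deployment model, not the specific library: prompts containing privacy-sensitive information never leave your own environment.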

5. Regulations and Ethics

Not only the use of large language models, but also their creation should be looked at critically. The turbulence of the competition between big tech companies to ‘win’ the contest for the best application and the highest usage volumes, not to mention the conflict within the management of OpenAI, creates room for a discussion about regulation and European and national governance. The European Union's AI Act was presented on the 8th of December of this year and focuses on trustworthy AI, with limits on biometric identification and a ban on social scoring as examples of applying the European approach to big tech. In 2024 there will be room not only for regulation and guidelines; ethics will also play an important role, with regard to the impact that the application of AI within businesses and organizations has on the future of work, the environmental impact of AI technology, and the application of responsible AI.

6. Sustainability

The world around us is changing, but what most of us do not realize is how much energy is used when working with ChatGPT. For a single prompt that generates an image, the electricity required is roughly equal to charging your mobile phone. Imagine the impact the application of large language models could have once it is woven into (crucial) business processes. The footprint of large language model usage is not yet visible, but organizations will start taking it into account in their ESG scans. Possible solutions are using smaller language models and using other technologies for specific use cases where large language models are not strictly required. AI will remain top of mind, yet the GenAI hype will give way to a more nuanced view of its use and of the added value of GenAI for specific use cases.

We are looking forward to 2024. Would you like to know more about what AI can mean for your organization? We are happy to help!

Want to know more?
Ready for a large-scale implementation, or would you rather start small with an information session about AI? Y.digital helps you realize your AI ambitions.
Book a meeting