It’s AI time (again? still?)

We are in the middle of Tech-Spring. It kicked off with a sudden OpenAI Spring Event (probably scheduled to preempt Google’s announcements). A lot of rumours were circulating beforehand – GPT-5? A new competitive search engine? What will they announce? – None of that materialised in the end, but more below.
OpenAI’s Spring Event was then followed by Google I/O with massive AI announcements. Sundar joked about counting how often AI was mentioned during the keynote (122 times), but I think they should rather have counted how often Gemini was mentioned. Again, more below.

If you have not had a chance to read up in depth or watch the keynotes, I can only recommend you do. The material is massive and really interesting. Here you can watch OpenAI’s Spring Event, and here you can find Google’s I/O keynote.
If you don’t have time for everything, The Verge (among many others, but they are always very reliable) has cut the keynote down to “Google I/O in 17 minutes“.

OpenAI’s Spring Event

OpenAI announced GPT-4o, the next evolution of its model and a significant improvement. It is not only multi-modal but also much faster than before, and it has gained new and useful capabilities – at least at first glance. When you watch the event for the first time, I am sure you will be blown away by what you see. It is amazing, and there is a reason why comparisons with “Her” are circulating across the net.
The new way of interacting – the new voice, more emotion, and better understanding – will surely lead to improved acceptance and usage. I am also certain it will spark new hype conversations about “AI girlfriends”. Well, let’s leave that topic where it is.

However, with multi-modality baked in, it is much more useful in daily life and in figuring out everyday problems. The demo of helping to solve math problems, for example, is great for kids and will hopefully at some point lead to more equality in tutoring. But this is not what I want to talk about.
What I find more interesting are its abilities as a broader personal productivity tool. Helping to understand complex documents and graphs and explaining them is truly huge (and yes, you still need to apply personal judgement on whether what it tells you makes sense – it won’t get everything right). The capability for live, fast translation is also fabulous in business settings. I know we somewhat ignorantly assume that English is THE language and that everything works perfectly in just English. The fact is that this is not true, and having a productivity tool that helps with exactly that is massive. It levels the playing field and especially helps smart colleagues who are not strong in languages or English. Check out my conversation with David on exactly that topic.
What was announced is certainly great and shows how quickly Gen AI is developing – and also, look at the price announcements: it is getting far more affordable.

Google I/O Keynote

One day after OpenAI’s Spring Event, Google held its I/O keynote. It was packed with AI – in fact, so densely packed that I had to watch it multiple times to follow properly. It felt a bit like Google throwing everything they can do at the wall to see what sticks. But one of the major problems I had in following along is that basically everything is somehow called Gemini, which makes it really difficult to distinguish between the different models and use cases.

Of course, Google is offering very similar features and ideas to those OpenAI announced, and I don’t want to repeat myself here. But what I find truly interesting from a business and especially an Experience lens is Project Astra.

The examples Google shared are very “everyday life”. But this is something I can see developing further, especially in the business environment. The ability to understand context multi-modally, interpret it, and then provide support could be the next game changer in Experience.

Today we are already happy when we have a Gen-AI-powered chatbot that can interpret natural language and then help get a single self-service task done. I don’t want to underplay that, and I also know that this is not yet a commodity.
But think further: there is an additional problem we face today in our HR systems. It is not only that the systems are based on HR language, but they are also based on HR processes: each self-service scenario is a singular HR task. And these tasks are anything but designed the way a regular employee or manager approaches people management. Usually, a people manager needs multiple singular HR tasks to be performed at the same time or one right after the other – take, for example, a return to work from parental leave where the person is also switching jobs. Today, in most systems, this is more than one case and doesn’t flow naturally. In a world with Astra, the singular tasks can be triggered by the AI in one go, as it can interpret what you want to do and action multiple activities based on that. This is a future I love, as it further decouples backend and frontend and helps simplify and speed up self-services while empowering each people manager to be independent.
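The orchestration idea above can be sketched in a few lines of Python. This is purely an illustration of the pattern – one natural-language request fanning out into the several singular HR tasks it implies – not anything Astra or any HR suite actually exposes; the task names and the keyword-based "interpretation" step are invented stand-ins (a real assistant would use an LLM here).

```python
# Hypothetical sketch: one employee request is interpreted once and then
# triggers the chain of backend HR tasks it implies, instead of the manager
# opening a separate case per task. All names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class HrTask:
    name: str
    params: dict = field(default_factory=dict)


def interpret_request(request: str) -> list[HrTask]:
    """Stand-in for the AI interpretation step: map one natural-language
    request to the list of singular backend tasks it implies."""
    tasks = []
    if "return" in request and "parental leave" in request:
        tasks.append(HrTask("end_leave", {"leave_type": "parental"}))
    if "new role" in request or "switching jobs" in request:
        # A job change itself implies more than one backend task.
        tasks.append(HrTask("change_position"))
        tasks.append(HrTask("update_compensation"))
    return tasks


def run(request: str) -> list[str]:
    # Trigger every implied task in one go; here we just return their names.
    return [task.name for task in interpret_request(request)]


print(run("Maria returns from parental leave next month into a new role"))
```

The point of the sketch is the decoupling: the frontend deals in intent, while the singular HR tasks stay untouched in the backend and are simply composed on demand.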

Yes, this is a dream scenario, not easy to turn into reality today – but soon. Given the speed of Gen AI development, I am sure I will be writing about it here next year.

Posted by Volker Schrank