Julien Molez (Société Générale): The use of data and AI for boosting digital experience!

Come with us to discover in a little more depth the potential of AI for making your clients’ lives easier (and your company’s too!). Geoffrey walks us through Julien Molez’s vast experience in the field. 👇

We all know Société Générale, a historical player in the French financial sector. We are less familiar with its challenges in terms of innovation, in particular around data and AI. That’s what we’re going to try to explore with Julien Molez, who is Group Innovation Data & AI Leader at Société Générale.

Well, thanks for spending some time with us on Innovation Leaders.

It is a pleasure, a shared one, to take the time to talk about the different challenges I face every day in this exciting group that is Société Générale.

We’re going to talk about innovation, data, of course, artificial intelligence, the specificities of the banking world in relation to these issues, and also, your experience of the implementation of these concepts in a large group like Société Générale, in comparison with the startups I’m used to talking to, usually more agile.

Before that, I’d like you to tell me something about yourself. It’s not easy to take a step back, but you graduated from an engineering school before joining a big bank like Société Générale, and you had a career as a consultant. Can you tell us more about what led you to occupy that position at the company?

Indeed, after I left school, I had a career in the field of intellectual services, but there were two very distinct periods. The first part of my career, which lasted seven years, I really had an IT profile and I was very strongly linked to technology: I was doing ERP development but overall, quite general development. First in a large company called IBM, then on my own, but it gave me a good grasp of technological issues and the functioning of IT departments since they were my natural clients for seven years.

Then, I wanted to have a much more impactful business dimension. An IT department often serves the business, so its posture is not necessarily always very transforming; I wanted to combine this vision of technology with a more transforming one. That’s why I went into management and strategy consulting, also for seven years, still in the field of financial services, because I’ve always worked for banks, asset managers or companies that handled employee savings, for example. This time, I was more involved in reorganisations, process reinventions or strategy definitions. I worked in structures that are similar to startups: very modest, fairly agile structures of about twenty people each time. That was before I joined Société Générale. So about 15 years with a half-IT, half-consulting background, before joining the Société Générale group at the beginning of 2014.

I joined Société Générale in an extraordinary team called SG Consulting, which is the internal consulting firm of the Société Générale group. It’s a team of about 100 consultants: half come from the bank’s internal teams, half come from the market, which was my case. They carry out organisational and strategic transformation assignments for all the Société Générale Group’s business lines.

They didn’t focus on tech?

No tech focus at all, rather reorganisations, process reinventions, optimisations, roadmap definitions, strategies, business cases, target operating models, so quite business-like. That’s one of the particularities as well, it gives a very, very good vision of the Société Générale group because it’s a unique team. Other banks have chosen models where the consulting firm is specialised either in investment banking or in retail banking. We have a single consultancy for all the businesses, retail banking in France and internationally, investment banking, but also corporate functions such as HR, risks and compliance which gives a very, very broad vision of the transformation.

I spent a year and a half in the field, carrying out missions, particularly digitalisation missions for structured finance. Then I was in the management team of the firm for a little over three years, where the aim was to grow the structure, so we went from 60 to 100 consultants. We also wanted to broaden the scope of the assignments in order to reposition the brand in a more strategic position.

At the end of this five-year journey, I had seen about 50 transformation missions throughout the Société Générale group, so I had a good knowledge of the players and their challenges. I wanted to combine this strategic knowledge and intimacy with the business lines with technology, which was the first part of my career. That’s where I was lucky enough to cross paths with Claire Calmejane, who had just been appointed Chief Innovation Officer for the Société Générale group, reporting directly to Frédéric Oudéa, our CEO, and who was looking for someone to take the lead on Data & AI issues with a business rather than a tech prism.

It’s a mix of the two fields of expertise, a position that is not easy to find.

That’s where you found your sweet spot: business impact, but with a tech focus, while also valuing the transverse vision of the organisation that you had. We’re going to talk about data and AI. Before we talk about the uses and your work on this subject, I need you to define it for me a little. Data in general, we see what it is; now, it would be interesting to know what the data you can use actually covers. And how would you define AI in absolute terms, for someone who is not a techie at all, with some illustrations if you can?

Let’s talk about data first, the fundamental material, the raw material. There is no AI without data, that’s the key point. This data in a bank is extremely varied. And I’m not even talking about a bank, I’m talking about a group: at Société Générale, there are essentially banking and financial activities, but not only. There is an extremely successful mobility branch called ALD, which specialises in the management of car fleets, and other businesses that are not necessarily banking.

Today, it is really the data of a large company. This data is of many kinds. It is obviously about our customers: extremely sensitive data, in particular personal data, which we must take the greatest care of. It is data on transactions and on accounts: all the payments, withdrawals and transfers that our individual clients may make, but also, on much more complex markets, transactions on the stock market, on over-the-counter derivatives in capital markets, or on extremely complex financing operations when we finance large property complexes or aircraft fleets.

This data covers the financial arrangements but is also of a textual nature. There are a lot of contracts; some of our businesses generate an extremely large number of contracts worldwide. There is also data on our risk indicators, because the job of a bank is to manage its risk, which means a lot of calculated data, especially complex indicators like RWA (risk-weighted assets) or risk exposure: these are data calculated from the basic data. You can see that there is a multitude of data, with a particularity: being a bank has meant managing intangible, non-physical assets for quite some time. We haven’t been moving gold for a long time, we haven’t been clipping paper coupons on bonds. We’ve been managing data for a long time, which gives us a particular prism compared to today’s technological players.

We have lots and lots of structured data, and tabular data with figures, whereas we have seen the explosion with the digital players with rather unstructured data, data from navigation logs, data with photos, files or audio. These types of data are less present, even if they are starting to emerge in the banking world. Increasingly, there is a very, very important point for banks, which is also to start using external data: how we are able to acquire and combine data that comes from outside, whether they are structured or unstructured.

Maybe just an indicator: today, I think it’s about ten petabytes of data stored by SG globally in its various infrastructures, essentially structured data. It’s quite a substantial and important asset.

So your playground covers all these different banking professions.

My playing field is above all a field of influence, more than a field of production. I’m not a CTO who spends his day with these teams, managing them. I’m much more involved in strategic dialogue, but in the end, my playground is this set of data. Perhaps an indication for those who are less familiar with us: the Société Générale Group is present in a little over 60 countries, with 30 million customers, individuals and professionals, and 25 very different business lines across those 60 countries, and a little over 240,000 employees. So a great diversity of geographies and activities.

How would you define AI, without necessarily talking about it in the banking world?

AI is not specific to banking. There are two ways of putting it. In a more theoretical way, AI is a set of algorithms, a family of algorithms that replicate human capacities, either deduction or prediction, and replicate human behaviour, based on experience and sensors. In practice, if we define it as it exists today, it is essentially the algorithmic branch called machine learning, a set of predictive algorithms that make predictions from data, because very few expert systems remain.

Here, we are reversing the paradigm a little. Before, when we wrote computer code, we took data as input, we wrote rules, and this produced an output. Today, it’s more like we have the output data, and we try to make the algorithm write itself, so to speak, to be able to reproduce those outputs by crossing the inputs and outputs.
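To make that paradigm reversal concrete, here is a minimal, purely illustrative sketch in Python with scikit-learn: the hand-written rule encodes the logic explicitly, while the model infers an equivalent rule from example inputs and outputs. The threshold task and the numbers are invented for illustration.

```python
# Classic programming: input data + hand-written rules -> output.
# Machine learning: input data + desired outputs -> learned rules.
from sklearn.tree import DecisionTreeClassifier

# Hand-written rule: flag any transfer above 1000.
def rule_based(amount):
    return amount > 1000

# Learned rule: the model infers a similar threshold from examples.
X = [[200], [800], [950], [1100], [1500], [4000]]   # inputs
y = [False, False, False, True, True, True]          # desired outputs
model = DecisionTreeClassifier().fit(X, y)

print(rule_based(1200))                   # True
print(bool(model.predict([[1200]])[0]))   # True
```

The point is that nobody wrote `> 1000` in the learned version: the decision boundary was recovered from the labelled examples.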

Machine learning is one branch of AI, which itself breaks down into several families: you have the expert systems, dominant especially in the 70s and 80s, and the machine learning branch, which has gone through different cycles since the theory began in the 50s. There were a lot of promises at one time, then a certain abandonment, and then a revival, notably thanks to the work of Yoshua Bengio and Yann LeCun on deep learning, which is itself a sub-branch of machine learning.

Is AI considered a side project in R&D mode, or is it something really concrete? Can you illustrate the stakes, and the strategy around them, for a group like yours?

It’s not a side project, that’s for sure. It’s one of the strategic challenges, as for many large companies, especially those in the CAC 40. There is a potential for transformation and amplification of the digitalisation that is underway, to meet the expectations of our customers, individuals or companies, who increasingly expect to consume digital services. They expect these digital services to be more and more reactive and personalised, because they are used to consuming this type of service from major digital players such as Google, Netflix, Apple, Amazon… They are used to that experience: reactivity, personalisation and excellence.

That set of qualities is supported by data and AI, which in fact boost the digital experience. The primary digital experience is often to make a service accessible through a web page, an API or a mobile application. The data will help to completely personalise this service. This is the reason why it’s becoming a critical issue for the bank. Today, the expectations of our customers, and also our employees and regulators, are increasingly strong in terms of customized experiences.

It’s a double critical issue: there are customer expectations, and on the other hand, we also have strong pressure from regulators, particularly from the European Central Bank, which is increasingly asking us to provide the finest, most granular data possible, so that they can play their role in financial stability, particularly in the European zone. Therefore, they are asking the banks to give them raw data to be able to steer as closely as possible. This issue is critical for both customer expectations and regulators.

How can we materialise it? What would you find most interesting, Geoffrey? A few emblematic projects? Or the size of the portfolio? What do you think would give the best picture?

I’m thinking about use cases, the main ones you might have in this AI world.

There are a little over 80 use cases in production today, already live. It’s a reality, it’s no longer in the labs; it’s more a question of seeing and trying what we can do. Machine learning is already being used in banking operations. There are visible things that consumers are already getting used to every day in the interfaces: the conversational agent, the chatbot, which we are starting to roll out on our customer interfaces to handle the most basic customer requests. There is a conversational agent in our retail bank in France, Société Générale, called SoBot, which already handles a little more than 5,000 questions and answers per day from customers.

We saw a very strong increase during the health crisis: a need for greater responsiveness towards customers at a distance, in order to answer them more quickly. For example, we use this kind of processing quite a lot to route our emails for market operations: our back office receives hundreds of thousands of emails every week, and it’s a real challenge to sort them and direct them to the right operators. It turns out that AI is quite good at reading an email, understanding its intention, and then using a rule to route it to the right person, which makes it possible to analyse the email in real time and redirect it, saving a huge amount of time in managing customer complaints. We also have cases that are a little more massive, especially when there are regulatory requirements.
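The email-routing idea can be sketched in a few lines. This is not Société Générale’s system: the team names, example emails and the simple TF-IDF plus logistic regression pipeline are assumptions for illustration; a production router would be trained on a large labelled history.

```python
# Hypothetical sketch: classify an email's intent, then route it to a team.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labelled history of (email body, destination team) pairs.
emails = [
    "Please confirm settlement of yesterday's bond trade",
    "Settlement failed for trade ref 4412, funds not received",
    "Requesting updated KYC documents for account opening",
    "Identity documents attached for the new corporate account",
    "Complaint: I was charged twice for the same transfer",
    "Refund request, duplicate payment on my account",
]
teams = ["settlement", "settlement", "onboarding",
         "onboarding", "complaints", "complaints"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(emails, teams)

print(router.predict(["Trade ref 9001 did not settle, please investigate"]))
```

On this toy history, the unseen message lands with the settlement team purely through vocabulary overlap; the real gain comes from doing this in real time at the scale of hundreds of thousands of emails.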

Today, from a client’s point of view, we must be sure that traders do not commit offences in relation to what we call market abuse or any other deviation, and that, in the communications they may have with their clients, they behave appropriately. We have an AI that monitors all these communications: emails, chats and telephone conversations. It makes real-time transcriptions of the audio, transforms all of that into text, and analyses the text to identify elements of conversation that could be suspicious, so as to detect them as quickly as possible.
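As a heavily simplified illustration of the screening step only: real surveillance relies on trained models over transcripts, not a hand-written list, and the phrases below are invented.

```python
# Toy screening pass over a transcribed conversation.
import re

# Invented example patterns; a real system learns these signals from data.
SUSPICIOUS_PATTERNS = [
    r"\bkeep (this|it) between us\b",
    r"\bbefore the announcement\b",
    r"\bguaranteed\b.*\breturn\b",
]

def flag_transcript(text):
    """Return the patterns matched in a (transcribed) conversation."""
    text = text.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]

print(flag_transcript("Let's buy before the announcement, keep it between us."))
```

The pipeline described in the interview adds the hard parts upstream (speech-to-text in real time) and downstream (prioritising alerts for human review).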

To succeed in covering all these issues and managing such a portfolio of use cases, there must be a strategy, a vision for data and AI. Can you give us an outline or an instruction manual for setting up an effective data and AI strategy, whether in a smaller group or, generally speaking, in any company?

Even if I myself lead this theme of data and AI in a particular way, it must always be at the service of a product or a real strategic desire. That can only be driven by the person in charge of the business, of the client or of the problem. This is true everywhere in the digital area, where the client must really be in charge. So the impetus for the use case and the product must really come from the business concerned. That’s one of the challenges. In other words, it’s not my role to drive the use cases.

Some time ago, in the startup phase, when you begin, for people who are perhaps less mature in this approach, it was important to have a lab that could initiate ideas in order to demonstrate them to the business and give the impetus. As soon as we are in a slightly more mature phase, we must on the contrary make the different businesses responsible, explain to them that it is their responsibility, help them to train, and give them the right methodological frameworks, but always make sure that the impetus comes from them. Often, when the impetus comes from a team that is too centralised, too far from the problem, or too technological, the project doesn’t find traction or a sponsor. That’s the key point: ideas really have to come from the business.

On the other hand, we have four key dimensions in a data strategy. These are cross-functional, because, as I said, the responsibility for ideas lies with the business; they are the essential layers that allow those ideas to subsequently come to life. The first layer is more of a technological issue: infrastructure.

Do I have sufficiently modern infrastructures to be able to do machine learning, which requires infrastructures that are relatively different from traditional IT infrastructures?

Do I have a data stack, with the right level of services for data management, data preparation, the training of my models, putting models into production and model traceability?

Do I do it at home or do I do it on the public cloud?

These choices are structuring, and the investments made are often multi-year but fundamental to be able to give life to the models. Since the essence of machine learning is to produce predictive models, we work on the algorithms and the data, but we need infrastructure, and consequently to invest in it. Then there is the key aspect of data management: how I manage my data in an appropriate way so that it is easily accessible, of good quality and secure. Because data really is the fuel: you could have the best idea in the world, but if there’s no data, there’s no AI. Machine learning will take the data, and if you don’t have any, nothing will happen. Or if it takes nine months to access your data because it is not cleaned, not accessible, or not available in an infrastructure, the use case will not come to life. Data infrastructures are essential.

The third aspect is strategic appropriation. We were talking about this earlier with the businesses. How do I ensure that my businesses have good ideas, feel responsible for having them, and manage their portfolio and time in a prioritised way? Today, this is one of the market trends that we see in large companies. There can sometimes be a temptation to say, « well, this AI topic looks nice, I need to launch an initiative », and you launch it without having first defined the scope and the value you expect from it. Unfortunately, an AI case is an investment; it is neither quick nor cheap. AI is still an industrial project of value and impact. When you want it to have an impact on a large scale, it’s not a quick and cheap project, so you have to choose it carefully. You must take time to qualify the value and choose your battles in a prioritised approach. It’s extremely key that there is a very high level of sponsorship, a real qualification of the value, management of the portfolio and of the efforts, and also knowing how to give up when things aren’t going well. This strategic appropriation is key: a mixture of governance, methodology and business involvement. That’s the third layer, strategic ownership.

The fourth dimension is fundamental, obviously: talent and skills. The talents needed to work with machine learning and data science at that scale are not the same as the skills we had in the past to work in IT; they are complementary. But how do I recruit good data scientists to lead my transformation? And not just data scientists; that’s what you discover when you scale up: the more projects grow, the more you realise how many slightly specialised profiles you need, and they are becoming more and more numerous. This is true in startups too, at least in large ones. We know that they have machine learning operations teams, MLOps, just as they already had DevOps, but you need slightly specialised MLOps when you do machine learning on a massive scale. We are also convinced that we need AI Product Owners. These are the profiles that will be responsible for the use case.

For a long time, we thought that the data scientist would do all that, but a data scientist is an expert in mathematics and code who knows how to make predictive models, which is extremely specific. Our bias is to say no, we mustn’t waste their time and skills on project framing, value definition, data cleansing and data preparation. We have an increasingly complex chain of talents. We need to be able to recruit, retain and attract this talent. This is just as important as the data. The four dimensions are extremely key, and above all, you have to make them go hand in hand. It’s no use having great infrastructure if you don’t have ideas: if you don’t have strategic support, you will have invested at a loss. Conversely, if you have a very, very strong strategic commitment and you don’t know how to access your data or your infrastructure, there is a problem: you won’t be able to bring the use cases to life. It’s a balance between these four subjects, which are also sometimes addressed by different people in the organisation. You have to manage to get them to work together and make it a strategic subject with the right level of scrutiny from the general management.

Let’s try to get into some of these pillars. As the person responsible for data and AI at group level, what is your role? What are your challenges and your objectives?

For me, my issues are quite clear. They are really on the last two pillars that I mentioned: strategic appropriation, and the talent part as regards the skills that the businesses need. My role is really to make sure that the different businesses and geographies of the Société Générale Group understand what AI can bring them, that they want to embark on the adventure, that they choose the right battles and that these are aligned with their strategy. I use different tools for this, three exactly.

One is what I call the strategic dialogue. I give a bit of a global vision to all the businesses and to general management on this appropriation of AI, by monitoring the global portfolio of use cases that we have in the group. I also have teams who take up their pilgrim’s staff and go to each of the business lines to discuss with them: « if you take everything in your portfolio, what does it look like? Do you think this is a good use case? What do you think about collaborating with such and such? » They dialogue with the business and also spend a lot of time making themselves known in a structure as large as Société Générale. As I said, we have more than 80 AI cases in production, so we also publicise what has been developed, sometimes in Germany, sometimes in England, sometimes by a subsidiary in Côte d’Ivoire, and which could be interesting for everyone.

This role of amplification is constant work of pushing ideas to others. We run a sort of podcast to publicise the different use cases and highlight people. We do that more at the level of experts and leaders, and then we have a team that does more mass communication, because we are also convinced that we need to upskill, make AI a common, everyday subject, and demystify it for all employees. That’s part of our activity: to provide employees with the right tools for training. In particular, we launched a partnership with Coursera to allow everyone to train and access the certifications for free, by putting forward training content on data science.

We also do a lot of external communication to defend the Société Générale brand: to remain an attractive employer, to show our expertise to students and various partners, whether they are startups or universities. This means giving ideas, sharing, communicating, making our expertise known. And lastly, I have set up a data science team of my own to respond to the cases of the perhaps least mature entities, which did not have a dedicated set-up and could be blocked by this lack of talent. We tell them: « Listen, general management is investing and we’re creating a data science team which is there for your first steps, your first pilots, your first MVPs in terms of AI, so you won’t be blocked by that. If you have ideas, we will be able to support you. » So I work on quite different dimensions, but once again, my objective is quite clear: it’s really to ensure that the Société Générale Group’s businesses know how to use AI in their transformation.

I wanted to come back to a point that you made earlier, about knowing how to stop when you realise that a use case is not effective or that you are not going in the right direction. In concrete terms, what element or indicator tells you that you’ve taken the wrong road, that you’re not on a good subject or that it’s not promising?

A lot of things can make a use case go the wrong way. The first thing you should never forget, and it’s sometimes a bit difficult in the mindset of big companies, is that there are many reasons that can kill a use case. The first one: the absence of data. Great idea, but today not all the processes of a company are 100% digital. We have over 150 years of history with a physical tradition. We’re moving towards digital, we’re pretty good at it, but we still have things that aren’t digital yet.

Then there is the degraded version: there is data, but it is not usable for a machine learning case. Either it’s not representative of the phenomenon you’re trying to predict, or the systems or the operators haven’t captured it, so there’s no labelled data for the AI to do anything with. In that case, unfortunately, we first have to go through a data generation phase, perhaps three or six months, to build a sufficient history. So the idea is a bit nipped in the bud, and that has to be checked fairly early on. Presence of data, quality and representativeness of data are the first point.

The second point is a business idea that doesn’t fly. Either you’re trying to solve a problem that isn’t a problem, or even if you do solve it, the ROI is not good. In fact, if you’re trying to change a process that involves 1.2 people, that is quite difficult to predict: it is not repetitive, it doesn’t involve masses of data, so there’s little chance that AI will deliver. You have to keep in mind that machine learning is not applicable to everything. It still requires a lot of data: a big volume of repetitive phenomena, representative data. You have to qualify the value you expect at the end, right from the start. You have to think big, but if the value isn’t there, you have to stop.

The third point is never to forget that machine learning is a field derived from probability. Maybe you’ll come across a model that doesn’t work: you have the data, and if it worked, the value would be great, but the model can’t predict. It happens, and in that case, you have to know when to stop. Then there may be cases where everything is in place, except that when you start to think about the target integration, it doesn’t fit. I think you must always have the target integration in mind. One of the phrases I repeat most often, to avoid the lab effect or the PoC multiplication effect, is: « if you don’t know whose life you’re going to change with this AI model, don’t do it. »

You need a fairly clear vision, otherwise you have fun making a model in a corner, but it doesn’t change anyone’s life. If you don’t know in whose hands the final result is going to be, or how it will make a difference for an employee or a client, you shouldn’t do it.

I’d like to have your opinion on the organisation of data skills, because these are cases that I come across with several clients I work with. We have skills and subjects that are at the crossroads between IT, data and business. What does an efficient data and AI organisation look like for a team? Do you need separate teams? Do you need to have teams in the business?

It’s an extremely complicated subject, which varies with time and with the maturity of the appropriation. Today I have my conviction, which is perhaps not shared by all my colleagues, because we have quite different models within the group and within the businesses. I have a strong conviction that we need three types of teams: business teams, data science teams and IT teams. And there is work for everyone; one should not try to do the work of the other, that’s a key point.

For me, the role of the business is to have the ideas, the strategic alignment, and possibly the capabilities of what I was calling earlier the AI Product Owners: for example, people who understand a little bit about data, capable of investigating the data, of framing their business cases in a fairly detailed way, who will take all the legal steps and who will ensure that the change is managed at the end. For me, these people are on the business side, with the sponsors and with the AI strategist, who must also be on the business side.

On the data side, obviously data scientists, to be able to build models. There are probably data engineering teams too, who will design transformations, data acquisition and transformation flows, and target pipelines, and also some front-end skills, UX and front-end dev, to be able to bring certain use cases to life, although the prototyping capacity must not be too far away.

On the other hand, on the IT side, you need strong skills in data science industrialisation platforms, ML workflows, and integration with legacy systems. Once again, if you want it to change someone’s life: it is quite rare for an AI product to be standalone, it has to go into a system at the end. So here we have people in charge of super-robust platforms that offer workflows and quality work environments, and that will give the right instructions from the start to the data science teams, the two sides having to talk to each other to make sure that they are in sync on the technology stack, the way of industrialising and the way of dealing with these subjects.

In terms of organisation, do we put the data science teams in the business or not? That’s the question. When a business reaches a critical size, yes, it’s better, to create this dialogue on a fairly regular basis. Today, we have models which work well on the pooling of data scientists across a certain number of businesses that have common assets, because they share either the same clients or the same technical foundations on the IT side. That works well as long as we haven’t reached a critical size in machine learning in a single business.

On the other hand, the dangerous phenomenon is the sub-critical size of data science teams. Teams of two or three data scientists generally don’t last very long. In fact, people often don’t get the challenge they expect: they lose stimulation in relation to the market, and they have no coaching or community work. Unfortunately, this is often why such teams lose their attractiveness, and it has led to a certain amount of turnover, because people often find themselves either doing data cleaning or desperately trying to find a sponsor somewhere. It’s better to have labs or data science teams of critical size, often ten or fifteen people: you can have a bit of R&D, harmonised practices, the same rituals, and develop common building blocks that can be reused, because there the leverage is much stronger. That can be shared at first; then the business-facing scope should be as wide as possible, with the data talents directly involved in the business. It’s a question of capillarity.

For example, ALD is present in a large number of countries, so at the beginning of their maturity, these teams needed to be centralised and then perhaps decentralised as time passed, perhaps by country. It’s never easy, but in any case, it’s better to start centralised when expertise is scarce and maturity is limited, and then gradually decentralise to find the right set-up. The danger is to decentralise too early, as the teams would be too small and would lose traction and interest.

You were talking about pooling, so data teams who are going to interact with different teams on the IT or business side, but not a mix of skills?

Yes, that’s the model we use, though we may have decentralised a bit quickly. Quite a few large groups that we talk to still have everything centralised in a central AI factory, including groups of more than 100,000 people who deal with quite different IT systems but try to converge. Often it’s a model where storage is also central; it goes with a unique data lake which allows models to be exposed as APIs. These API models are then called by the various IT departments, which creates a model that is fairly agnostic to the underlying IT. It also depends on the impact of legacy on the organisation and the history that you have. The data science team I was talking about shares the same underlying IT, even though they work on data science models for five or six different businesses, so you can mutualise more easily.

Speaking of IT, the tech stack, the tools that are used in this AI world, what are they? What kind of technological environment do you use?

It’s quite diverse. Again, that’s the complexity of large groups and it’s very different from a startup. There’s not a CTO who gives his vision and says « this is the reference technical stack I want to rely on and we’ll all speak the same language ». We already have more than twenty data science teams, so they haven’t all made the same choices. If you look at the dominant ones, we have a strong convergence of course on Python 3, which remains the dominant language in the world of machine learning, with a great predominance of open source libraries, whether it be the classics for data preparation, such as pandas, and of course scikit-learn for a certain number of data science functions, so this is present almost everywhere. Classics such as TensorFlow are also used for everything that is deep learning. Visual Studio Code is still the reference editor. We have an industrialisation stack based on MLflow. The storage and lake layers are also open source, so it’s predominantly open source, but I think that this is quite classic in the world of machine learning.
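A minimal, purely illustrative sketch of the kind of open-source stack described here (pandas for data preparation, scikit-learn for modelling), using a public toy dataset rather than any banking data:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load a public toy dataset into a pandas DataFrame for preparation.
iris = load_iris(as_frame=True)
df = iris.frame

X = df.drop(columns="target")
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# A classic scikit-learn pipeline: scaling plus a simple classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Tools like MLflow would then wrap this kind of pipeline for experiment tracking and industrialisation; that part is omitted here.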

As a bank, you don’t have the same freedom as other companies, do you?

Yeah, as a historical bank, it’s something that raises a lot of questions. It’s a strategic choice to know what use we can make of the cloud. The answer will never be 100%, I don’t think that’s the direction because of the sensitivity of the data we handle. There is always this balance to be found by type of data and we see that this is crystallising, including in industry. Not long ago, I heard Michelin say that even though they have fewer constraints, they make mixed use, but all the sensitive subjects in terms of intellectual property are handled on an infrastructure on-premise.

For us, these subjects are even more marked because of the regulations and the sensitivity of the information. It is certain that a bank today that was exposed to a data breach, on its own infrastructures or in the cloud, would lose a large part of its reputation. We are institutions that are trusted by our clients, and this trust is a responsibility. We have dialogues with the cloud providers. There are non-critical applications with non-critical data that are on there, to learn. There is a lot of dialogue at the moment to place the cursor, to find the right balance between the two worlds, because we can’t miss the value proposition of the big cloud providers, the agility they bring and the modernity of the stacks they offer.

If we look at the specific environment of a bank, I’m interested in your feedback on this implementation in a rather specific bank group such as Société Générale, with quite a few compliance constraints. Some are obviously reputational and the sensitivity of data is very important too.

Many of the company’s teams must be extremely curious, passionate, and willing, but others must also be quite worried about specific technologies. There are also shortcuts to optimising the number of employees through technology. Do you encounter this kind of difficulty yourself? Can there be cultural changes linked to these issues?

Undoubtedly, we encounter them, but at the same time, they are not new subjects. When people had the first punch cards, when they had micro-computing, when they started to have emails, when we started to put in business applications to automate, when scanners arrived, faxes, right up to RPA recently, which I really don’t put in the field of AI, there have been a lot of changes. RPA is organising a sequence of tasks without doing it via a dedicated system. Basically, it’s replacing the operator’s mouse click, so it’s really moving the cursor from position 47.2 to 53.2, taking the data from the Excel file and copying it. It’s a set of rules. Some mix it with cognitive functionalities, particularly pattern recognition in customer service, but overall, pure RPA is not yet linked to AI. These things are there to optimise operating methods, to replace the jobs of the past, and at the same time there is a continuous cycle of creation of new jobs. There are people who do social networking or digital marketing who didn’t exist, or hardly existed, ten years ago in banking, or people who do UX, so it’s a permanent recycling of skills.
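The rule-based nature of pure RPA described here can be sketched in a few lines: reading a hypothetical spreadsheet export and applying a fixed threshold rule, with no learning involved (the file contents and the rule are invented for illustration):

```python
import csv
import io

# Hypothetical export of the spreadsheet an operator would click through.
exported = io.StringIO(
    "client_id,amount,currency\n"
    "C001,250.00,EUR\n"
    "C002,9800.00,EUR\n"
)

# Pure RPA logic is a fixed sequence of rules: read each row,
# apply a threshold, and copy the result towards another system.
processed = []
for row in csv.DictReader(exported):
    row["flag"] = "review" if float(row["amount"]) > 5000 else "auto"
    processed.append(row)

for row in processed:
    print(row["client_id"], row["flag"])
```

Nothing here is learned from data, which is exactly why this kind of automation sits outside machine learning.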

AI is added as a technology that has the potential to automate, especially as we see it a lot through document analysis. It so happens that banking has a certain number of processes where we open, read, extract information from a document and process it. These are fairly generic capabilities, offered in particular in NLP by the latest algorithms, whether it be BERT before or now GPT-3, which set new standards in the automated understanding of documents. Yes, there are automation impacts. Yes, there are fears. These transitions have to be accompanied, and I think I can say that they are being made that way today, without exception, always in a complementary human-algorithm way. We don’t have 100% AI processing. There is always a simplification of the operator’s work queue, with the simplest tasks automated on the basis of the algorithm’s analysis. But in the end, the human decides.
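To contrast with models like BERT or GPT-3, here is a sketch of the older, rule-based approach to extracting information from a document: fixed regular expressions over a hypothetical loan application text (the document, field names and patterns are all invented for illustration):

```python
import re

# A hypothetical scanned-document text; a real pipeline would OCR a PDF first.
document = """
Loan application - ref LA-2021-0042
Client: DUPONT Jean
Requested amount: 15 000,00 EUR
IBAN: FR76 3000 6000 0112 3456 7890 189
"""

# Before NLP models, extraction was often done with fixed patterns like these.
patterns = {
    "reference": r"ref\s+(LA-\d{4}-\d{4})",
    "amount": r"Requested amount:\s*([\d\s]+,\d{2})\s*EUR",
    "iban": r"IBAN:\s*([A-Z]{2}\d{2}[\d A-Z]+)",
}

extracted = {}
for field, pattern in patterns.items():
    m = re.search(pattern, document)
    extracted[field] = m.group(1).strip() if m else None

print(extracted)
```

Rules like these break as soon as the layout changes, which is why learned models that generalise across document formats raised the bar.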

These mechanisms that we mention bring some sort of reassurance. There are new fears linked to this somewhat intelligent side, which is sometimes rejected by public opinion or by employees. I come back to something I said earlier in the interview, which is that you have to think about the product. We had a quite striking testimony on a use case, which is used to predict the behaviour of the customer in the event of an overdraft, to see if the customer will pay back naturally. Before, this was a task that was done manually by the advisers on the platform or in the branches, and when we asked the operators « what do you think of this solution? It’s AI. Does it scare you? », everyone said « it’s great, it saves us a quarter of an hour every day and it’s super valuable; that way we can spend time with our customers ». Once again, you have to think about the product experience, that’s the most important thing. And then, for the end user, whether it’s AI or simple coded rules or RPA, they don’t really care.

You were talking about skills support. It’s quite exciting and quite challenging for a group like Société Générale to succeed in meeting its need for skills. You were saying that there was support, particularly from Coursera or in any case from external training. What major initiatives have you put in place to meet your need for skills, either in the recruitment of external people or in the training of internal people?

For training, there are different responses, because there are many different groups with different aspirations. This management of skills is complex. One of the flagship initiatives that we launched, this Coursera partnership, has already enabled us to train more than 4,000 people who taught themselves data science: we just set up the platform, did the promotion and a few internal communication messages, and our employees took it up. It’s something that is absolutely splendid.

You have to trust your employees, give them the means and leave them free access. More and more in this world, people have to be responsible for their own careers, and we have to avoid trying to build top-down training plans that say « you’re going to train, you’re not ». We give them the means and people train quite spontaneously. I believe in this a lot.

On the other hand, we supplemented this approach, which was for all employees, with a more selective approach for the top 60, the 60 senior executives of the group. For them, we provided more tailored support in three phases, dedicated to each of them. It started with a two-hour reverse-mentoring session with a data scientist, to see the fundamentals of machine learning. What does it mean to take data? What do you need to understand? What is overfitting? What is underfitting? What is learning from an experiment? And above all, going to look at a notebook, to de-dramatise it: we took a public-data case from a large challenge platform and walked through it in Python with these very senior people, telling them « this is how it happens, this is what the data scientist does, this is what a model performance measurement means », to play it down and give them a feel for it.
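A notebook session like the one described, illustrating underfitting and overfitting, might look like this minimal sketch on synthetic data (assuming NumPy; this is my illustration, not the actual session material):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth curve, the classic toy setting for the demo.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.1, x_test.size)

def mse(degree, x_eval, y_eval):
    """Fit a polynomial of the given degree on the training data,
    then return the mean squared error on the evaluation points."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x_eval) - y_eval) ** 2))

# Underfitting: a straight line cannot capture the sine shape at all.
train_low = mse(1, x_train, y_train)
# Overfitting: degree 9 threads through all 10 noisy points almost exactly,
# so its training error collapses while held-out error stays higher.
train_high = mse(9, x_train, y_train)
test_high = mse(9, x_test, y_test)

print(f"degree 1 train error: {train_low:.3f}")
print(f"degree 9 train error: {train_high:.6f}")
print(f"degree 9 test  error: {test_high:.3f}")
```

The gap between the last two numbers is the whole lesson: near-zero error on the points the model has seen says nothing about performance on points it has not.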

We believe a lot in experience, and this approach was then completed. We also offered them tailor-made MOOCs, with somewhat closer accompaniment to help them do it, because it’s not always easy to fit into their busy schedules. The last step, to put it into practice, was to ask each of them to choose a use case with impact to sponsor and to share it with their peers. We created a collective dynamic to put things into practice and boost our portfolio a little. This support has worked rather well. I have a small team, so I can’t afford to do this in all the business lines, but we did it with the most senior executives of the bank.

Approaching the concept with them and getting them used to it will multiply this knowledge. Interesting. From a recruitment point of view, have the objectives been achieved? Isn’t it too complicated to recruit in these times?

So clearly, even today, we are very careful about the amount of recruitment we do. We also have challenges in terms of redeploying our internal skills. We already have a lot of internal talent that we can support towards these new professions. We obviously need new blood in data science type expertise; for the time being, that can’t come from internal reskilling alone. We can reskill a quantitative analyst from the trading room, and a good credit risk manager can make a good data scientist, so there are bridges with some of the existing talents in a bank, but we still need new blood.

We can go out and source them, so we have very good relations with the various universities. We have a lot of fairly junior profiles who come during their first years. Within the Société Générale group, we remain attractive, we have a lot of CVs, and we see that the market has become more dynamic in terms of training offers in machine learning. There are a lot of good profiles and then you have to integrate them well. And that’s the backbone of the senior profiles that we’ve recruited over four or five years. You need good mentors to be able to supervise the teams.

We’re still in the early years, but I’m interested in your point of view on the maturity curve, because we know that banks are relatively challenged at the moment, with low interest rates that are lasting and eating into margins. Competitiveness is also being challenged by fintech and GAFAM players who are interested in your data and your customers, and who are coming up with value propositions that are often attractive in terms of pricing or customer experience. Where do you think we are in terms of the maturity of these data and AI topics in the financial industry?

They are making progress, that is undeniable. But to integrate these topics completely requires, again, a complete framework. What differentiates the fintechs and GAFAM is much more global than the use of data: data plays a fundamental, strategic role at the heart of the experience.

As I said at the beginning, we are able to have personalised services and to integrate them well, but this is also part of a complete framework of mobile experience, UX, simplification of processes, and design of the experience. This combination of design, digital and data makes a whole. That’s why I always find it difficult to separate AI completely; when we do separate it, we deal with the efficiency side, which is key, especially in a context where there is enormous pressure on revenues: in an environment of low interest rates, revenues from interest margins are extremely low, and so competitiveness is under pressure.

You have to be able to process as well as possible at the right cost, and our competitors have extremely lean operational models, because they design processes from the outset that are data-driven and therefore extremely efficient, with very few exceptions. There are a lot of things that play into this. Our maturity is progressing, because a few years ago we had a very regulatory focus on data: like many companies, we had a vision of data as an asset to be managed to comply with the GDPR, but also with BCBS 239. We had to invest a lot of effort in cleaning up our data, to be able to do regulatory reporting with reliable, quality data. We have now passed this stage. The business lines have understood the need to take ownership, but once again, the transformation of skills and the desire to transform the product offer for clients by mixing digital and data take time. I think we have the skill base.

People have the ideas. The use cases are known. Now we need time to implement them, so we’re going to see an explosion, or at least a generalisation, of this AI skill in the next two or three years.

In terms of services or in terms of market changes, what changes could AI and data bring?

Much more personalisation, personalised services, and much more reactivity in customer service. This is one of the challenges: to use AI to personalise the customer relationship, to be as relevant and as responsive as possible. Sometimes our clients’ grievances come down to saying « You have the experience, the history, the expertise, but there are sometimes details that are not great, either in the processing times, or in the ability to be reactive, or sometimes in the pricing, which is a bit off ». We have to learn the best lessons from these issues to optimise our processes, to be more reactive and better. So, overall, more personalisation, more automated advice, and more reactivity. It’s still very generic.

In any case, this is a concern, as is being able to do everything remotely. It may sound silly, but on mobile, our customers ask to be able to use the bank directly, without going to a branch. Today, we have populations that are still split in two, with different expectations, but part of the population wants to use its bank independently and at any time.

The current pandemic phase means that this topic must be even more accelerated.

When you look at France and the maturity of France on this subject of data and AI compared to the rest of Europe, are there any countries that stand out more than others in terms of maturity?

Obviously, China and the US are leading the way in terms of the number of patents and investments, with huge application areas and models for putting AI into practice, including in banking. In Europe, we still have talent. France has the capacity to produce talented mathematicians and quality research, whether by Inria or the University of Saclay or the Polytechnic Institute. We really do have high-level talent and there is a will of the academic world to maintain these talents in France.

On the other hand, our English friends are often a little better at transforming it into a business, and we can see it: just the dynamics of the fintech scene between continental Europe and the UK are quite different. Will Brexit change anything? I don’t know. There are start-ups like Personetics or Plum, for example, which offer automatic savings or automatic recommendation services, PFM (Personal Finance Management) based on AI analysis of transactions. These people have already been around for four or five years; it’s more natural there.

Thank you for the clarification and all the information. A few quick questions before we leave. If you had to give one piece of advice to someone who wants to implement an effective data and AI strategy, what would it be?

For a large group? I would repeat my message: work well on the four fundamentals in parallel, never forget them, strategic appropriation, make the people in charge of the products responsible so that they are the ones who carry the use cases, don’t forget the infrastructure and data management, because if there is no infrastructure to run it, if there is no data available, nothing will happen in terms of AI. Finally, don’t forget the talent, because it requires special talent, so always work on these four dimensions and distribute the resources over the four aspects.

Is there a company that inspires you technically at the moment?

Plenty! For very different reasons, we’re going to say Apple. What they did with the M1 chip was just a stroke of genius. This complete vision of integrating software and the hardware processor has led them to become a reference, surpassing Intel in performance and integration, in addition to the Neural Engine, the units dedicated to AI computations in their chips, with a very forward-looking vision. Typically, this is one of the players that we don’t see too much on AI, who only work for their own account.

On the other hand, they magnify their product: the product remains the product. They are clear that it is a phone, but they use AI to enhance it, to detect incorrect gestures, to personalise services and with a real sense of data ethics. To date anyway, which is quite a contrast to Facebook.

Otherwise, of course, DeepMind can’t leave you indifferent after what they’ve just done with AlphaFold, which is a breakthrough in the use of deep learning on protein sequencing and folding; it’s just crazy. Or OpenAI, of course, with GPT-3.

In banking, there are plenty too. Revolut is interesting to follow, their product culture and their integration of AI. Of course, in finance, there is the Chinese reference, even if the market and the context are different. It’s always inspiring to watch what these people are doing.

How do you maintain and even develop skills? All these subjects are vast and specific. What do you do on a daily basis?

I don’t do it in a mechanical way, but I am opportunistic. I subscribe to a lot of newsletters; I have some in mind right now. There is Eye on AI, which is interesting, and Data Elixir is one that I enjoy too. Those are two of the main ones I read. Also Data Ebook, which I like because it mixes humour and content.

Then I’ll scroll down on Google to look for platforms like FinExtra, which is more focused on financial services and tech, or more French, a publication like Mind FinTech, which is focused on FinTech, which allows me to have a fairly broad overview, and then I do generalist media, but that’s less frequent.

Are you practising in the tech and data field today?

So no, I’ve trained myself, I’ve coded in Python a certain number of things, I’ve taken tech modules and I’ve done application modules. But that’s not my role. I’m not particularly good at it, but I need to understand all these tech fundamentals to be able to integrate them into a strategy that makes sense.

Speaking of brilliant people, is there a CTO or a tech lead who could be one of your mentors, someone who stands out on the international scene?

So it’s going to be two, but they’re both big names, for quite different reasons. Andrew Ng, because he has put AI within reach of millions of people around the world, and he has this ability both to give a generalist strategic vision and, at the same time, to have led AI in large organisations, whether at Google or at Baidu. It’s really interesting to have this type of very complete profile. His mailing list is also very good; I follow him more on LinkedIn. And another one, I’ve said it often tonight, for the more global approach of product management: Marty Cagan, who is one of the people to follow.

Is there a technology or product that you think will be a must-have in the next ten years?

We see the advent of quantum computing; we’ll be talking about it in the next ten years, that’s for sure. Will it become a reality? Will it be operational? I think it has accelerated more quickly than we thought, notably because of the China-US competition. But more pragmatically, I think that the implementation of GPT-3’s progress on language, and its translation into applicable libraries, will become widespread.

I don’t know if you’ve ever seen an interesting demo from a company called OthersideAI, which has an email generator that’s pretty crazy. In fact, it uses GPT-3 as a text generator: you give it three bullet points, and it generates an email in your style.

This language race is going to explode again, and it’s sure to become more widespread.

Is there a book you would advise any good engineer or data AI business leader to read?

My bedside book, and I was lucky enough to work with the authors, which is an incredible opportunity, is a book called Competing in the Age of AI, by Marco Iansiti and Karim R. Lakhani, from Harvard Business School.

It’s really a vision of both the global strategy for adopting AI and what it could mean in terms of transforming business models. The link with platform models from a business point of view is very complete, very rich. Or, as I mentioned earlier, Empowered by Marty Cagan.

What is the app you have on your phone that you recommend everyone to have?

For monitoring, Pocket is still practical, even though I’m not doing as much public transport now. Otherwise, I like Monument Valley because it’s a game I can share with my children. It’s a kind of puzzle, a perspective game, and the graphics are just great, very simple but very interesting. And of course, the Société Générale app.

If I were to invite someone on this podcast who you think has a tech background or experience in innovation that is a bit out of the ordinary, who would you recommend?

I could already mention my current boss, Claire Calmejane, who taught me a lot of things, and of course the people she gave me the opportunity to meet, for example one of the directors of the Société Générale group, Lubomira Rochet, at L’Oréal. She has had a rather incredible career, both personally and in the transformation she has carried out at L’Oréal. But there are many profiles that inspire and nourish me.
