There is a key aspect of managing the financing of trade credit in general, and reverse factoring in particular, that is not always well designed and exploited from a commercial, operational and risk point of view, in contrast to other financing arrangements: big data.
Financing trade credit means managing and processing a “cell” that contains a great deal of information, often more than it appears. We are not referring exclusively to the information associated with the cell itself (amount, term, debtor, etc.), but to the behaviour and the information that all the cells taken as a whole can provide us: signs of potential insolvency, cash flow needs, suitable price levels, readjustment of limits and capacities, more favourable or suitable moments for proposing a specific action, and so on.
We are not discovering anything new if, at this point, we describe data management as a fundamental pillar of any service or product with numerous customers or users on the other side. That said, are we properly using the capacities that technology offers us to draw correct conclusions about behaviour and trends when we are dealing with legal persons or professionals? Do we understand, think and reason from the same viewpoint as an SME or a self-employed worker? Clearly this is not an exclusively technological matter; it is also a matter of listening, learning, experience and extensive knowledge of the variables needed to interpret that behaviour.
Here, we are referring to “thinking as a company does every day”: understanding how it sets its priorities, understanding its needs, learning how it reasons through its decisions, and knowing when certain messages and offers are appropriate and when they may be unwanted and annoying.
This area of work is so important that we can venture to say that dedicating time and resources to it is a key differentiating factor between operators and entities. In general, we are so focused on chasing “the sale” that we do not stop to reflect on and improve retention, the quality of the service and the use of the stock of information we hold. It is important to measure responses, establish quality criteria, and have SLAs and service indicators, but it is also important to set aside time in our daily routines to understand the behaviour of our customers, their needs, their real priorities and the way they assess the service, and to anticipate what is important for a provider and when it is important.
If we focus on Reverse Factoring activity, we can see how the Spanish market, as the AEF statistics indicate, channels over €160 billion of payments through this system every year. If we agree that the average invoice amount is likely to be between €6,000 and €9,000, we can readily deduce that between 18 million and 27 million invoices or payment orders pass through Reverse Factoring in our market every year (let us say 23 million). If, in addition, we look at the amount of data and information handled with every order processed (by consulting the standard Reverse Factoring format published by the AEF) and we add other relevant information about the process, such as the price offer, acceptance or rejection of the offer, the advance period, etc., we can also agree that there are no fewer than 20 fields/pieces of data per order. In other words, the Spanish market handles at least 460 million pieces of data directly related to the management of Reverse Factoring programmes.
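As a quick illustration, the following sketch simply reproduces the back-of-envelope arithmetic above; the figures are the ones quoted in the text (AEF volume, assumed average invoice amounts and fields per order), not new data:

```python
# Back-of-envelope estimate of the data volume behind Spanish Reverse Factoring,
# using the figures quoted in the text above.
annual_volume_eur = 160e9                            # > €160 billion paid per year
avg_invoice_low, avg_invoice_high = 6_000, 9_000     # assumed average invoice amount range

invoices_high = annual_volume_eur / avg_invoice_low  # ≈ 26.7 million orders
invoices_low = annual_volume_eur / avg_invoice_high  # ≈ 17.8 million orders
invoices_mid = 23e6                                  # working figure used in the text

fields_per_order = 20                                # at least 20 fields per processed order
direct_data_points = invoices_mid * fields_per_order # ≈ 460 million data points

print(f"Orders per year: {invoices_low/1e6:.1f}–{invoices_high/1e6:.1f} million")
print(f"Direct data points per year: {direct_data_points/1e6:.0f} million")
```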
We say directly because we can estimate that, indirectly, there are several hundred million more pieces of data, probably as rich and important as the former. We are referring to data on the operation of the company, its balance sheet, activity, profit and loss statement and business sector. The data from a Reverse Factoring programme, put into context and combined with the external or public information on a company, can reveal numerous situations and behaviours, allowing us to adjust and tailor our offer to a degree we might not expect.
Cross-referencing and blending all this information in a methodical, systematic, intelligent and evolving way is essential if we want to optimise the return of a service whose results can otherwise become arbitrary or difficult to control and predict.
The use of new technology applied to this end becomes essential when processing a mass of data and information that seemingly comes from independent axes. An “evolutionary machine learning” approach, which learns and corrects itself continuously, yields clear conclusions and returns, but the path to implementing and tuning these tools is not trivial, as it requires a heavy investment in applied technology and functional knowledge of the product.
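Purely as an illustrative sketch of what such a continuously learning and self-correcting setup could look like, the fragment below updates a model incrementally with each batch of processed orders. The feature set and the target (whether a provider accepts an early-payment offer) are assumptions for the example, not part of any published AEF format:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical feature vector per payment order: amount, days to maturity,
# offered discount rate, debtor sector code. Hypothetical target: did the
# provider take the early-payment offer?
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

def learn_from_batch(model, X_batch, y_batch):
    """Incrementally update the model with the latest batch of processed orders."""
    model.partial_fit(X_batch, y_batch, classes=classes)
    return model

# Simulated daily batches of orders (random stand-in for real flows).
rng = np.random.default_rng(0)
for _ in range(30):                          # e.g. one batch per day
    X = rng.normal(size=(200, 4))            # 200 orders, 4 features each
    y = (X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)
    model = learn_from_batch(model, X, y)

# The model can score new orders as they arrive and keeps adjusting
# as acceptance behaviour drifts over time.
print(model.predict(rng.normal(size=(5, 4))))
```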
Another fundamental axis of this work is reaching a “manageable simplification” of the conclusions. Processing such information must lead to simple, semi-automatic and continuous action proposals. We are not referring to an occasional study or analysis that ends in a one-off action. We understand that this management must be part of the DNA of the service and the business model, and must therefore rely on a methodology that enables continuous improvement when applied to the matter.
Applying these advanced data analysis systems to the business must enable us not only to adjust and improve our business outlook in terms of activity, but also to anticipate decisions on other aspects to which we often pay little attention: systems for reassigning risk limits to optimise the use of capacities and consumption; alerts and indicators for the early detection of behaviours that point to unwanted situations; assessment of the operational resource capacity assigned to customer or provider service; and so on, as illustrated below.
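As one hedged example of such an early-warning indicator, the sketch below flags a debtor whose recent payment delays drift well above its own historical behaviour. The field names and the z-score threshold are illustrative assumptions, not a market standard:

```python
from statistics import mean, stdev

def payment_delay_alert(history_days, recent_days, z_threshold=2.0):
    """
    Flag a debtor when its recent average payment delay deviates markedly
    from its historical pattern. Inputs are lists of delays in days;
    the threshold of 2.0 standard deviations is an illustrative choice.
    """
    if len(history_days) < 2 or not recent_days:
        return False                       # not enough data to judge
    mu, sigma = mean(history_days), stdev(history_days)
    if sigma == 0:
        return mean(recent_days) > mu      # any drift counts when history is flat
    z = (mean(recent_days) - mu) / sigma
    return z > z_threshold

# Example: a debtor that historically paid ~5 days late now averages ~15 days.
print(payment_delay_alert([4, 5, 6, 5, 4, 6], [14, 16, 15]))   # True -> raise alert
```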
Although we know that the results and conclusions of these actions and mechanisms are not an exact science, since we are talking about behaviours, we believe that the effort and dedication these tasks require, grounded as they are in the significant volumes managed and processed, merit our attention and interest.