Meaning, AI, and procurement — some thoughts — How to Crack a Nut

James McKinney and Volodymyr Tarnay of the Open Contracting Partnership have published ‘A gentle introduction to applying AI in procurement’. It is a very accessible and helpful primer on some of the most salient issues to be considered when exploring the possibility of using AI to extract insights from procurement big data.

The OCP introduction to AI in procurement provides helpful pointers in relation to task identification, method, input, and model selection. I would add that an initial exploration of the possibility to deploy AI also (and perhaps first and foremost) requires careful consideration of the level of precision and the type (and size) of errors that can be tolerated in the specific task, and ways to test and measure it.

One of the crucial and perhaps more difficult to understand issues covered by the introduction is how AI seeks to capture ‘meaning’ in order to extract insights from big data. This is also a controversial issue that keeps coming up in procurement data analysis contexts, and one that triggered some heated debate at the Public Procurement Data Superpowers Conference last week—where, in my view, companies selling procurement insight services were peddling hyped claims (see session on ‘Transparency in public procurement – Data readability’).

In this post, I venture some thoughts on meaning, AI, and public procurement big data. As always, I am very interested in feedback and opportunities for further discussion.

Meaning

Of course, the concept of meaning is complex and open to philosophical, linguistic, and other interpretations. Here I take a relatively pedestrian and pragmatic approach and, following the Cambridge dictionary, consider two ways in which ‘meaning’ is understood in plain English: ‘the meaning of something is what it expresses or represents’, and meaning as ‘importance or value’.

To put it simply, I will argue that AI cannot capture meaning proper. It can carry complex analysis of ‘content in context’, but we should not equate that with meaning. This will be important later on.

AI, meaning, embeddings, and ‘content in context’

The OCP introduction helpfully addresses this issue in relation to an example of ‘sentence similarity’, where the researchers are looking for phrases that are alike in tender notices and predefined green criteria, and therefore want to use AI to compare sentences and assign them a similarity score. Intuitively, ‘meaning’ would be important to the comparison.

The OCP introduction explains that:

Computers don’t understand human language. They need to operate on numbers. We can represent text and other information as numerical values with vector embeddings. A vector is a list of numbers that, in the context of AI, helps us express the meaning of information and its relationship to other information.

Text can be converted into vectors using a model. [A sentence transformer model] converts a sentence into a vector of 384 numbers. For example, the sentence “don’t panic and always carry a towel” becomes the numbers 0.425…, 0.385…, 0.072…, and so on.

These numbers represent the meaning of the sentence.

Let’s compare this sentence to another: “keep calm and never forget your towel” which has the vector (0.434…, 0.264…, 0.123…, …).

One way to determine their similarity score is to use cosine similarity to calculate the distance between the vectors of the two sentences. Put simply, the closer the vectors are, the more alike the sentences are. The result of this calculation will always be a number from -1 (the sentences have opposite meanings) to 1 (same meaning). You could also calculate this using other measures, such as Euclidean distance.

For our two sentences above, performing this mathematical operation returns a similarity score of 0.869.

Now let’s consider the sentence “do you like cheese?” which has the vector (-0.167…, -0.557…, 0.066…, …). It returns a similarity score of 0.199. Hooray! The computer is correct!

But, this method is not fool-proof. Let’s try another: “do panic and never bring a towel” (0.589…, 0.255…, 0.0884…, …). The similarity score is 0.857. The score is high, because the words are similar… but the logic is opposite!
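The similarity calculation described in the quoted walkthrough can be sketched in a few lines of Python. This is an illustrative sketch only: the three-dimensional vectors below are made-up stand-ins for the real 384-dimensional sentence-transformer embeddings (reusing the first three coordinates quoted above for flavour), so the resulting score will not match the 0.869 reported in the primer.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional stand-ins for 384-dimensional embeddings
towel_1 = [0.425, 0.385, 0.072]   # "don't panic and always carry a towel"
towel_2 = [0.434, 0.264, 0.123]   # "keep calm and never forget your towel"

score = cosine_similarity(towel_1, towel_2)
```

Identical vectors score 1.0, orthogonal vectors 0.0, and opposite vectors -1.0, which is what makes the score usable as a (rough) proxy for similarity.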

I think there are two important observations to make about the use of ‘meaning’ in the quoted passage.

First, meaning can hardly be captured where sentences with opposite logic are considered very similar. This is because the method described above (vector embedding) does not capture meaning. It captures content (words) in context (around other words).
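This failure mode can be reproduced with an even cruder representation. A bag-of-words vector (simple word counts, no learned embeddings at all) rates the opposite-logic sentences as fairly similar simply because they share most of their words, a minimal sketch of ‘content’ driving the score regardless of logic:

```python
import math
from collections import Counter

def bow_cosine(s1, s2):
    # Bag-of-words cosine similarity: each sentence becomes a vector of
    # word counts, so shared words (content) drive the score entirely.
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[w] * c2[w] for w in set(c1) | set(c2))
    norm1 = math.sqrt(sum(v * v for v in c1.values()))
    norm2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (norm1 * norm2)

opposite_logic = bow_cosine(
    "don't panic and always carry a towel",
    "do panic and never bring a towel",
)
unrelated = bow_cosine(
    "don't panic and always carry a towel",
    "do you like cheese?",
)
```

The opposite-logic pair scores about 0.57 while the unrelated pair scores 0.0: word overlap, not meaning, is doing the work.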

Second, it is not possible to fully express in numbers what text expresses or represents, or its importance or value. What the vectors capture is the representation or expression of that meaning: the representation of its value and importance through the use of those specific words in the particular order in which they are expressed. The string of numbers is thus a second-degree representation of the meaning intended by the words; it is a numerical representation of the word representation, not a numerical representation of the meaning itself.

Unavoidably, there is plenty of scope for loss, alteration, or even inversion of meaning when it goes through multiple imperfect processes of representation. This means that the more open-textured the expression in words, and the less contextualised its presentation, the more difficult it is to achieve good results.

It is important to bear in mind that the current techniques based on this or similar methods, such as those based on large language models, clearly fail on crucial aspects such as their factuality—which ultimately requires checking whether something with a given meaning is true or false.

This is a burgeoning area of technical research, but it seems that even the most accurate models tend to hover around 70% accuracy, save in highly contextualised, unambiguous settings (see eg D Quelle and A Bovet, ‘The perils and promises of fact-checking with large language models’ (2024) 7 Front. Artif. Intell., Sec. Natural Language Processing). While this is an impressive feature of these tools, it would hardly be acceptable to extrapolate that they can be deployed for tasks that require precision and factuality.

Procurement big data and ‘content and context’

In some senses, the application of AI to extract insights from procurement big data is well suited to the nature of that data: by and large, existing procurement data is precisely contextualised and increasingly concerns structured content. That is, most of the procurement data that is (increasingly) available is captured in structured notices and tends to have a narrowly defined and highly contextual purpose.

From that perspective, there is potential to look for implementations of advanced comparisons of ‘content in context’. But this will most likely have a hard boundary where ‘meaning’ needs to be interpreted or analysed, as AI cannot perform that task. At most, it can help gather the information, but it cannot analyse it because it cannot ‘understand’ it.

Policy implications

In my view, the above shows that the possibility of using AI to extract insights from procurement big data needs to be approached with caution. For tasks where a ‘broad brush’ approach will do, these can be helpful tools. They can help mitigate the informational deficit procurement policy and practice tend to encounter. As put in the conference last week, these tools can help get a sense of broad trends or directions, and can thus inform policy and decision-making only in that regard and to that extent. Conversely, AI cannot be used in contexts where precision is important and where errors would affect important rights or interests.

This is important, for example, in relation to the fascination that AI ‘business insights’ seems to be triggering amongst public buyers. One of the issues that kept coming up concerns why contracting authorities cannot benefit from the same advances that are touted as being offered to (private) tenderers. The case at hand was that of identifying ‘business opportunities’.

A number of companies are using AI to support searches for contract notices to highlight potentially interesting tenders to their clients. They offer services such as ‘tender summaries’, whereby the AI creates a one-line summary on the basis of a contract notice or a tender description, and this summary can be automatically translated (eg into English). They also offer search services based on ‘capturing meaning’ from a company’s website and matching it to potentially interesting tender opportunities.

All these services, however, are at bottom a sophisticated comparison of content in context, not of meaning. And these are deployed to go from more to less information (summaries), which can reduce problems with factuality and precision except in extreme cases, and in a setting where getting it wrong has only a marginal cost (ie the company will set aside the non-interesting tender and move on). This is also an area where expectations can be managed and where results well below 100% accuracy can be interesting and have value.

The opposite does not apply from the perspective of the public buyer. For example, a summary of a tender is unlikely to have much value as, in all likelihood, the summary will simply confirm that the tender matches the advertised object of the contract (which has no value, unlike a summary suggesting a tender matches the business activities of an economic operator). Moreover, factuality is extremely important, and only 100% accuracy will do in a context where decision-making is subject to good administration guarantees.

Therefore, we need to be very careful about how we think about using AI to extract insights from procurement (big) data and, as the OCP introduction highlights, one of the most important things is to clearly define the task for which AI would be used. In my view, the tasks for which AI is suited are far more limited than those one could dream up if we let our collective imagination run high on hype.

