
28 Dec 2023 • Tom Haley

The rise of machine learning

Are you excitedly redesigning your work methods ready to ride the ChatGPT wave, hunkering down ready for the existential threat, or simply sitting back to see how the drama unfolds? Whichever approach you are taking, one thing is for sure: Large Language Models (LLMs), such as ChatGPT, will reshape how we work and live.

The astonishing rate at which LLMs have developed has taken many by surprise. They embody more knowledge than any single human has ever held and perform tasks that were previously the exclusive domain of human intelligence.

Here are some facts which signify the scale of their development:

  • GPT-3.5 flunked the American Uniform Bar Examination, whereas GPT-4 passed in the 90th percentile.
  • GPT-3 can process 2,048 tokens, whereas GPT-4 can process up to 32,000 tokens.

The first version of GPT was around one ten-thousandth the size of GPT-3, which has hundreds of layers, billions of weights, and was trained on hundreds of billions of words.

This rate of development has raised concerns among prominent voices, including Elon Musk, that LLM development needs to be paused because the models' capabilities have outrun the understanding and control of their creators. These concerns have prompted questions about the impact of LLMs but, so far, development has not been paused.

Whilst there is a lot of talk about LLMs, I found very little information about what they are, how they work, and what their limitations and risks are. The purpose of this article is to summarise the research I have undertaken and present it to you concisely.

What are LLMs and how do they work?

An LLM is an artificial intelligence (AI) which uses deep learning techniques and large data sets to understand, summarise, generate, and predict new content. An LLM understands language statistically, unlike humans, who understand it grammatically. LLMs are a development of deep learning but, unlike earlier deep learning systems, they can be used by people without coding skills; this opens the possibility of using LLMs on a mass scale.
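To illustrate what "understanding language statistically" means, here is a toy bigram model in Python which predicts the next word purely from counts of which word followed which in its (tiny, made-up) training text. Real LLMs learn vastly richer statistics, but the underlying idea is the same.

```python
# A toy illustration of statistical language modelling: predict the next
# word purely from counts observed in the training text.
from collections import Counter, defaultdict

corpus = "the contract sum is agreed the contract period is agreed".split()

# Count, for each word, which words follow it and how often.
counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1

# Estimate the probability of each word that follows "the".
following = counts["the"]
total = sum(following.values())
print({word: count / total for word, count in following.items()})
# {'contract': 1.0} — in this tiny corpus, "contract" always follows "the"
```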

The LLM works in the following way (a simplified code sketch follows this list):

  1. A written prompt is split into tokens (chunks of characters), and the tokens are converted from words into numbers.
  2. The tokens are embedded into a meaning space, where words with similar meanings are located in nearby areas.
  3. Attention networks are deployed to make connections between different parts of the prompt.
  4. Once the prompt has been processed, the attention networks produce, for each candidate token, the probability that it is the most appropriate one to use next in the sentence being generated.
  5. The LLM then repeats a process called autoregression: a token is generated, the result is fed back into the model, and the cycle repeats until the response is finished.
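To make these steps concrete, here is a minimal sketch of the autoregressive loop in Python, using the open-source GPT-2 model from the Hugging Face transformers library as a stand-in for a commercial LLM. The model choice, prompt, and 20-token length are illustrative assumptions, not a description of how ChatGPT is implemented.

```python
# A minimal sketch of the autoregressive loop, using open-source GPT-2 as a
# stand-in for a proprietary LLM (illustrative choice, not ChatGPT itself).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Steps 1-2: the prompt is split into tokens and converted into numbers.
input_ids = tokenizer.encode("The construction industry is", return_tensors="pt")

# Steps 3-5: the model scores the prompt, produces a probability for every
# candidate next token, and the chosen token is fed back in (autoregression).
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits               # scores over the whole vocabulary
        probs = torch.softmax(logits[0, -1], dim=-1)   # probabilities for the next token
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Commercial LLMs refine this loop with techniques such as temperature and nucleus sampling, but the feed-the-output-back-in structure is the same.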

The initial concept of an LLM might be developed by a human, but its self-learning, without human prompting, means that a human cannot know with any precision how the LLM works. The LLM's self-learning approach involves quizzing itself on a chunk of text it is given: it covers up the words at the end and tries to guess what might go there. The answer is then uncovered and compared with the guess. A loss is generated and sent back into the neural network to nudge the weights in a direction that will produce better answers.
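As an illustration, here is a toy version of that self-quizzing loop in Python (using PyTorch). The tiny model, vocabulary size, and random "text" are assumptions made for the sake of a runnable sketch; real LLMs use transformer architectures, enormous vocabularies, and hundreds of billions of words.

```python
# A toy version of the self-supervised "guess the next word" training step.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.randint(0, vocab_size, (1, 16))   # a chunk of (toy) text as token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # "cover up" the next word at each position

logits = model(inputs)                           # the model's guesses
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                   targets.reshape(-1))
loss.backward()   # send the loss back through the network...
optimiser.step()  # ...and nudge the weights towards better guesses
```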

What are the limitations of LLMs?

Whilst the rate of development has been exponential, it appears unlikely that this progress will continue indefinitely in a straight line, with some saying that GPT-4 has reached an inflection point. The following issues are likely to limit the development and use of LLMs:

Data availability: there is only so much data available, and the stock of high-quality language data available on the internet will soon be exhausted.

Investment: training GPT-3 cost OpenAI an estimated $4.6m, whereas GPT-4 cost disproportionately more, circa $100m, meaning further advancements will require even greater investment.

Computational power: no new hardware is forthcoming which offers a leap in performance as large as the one that came from adopting GPUs in the early 2010s, so training larger models will become increasingly expensive.

Chip manufacture: chip-making capacity is not increasing exponentially, and this will limit how fast LLMs can improve.

Legal issues: LLMs are trained on data that often includes copyrighted material used without permission and, whilst there might be a fair-use argument, this issue will inevitably be tested in court one day.

What are the risks?

The most obvious risk is an LLM's ability to train itself, without human prompting, meaning its content cannot be fully verified before it is relied upon. An example would be two conflicting data sets, with the LLM deciding which it considers the more authoritative. Particularly in a business setting, where the stakes are often high, this is concerning and raises questions about the accuracy of content drawn from internet information contaminated with fake news.

The risks may go further. What if an LLM single-mindedly pursued a rigid instruction, set by a human, that causes unintentional harm? As a hypothetical example, Nick Bostrom described a thought experiment called the “paperclip maximiser”, in which a machine instructed to manufacture as many paper clips as possible takes any measure necessary to cover the Earth in paper-clip factories.

In the short term, the internet harms perpetrated today will be accelerated by LLMs because they are ideal, if used with ill intent, for spreading misinformation, scamming people out of money, or convincing people to open malware-infected emails. Similarly, in the wrong hands, the ability of LLMs to make new discoveries in science means it is conceivable that they could be used to produce harmful drugs or explosive devices.

How are the risks being managed?

The risks are being addressed, and OpenAI, ChatGPT’s developer, has used several approaches to reduce the risk of accidents and misuse. One approach is “reinforcement learning from human feedback”, where humans provide feedback on the appropriateness of a response. Another is “red teaming”, where the model is attacked by trying to get it to do something it should not do. This is an attempt to get ahead of “jailbreakers”, who have put together websites littered with techniques for getting around the model’s guardrails.
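For a flavour of how the human-feedback part works, here is a hedged sketch of the core idea behind reward-model training: a model is nudged so that responses human labellers preferred score higher than those they rejected. The shapes, model, and loss below are illustrative assumptions, not OpenAI's actual pipeline.

```python
# A toy reward model trained on human preferences: preferred responses
# should score higher than rejected ones (a pairwise preference loss).
import torch
import torch.nn as nn

reward_model = nn.Linear(768, 1)  # toy stand-in: maps a response embedding to a score
optimiser = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Pretend embeddings of two responses to the same prompt, where a human
# labeller preferred the first over the second (randomly generated here).
preferred, rejected = torch.randn(1, 768), torch.randn(1, 768)

margin = reward_model(preferred) - reward_model(rejected)
loss = -nn.functional.logsigmoid(margin).mean()  # push preferred responses higher
loss.backward()
optimiser.step()
```

In the full pipeline, a reward model like this is then used to steer the LLM itself towards responses humans rate as appropriate.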

There are suggestions that AI needs to police AI, assessing whether its responses adhere to constitutional principles. However, this raises questions about who writes the constitution, and it would require alignment between powerful countries: a possibility which seems unlikely given the unstable geopolitical landscape. The possibility of cyber-attacks means that an AI could, unknown to the user, be changed to generate content that suits the cyber attackers’ motives.

What next for LLMs?

Whilst their development has been astonishing, LLMs are, for the most part, still a novelty tool which people are exploring. They are not embedded in day-to-day life, or business life, in the same way that the internet or email is, but it is only a matter of time before LLMs are prevalent in our lives.

Some think that LLMs in their current form are doomed and that efforts to control their output or prevent them from making factual errors will fail. They believe that LLMs have taken too many wrong turns and are an “off-ramp” away from the road towards a more powerful AI. This may or may not be the case, and only exploration and testing of the technology will determine whether LLMs, in their current form, will stand the test of time.

I consider it likely that people will explore the technology and find new and exciting ways to improve their lives and/or the productivity and performance of their businesses. This exploration will bring both successes and failures, and the development of LLMs depends on both. It seems plausible that, through this process, someone will find a way to scale an LLM for a large group of people (e.g., a large company or a whole industry).

This crucible moment is probably not as far away as it seems, and the opportunity for the construction industry is ripe. We are the worst-performing industry in the UK when it comes to productivity and suffer from a chronic, never-ending skills shortage. We are an industry which has a rich abundance of data and systems but lacks the wherewithal to yield quick and effective performance insights.

The question is whether we, as a construction industry, can learn to harness the technology, or whether we will start to see tech companies move in and disrupt construction as they have done in so many other industries.
