A Plethora of Models

This week we read up to frame 17 on page 124 of The Little Learner. Despite my laxity with the blog, we have been progressing!

Recap

At some point in the last few weeks, we put all the pieces together for machine learning via gradient descent. A complete program looks as follows:

(with-hypers        ; 1. Choose hyperparameters
    ((revs 1000)    ; 2. Hyperparameter: number of revisions
     (alpha 0.01))  ; 3. Hyperparameter: learning rate
  (gradient-descent ; 4. Learning algorithm: gradient descent
   ((l2-loss line) line-xs line-ys) ; 5. Objective function
   (list 0.0 0.0))) ; 6. Initial guess

If you run this code, using the data provided in the book, the output will be:

(list 1.0499993623489503 1.8747718457656533e-6) ; 7. Learning!

To begin with, let’s recap this example.

1. Choose hyperparameters

In machine learning, there is a crucial distinction between “parameters” and “hyperparameters.” In a nutshell, “hyperparameters” are supplied by the human programmer, while “parameters” are learned by the program. In malt, the MAchine Learning Toolkit included with The Little Learner, the with-hypers form is provided to help manage the setting of hyperparameters. It has the following structure:

(with-hypers *list-of-hyperparameters* *learning-problem*)

In this case, the list of hyperparameters contains two items:

((revs 1000) (alpha 0.01))

And for the learning problem, we are trying to fit a line model to the data in line-xs and line-ys using gradient descent.

(gradient-descent
 ((l2-loss line) line-xs line-ys)
 (list 0.0 0.0))

2. Hyperparameter: revisions

The first hyperparameter, revs, tells the program how many revisions to perform during training. As we have seen in previous weeks, machine learning proceeds via guesswork. Guess the parameters. Check the guess. Revise the guess. Guess again. Check again. Revise again. And so on. The revs hyperparameter tells the program how many times to repeat the cycle.

In this case, revs is set to 1000, so the program will revise the guess $1000$ times before it stops.

In fact this is not the only way to set up a learning problem. In practice, it is also common to set up tolerances for the revisions. For example, there is no point doing all $1000$ revisions if something has gone wrong, and the guess is getting worse each time. Likewise, if each guess is only making a tiny improvement on the previous guess, it may be worth stopping the training process early.

But there is a deep reason why it is common to set an arbitrary number of revisions in machine learning. In machine learning, there is often no single correct answer for the problem. There is no single “correct” sentence that ChatGPT should produce in response to your prompt. Instead, there is a wide range of “good enough” answers, and the point is to try and find one of them. If the machine learning is successful, then the program should find a good enough answer in a finite time, and we can just set a certain number of revisions.

In practice, also, training models can be extremely time- and resource-intensive. Large models might require many computers, many hours and many kilojoules to train. The revs parameter may therefore be determined by economic or environmental factors, rather than scientific ones.

3. Hyperparameter: learning rate

The second hyperparameter, alpha, sets the learning rate of the gradient descent algorithm. The learning rate is a tricky concept, which I explained in a previous post.

The clever idea behind gradient descent is to learn from error. But if you try to do this naively, the results will be terrible. The learning rate suppresses the learning process, to ensure the guesses are revised incrementally, and the model doesn’t go haywire during training. It should therefore be no surprise that alpha is a small number, in this case $0.01$. When you multiply a number by $0.01$, it makes the number $100 \times$ smaller. In this way, alpha ensures that the model improves only a little bit each revision.
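Schematically, each revision applies an update of roughly the following form (this is the standard way of writing the gradient-descent step, not the book’s notation), where $\mathcal{L}$ is the loss and $\nabla_{\theta} \mathcal{L}$ is its gradient with respect to the parameters:

$$ \theta_{\text{new}} = \theta_{\text{old}} - \alpha \, \nabla_{\theta} \mathcal{L}(\theta_{\text{old}}) $$

With $\alpha = 0.01$, each step moves the parameters only one hundredth as far as the raw gradient would suggest.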

4. Learning algorithm

Now we move on to the second part of the expression. We have given the hyperparameters. Now we need to give the learning problem. In malt, the first thing you specify is the learning algorithm, in this case, gradient-descent. I have already recapped this algorithm in the discussion of alpha. The gradient-descent function in malt requires two inputs:

(gradient-descent *objective-function* *initial-guess*)

In this case, the objective function is ((l2-loss line) line-xs line-ys), and the initial guess is (list 0.0 0.0).

5. Objective function

I explained the objective function in a previous post. It is called the objective function because it defines the training objective. In other words, the objective function tells the program what it should try to learn. The objective function itself has three parts:

  1. The loss function, in this case l2-loss: The program will use this function to test each guess, and measure how inaccurate it is, i.e. to measure the “loss.” The l2-loss is so called because to calculate it, you square the error, i.e. you raise it to the power of 2.
  2. The target function, in this case line: This is the template for the model that the function will learn.
  3. The training data, in this case line-xs and line-ys: The program will try to find the right parameters for the target function, so that if you plug an x-value into the target function, a quantity close to the corresponding y-value will come out.
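Putting these three parts together in code looks roughly like this (a sketch in malt’s style; line-xs and line-ys are the sample data provided with the book):

(define objective ((l2-loss line) line-xs line-ys)) ; fix the loss, the target and the data
(objective (list 0.0 0.0)) ; the objective is now waiting only for a guess at the parameters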

In this case, the target function is line. This is the target function that the authors of The Little Learner use for most of the early examples. It is an appropriate target function when you have two variables—a predictor variable (x) and an output variable (y)—and you think there is a straightforward linear relationship between them. For example, perhaps you predict that the height of a flower (y) is determined by how many seconds have elapsed since the seedling burst through the soil (x). If you think that flowers grow at a constant rate, then the line function would be appropriate.

The line function looks like this:

$$ y = wx + b $$

It has two parameters, w and b. The aim in machine learning is to find the correct w and b using the training data. Let’s say, for example, you are trying to learn the flower model above. You have some data where lab scientists have measured how tall some flowers are at certain times, and you know when the flowers first appeared as seedlings. Let’s say that the seedlings have a minimum height of 1mm when they are first perceived by the human scientist, and they grow roughly at a constant rate of 0.01mm per second. Ideally, if you ran the above code with these data points, the program should learn the following function:

$$ y = 0.01x + 1 $$

In Scheme, the output would look like this:

(list 0.01 1) ; w, b

Using these learned parameters, you could predict how tall a flower would be from the number of seconds that have elapsed. E.g. if the flower is 120 seconds old, you could predict that it will be…

$$ y = 0.01(120) + 1 $$

$$ y = 2.2 $$

millimetres tall. In the machine learning world, such a prediction is called “inference.”
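In code, such an inference might look like this (a hypothetical sketch, using the same calling convention for line that appears later in this post):

((line 120)     ; x = 120 seconds
 (list 0.01 1)) ; w and b for the flower model

; => 2.2 millimetres, give or take floating-point noise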

Of course, before we do the machine learning, we don’t know what the parameters are. In this case, we want the computer to work out what w and b should be. That leads us to the penultimate part of the example code…

6. Initial guess

The initial guess is the starting point for the learning process. In this case, we set w to $0$ and b to $0$:

(list 0.0 0.0)

The initial guess is called “little theta” ($\theta$) in The Little Learner. The parameters learned by the gradient descent algorithm are called “big theta” ($\Theta$).

When you run the gradient descent algorithm, this initial guess will be improved revs times, hopefully producing much better values for w and b—or for whatever parameters are required by your target function.

7. Parameters!

If all goes well, the machine learning program should spit out some new parameters, which can be combined with the target function to start making predictions or inferences. In the example code, using the provided line-xs and line-ys, the learned parameters ($\Theta$) are:

(list 1.0499993623489503 1.8747718457656533e-6)

If you combine these with the line function, you get the following linear equation (rounded to two decimal places):

$$ y = 1.05x + 0.00 $$

Well, if x is 72, what would y be?

$$ y = 1.05(72) + 0.00 $$

$$ y = 75.6 $$

Or in Scheme, using malt:

((line 72) ; x = 72
 (list 1.0499993623489503 1.8747718457656533e-6) ; w and b
 )

; 75.59995596389626

A plethora of models

There is an obvious limitation with the example above: the target function. line is an extremely simple function. It takes exactly one input, and relates it in a very simple way to the output. It is very easy to think of situations where this target function would be woefully inadequate. Imagine, for example, that you wanted to predict the price of a house. y in this case would be an amount in dollars. What x-values might you have available? You might know the square footage of the house, the number of rooms, the distance from the CBD, the age of the house, the state of repair, and so on. All of these things might separately affect the price.

line would be hopeless, because you would need a separate model for each predictor: one line that predicts the price based on the number of rooms, another line that predicts the price based on the square footage of the house, and so on. Each model would be extremely inaccurate, because it only accounts for a single feature of the house. What might work better is a target function like this:

$$ y = w_0 x_0 + w_1 x_1 + w_2 x_2 + … + w_n x_n + b $$

In this case, $x_0$ might be the distance from the CBD in kilometres, $x_1$ might be the amount of tree cover in the street, and so on. The model could learn to combine all these factors to predict the overall price, $y$.

In our last session, we saw a target function that could do this, though it was presented differently. The function is called plane. It is identical to the function given above, but uses a more compact notation:

$$ y = \begin{Bmatrix} w_0 & w_1 & w_2 & … & w_n \end{Bmatrix} \cdot \begin{Bmatrix} x_0 & x_1 & x_2 & … & x_n \end{Bmatrix} + b $$

This is identical to the somewhat simpler notation above. Each of the sets of numbers is a tensor, also known as a vector. The symbol $\cdot$ means “dot product.” When you take the “dot product” of two vectors, you multiply the corresponding elements (e.g. $w_1 \times x_1$), and then add up all the products. This is of course exactly what we do in the more familiar representation of a linear equation above.
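As a quick illustration, here is a dot product written over plain Scheme lists (not malt’s tensor version, just the idea):

(define (dot ws xs)
  (apply + (map * ws xs))) ; multiply corresponding elements, then sum

(dot '(2.0 3.0 0.5) '(10.0 1.0 4.0)) ; 2×10 + 3×1 + 0.5×4 = 25.0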

Another example where line is an inadequate target function is when the relationship between two variables is non-linear. In visual terms, this means that a graph of the two variables is curved rather than straight. In this case, the problem is that all the x values are raised to the power of $1$. We don’t normally write the power when it is a power of $1$, but we can do so to make the point:

$$ y = w_0 x_0^1 + w_1 x_1^1 + w_2 x_2^1 + … + w_n x_n^1 + b $$

To allow this line to curve, one or more of these $x$-values can be raised to a power other than $1$, e.g.

$$ y = w_0 x_0^1 + w_1 x_1^2 + w_2 x_2^1 + … + w_n x_n^k + b $$

In the example code, we looked at the target function quad. This is a very simple function, similar to line:

$$ y = w_0 x^2 + w_1 x + w_2 $$

Because it contains an $x^2$, it can learn a curved, non-linear relationship between x and y.
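For illustration, a quad-style target function might be written like this in plain Scheme (a sketch using list-ref rather than malt’s accessors; the parameter values below are made up):

(define (quad x)
  (lambda (theta)
    (+ (* (list-ref theta 0) (* x x)) ; w0 * x^2
       (* (list-ref theta 1) x)       ; w1 * x
       (list-ref theta 2))))          ; w2

((quad 3.0) (list 4.5 2.1 7.8)) ; 4.5×9 + 2.1×3 + 7.8 ≈ 54.6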

Target Functions in the Press

In 2017, a new target function was discovered, which triggered the current phase of AI hype: the “transformer,” or the T in GPT. We do not have the tools yet to describe the transformer, but we do now know what a target function is, and can appreciate why the discovery of a new target function might be significant. The transformer was initially designed for language modelling: predicting which words should come next based on which have come before. None of the target functions we have considered so far, line, quad and plane, are up to this task—though plane is getting us closer.

The example of the transformer target function is instructive. It has revolutionised language modelling, allowing engineers to build much more powerful and persuasive models of human speech and writing. It has also revolutionised image and sound generation. But this algorithm did not arise from linguistic research. It is not based on any insight into the nature of language. It was not chosen because it seems like the ideal target function for capturing how language works. Rather, it was invented primarily to address practical problems with training Large Language Models (LLMs). Basically, when language models get very large, they can become unwieldy. One of the biggest problems is that there can be relationships between words that stretch a very long way through a text, e.g. if the word “Elizabeth” appears at the start of a novel, this increases the chance that the word “Elizabeth” appears at the end of the novel—unless of course the phrase “Elizabeth died” appears halfway through, and so on. In this context, the gradient descent algorithm could grind to a halt, or could be subject to a fiendish issue called “exploding gradients,” where the model overcorrects even if you set the learning rate to a very low number. Through some elegant tricks, the transformer model fixed these mathematical issues.

If this is all that the transformer target function did, then how did it enable such progress in LLMs? It did so by speeding up training (through “parallelisation”) and eliminating the pernicious “exploding gradients.” These changes allowed LLMs to get much, much larger. Their size increased in three main ways: they could have many, many more parameters (billions or trillions of ws); they could be trained efficiently on much larger training sets; and they could look at much wider “context windows” (e.g. predicting a word based on the prior million words, instead of the prior 100 or 1000 words). All these changes, in turn, allowed LLMs to produce much more coherent and plausible text.

If we understand the role of the target function, and the criteria by which it is assessed, then we may be in a better position to evaluate claims about progress in AI. Was the discovery of the transformer architecture a “step on the way” to AI? Or was it just an architectural change to some commercial systems that let them run more efficiently…?

LISP as an expression-language

I love LISP, but are we LISPers wrong to find it elegant? Consider again the key example from the first hundred pages of The Little Learner:

(with-hypers        ; 1. Choose hyperparameters
    ((revs 1000)    ; 2. Hyperparameter: number of revisions
     (alpha 0.01))  ; 3. Hyperparameter: learning rate
  (gradient-descent ; 4. Learning algorithm: gradient descent
   ((l2-loss line) line-xs line-ys) ; 5. Objective function
   (list 0.0 0.0))) ; 6. Initial guess

Wouldn’t this be easier in Python? (This code won’t work, though it could be made to work)

hyperparameters = {
    "revs": 1000,
    "alpha": 0.01
}

objective_function = make_objective_function(
    target=line,
    loss=l2_loss,
    training_x=line_xs,
    training_y=line_ys
)

learned_parameters = gradient_descent(
    objective_function=objective_function,
    hyperparameters=hyperparameters,
    initial_guess=[0.0, 0.0]
)

The LISP/Scheme/Racket/malt code is extremely concise, and the structure of the code also directly reflects the structure of the program. But is this kind of poetry really the right way to impart knowledge?

It does make me think of some of the great Sanskrit philosophers, such as Nagarjuna and Gaṅgeśa, who composed their works in extremely terse couplets (or shlokas). The books are almost impossible to read (I’ve only tried in English translation!), and can only be understood through copious commentary on each highly compressed couplet. One theory is that these poems were used as university curricula. Philosophers would memorise their own poems, or those of their teachers, and recite them to students in class. Each shloka would be recited individually, and extensively discussed, as a way to explore the problem. To me, The Little Learner hearkens back to this mode of instruction. The code is gnomic and compressed, but it is also fantastically well structured and poetic. The two “voices” in the text discuss the code poetry, and unpack it for the reader. We, the Anticodians, discuss and unpack it further. The only missing piece is the memorisation… and I don’t think any of us is up for that!

Descending further

In this meeting, we reached frame 37 on page 86 of The Little Learner.

Artificial Enlightenment

This algorithm is known as
optimization by gradient descent
Thanks, Augustin-Louis Cauchy (1789-1857)

The Little Learner, p. 86

At the end of our reading last week, we learned the name of the algorithm whose parts have been slowly introduced to us: optimisation by gradient descent. The authors of The Little Learner attribute this algorithm to a nineteenth-century mathematician, Cauchy.

As I write this, I am on a train without Internet. I can’t look up Cauchy and verify that he invented optimisation by gradient descent. But the attribution of the algorithm to Cauchy is intriguing. Cauchy, presumably, never studied anything like an Artificial Neural Network (ANN), the kind of model discussed in The Little Learner. Cauchy was long dead by the late 1950s, when Frank Rosenblatt and others developed the first ANNs, then called “perceptrons.” Cauchy therefore cannot have described how to use gradient descent to set the parameters of an ANN. That work came much later, notably from Rumelhart, Hinton and Williams, and from Hinton, Bengio and LeCun, who share a Turing Award (Hinton also earned a Nobel Prize for related work).

What are the authors of The Little Learner trying to tell us, when they attribute this algorithm to Cauchy? This is one of many such attributions in The Little Learner, and most of the attributions are to long-dead mathematicians like Cauchy. Presumably all the theorems presented in the book were developed by someone, but the authors seem to take especial delight in attributing old discoveries.

To me, these sardonic references seem to have two implications: machine learning is not new; and it has little to do with technology. Machine learning is a collection of mathematical ideas that have been developed mostly for other reasons over the course of centuries. Implicitly, this history is slow, incremental, and intellectual. Machine learning is not a field of wild Promethean inventions, but a field of patient, humble scholarship. If this is the implication that the authors of The Little Learner were trying to convey, then they have touched a sympathetic string in the heart of this reader. If not, well, I claim my right as a “strong poet” to “misread” their work.1

Revving the Machine

In our session this week, the main content was an implementation of the revise function. This function takes three inputs:

  1. f: the objective function for the model we want to “fit” to the data. As we saw in a previous post, the objective function is a compound of three things: (a) the x- and y- values of the training data; (b) the loss function that will be used to judge the accuracy of the model when applied to the x and y values;2 and (c) an empty skeleton of the model, which basically knows how many parameters the model has and how they all fit together.3 What f is waiting for is the parameters of the model.
  2. revs: the number of revisions that the revise function should attempt. The authors assure us that “a fixed number of revs is usually a good approach” with large, complex models such as GPT-4. Basically, with a large model like GPT-4, training is very expensive, and there is no well-defined best answer. So you simply let the training loop run a fixed number of times and hope that the resulting model is good enough.
  3. theta: the current guess for the parameters of the model.

In a nutshell, the revise function tries to come up with a good set of parameters for the model. First it plugs theta into f to calculate the loss. Then it uses the loss to improve theta, and goes back to the start. It plugs the new theta into f to calculate the loss. Then it uses the (new) loss to improve the (new) theta, and goes back to the start. It once again plugs the new theta into f to calculate the loss, and continues until it has improved the guess revs times. For the first example of the revise function, the authors suggest running it for $1000$ revs.
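A minimal sketch of such a loop in Scheme (my own reconstruction, not necessarily the book’s exact code; here f is assumed to take the current guess and hand back a revised one, with the loss calculation and update folded inside it):

(define (revise f revs theta)
  (if (zero? revs)
      theta                              ; no revisions left: return the guess
      (revise f (- revs 1) (f theta))))  ; otherwise revise once and go again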

LISP for Learning

In order to show the revise function, the authors have to take a detour into the syntax of Scheme/Racket. They introduce the map function, which is needed in order to perform the update on all the parameters in theta. As is so often the case, this detour is both elegant and counter-intuitive. The authors ask readers to develop their understanding of recursive functions and their understanding of machine learning at the same time.
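To give a flavour of what map is doing here, the following sketch nudges every parameter at once (the gradients and alpha are placeholder numbers, not malt code):

(define alpha 0.01)
(define gradients (list 2.0 -0.5))
(define theta (list 0.0 0.0))

(map (lambda (p g) (- p (* alpha g))) theta gradients)
; => (-0.02 0.005)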

Moderating the group, I have found this kind of thing frustrating. I want everyone in the group to enjoy themselves, but the book is periodically quite demanding because of the way it interleaves multiple strands of understanding at once. The subtitle of the book is “A straight line into deep learning,” but the book is more like a gradient descent down a bumpy, swerving error curve.

Of course, all technical fields are difficult, because they require learners to master new abstractions. Overall, I find that the authors of The Little Learner do a good job of ordering and prioritising their material, and the book certainly appeals strongly to my own imagination. But I wonder if it would be worth hopping off the “straight line” of The Little Learner for a session or two, so we could explore the deep links between recursive function theory and AI in general. A detour into the delightful writing of Douglas Hofstadter might be on the cards…

Notes

  1. Thanks, Harold Bloom (1930–2019). 

  2. In our example, the loss function is the “l2 loss,” which is based on the squared error. 

  3. In our example, the skeleton of the model is line, which has an extremely simple template: $y = wx + b$. There are always exactly two parameters, w and b (or $\theta_0$ and $\theta_1$). 

The uses of error

It’s been a while since we have met, but we have made progress through The Little Learner since our detour into HESPI. We are steadily moving towards the back-propagation algorithm, the fundamental learning algorithm for all artificial neural networks. We will resume our quest next time on page 78.

The back-propagation algorithm was popularised for neural networks in the 1980s by Geoffrey Hinton and his collaborators; Hinton, Yoshua Bengio and Yann LeCun went on to jointly receive the 2018 Turing Award from the Association for Computing Machinery for the deep learning research built upon it. It is called back-propagation because it propagates the loss back through the neural network. What does this mean? I break it down below.

Why do we need back-propagation?

As is their wont, the authors of The Little Learner do not straightforwardly explain why the back-propagation algorithm is necessary. Instead, they try to reveal this necessity gradually, through their Socratic style.

They begin with a bad learning algorithm: simply pick a parameter of the model, adjust it by an arbitrary amount, and then see if the guess improves. If it improves, do the same thing again. If it doesn’t improve, stop—you’ve reached the best answer you can find. (I covered this algorithm in a previous post.)

There are two big drawbacks to this algorithm:

  1. You can only adjust one parameter at a time. If you are training GPT-4, with hundreds of billions of parameters, this will be an excruciatingly slow process.
  2. The improvement is arbitrary. There is no guarantee that you are changing the right parameter, by the right amount, in the right direction (i.e. should that parameter get bigger by addition or smaller by subtraction?)

Remarkably, back-propagation solves both of these issues. It allows you to adjust all the parameters at once, and it allows you to calculate how much you should increase or decrease each parameter to (more or less) ensure that the model actually improves.

So far we are focussing on drawback 2: working out how much to increase or decrease each parameter. It turns out that you can use the objective function to determine both the magnitude and direction of change. You measure the error of the model, then adjust its parameters by a multiple of the error. That this works may seem spooky, but can be explained through calculus.
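To make the calculus a little more concrete for the line model: the slope of the l2 loss with respect to $w$ is itself a sum of the errors, each scaled by its input (a standard result, not a quotation from the book):

$$ \frac{\partial}{\partial w} \sum_i \big( (w x_i + b) - y_i \big)^2 = \sum_i 2 \big( (w x_i + b) - y_i \big) x_i $$

So the size and sign of the errors directly determine how much, and in which direction, $w$ should change.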

Three pieces: loss, learning rate, propagation

The loss is a measurement of how well the network performs. We have already seen that to train a neural network, you need to define an objective function that the system can use to test itself against the training data. In our reading last session, we found out that you can use this loss to help you improve the network’s parameters.

To do this, we were told, you should first calculate the loss, and then multiply it by a new number, the learning rate or rate of change. The learning rate is necessary because the loss can be very large. If you naively subtract the loss from a parameter, then you are likely to overcorrect the model, and make it worse. Thus you multiply the loss by a small number, e.g. $0.0099$, and then use this much smaller number to improve the relevant parameter.

When you take the loss, and use it to adjust the parameters of the model, this is called propagation. At the moment, we are working with an extremely simple model, with only two parameters in a single layer. Later, when we encounter more complex models that have many layers of parameters, this idea of propagation will make more sense. First you adjust the last layer, then the second last layer, then the third last layer, and so on. Because this process begins in the final layer of the network, and moves back towards the first layer, it is called back-propagation.

Of course this implies that there is something called “forward-propagation,” and indeed there is such a thing. This is when you push data into the model to produce the output. The data begins in the first layer, then the output of the first layer is passed to the second layer, and so on, until it reaches the output layer.

Thus during training, there is a loop of forward- and back-propagation. First a batch of training data is put into the model. It is propagated forward to produce the output, which is then evaluated using the objective function to get the loss. During the back-propagation, the loss is multiplied by the learning rate, and then fed back through the network to adjust all the weights. This is repeated over and over until the network reaches a desired state.

How humans learn

The Little Learner is not only a book about how machines learn, but a book about how humans learn. We have discussed the book’s dialogic style and use of Scheme/LISP many times in the Anticodians, and it is safe to say that the authors’ view of human learning is controversial. The discussion proceeds logically, and the student’s ignorance is (for the most part) modelled by the voice in the second column. But the authors’ decision to avoid all foreshadowing (or to use the wanky word, prolepsis) does sometimes make the presentation a bit confusing. We are always heading in a direction, but they never tell us what that direction is.

Perhaps they could have taken some inspiration from artificial neural networks and the back-propagation algorithm. When a neural network is trained via back-propagation, it is given all the answers in advance, and then is allowed to find a path towards the answers in small incremental steps. The Little Learner might be easier for beginners if they too are provided a picture of the answer before commencing their incremental journey towards it!

ChatGPT sees the herbarium

This week we paused our reading of The Little Learner to entertain a guest: Dr Robert Turnbull, a Senior Research Data Specialist at the Melbourne Data Analytics Platform. He talked us through the HESPI project, which discovers structured data in handwritten herbarium specimen sheets. We then read through llm.py, the part of the software which interfaces with an online LLM to perform data cleaning at the end of the pipeline.

We resume The Little Learner on page 67 in our next session.

AI for GLAM

In The Little Learner, we are reading the code for building an LLM from the ground up. We have been learning about basic data structures and algorithms, such as tensors, linear functions and loss functions, which we will eventually assemble like Lego bricks into an Artificial Neural Network. This week we examined an LLM from the top down. Once you have an LLM available, what use is it to critical and historical scholars like ourselves?

HESPI solves a problem that is common for Galleries, Libraries, Archives and Museums (GLAM). GLAM institutions often possess an enormous amount of data, but in a hard-to-use form. In this case, the data is in the form of botanical specimen sheets. A specimen sheet includes a dried specimen of a plant, and some handwritten information about the specimen, such as the scientific name and the location where it was collected. It is easy to photograph a specimen sheet, or to allow a person to look at it in a cabinet, but what if you wanted to estimate the historic range of a plant? What if you wanted to compare the collections of multiple museums? What if you wanted to validate a climate model against the location data available in the sheets? Such analysis is extremely difficult if the sheets are only available as photographs or physical objects.

HESPI discovers structured data in the specimen sheets. It segments each part of the photograph, distinguishing the specimen, the data card and the color swatch. It interprets the text on the sheet, and organises it for entry into a database. As part of this process, an LLM plays a crucial role. The specimen sheet first passes through a number of specialised subsystems that discover certain parts of the data. At the final stage, the data is passed as a prompt to the LLM, which attempts to remove any errors that have crept in from earlier stages in the pipeline. It cleans typos, OCR glitches and so on, and according to Rob and the HESPI team, improves the overall accuracy of the system by about 10 percentage points.

We are still on the crest of an AI hype wave, and the CEOs of self-proclaimed “AI Companies” continue to tout their products as revolutionary devices that make computers more creative and conversational. HESPI presents a more realistic model of what LLMs can do: boring, routine writing tasks that require more-or-less mechanical manipulation of text. When used in this way, LLMs are genuinely useful, and can enable cultural institutions to achieve projects that were hitherto out of reach.

Describing chats in code

In our close reading, we focussed on llm.py, the part of HESPI that hooks in to an online LLM for the final data-cleaning step. The main purpose of the file is to construct a prompt that describes the data-cleaning task for an individual specimen sheet, and then send this prompt to either Claude or ChatGPT for data cleaning.

First we need the building-blocks for a prompt. These are imported at the top of the file:

from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langchain.prompts import ChatPromptTemplate

langchain is a Python module that provides Python classes for interacting with online chatbots. Each class represents an important part of a prompt. In this case, the HumanMessage will be the question that HESPI asks the system: this is analogous to the prompt that you might type into the online chat interface of ChatGPT. The AIMessage is the answer given by the system itself: this is analogous to the answers you see when you ‘chat’ with ChatGPT online. We will see shortly that HESPI actually tells the AI how to begin its response, in order to discipline its output.

The SystemMessage is a hidden message which the chatbot considers before interpreting the HumanMessage or generating its own AIMessage. Typically, chatbots are designed to give extra weight to SystemMessages, so that developers of AI systems have power to control the output of the model, without necessarily knowing what users will type. You are generally unable to provide such system messages in the public interface for ChatGPT and similar services.

The ChatPromptTemplate is the glue that holds the different kinds of message together. Essentially, llm.py receives the output from the prior stages of HESPI, generates a HumanMessage, AIMessage and SystemMessage for the current specimen sheet, and then combines them into a single ChatPromptTemplate that is sent to the relevant chatbot.

Which chatbot, I hear you ask?

from langchain.chat_models.base import BaseChatModel
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

The BaseChatModel is an “abstract class,” or a general template for any LLM that you wish to use with langchain. At present, HESPI supports two concrete models: ChatOpenAI (i.e. ChatGPT) and ChatAnthropic (i.e. Anthropic’s Claude models, such as Sonnet). As we discussed in the group, this is the first time we have seen brand names in a source file. This is not unusual: it is common enough for source files to contain information about the company that owns them. It is also common for code that interacts with third-party software to include other companies’ brands in it: there are thousands of packages out there with google, SQLite, MySQL, excel or aws scattered among the function names and class definitions. The presence of brand names in HESPI does raise the question, however: as more tasks are offloaded to cloud-based language models such as Claude and GPT-4, how much more frequent will corporate vocabulary become in source code? How will this affect programmers’ relationship to the code they write? Perhaps not at all.

This is the key function of llm.py, which provides the main “entry point” for the script:

def build_template(institutional_label_image:Path, detection_results:dict) -> ChatPromptTemplate:

The function takes two inputs:

  • institutional_label_image: The image file of the “institutional label” on the specimen sheet, which is the textual part of the document with the species name, etc.
  • detection_results: A Python dict containing the output from the prior steps in the HESPI pipeline. This is the data that the LLM will try to clean up, by inspecting the image of the institutional label for itself, and scanning through the language of the detection results.

The output of the function is a ChatPromptTemplate, a composite object containing the various kinds of messages described above. This template can be sent to one of OpenAI’s or Anthropic’s systems, and the corrections will be returned in a (hopefully) well-structured and usable form.

Prompt-English as a programming language

At the heart of the build_template function is a Python f-string that collects information from HESPI and massages it into a textual prompt for the LLM. This then forms the HumanMessage for the system.

main_prompt = f"""
        We have a pipeline for automatically reading the institutional labels and extracting the following fields:\n{', '.join(label_fields)}.
        
        You need to inspect an image and see if the fields have been extracted correctly. 
        If there are errors, then print out the field name with a colon and then the correct value. Each correction is on a new line.
        If the values provided are correct, then don't output anything for that field.
        When you are finished, print out 5 hyphens '-----' to indicate the end of the text.

        For example, if the 'genus' field and the 'species' field were extracted incorrectly, then you would print:
        genus: Abies
        species: alba
        -----

        Here are the following fields that we have extracted from the institutional label:
        {values_string}

        {ocr_results}

        Here is the image of the institutional label:
    """

The three parts in curly braces ({}) contain data that is spliced into the prompt. These are specific to the specimen sheet whose data is to be corrected. The rest of the prompt is fixed, and is fed to the LLM every time.

The companies that own LLMs often present them as “natural,” “intuitive” and “conversational.” But it is apparent from this example that they are nothing of the sort. In order to get an LLM to provide sensible answers to a question, you need to craft the prompt to instruct it, carefully and repeatedly, in what to do. In this case, the relatively simple instructions are given in extremely clear detail. First the task is described in words, e.g. "If there are errors, then print out the field name with a colon and then the correct value. Each correction is on a new line." Then exactly this instruction is shown in an explicit example: "genus: Abies". Clearly some effort went into the engineering of this prompt, to ensure that the LLM gave sensible answers. In addition, a SystemMessage is provided to determine the LLM’s output:

system_message = SystemMessage("You are an expert curator of a herbarium with vast knowledge of plant species.")

And the human programmer also begins the AI’s response for it:

ai_message = AIMessage("Certainly, here are the corrections:")

Not only this, but these prompts are provided to the LLM every single time. If 1,000 specimen sheets are corrected, then the LLM is instructed 1,000 times that it is an “expert curator.” It is instructed 1,000 times to put each correction on a new line, with a colon in between the field and the corrected value. It is instructed 1,000 times to begin its answer with the phrase “Certainly, here are the corrections:”. If it corrects 10,000 specimen sheets, it receives these instructions 10,000 times. If it corrects 100,000 specimen sheets, it receives these instructions 100,000 times.

This activity of interacting with a chatbot is nothing like conversing with a research assistant. What it is like is programming a computer. The “English” used in the prompt is not spoken or written, but engineered. It is written in a clear, technical way to elicit a predictable response from a mechanical system. What LLMs have started to enable is the old dream of “natural language programming,” where programmers can become (partly) free of the strictures of formal language. What LLMs have not even come close to doing is replacing programming as a human activity: it is still necessary for the human to design the program, to determine the requirements, to structure the system—even if they write some English as they do so.

Are abstractions high or low?

As an aside, we considered how abstraction is represented in source code. The BaseChatModel is the most abstract kind of chat model in langchain. All chat models are instances of BaseChatModel. Here abstraction is metaphorically low: the more abstract something is, the more basic it is. There is a different metaphor for abstraction in ChatPromptTemplate. A Template is more abstract than a simple Prompt, because all Prompts fit into the template. In this case, abstraction is metaphorically empty: the more abstract something is, the less filled it is with content.

These are not the only metaphors for abstraction. As we discussed in the group, abstraction can sometimes also be cloudy or high. The metaphors of emptiness and lowness don’t really sit well together: the lower and more basic something is, the closer it is to the solid, filled-in, all-too-real ground.

Abstraction is a fundamental concept of computer science, and of the art of programming. Even in relatively straightforward, practical code like llm.py, programmers are obliged to grapple with what abstraction is, and express ideas about its nature, even if they don’t mean to.

How to make a guess

Today we read up to Chapter 3, Frame 37, on page 67 of The Little Learner. You can find the code for the session in the Github repo.

Three steps to learning

So far we have only seen the data structures for deep learning. Today, we started to see how to apply learning algorithms to the data.

At a high level, we found that deep learning involves three steps:

  1. Guess the function parameters
  2. Measure the ‘loss’
  3. Improve the guess

You keep doing 1, 2 and 3 until the ‘loss’ is nearly $0.0$. At this point, little further improvement is possible, and you stop.

What a guess looks like

The first step is to guess the function parameters. How do we do this?

For the sake of this chapter, a guess is a list of parameters. Let’s say we are trying to fit a straight line to some data. A straight line can be represented by the following function:

$$ f(x) = wx + b $$

In a machine learning situation, we already know $x$ and $y$ for a large number of entities. What we don’t know is what $w$ and $b$ should be. So, let’s make a guess! We can put $w$ and $b$ together into a list, and call that list $\theta$. In the book, the initial guess was $w = 0.0$ and $b = 0.0$. That results in the following $\theta$:

$$ \theta = (0.0, 0.0) $$

or, in Scheme/Racket:

(define theta (list 0.0 0.0))

That’s it—a guess is a list of numbers. Now the line function is very simple. There are only two parameters, $w$ and $b$. A more complex function may have many parameters, and those parameters may have a very complex structure. In the future, theta may be a very complex variable—a whole collection of tensors all linked together. But that is, of course, the future.

How to calculate the loss

So far we have encountered just one “loss function,” which allows us to measure how good our latest guess is. The loss function works as follows:

  1. Use the current guess for the parameters to produce some predicted y-values for the x-values in the training set
  2. Subtract these predicted y-values from the real y-values that we already know
  3. Square the differences
  4. Take the sum of the squares

This is called the “l2 loss”. The “2” comes from the fact that we square the errors, which is the same as raising them to the power of 2.
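A tiny worked example with made-up numbers: if the real y-values are $1.0$ and $3.0$, and the current guess predicts $1.5$ and $2.0$, then the l2 loss is

$$ (1.0 - 1.5)^2 + (3.0 - 2.0)^2 = 0.25 + 1.0 = 1.25 $$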

The l2-loss is defined as follows in the text:

(define l2-loss
  (lambda (target) ; the function we are trying to 'learn'
    (lambda (xs ys) ; the data: the x-values and y-values we have
      (lambda (theta) ; our current guess for the function parameters
        (let ((pred-ys ((target xs) theta)))
          (sum
           (sqr
            (- ys pred-ys))))))))

Since the target function and xs and ys will stay the same throughout the whole training process, we can fix these in place, producing an “objective function.” The objective function already knows which function we would like to learn parameters for. It already knows what the x-values and y-values are in the training data. It is just waiting for some guesses that it can try out. In the session, we tried the following guesses for $\theta$, using line as the target function, and some simple points as the xs and ys:

; the 'expectant function' already knows the loss and the target function
(define expectant-function (l2-loss line))

; define objective function: fix the training data in place
(define objective-function (expectant-function line-xs line-ys))

; try first guess
(objective-function (list 0.0 0.0))
; = 33.21

; try second guess
(objective-function (list 0.0099 0.0))
; = 32.5892403

We could keep on trying the algorithm of increasing $\theta_0$ by $0.0099$:

(objective-function (list 0.0198 0.0))
; = 31.9743612

(objective-function (list 0.0297 0.0))
; = 31.3653627

If you perform this operation 106 times, you get the best answer this stepping procedure can find:

(objective-function (list (* 106 0.0099) 0.0))
; = 0.13501079999999996

If you keep going, the loss starts to go up again. That is, the guesses get worse:

(objective-function (list (* 107 0.0099) 0.0))
; = 0.13759470000000001

As the authors of The Little Learner would say, after 107 iterations, we have “overshot” the best possible answer.

How to improve the guess

Clearly, just increasing $\theta_0$ by $0.0099$ is not a great algorithm for learning the function. It takes many iterations to learn an extremely simple function. Looking at the graph on page 58, the correct function is clearly something very close to $y = (1)x + 0$. Shouldn’t this be pretty easy to find? And what if we need to learn $b$ as well as $w$? This algorithm assumes that $b = 0$!

Well, we’ve not seen a better solution yet, but there are many chapters to go, and many layers left in this onion. Assuredly by the end of all this, we will know the arcana of the field.

First interlude

Next meeting, we take a break from The Little Learner and will look at the LLM code for HESPI, a fascinating AI-for-GLAM project.