Working your way through the volumes of The Art of Computer Programming is a fairly challenging task, even for many experienced programmers and computer scientists (at least, so I'm told). Most of the text is fairly easy to understand and digest, but Knuth is a mathematician, so many sections finish with some hard-core, no-holds-barred maths. I have no formal maths education beyond GCSE maths (high-school maths, for the Americans amongst you), so you can imagine my horror when faced with an enormous block of text involving set theory, series, functions, and the rest at the end of the 'Basic Concepts' section. This article is essentially my notes as I work my way through the serious maths.
The end of the 'Basic Concepts' section in the first chapter is the first time Knuth turns to formal mathematical definitions and notation to describe phenomena that have previously been described in plain text. It's at this point that those without any advanced formal maths education will have difficulty. If the reader is anything like me, however, they are a completionist, and won't want to skip those bits to move forward. They'll want to understand them. Let's make a start where he does.
When Knuth refers to a computational method as a quadruple, he is simply saying that a computational method is composed of four distinctly defined parts. He labels these four parts (Q, I, Ω, f). He then briefly describes each component of this quadruple. I and Ω are sets (collections of things), and Q is also a set, one which contains the elements of both I and Ω. At this point it's easy to assume, mistakenly, that Q contains only the sets I and Ω and nothing else, but we later find that this is not the case. Lastly, he describes f as a function from Q into itself: a process which takes as input an element of the set Q and returns another element of Q.
The statement in question, f(q) = q for all q in Ω, essentially means that what our function f returns will be the same as its argument (i.e. the value will not change) if that argument is a member (element) of the set Ω. This makes more sense once Knuth makes a clarification in his next statement. Spoiler alert: Ω is the set of possible outputs of our computational method. Once we know this, it's a little easier to understand: passing an output back into our function will not change it.
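A tiny sketch may help here. This toy example is entirely my own, not Knuth's: I model Q as the non-negative integers, Ω as {0}, and f as a single countdown step. The key property on display is that applying f to an element of Ω leaves it unchanged.

```python
# A toy computational method (my own illustration, not from the book).
# Q: the non-negative integers (all computational states)
# I: the positive integers (the inputs)
# Omega: {0} (the outputs)
# f: a function from Q into itself

def f(q):
    """One computational step: count down towards 0."""
    if q == 0:       # q is in Omega, so f must leave it unchanged
        return q
    return q - 1     # otherwise, move one step closer to an output

print(f(5))  # -> 4: an ordinary state moves on
print(f(0))  # -> 0: an output is left fixed, f(q) = q for q in Omega
```

Passing the output 0 back into f any number of times will never change it, which is exactly the property being described.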
So Q is a set that contains all possible states of the computation, i.e. all the possible variations of input, output, and all the stages in between. The set I contains all possible inputs. The set Ω contains all possible outputs (sorry if I spoiled that revelation for you earlier). And finally, f represents the computational rule: the process applied to each state to get the next state, eventually (hopefully) giving us our output.
I wrestled with the above lines for some time, and had to ask a friend with a maths degree what he thought before securely arriving at a consensus. My first thought was that each x (each element) in the set of all inputs, I, was itself a sequence. But when Knuth says "Each input x in the set I defines a ... sequence", that is not what he appears to mean. The only interpretation that makes sense here is that for each x in the set of all inputs there exists a corresponding sequence whose first state is x_0, and that this first state is the raw input; hence x_0 = x. When he says that x_{k+1} = f(x_k) for k ≥ 0, he's saying that for every state in the sequence with an index of 0 or more, we can get the next state by putting it through the function f. So, if you put x_1 into the function, you get x_2 out: f(x_1) = x_2 (assuming, of course, that x_1 is not the final state; we'll come to that later). That way, we can repeatedly apply the function to each new state to get the state that follows. An important note here: when Knuth earlier states that Q is the set of all computational states, these are the states he is talking about (alongside the inputs and outputs, of course).
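In code, the idea reads quite naturally. Sticking with my own countdown toy (not an example from the book): given an input x, the sequence x_0, x_1, x_2, ... is produced by repeatedly applying f, with x_0 = x.

```python
def f(q):
    """One step of a toy computational method: count down to 0."""
    return q if q == 0 else q - 1

def sequence(x, steps):
    """Return the states x_0, x_1, ..., x_steps defined by the input x."""
    states = [x]                        # x_0 = x, the raw input
    for _ in range(steps):
        states.append(f(states[-1]))    # x_{k+1} = f(x_k)
    return states

print(sequence(3, 5))  # -> [3, 2, 1, 0, 0, 0]
```

Notice that once the sequence reaches 0 (an element of Ω) it simply stays there, because f leaves outputs unchanged.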
The sequence is said to terminate in k steps, where k is the index of the first state in the sequence that is an element of Ω. What this means, of course, is that the sequence runs from x_0 (the input) to x_k, and x_k is then the output, and therefore a member of Ω.
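The same toy method (again my own sketch, not Knuth's) makes termination concrete: we apply f until the state lands in Ω, counting the steps as we go.

```python
def f(q):
    """One step of a toy computational method: count down to 0."""
    return q if q == 0 else q - 1

OMEGA = {0}  # the set of outputs in this toy method

def run(x):
    """Apply f until the state is in OMEGA; return (output, k)."""
    state, k = x, 0
    while state not in OMEGA:
        state = f(state)
        k += 1
    return state, k

print(run(4))  # -> (0, 4): terminates in k = 4 steps with output x_4 = 0
print(run(0))  # -> (0, 0): an input already in Omega terminates in 0 steps
```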
Now here's the clincher: thus far we've been defining a computational method rather than an algorithm.
What we mean here when we say "finitely many steps for all x in I" is that for every element x in I (all our potential inputs), the corresponding sequence of computational states is finite, i.e. it does eventually end. Or in layman's terms: no input we put into our algorithm will cause it to run forever.
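To see why the distinction matters, here is a computational method (once more a toy of my own, not from the book) that is not an algorithm: for any positive input the sequence never reaches Ω, so to demonstrate this in finite time we have to cap the number of steps we check.

```python
def f(q):
    """One step that counts *up*, so a positive input never reaches 0."""
    return q if q == 0 else q + 1

OMEGA = {0}

def reaches_omega(x, max_steps=1000):
    """Check whether the sequence starting at x reaches OMEGA within max_steps."""
    state = x
    for _ in range(max_steps):
        if state in OMEGA:
            return True
        state = f(state)
    return state in OMEGA

print(reaches_omega(0))  # -> True: 0 is already an output
print(reaches_omega(1))  # -> False: the sequence 1, 2, 3, ... never hits 0
```

This is a perfectly well-formed computational method, with Q, I, Ω and f all defined, but because the input 1 never terminates it fails the extra condition required to be an algorithm.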
And there ends our formal definition. I hope this has been as helpful to you reading it as it was for me to write it. And if your brain isn't fried from reading the above, mine is certainly fried from having written it. The only thing left to do at this point is to go through an example, which is exactly what Knuth does next. In my next instalment I will go through that example.
So, until then, happy reading!