The point at which one ceases to be certain marks the shift from certainty to degrees of belief, and a degree of belief can be measured as a probability.
Probability is quantified as a number between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty). The higher the probability of an event, the more certain we are that the event will occur.
The words “certainty” and “probability” do not apply to propositions themselves, which are simply either true or false. Propositions entertained by us with suspended judgment should never be qualified as either certain or probable.
In American common law there are degrees of certainty and doubt. Certainty attaches to judgments made beyond the shadow of a doubt; less certain are judgments made beyond a reasonable doubt; and less certain still are judgments made by a preponderance of the evidence.
The last two are judgments to which some degree of probability must be attached, the former more probable, the latter less probable.
The propositions in each of these two cases, when entertained with suspended judgment, are either true or false. Certainty and probability qualify our judgments about the matters under consideration, not the propositions entertained with suspended judgment.
This statement brings us to consider what happens by chance and what is causally determined.
Here we must distinguish between the mathematical theory of probability and the philosophical theory of what happens by chance.
In the mathematical theory of probability, which begins with the work of Blaise Pascal, one calculates the chance of an event from the number of possibilities present. In the toss of a coin, for example, the chance of its being heads or tails on any toss is fifty-fifty: in the long run, over many tosses, that is how one should wager on the next toss, provided the coin being tossed is not affected by extraneous factors.
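The fifty-fifty claim is a statement about long-run relative frequency. A minimal simulation sketches this, assuming a fair coin modeled by a pseudorandom generator (the function name and seed here are illustrative, not from the text):

```python
import random

def heads_frequency(n_tosses, seed=0):
    """Toss a simulated fair coin n_tosses times and return the fraction of heads."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The observed frequency drifts toward 0.5 as tosses accumulate.
for n in (10, 1_000, 100_000):
    print(n, heads_frequency(n))
```

No single toss is predictable; it is only the long-run frequency that justifies the even wager.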
In the philosophical theory of probability, what happens by chance is what happens without a cause. Consider the coincidence of two individuals who happen to meet on a particular street at a particular time. Why do we call this a coincidental meeting, and regard it as an uncaused event?
The answer is that each of the two individuals is caused to be at the spot where the chance meeting occurs by all the causal factors operating in his own past, but nothing in their separate pasts causes them to meet each other there. The coincidence is, therefore, an uncaused or a chance event.
Probability can be used to measure degree of belief in two ways: objectively and subjectively.
The objective measure is a measure of the rational degree of belief in a proposition given a set of evidential propositions.
The subjective measure is the measure of a particular subject’s dispositions to decide between options. In both measures, certainty is a degree of belief; however, there can be cases where one belief is stronger than another, yet both beliefs are plausibly measurable as certain, whether objectively or subjectively.
In ordinary language, we can say that while both beliefs are certain, one belief is more certain than the other.
Uncertainty is all around us; we can’t expect certainty. But uncertainty can often be “quantified” — that is, we can talk about degrees of certainty or uncertainty. This is the idea of probability: a higher probability expresses a higher degree of certainty that something will happen.
Contemporary statistician Xiao-Li Meng reiterates and expands on this idea, using the words “randomness” and “variation” instead of uncertainty:
Statistics, in a nutshell, is a discipline that studies the best ways of dealing with randomness, or more precisely and broadly, variation. As human beings, we tend to love information, but we hate uncertainty — especially when we need to make decisions. Information and uncertainty, however, are actually two sides of the same coin.
Statistical techniques can’t eliminate uncertainty, but they can help us gain some knowledge despite it. They can help us see patterns through it, and help us quantify the certainty or uncertainty that the patterns are real and not just chance artifacts of our data or of our perception. The following quote from mathematics educator Alan Schoenfeld nicely expresses reasonable expectations in fields where statistics is likely to be applied:
Consider the theory of evolution, for example. Biologists are in general agreement with regard to its essential correctness, but the evidence marshalled in favor of evolution is quite unlike the kind of evidence used in mathematics or physics. There is no way to prove that evolution is correct in a mathematical sense; the arguments that support it consist of (to borrow the title of one of Pólya’s books) “patterns of plausible reasoning”, along with the careful consideration of alternative hypotheses. In effect, biologists have said the following: “We have mountains of evidence that are consistent with the theory, broadly construed; there is no clear evidence that falsifies the proposed theory, and no rival hypotheses meet the same criteria.”
In other words, in many areas, we can’t expect certainty, or even anything approaching it, from a single study. But an accumulated body of evidence based on high quality research can give us a high degree of certainty. Working well in a field with high degrees of uncertainty requires patience and often humility while the mountains of evidence accumulate — and might not turn out to support our pet theories.
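One common way statistics quantifies whether a pattern is a chance artifact is to ask how often chance alone would produce something at least as striking. The sketch below is an illustrative Monte Carlo estimate, not a method from the text; the numbers (60 heads in 100 tosses) and the function name are assumptions:

```python
import random

def chance_of_at_least(observed_heads, n_tosses, n_sims=10_000, seed=1):
    """Estimate P(heads >= observed_heads) for a fair coin by simulation."""
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    hits = 0
    for _ in range(n_sims):
        heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
        if heads >= observed_heads:
            hits += 1
    return hits / n_sims

# How surprising is 60 heads in 100 tosses if the coin is fair?
print(chance_of_at_least(60, 100))  # a small probability, a few percent
```

A small probability here means the pattern would rarely arise by chance, which raises, without guaranteeing, our certainty that the pattern is real.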

Like knowledge, certainty is an epistemic property of beliefs. (In a derivative way, certainty is also an epistemic property of subjects: S is certain that p just in case S’s belief that p is certain.) Although some philosophers have thought that there is no difference between knowledge and certainty, it has become increasingly common to distinguish them. On this conception, then, certainty is either the highest form of knowledge or the only epistemic property superior to knowledge. One of the primary motivations for allowing kinds of knowledge less than certainty is the widespread sense that skeptical arguments are successful in showing that we rarely or never have beliefs that are certain but do not succeed in showing that our beliefs are altogether without epistemic worth.

As with knowledge, it is difficult to provide an uncontentious analysis of certainty. There are several reasons for this. One is that there are different kinds of certainty, which are easy to conflate. Another is that the full value of certainty is surprisingly hard to capture. A third reason is that there are two dimensions to certainty: a belief can be certain at a moment or over some greater length of time.

There are various kinds of certainty. A belief is psychologically certain when the subject who has it is supremely convinced of its truth. Certainty in this sense is similar to incorrigibility, which is the property a belief has of being such that the subject is incapable of giving it up. But psychological certainty is not the same thing as incorrigibility. A belief can be certain in this sense without being incorrigible; this may happen, for example, when the subject receives a very compelling bit of counterevidence to the (previously) certain belief and gives it up for that reason. Moreover, a belief can be incorrigible without being psychologically certain. For example, a mother may be incapable of giving up the belief that her son did not commit a gruesome murder, and yet, compatible with that inextinguishable belief, she may be tortured by doubt.

A second kind of certainty is epistemic. Roughly characterized, a belief is certain in this sense when it has the highest possible epistemic status. Epistemic certainty is often accompanied by psychological certainty, but it need not be. It is possible that a subject may have a belief that enjoys the highest possible epistemic status and yet be unaware that it does. In such a case, the subject may feel less than the full confidence that her epistemic position warrants. I will say more below about the analysis of epistemic certainty and its relation to psychological certainty.