What I'm discussing now is a different question: Is it impossible (not just extremely unlikely) that a human being flipping a coin is not perfectly random, and so will never actually produce all possible combinations of, say, 100 flips, no matter how long it goes on?
The question now is, what are the odds that a human flipping a coin, if it could go on indefinitely, would not in fact produce all possible combinations at any given scale? In other words, what are the odds that it is in some way biased in a way that would prevent extremely long streaks from happening?
Although the "new question" may look superficially similar to the original thread topic, it is a very different question - here you are using the word "odds" in a way that requires another interpretation of the whole concept of probability.
Q1. Is it impossible (P1=0) for a sequence of 100 heads to come up when a coin is fairly tossed 100 times?
Q2. Is it impossible (P2=0) for "humans flipping coins" to be unfair in such a way that they are prevented from getting a sequence of 100 heads?
The first question asks about a fair, random experiment, and makes it clear what constitutes a trial and what constitutes a success. It implies a frequentist interpretation of the probability P1, and in that context, it can be easily and clearly answered. (The answer is "no, the probability is very low, but non-zero".)
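To make the frequentist reading concrete, here is a quick sketch (my own illustration in Python; nothing here comes from the thread itself). It computes the exact probability of 100 heads in 100 fair tosses, and then simulates the long-run frequency of a shorter streak, since a 100-flip streak is unobservable in practice:

```python
from fractions import Fraction
import random

# Exact probability of 100 heads in 100 fair tosses: (1/2)^100.
p1 = Fraction(1, 2) ** 100
print(float(p1))  # about 7.9e-31: very low, but non-zero

# Frequentist reading: over many repeated trials, the observed
# fraction of "all heads" runs approaches the theoretical value.
# Use a 10-flip streak here, whose probability is (1/2)^10.
random.seed(0)
trials = 200_000
hits = sum(all(random.random() < 0.5 for _ in range(10))
           for _ in range(trials))
print(hits / trials)  # close to 2**-10 = 0.0009765625
```

The point of the simulation is that P1 has a clear operational meaning: repeat the well-defined trial many times and count successes.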
The second question doesn't define any fair, random experiment. It isn't clear what constitutes a trial: are we somehow supposed to construct humans randomly (and what would still constitute "humans flipping coins"?) and analyze how often we may end up with humans unable to flip 100 heads in a row? That doesn't make any sense, and the question doesn't ask that anyway. It refers to a single scenario that's already been set up - existing humans either being prevented from flipping 100 heads in a row or not - and asks about the "probability" of the answer (which is already established but unknown to us) being this or that.
The second question therefore intrinsically implies a Bayesian interpretation of the probability P2, and it is a question of our belief or confidence in something that we can't know.
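The Bayesian reading can also be made concrete with a toy model (entirely my own illustration - the "streak-suppressing" hypothesis and all its numbers are invented for the example). Here the competing hypotheses are treated as models, evidence updates them via Bayes' rule, and the posterior expresses a degree of belief rather than a long-run frequency:

```python
def likelihood(flips, p_after_streak, streak_len=3):
    """Likelihood of a flip sequence (of 'H'/'T') under a toy model where
    P(H) is 0.5 normally, but drops to p_after_streak once the last
    streak_len flips were all heads (a hypothetical streak suppressor)."""
    like, run = 1.0, 0
    for f in flips:
        p_h = p_after_streak if run >= streak_len else 0.5
        like *= p_h if f == 'H' else (1 - p_h)
        run = run + 1 if f == 'H' else 0
    return like

def posterior_fair(flips, prior_fair=0.5, suppressed_p=0.3):
    """Degree of belief that the coin is fair, given the observed flips."""
    l_fair = likelihood(flips, 0.5)           # fair coin: no suppression
    l_supp = likelihood(flips, suppressed_p)  # invented suppressing model
    return (prior_fair * l_fair /
            (prior_fair * l_fair + (1 - prior_fair) * l_supp))

# Observing a long run of heads shifts belief toward fairness, because
# the streak-suppressing hypothesis makes such runs less likely.
print(posterior_fair(tuple('H' * 10)))
```

Note the contrast with the frequentist case: nothing here is a count of successes over repeated trials; the output is our confidence in a hypothesis about a single, already-fixed state of affairs.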
Considering that results of real humans flipping coins are in principle affected by the entire observable universe (it's possible for a meteorite to fall nearby, shake the ground and alter the result; it's possible for a cosmic ray to strike a neuron controlling the arm and alter the result, etc.), the hypothesis that humans are prevented from flipping 100 heads is actually a statement about the entire observable universe being set up in a particular way that rules out humans flipping 100 heads. It would therefore seem that the upper bound of probability P2 (our confidence in this hypothesis) might be extremely low, dramatically lower than 2^-100 (but let's not forget that probability P1 = 2^-100 is a different kind of probability and the two can't be directly compared).
Could we rule out the hypothesis altogether, though (P2=0)? It seems that we can't, because, after all, we could always be living in a virtual reality which was for some weird reasons set up to prevent flipping 100 heads.
The answer to Q2 might therefore ultimately be the same as the answer to Q1 ("no, the probability is very low, but non-zero"), but it is important to note that the two probabilities P1 and P2 represent different concepts that cannot be directly mixed.