
Moderated Coin Flipper

Yes, you do most definitely disagree... backpedaling now is not going to erase your words that prove it.





And neither will PRETENDING to have miscommunicated anything... you were very "affirmatively clear about it"...




And QUOTING your own affirmations is not misrepresenting you...
You're trying to tell me I believe something other than what I believe. You're always going to be wrong about that, no matter how you interpret my words.

I believe that, at a fundamental level, we cannot know if natural processes are truly random, or if they are ultimately the result of some deterministic condition or event.

This doesn't prevent me from agreeing with the general consensus about being able to predict the 50/50 convergence of large numbers of (pseudo)random coin flips. That part I agree with, even if I don't know for sure if the underlying flips are truly random or actually deterministic.

And, again: I agree with FZ on that issue of 50/50 convergence. It's no use telling me I disagree with him; I know that's wrong.
 
You're trying to tell me I believe something other than what I believe. You're always going to be wrong about that, no matter how you interpret my words.


I am not trying to tell you anything... I am quoting your own words...

I apologize for not being clearer. I'm affirmatively agnostic about randomness in nature. It's not that I don't have the knowledge. It's that I don't think the knowledge can be had. Not by me, not by you, not by anyone.
 
Computer PRNG Empirical Experimentation

...
• Your app isn't "empirical experimentation" because it isn't actually random. It uses an algorithm that is completely deterministic. You might get the occasional actually random result caused by a bit flip induced by a cosmic ray particle, but barring that slim possibility, if you run the same algorithm from the same initial starting condition on two different computers you'll get the same sequence of numbers from both. That is not a random sequence of numbers - it's pseudo random, and therefore not scientifically valid for a study of random events.


The above is arrantly mistaken on many levels...

And I already addressed this error many pages ago... if only you had read the posts in the thread before repeating errors already proven to be errors.


You are right of course that PRNGs are only a SIMULATION of the randomness of reality.

However... you are wrong that it is not wise to use them for the purposes of SIMULATING naturally occurring randomness.

Simulations are used extensively in numerous fields of science and the humanities... although it depends on the requirements of the application, most of the time PRNGs are used and are sufficient. And if the application requires a TRNG, PRNGs are often still used anyway, just seeded from a hardware source of randomness.

Nevertheless... although you are assuredly right that PRNGs are not the real randomness of a natural random event... you are still wrong about the wisdom of using PRNGs as a SIMULATION with which to EXPERIMENT, ESTIMATE, and RESEARCH such naturally occurring randomness without having to spend $$$ and prohibitive amounts of real time.

I suggest you look into the benefits and wisdom of simulating natural events in all sorts of fields of science and humanities. (Here is just ONE example of a very serious application)


In the late 1940s, Stanislaw Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:

The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations.​

...Monte Carlo methods were central to the simulations required for the Manhattan Project, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948.[20] In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.

...

  • Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.

Monte Carlo and random numbers
The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. The Monte Carlo simulation is, in fact, random experimentations, in the case that, the results of these experiments are not well known. Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally.

Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
...
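
And to make the quoted coin-toss description concrete, here is a minimal sketch of such a Monte Carlo coin-toss simulation (illustrative JavaScript only; the function name and flip counts are mine, not the Coin Flipper source):

[code]
// Minimal Monte Carlo coin-toss sketch (illustrative only, not the Coin Flipper app).
// Draw pseudo-random uniform values in [0, 1); values <= 0.50 count as heads and
// values > 0.50 count as tails, exactly as the quoted description says.
function monteCarloCoinToss(flips) {
  let heads = 0;
  for (let i = 0; i < flips; i++) {
    if (Math.random() <= 0.5) heads++;
  }
  return { heads, tails: flips - heads, headsRatio: heads / flips };
}

// The ratio of heads tends toward 0.5 as the number of flips grows.
console.log(monteCarloCoinToss(1000));
console.log(monteCarloCoinToss(1000000));
[/code]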
 
• The overwhelming scientific consensus is that the results of a sequence of random events with two equally possible outcomes will get closer and closer to 50/50 the more you repeat the event.

:sdl:
The above is... ironically... more than perfectly replied to by your own posts... which I cheered on as "you getting it".

.... such a poor understanding of probability and statistics that they think that it predicts that as more and more coins are tossed, they will somehow magically begin to non-randomly alternate between heads and tails. Like after a billion tosses the results will converge to ...HTHTHTHTHT... for eternity. They think that the probability curve generated by a large sample of events, or the tendency for total results of coin flips to converge very close to 50/50 with larger and larger samples, means that the discrete events are somehow no longer random. They apparently think that statisticians are claiming that the number of events somehow alters the behavior of the coins, which strikes me as very similar to a gambler's fallacy.

:clap:

Thanks.... so so sooooo much.... QED!!!


You see, in statistical analysis, when statisticians talk about the frequencies of certain events converging on a certain ratio of results, they don't mean that it affects the outcomes of individual events. Nothing about the number of previous coin tosses has any effect on the next coin toss. That's why it's a fallacy when a gambler thinks that a number of same results in succession means that a different result is "due" on the next turn. Even though the distribution of heads and tails gets statistically closer to 50/50 with more tosses, if you "zoom in" on any part of the recorded results to look at, say, ten successive results, it won't be distinguishable from any other sample of ten tosses in the same overall series. The statistical convergence is simply a result of the sample size, not any change in the probability of coin tosses.


Hallelujah... Hallelujah.... by Jove you got it...

:bigclap


Just a small additional note: when you have a large number of data points that fluctuate randomly above and below a reference line, then when you add them up the +/- values are statistically likely to sum to 0, or close to it, with smaller fluctuations than the individual data points show. And since the data points are random, even if the last N data points happen to add up to 0, the next M data points will not, and so the N+M data points will not sum to 0 either... although, of course, the total will not be as far off 0 as the individual M or N points are.
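
To see this empirically, here is a minimal sketch (illustrative JavaScript only, not the Coin Flipper code): the running proportion of heads drifts toward 0.5 as the sample grows, while the running heads-minus-tails difference keeps fluctuating and never settles to zero.

[code]
// Sketch: running proportion of heads vs. running heads-minus-tails difference.
// Illustrative only; uses Math.random(), not the actual Coin Flipper code.
function runningStats(totalFlips) {
  let heads = 0;
  const checkpoints = [];
  const step = totalFlips / 10;
  for (let i = 1; i <= totalFlips; i++) {
    if (Math.random() < 0.5) heads++;
    if (i % step === 0) {
      checkpoints.push({
        flips: i,
        proportion: heads / i,     // tends toward 0.5 as i grows
        difference: 2 * heads - i  // heads minus tails; keeps wandering, roughly like sqrt(i)
      });
    }
  }
  return checkpoints;
}

console.table(runningStats(1000000));
[/code]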

So... you are very correct... it is all a statistical TRICK, a deceptive sleight of statistics and averaging.
And.... as you and psion10 and even thermal correctly stated... it is indeed RANDOM and will stay that way no matter how many tosses you do.

Thanks for agreeing with me... although you thought you were not...:thumbsup:
 
....

I believe that, at a fundamental level, we cannot know if natural processes are truly random, or if they are ultimately the result of some deterministic condition or event.

This doesn't prevent me from agreeing with the general consensus about being able to predict the 50/50 convergence of large numbers of (pseudo)random coin flips. That part I agree with, even if I don't know for sure if the underlying flips are truly random or actually deterministic.

And, again: I agree with FZ on that issue of 50/50 convergence. It's no use telling me I disagree with him; I know that's wrong.


:sdl: I am not the one who says that you disagree with Foster... he says so himself... as he describes you in the post below... although of course because of his refusal to read posts... he thought it was me he was describing. :sdl:

.... such a poor understanding of probability and statistics that they think that it predicts that as more and more coins are tossed, they will somehow magically begin to non-randomly alternate between heads and tails. Like after a billion tosses the results will converge to ...HTHTHTHTHT... for eternity. They think that the probability curve generated by a large sample of events, or the tendency for total results of coin flips to converge very close to 50/50 with larger and larger samples, means that the discrete events are somehow no longer random. They apparently think that statisticians are claiming that the number of events somehow alters the behavior of the coins, which strikes me as very similar to a gambler's fallacy.


And here he explains to YOU how wrong you are yet again... far from agreeing with you... although he thought he was explaining it to me because, again, he refused to read the posts.

You see, in statistical analysis, when statisticians talk about the frequencies of certain events converging on a certain ratio of results, they don't mean that it affects the outcomes of individual events. Nothing about the number of previous coin tosses has any effect on the next coin toss. That's why it's a fallacy when a gambler thinks that a number of same results in succession means that a different result is "due" on the next turn. Even though the distribution of heads and tails gets statistically closer to 50/50 with more tosses, if you "zoom in" on any part of the recorded results to look at, say, ten successive results, it won't be distinguishable from any other sample of ten tosses in the same overall series. The statistical convergence is simply a result of the sample size, not any change in the probability of coin tosses.
 
...
I already addressed this error many pages ago... if only you had read the posts in the thread before repeating errors already proven to be errors.

You are right of course that PRNGs are only a SIMULATION of the randomness of reality.

I could quibble over the use of the word, simulation, but ok....

However... you are wrong that it is not wise to use them for the purposes of SIMULATING naturally occurring randomness.

It can be very wrong. For the case at hand, your program is very wrong on two counts.
  1. You are pushing the random number generator beyond the limit of appropriateness. A sequence of about 16,000 numbers is all you can expect to be sufficiently pseudorandom to be useful.
  2. The simulation is totally unnecessary in that Statistics provides a complete solution without the nonsense of an incompetent computer program.

You went on in your post at great length trying to equate your application to Monte Carlo methods. Monte Carlo simulations are quite useful for estimating data in situations where the model being simulated is incomplete or needing calculations that are intractable. Not this case at all.
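
To be concrete about what I mean by Statistics providing a complete solution, here is a minimal sketch (illustrative only; the names and counts are mine, not from your program): for n fair flips the proportion of heads has mean 0.5 and standard deviation 0.5 / sqrt(n), which is the whole answer that a simulation could only estimate.

[code]
// Closed-form result (illustrative sketch only): for n fair coin flips, the
// proportion of heads has mean 0.5 and standard deviation 0.5 / sqrt(n).
function theoreticalSpread(n) {
  return 0.5 / Math.sqrt(n);
}

// Optional cross-check by simulation; it merely estimates the value above.
function simulatedSpread(n, trials) {
  let sumSq = 0;
  for (let t = 0; t < trials; t++) {
    let heads = 0;
    for (let i = 0; i < n; i++) if (Math.random() < 0.5) heads++;
    const deviation = heads / n - 0.5;
    sumSq += deviation * deviation;
  }
  return Math.sqrt(sumSq / trials);
}

const n = 10000;
console.log('formula:   ', theoreticalSpread(n));     // 0.005
console.log('simulation:', simulatedSpread(n, 200));  // close to 0.005
[/code]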
 
I am not trying to tell you anything... I am quoting your own words...

Those words don't mean what you think they mean. I know what they mean, because I wrote them. If you have to choose between what you think I mean, and what I tell you I mean, there is only one correct choice. It's very easy for you to choose correctly, but somehow you keep doing the other thing.
 
Empirically Zooming In On The Data

Foster Zygote gave me a great idea with his insightful post

... if you "zoom in" on any part of the recorded results to look at, say, ten successive results, it won't be distinguishable from any other sample of ten tosses in the same overall series. The statistical convergence is simply a result of the sample size, not any change in the probability of coin tosses.


It is such a great idea that I already anticipated it long before Foster even came into the thread and the "us" all of a sudden started agreeing with him, backpedaling on their previous "affirmative" disagreement.

For example....

... Since convergence is a long-term phenomenon, zooming in on a cherry-picked short-term series of trials, your latest tactic, is meaningless. As you have proved yourself immune to evidence and argument, I'm out.


But nevertheless... I thought it was such a great idea that I had to preemptively adopt it for myself... :sdl:

...
However... anyone with the slightest bit of discernment can easily see how the graph in fact proves you arrantly wrong.

All you have to do is zoom in a little on the part you call convergent and you can see the erratic oscillations that clearly never settle down, as any definition of convergence would require...

I appreciate this admirable effort of yours to prove yourself wrong.... well done... and QED!!!

...
And here is another graph that is not mine that is claimed to support convergence when it ironically supports erratic oscillation... which is what is to be expected from a RANDOM process.

The deceptive zoomed-out scale might give the uninformed the impression that it is converging.... but all the discerning have to do is actually ZOOM IN on the DETAILS and see for themselves that it is a random process.
....

It is arrantly demonstrated that you are wrong by your very own words here in this post... and by your data here in this post and its graph zoomed in to remove the deceptive aspect of its scale... here in this post.... not to mention the data and graphs in this post.

...
See the graphs below of the full data for all the running averages for all the runs... and also the next graph, which shows only the last 70 runs, so as to zoom in the scale and make the fluctuations easier to see....

...

Note: Notice how the zoomed-out scale for the Crypto running averages is DECEPTIVE, appearing to be "converging", when in fact if one zooms in on the DETAILS one can observe all the FLUCTUATIONS and not any "converging" going on at all... nor any dampening of the oscillations either... that cannot happen, due to the inherent RANDOMNESS of the process.

Yes... but... regardless... it is still random whatever "approximately" means.
....

See the post below about Coin Flipper V5 and the nifty little Coin Flipper Game and the snazzy new graphing feature with zooming and panning and placing the mouse on a graph point to read its values.

And here are the empirical results of zooming in, as Foster suggested after the fact.

Here are some plots of data... generated in Coin Flipper V5 by the click of a mouse.... and the zoomed-in stuff also by a click and a drag and then another click to save the images.
  1. The first deceptively tricks the careless observer into thinking that it is indeed converging.
  2. The second shows that it is not... when one zooms in on the details... much as terrain seen from 20,000 feet in the air might look flat, but when one gets down to ground level one can see that it is not.
  3. And lest one say AHA... but more runs will... blah blah blah... the third plot nips that AHA right in the bud.

[Image: http://godisadeadbeatdad.com/CoinFlipperImages/NotConverging(1).png]

[Image: http://godisadeadbeatdad.com/CoinFlipperImages/NotConverging(2).png]

[Image: http://godisadeadbeatdad.com/CoinFlipperImages/NotConverging(3).png]
 
I could quibble over the use of the word, simulation, but ok....


Of course you would... and of course you would be arrantly wrong as is the running theme in all your posts in this thread.


It can be very wrong. For the case at hand, your program is very wrong on two counts.


You do not ever admit you are wrong... even when all the facts irrefragably rive your errors to smithereens... I can understand you not being able to admit your error... but is it that you cannot recognize the FACTS OF REALITY that rend asunder your baseless bare assertions and arrant errors... or do you recognize your errors but cannot bring yourself to admit them???


You went on in your post at great length trying to equate your application to Monte Carlo methods.


And your incessant strawmanning and misrepresentation of my posts is yet another arrantly obvious error that you will not relent from, let alone admit.... :eye-poppi: :eek:


Monte Carlo simulations are quite useful for estimating data in situations where the model being simulated is incomplete or needing calculations that are intractable.


Aah... so you are begrudgingly admitting your error now... but without in fact admitting it... :sdl:

Here, see how your implicit admission of your error above compares with your previous error...

...
  1. Computer simulations do not produce empirical data.
  2. Random number generation algorithms have their limitations, some severe.


But hey... at least you have implicitly admitted your error... albeit inadvertently.... QED!!!


Not this case at all.


More baseless bare assertions of arrant errors.... here have a read...

  • Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.



QED!!!



 
Of course you would... and of course you would be arrantly wrong as is the running theme in all your posts in this thread.

I am arrantly wrong for doing something I didn't even do? Your reasoning is a bit faulty, there. So which specific parts of the quibble I never offered would be wrong?

You do not ever admit you are wrong... even when all the facts irrefragably rive your errors to smithereens... I can understand you not being able to admit your error... but is it that you cannot recognize the FACTS OF REALITY that rend asunder your baseless bare assertions and arrant errors... or do you recognize your errors but cannot bring yourself to admit them???

Yet, you did not address either of the two issues I raised in my post (and previously supported with other references). You clipped them from your quote of my post so you could engage in a fact-free rant. I'd think you better than these kindergarten playground antics.

And your incessant strawmanning and misrepresentation of my posts is yet another arrantly obvious error that you will not relent from, let alone admit.... :eye-poppi: :eek:

Are you claiming you didn't bring up Monte Carlo methods as a valid technique? Are you claiming you didn't use it in support of your program's use of random simulations? Where be that strawman or false misrepresentation you claim to be so obvious?

Aah... so you are begrudgingly admitting your error now... but without in fact admitting it... :sdl:

Here, see how your implicit admission of your error above compares with your previous error...


But hey... at least you have implicitly admitted your error... albeit inadvertently.... QED!!!

Are you of the opinion the statements, "Monte Carlo simulations are quite useful for estimating data in situations where the model being simulated is incomplete or needing calculations that are intractable" and "Computer simulations do not produce empirical data" are contradictory? I suppose it could be just a difference in understandings of the phrases "estimated data" and "empirical data". Perhaps you could help clarify? Better to clarify than to talk past each other.

More baseless bare assertions of arrant errors.... here have a read...

You accuse others of not reading your posts. Projection is it? I do not believe I claimed your program wasn't an attempt at a Monte Carlo simulation of a series of coin tosses. What I did say was:
For the case at hand, your program is very wrong on two counts.
  1. You are pushing the random number generator beyond the limit of appropriateness. A sequence of about 16,000 numbers is all you can expect to be sufficiently pseudorandom to be useful.
  2. The simulation is totally unnecessary in that Statistics provides a complete solution without the nonsense of an incompetent computer program.

Care to address either or preferably both points without all the histrionics?


With respect to Monte Carlo simulations, note that I said "Monte Carlo simulations are quite useful for estimating data in situations where the model being simulated is incomplete or needing calculations that are intractable." Please take special note of the realm in which I consider Monte Carlo simulations useful and appropriate, then contrast that with your program, which is neither useful nor appropriate.
 
Foster Zygote gave me a great idea with his insightful post

It is such a great idea that I already anticipated it long before Foster even came into the thread and the "us" all of a sudden started agreeing with him, backpedaling on their previous "affirmative" disagreement.

For example....
I was talking about actual random events. You can't disprove probability and statistics with a toy app that uses a non-random, deterministic algorithm to generate numbers using a formula that will repeat, eventually, no matter how complex you make the algorithm.

Saying, "let's look at my pseudo random number generator again to see how random chance works" is like a non-doctor saying, "let's look at this crayon picture I drew of the patient's enlarged heart" and make a diagnosis.

https://m.youtube.com/watch?v=GtOt7EBNEwQ&pp=ygUecHNldWRvIHJhbmRvbSBudW1iZXIgZ2VuZXJhdG9y
 
<snip rehashed baseless concerns >


Foster... I suggest you read the thread... your REHASHED and repeated baseless concerns about PRNGs were riven to smithereens very early on in the thread.

In short... your concerns about PRNGs are antiquated and baseless and pointless.

Moreover... it is evident that you have YET AGAIN not read posts in reply to you directly let alone other posts in the thread.

Have you read this post... of course not, as evinced by the rehashing you are doing again.

Do you think Von Neumann and the Los Alamos physicists doing the Manhattan Project were "drawing with crayons" as you so fallaciously assert?


In the late 1940s, Stanislaw Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:

The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations.​

...Monte Carlo methods were central to the simulations required for the Manhattan Project, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948.[20] In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.

...

  • Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.

Monte Carlo and random numbers
The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. The Monte Carlo simulation is, in fact, random experimentations, in the case that, the results of these experiments are not well known. Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally.

Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
...
 
Nope. You claim to have addressed them but haven't adequately.

You took a random process and in attempting to reuse the data turned it into a deterministic, repeatable nonrandom process.

You seem to think that "pseudorandom" is synonymous with "random" and are using a completely misguided approach to try to answer a question that has been well understood since before the United States, in fact before George I was crowned, let alone George III.
 
...
In short... your concerns about PRNGs are antiquated and baseless and pointless.
...


Quite the contrary, and you continue to ignore the objections raised. You don't get to simply sweep aside any random facts that are raised just because you don't like them.

Your program grossly misuses pseudorandom number generators, and your program is totally unnecessary since real Statistics provides a precise solution that your faulty program attempts to estimate.
 

Not a bad video. I wonder how Leumas would misrepresent it were he to watch.

I do wish the video had gone into the details of number pairs from a pseudorandom number generator. A good example of what I mean might be to think of a good generator for values 1 through 6. Each of the six values is equally likely, so I could use it in a simulation of a die roll.

Suppose I wanted to simulate the roll of a pair of dice. I could use the generator twice in sequence to get the first then the second roll.

The problem is, of course, that whereas the possible values for the first roll are all equally likely, the second one is fixed by the first. There are 36 possible outcomes in "the real world", but the simulation can exhibit only 6 of them.

Of course, dice and coins are completely different, so this is all just arrantly strawmanning slander or something.
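
A minimal sketch of the degenerate case I mean (illustrative only; a real generator such as Math.random() carries far more internal state than its last output):

[code]
// Sketch of a degenerate "die" generator whose entire state is its last output,
// so each value completely determines the next one. Illustrative only.
const successor = { 1: 4, 4: 3, 3: 6, 6: 2, 2: 5, 5: 1 }; // one fixed cycle over 1..6

// Over a full cycle each of the six values appears once, so single rolls look
// uniform. But for a pair of rolls the first roll fixes the second, so only
// 6 of the 36 real-world (first, second) combinations can ever occur.
const pairs = new Set();
for (let first = 1; first <= 6; first++) {
  pairs.add(`${first},${successor[first]}`);
}
console.log([...pairs]);  // ["1,4", "2,5", "3,6", "4,3", "5,1", "6,2"]
console.log(pairs.size);  // 6, not 36
[/code]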
 


Did you watch the video you cited? Had you read the thread, you would have noticed these posts, which say the same as the video you cited.

...
Random processes are said to be nondeterministic,
since they are impossible to determine in advance.
Machines, on the other hand, are deterministic.
Their operation is predictable and repeatable.


"Pseudo-random" in computer random number generators means that an algorithm generated the sequence.... and the set of data is random in its arrangement.... but because it is algorithmic, the algorithm will always generate the same set of random numbers if the same starting point (i.e., a seed) is used.

And therein lies the BEAUTY of pseudo-random number generators.

The fact that you can have a set of random numbers that can be repeated is vital for testing and experimenting.
But... if the SEED... i.e., the starting-off point for the algorithm... is changed, then a totally different set of numbers is generated that is RANDOMLY different from the previous set.
Which is yet another very useful quality of pseudo-random number generators.... in that you can now have two sets of random numbers, randomly different from each other, that can still be repeated because you know the starting-off SEED and, of course, the algorithm.

However... if you do not know the SEED, and you use a different seed every time that you have no way of knowing or determining, then the total set of random numbers is not pseudo-random anymore, despite it still being algorithmic.
Of course all this is because we are using a computer and we want to SIMULATE things like REAL LIFE randomness.

Moreover... if the SEED is random then the whole process is random....

So despite the limitation of using a computer, you can still have a TRUE random number generator by having a NATURAL SOURCE of randomness from REALITY... e.g., TIME, the easiest way to do it on a computer... but fancier equipment can use radio-signal noise, or even the noise in the computer's own circuitry, or, for really fancy stuff, background microwave noise or radiation sources, etc.
For the system I am using, the SEED is randomized using microsecond ticks of the computer's clock, and thus every time you run the algorithm it is random... unless you can repeat the exact same microsecond of time and can predict when the keystroke happened that started the process off.
The other way to do it... is to actually get a coin and toss it 10,000,000 times, recording the results, and then repeat that, say, 150 times... how long do you think that would take?

However... this is a simulation of reality... reality is still random... fully random and the computer simulation gives us an extremely good simulation of this... without having to spend the rest of one's life doing the experiment.
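
Here is a minimal sketch of exactly that seeding behaviour (illustrative only, not the actual Coin Flipper source; mulberry32 is just a small seedable generator used here to make the seed visible, since Math.random() exposes no seed): the same seed reproduces the same sequence, while a seed taken from the clock gives a sequence you could not have predicted in advance.

[code]
// Illustrative sketch only, not the Coin Flipper source. mulberry32 is a small
// seedable PRNG used here just to make the seeding visible.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same seed -> same "random" sequence: repeatable, which is vital for testing.
const a = mulberry32(12345);
const b = mulberry32(12345);
console.log(a(), a(), a());
console.log(b(), b(), b()); // identical to the line above

// Seed taken from the clock -> a sequence you could not have predicted in advance.
const c = mulberry32(Date.now() >>> 0);
console.log(c(), c(), c());
[/code]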

As to your other question, if you know the seed values, you can repeat the calculation. That is not what happens with a genuine random event.


And when you do not... then you cannot... which is what happens in genuine random events.... no?

So when you use a random seed... it is random.

That is not random. That is with hidden variables..


Nope... it is with RANDOM variables, just like in nature.

If the seed is an instant in time that you could not determine... because it all depends on the instant of a keystroke, or the instant when the algorithm is run, down to the millisecond, which you have no way to predict because the OS is multitasking and could have queued the process or not, and it could have been third or first, etc. etc.
Or the seed could be noise in the machine or in the atmosphere etc. etc.

You know... random... as in unless you are god you cannot possibly predict or determine.
...

....
The point I was raising was the usability of the pseudorandom number generator for a sequence of one billion or more values. Yes, it will provide that many values, but at what point do the values no longer have the required properties?


The PRNG is not asked to provide 10^9 values.

It is asked to provide ONE value for each flip to decide the flip result.

Each flip is independent of the previous or any other flip.

Doing 10^7 flips is just like doing one flip 10^7 times.

And doing 100×10^7 is just like doing one flip at a time for 10^9 times.

The PRNG is not asked to produce more than one random number at a time each time there is a flip.

And each time the flip is done a new seed is used so a new sequence is generated from which one result is used for the flip.

So for 10 flips, the PRNG is asked 10 times, each time anew, to generate a new random number with a new seed.

Doing that 1000 times or 10^9 is the same.

In your QBasic it is like doing a RANDOMIZE TIMER before each time you use the Rnd function.

...reseeding itself is reseeding itself... get that?

And when reseeding happens the next sequence is different from the one before and is random.

If the first seed is an instant in time, then unless you are OMNIPRESENT and OMNISCIENT you cannot figure out the first seed, and thus not any of the other seeds either, and thus the numbers are totally random.
And what you said is that you doubt it can generate 10^9 random numbers.... can you now see that it can???

And why are you still harping on and on about this red herring... I thought you already chucked it back into the ocean????
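
And for the avoidance of doubt, here is a minimal sketch of that reseed-before-every-flip scheme (illustrative only, not the actual Coin Flipper source; Math.random() itself cannot be reseeded, so a tiny explicitly seedable generator stands in for it, and performance.now() stands in for the microsecond clock ticks):

[code]
// Sketch of the "RANDOMIZE TIMER before each Rnd" scheme described above.
// Illustrative only: a tiny LCG stands in for a seedable generator.
function lcgStep(seed) {
  // One step of a 32-bit linear congruential generator; returns a value in [0, 1).
  const next = (Math.imul(seed, 1664525) + 1013904223) >>> 0;
  return next / 4294967296;
}

function flipOnce() {
  // Re-seed from a high-resolution timestamp before every single flip.
  // (Caveat: in a tight loop consecutive timestamps can repeat, so this is
  // only an illustration of the idea, not a claim about the actual app.)
  const seed = Math.floor(performance.now() * 1000) >>> 0; // microsecond-ish ticks
  return lcgStep(seed) < 0.5 ? 'H' : 'T';
}

let heads = 0;
const flips = 1000;
for (let i = 0; i < flips; i++) if (flipOnce() === 'H') heads++;
console.log(`${heads} heads out of ${flips} flips`);
[/code]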
 
Not a bad video. I wonder how Leumas would misrepresent it were he to watch.


I represented it in my posts, long before, throughout the thread... any misrepresentation is what YOU are doing and will continue to do and will never stop doing.
 
<snip incessantly repeated and rehashed errors already debunked numerous times>

<snip incessantly repeated and rehashed errors already debunked numerous times>


Repeating your errors over and over will never make them converge to truths... watch the video Foster posted or read the transcript I posted below it and learn something new and stop relentlessly repeating your errors over and over.

The video rives your claims to smithereens and ratifies all of mine.... QED!!!


ETA: and then there is this... yet again... QED!!!
Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.




 

Repeating incorrect things you posted earlier does not make them correct now.

For example, we have this golden oldie bit of confusion:

....
The point I was raising was the usability of the pseudorandom number generator for a sequence of one billion or more values. Yes, it will provide that many values, but at what point do the values no longer have the required properties?

The PRNG is not asked to provide 10^9 values.

It is asked to provide ONE value for each flip to decide the flip result.

Each flip is independent of the previous or any other flip.

Doing 10^7 flips is just like doing one flip 10^7 times.

And doing 100×10^7 is just like doing one flip at a time for 10^9 times.

The PRNG is not asked to produce more than one random number at a time each time there is a flip....
Wait for it...wait for it...
And each time the flip is done a new seed is used so a new sequence is generated from which one result is used for the flip.

Great bit of fiction, that. JavaScript's random number generator algorithms do not do that. (As it works out, too, if they did they'd be substandard in their randomness characteristics.) But, no, the seed is initialized when the JavaScript environment is initialized. That is once. Thereafter, the generator returns the next pseudorandom number in sequence according to its algorithm. There is no restarting at some random spot.
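
To illustrate what "seeded once at initialization, then stepped on every call" means, here is a toy sketch (illustrative only; the constants below are not any real engine's algorithm):

[code]
// Toy model of a generator that is seeded exactly once and then stepped on
// every call. Illustrative only; no real JavaScript engine uses these constants.
let state = Date.now() >>> 0; // seeded once, at initialization

function toyRandom() {
  // Each call advances the same internal state and returns the next value
  // in one fixed sequence. There is no per-call reseeding.
  state = (Math.imul(state, 1664525) + 1013904223) >>> 0;
  return state / 4294967296;
}

console.log(toyRandom(), toyRandom(), toyRandom()); // three successive values of one sequence
[/code]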


Leumas, your misunderstandings of the actual workings of your program are breathtaking.
 
