My Overview of the First GSIC Test Protocol
The GSIC TEST - REDUCTIONISM, LOGICAL ERROR, AND PSEUDOSCIENCE: How do we KNOW something real about a complex physics-oriented phenomenon?
A Criticism of the Initial "Wellfed" Test Claim and Protocol.
Preliminary Remarks.
This is my first post to the new GSIC forum. Many of you who read the older AUDIO CRITIC forum encountered me and my posts there. Some may not have seen the extended bio I gave in one of the earlier ones. In that forum, I asked whether others would be willing to provide a statement about their general training, experience, interests, or qualifications: background that would help others see what kind of perspective they brought to the discussion. As far as I remember, no one did so. I encountered a hint here and there; but otherwise one could only infer a member's background from language, "tone", and focus on specifics.
I have difficulty working *entirely* in a vacuum, though I am aware that others here often do not. Repeatedly, I was told that the investigation was "simple". It seemed to me that it was, actually, complex and multi-dimensional. My bio, below, will help you understand the experience that leads me to hold an informed opinion.
Concise bio of "PianoTeacher"
I am a male, almost 60, who teaches youngsters how to play the piano. I operate a home business doing this with my wife, a trained concert pianist and pedagogue. Before I retired and focused on my home business, I had a 30-year career related to audio. I have also been a computer programmer; a product developer and tester for an optical company making astronomical devices; and an electronic and audio engineer and sound recordist. I started my career as a classical music radio announcer in the early '60s, benefiting from my training in playing musical instruments and in learning repertoire. In college, I studied philosophy, music, biology and science, but eventually majored in communications. I bring to the table an interest in high fidelity, and a background as a test developer who has helped shape commercial products (including a series of industry-standard audio processors).
I am an avid audiophile, and own nearly 10,000 classical CDs (and during my years in classical radio, ultimately 13,000 LPs.) I have also operated my own recording studio that evolved from 1975 to 1991. In addition to my professional work testing electronic and optical products (sometimes with very highly controlled tests), I have also participated in countless tests of audio technology and advanced optical products for astronomy, as an amateur enthusiast, though not since the late 1980's, when my interest began to be directed back to music.
I first heard about the alleged GSIC-effect by reading James Randi's commentary; and as an ardent Skeptical Rationalist myself, was attracted to the subject, which seemed to offer a fruitful field for trying to delve below the advertising hype and enthusiastic, vernacular "blather", to search for any potentially concrete facts.
-----
Below, I discuss some premises, explain an example of my own "bias correction", examine some speculative protocols and tests, and then discuss the modern paradigm that influences many scientific observers to be drawn to the "elegant concept of resolving a binary dichotomy".
What "Digital Audio Recording" Does
There is no such thing, at the level of a fully semantically-reduced declarative statement, as a process that digitally "records audio".
"Audio", simplified to sound pressures whose vibrations are in the frequency range of animals and humans, can only be accurately recorded by "a theoretically perfect analogue process" that can preserve, and replicate, the continuum of pressure variations (if we remain in the purely classical domain of physics.) Unfortunately no such perfect analogue recorder exists. All have, among other flaws: noise, distortion, speed variations, and spectral response and amplitude-range limits.
Devices that are called "digital audio recorders" are actually using an algorithm to create a MAP, whose dimensional coordinates save a simplified, quantized series of discrete values of data, ultimately reduced to either a 1 or a 0 in the smallest possible temporal subdivision in assembling a digital word. The map is then saved in a "digital carrier system"; and when digital audio playback occurs, the map coordinates are converted by a process that produces a continuously-varying voltage that may be used to energize a transducer that produces air pressure variations.
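To make the "map" idea concrete, here is a minimal sketch in Python of sampling and quantization. It illustrates only the principle, not any particular recorder's algorithm; the 44.1 kHz rate and 16-bit depth are simply the familiar CD figures, and the sine tone is a stand-in for a real pressure signal.

    import numpy as np

    SAMPLE_RATE = 44100      # samples per second (the CD figure)
    BIT_DEPTH = 16           # bits per sample (the CD figure)

    def record_map(duration_s=0.01, freq_hz=440.0):
        """Turn an idealized continuous tone into a discrete 'map' of quantized integers."""
        # 1. Sample: pick discrete instants along the continuum of time.
        t = np.arange(0, duration_s, 1.0 / SAMPLE_RATE)
        pressure = np.sin(2 * np.pi * freq_hz * t)        # stand-in for the continuous signal

        # 2. Quantize: force each sample onto one of 2**16 discrete levels.
        max_level = 2 ** (BIT_DEPTH - 1) - 1               # 32767 for 16 bits
        return np.round(pressure * max_level).astype(np.int16)   # the map, not the sound

    def play_map(samples):
        """Conceptually what a DAC does: convert the map back toward a continuous value."""
        return samples.astype(np.float64) / (2 ** (BIT_DEPTH - 1) - 1)

Everything audible has to survive the trip through that integer map; the art lies in making the map fine enough that human hearing cannot tell the difference.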
The complexity of the map has been adjusted, by its designers, according to two basic principles: (a) the scientific knowledge derived from the study of psychoacoustics, which has statistically quantified markers that represent useful parameters of a large population of individuals who were tested; and (b) psychological testing, which has provided data about "what people prefer" and how close the map's simulation of audio seems to be to the normal perception of NON-RECORDED audio.
We know now that the first promoters of the compact disk were actually pseudoscience propagandists. Many insisted that the medium was PERFECT. The assumption was that because the problematical variables of noise, distortion, pitch fluctuation, and response had been modestly improved over the performance of the best analogue recorders (irrespective of any new artifacts introduced), audio recording science had progressed from the state of being "messy" to the state of being "under perfect control".
(Actually, the original developers of digital recording knew from the start that "the map was too simple". It was, at the time, a best-fit, constrained by available technology, including the practicalities of affordable and available memory chips.)
This pseudoscientific rationalizing by the first commercial CD-exploiters and promoters was almost immediately exposed BY HIGH END AUDIOPHILES. Their perceptions of both (a) new artifacts, and (b) degradations of some parameters that were handled more effectively by existing best-case analogue recording, were important contributions to science. Their observations were put to the test and ultimately vindicated; and what was learned by the public, and by audio corporations, was that THE MAP WAS TOO SIMPLE. Furthermore, it was discovered that the mapping was compromised by design flaws in the first practical commercial digital algorithms, DACs, and filters.
High end audiophiles with acute sensitivities were REALLY perceiving flaws, variables, and nuances that hard-nosed but limited audiometricians had denied. More advanced testing procedures confirmed the sensible observations of rational listeners; and the MAP WAS IMPROVED and adopted when technology and economics permitted it.
For instance: the standard home compact disk medium, nearly unchanged since 1982, still uses THE ORIGINAL MAP that had been refined for the 16-bit, 44.1 kHz sampling rate system, the industry standard at the introduction of the compact disk. But original music masters are recorded by a process that CREATES A BETTER MAP: using higher sampling rates, and digital quantizing to a resolution of as many as 24 bits, conventionally. Thus, most modern CDs are mastered at (say) 96/24, but have to be downconverted and simplified to 16/44. This downconversion is a highly complex process *developed by following processes of scientific psychoacoustical testing*. The conversion is generally not linear, and factors in many variations in human hearing cognition, in order to create a newly-refined, simplified 16/44 map that tends to resemble the 96/24 map. New CD player designs also use refined circuits, especially advanced DACs and sophisticated filters (which render the discrete quantized variations in voltage amplitude into smoothed analogue variations.)
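A bare-bones sketch of the downconversion idea follows. This is my own simplification, not any mastering house's actual chain: real converters use proper resampling filters and psychoacoustically noise-shaped dither, and 96-to-44.1 kHz requires fractional resampling, which I sidestep here by going to 48 kHz instead.

    import numpy as np

    def downconvert_naive(samples_24bit_96k, out_bits=16):
        """Crude 96/24 -> 48/16 reduction: decimate, dither, requantize."""
        # 1. Decimate by 2 (96 kHz -> 48 kHz). A real converter low-pass filters
        #    first to prevent aliasing; that step is omitted for brevity.
        reduced = samples_24bit_96k[::2].astype(np.float64)

        # 2. Rescale 24-bit values to the 16-bit range.
        reduced *= 2.0 ** (out_bits - 24)

        # 3. Add triangular (TPDF) dither of roughly +/- 1 LSB before rounding, so the
        #    requantization error behaves like benign noise rather than correlated distortion.
        dither = np.random.random(reduced.shape) - np.random.random(reduced.shape)
        return np.clip(np.round(reduced + dither), -32768, 32767).astype(np.int16)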
At each step in the process, HUMAN HEARING COGNITION, tested by science, helped refine the design of both analogue and digital circuitry.
I might add that only in rare cases could the testing be reduced to mere binary dichotomies. Where the "digital recording process" is TRULY binary is where a 1 or a 0 is chosen to represent a single bit within a more complex digital word. We have, in effect, two funnels with large mouths and tiny terminations: sound goes into the large mouth of one; the small end passes only 1's and 0's, transmitted by a carrier into the small end of another funnel; from its large end emerge air pressure vibrations in our environment that *seem* to convince us that we are hearing an exceptionally close facsimile of "real sounds".
We can't use this construct, above, to record analogue audio; we must make significant changes and simplifications, returning from the complexly-quantized and highly controlled digital process, to the simpler analogue one; in fact, to a NON-ELECTRICAL, purely mechanical, one.
As we all know, Edison invented that process (which symbolically resembles my "two funnels" visual analogy), except that instead of passing 1's and 0's through the small ends of the funnels, he passed restricted continuous vibrations that were only crudely derived from soundwaves, and saved in a HIGHLY LOSSY carrier with huge quantities of noise and distortion, barely able to record the spectrum of intelligible speech. In the Edison process -- which remained fundamentally unchanged from 1877 to about 1925/6 -- the improvements were only marginal and gradual: glacially slow to occur. By 1924, sound recording was STILL effected only by means of air vibrations and "mechanical wiggles". After 1925, we introduced electrical/analogue transducers, which improved various aspects of the system. It remained fundamentally fixed (except for lab experiments) until the introduction of commercial musical digital recording as early as 1972, when the MAP CONSTRUCT parameters were formalized in a rudimentary, but "satisfactory to some listeners", system.
Every advance in digital recording from 1972 to today arose both from advances in digital theory, and from the pressure of the audiophile community for BETTER MAPS. They vote with their dollars (yen, pounds, marks...) and that economic stimulus moves an industry to provide us with better compact disks, and better players.
Yet, there are still limits, based on the first practical 16/44 system used for the compact disk carrier: including the minute non-repeatability that is a consequence of the error correction system (discussed in one of my last posts to the AUDIO CRITIC forum.)
Finally: at the heart of the success of digital audio recording is the fundamental fact that an inaccurate symbolic map, full of identifiable flaws, works in concert with the limits of human aural cognition to give us all, wonderfully, the delightful enjoyment of seeming to experience music. (This also suggests a speculation that the map, as it stands now, would actually FAIL to replicate music for a listener with a much higher degree of evolutionary advancement and neuroperception!)
The Contributions of the Audiophile Community
I want no "JREF reductionist" here to have the *misapprehension* that "there is no useful utility in the aesthetic judgments of high end audiophiles". I'm not claiming that JREF participants do; but there are indeed comments heard all the time, and posted everywhere on the Net, from those who are quite comfortable with the technology they own, who do make such grouchy statements. Had they been heeded, we would have no improvements over the "glassy, hard, edgy sound" that analogue audiophiles correctly described, back in 1982-3.
Long before digital recording had been developed, a social norm existed in which audio developers gave value to the empirical experiences and opinions of audiophiles. This has been refined and expanded, and today we see one of the most marvelous results of that relationship: superb modern digital sound mastering in high resolution, demonstrably superior to the first digital tapes from Denon in 1972. On the other hand, it has been shown that chicanery, charlatanism, exaggeration, propaganda, and intellectual sloppiness exist to some extent in the discussion and promotion of audio products (as in all other walks of life.)
To summarize: (a) digital recording/playback is a process of matching SYMBOLIC MAPS with expectations of non-deterministic human cognition; and (b) acute critics and judges of audio performance have made a scientific contribution to the development of audio, even through social pressure, discussion, and untested criticism.
Critique of Dogmatic Reductionism
The Contention between PianoTeacher and "Diogenes", "Grw8ight", et al.
The first, and thereafter consistent, criticisms of my contributions to the Audio Critic forum -- aside from the objections about their length -- focused on what many if not most other forum members viewed as my presenting irrelevancies. "This is simple!" was the repeated cry and assertion.
Finally, having covered all the background I wished others to consider -- what psychoacoustical testing is, and what it can do; the known variables in human neuroperception (including the very *real* incidences of acoustical hypertrophism in some individuals); the "real" nature of nuances that are important to the aesthetic judgment and appreciation of persons interested in art; and the highly complex and systematic processes of real analogue and digital audio technologies -- I returned at last to consider what IS, and is NOT, "simple".
I was finally prodded by "Diogenes" on the audio critic forum to re-examine my own perspective. Assuming that he was exactly right, "it was simple", and that I was precisely wrong -- and knowing that my own training was biasing me and preventing me from looking at the issue from the perspective of NON-musicians, NON-engineers, and only strict logical reductionists -- I tried a new technique, which I discussed in my last post to the Audio Critic forum ("I Correct Myself"). I had to create new scenarios of what I believed to be "fully reduced" and ultra-simple dichotomous issues that could be resolved in JREF-type testing, and THEN work backwards and adjust parameters until they fit the conditions of the first GSIC-Effect Claim.
I think I acquired a greater understanding of why almost everyone else was telling me it was just a very "simple" matter, even though I was uncomfortable with that concept; and I came to see that -- ultimately, reduced to a dichotomy -- "Diogenes" and others were correct: my own mind was still too concerned with the engineering and neurophysical aspects of any alleged effect. (In my defense: I did not see much comprehension of the variables that I thought relevant to the crafting of the test protocol.)
Now that I have been able to see this, I still perceive TWO, parallel, situations, not exactly ONE fully integrated and "simple" one.
A. There is indeed a "Diogeneseque" simplicity: a dichotomy that the JREF Challenge may help to resolve. It is so utterly self-evident that I had really considered it part of the fundamentals of what we were all doing here; the process had been refined; and I did not care to focus on it. Surely, as I understood at the very outset: either somebody actually *hears* GSIC-effect (inferring that it EXISTS), or *does not hear* it (not necessarily inferring that it does NOT EXIST.) That is the fully reduced dichotomy; I felt that I was stipulating this. But I was going beyond it: to examine the protocol, and HOW the consequences of any such alleged effect would be evaluated in order to resolve the claim.
What my examination of Diogenes' critique helped me to appreciate, was that the JREF Challenge test process *could indeed* operate independently of "audio", "hearing", and any one person's perceptions and judgments. I had already realized this; felt it was stipulated; but again: was focusing on theoretical possibilities of the "effect" (precisely how to falsify it.)
B. I do understand now why Diogenes, Grw8ight, and others were so impatient with me. But by trying to sweep away entirely the myriad details related to any such alleged device, and to the human ability to perceive it, how can one craft a useful protocol for a meaningful test?
So we have TWO parallel investigations: (I) whether we can create a neutral context in which a fully reduced dichotomy is tested without possibilities of error, bias, or chicanery (stipulated); and (II) what meaningful, related process may be developed in the protocol that will enable investigation (I) to reach a meaningful resolution.
While I don't believe I had failed to stipulate the need for (I) above, some others feel that I wasn't doing so. By re-examining per Diogenes, I hope I've cleared up with all readers what my comprehension of JREF principles happens to be. (And for all I know, some may still be unsatisfied.)
A Meaningful Protocol
I don't think that "ultimate reductionism" is easily achieved in this particular investigation, unless we start first at the complex, and then move to the simple, with intelligence and efficiency.
We should not jump from "complicated nuance perception claims" *in one step* to "simple dichotomy". The reason this is a fallacy is that we must use a meaningful protocol to enable the reduced dichotomy to be resolved.
My critics, who wanted to read nothing at all about audio and electronics, were precisely correct that the ultimate dichotomy was the final issue.
But what attracted me to the investigation was NOT the ultimate dichotomy, but the protocol. Creating THAT process is exactly what I have done professionally.
Let me work below from "a not meaningful protocol" to "a meaningful one" in a few discrete jumps. And, Rational Logicians: remember, not all of this information is meant for YOU. I stipulate that you already know about this. I want to relate it for potential claimants.
-------
A claimant named Unfed thinks he hears a sonic effect that he alleges to have been created by a "black box" based on no known technology. Unfed is neither engineer nor technologist so he's never examined the possibilities from that perspective. He only concerns himself with what he believes he hears, and even admits that it is pretty hard to detect under certain conditions. Sometimes it seems more concrete than at other times; but he's SURE he can actually detect it every time.
Stupid Protocol/Pointless Test
Unfed is shown a color chart while the allegedly treated CDs are played. He is required to tell us what colors are related to his impressions of the state of the CDs he is hearing. Are colors in the red-yellow-orange range of the spectrum related to "treated", or is it the spectral range of "green-blue"? The CDs are alternated, and the colors are tallied. The testers analyse the statistics. And a final arbiter decides whether Unfed was right: "since we all know that 'untreated CDs make you see red, not blue', he would have related reds to the original, untreated CDs, and bluish hues to the treated ones." He didn't; ergo he failed the test.
This protocol (and test) is so preposterous that many of you will impatiently insist that I have wasted your time by describing it. But, this has close parallels with the "witch tests" of the medieval era. Many innocent people were tortured, or even put to death, after having been judged by similarly nonsensical processes (and nobody seemed to realize how awful the situation was! Yet, long-dead Aristotle himself could have shown them the errors of their ways.)
We now move a few points along the continuum toward reifiability...
Stupid Protocol/Partially Meaningful Test
After much painful negotiation, Unfed and the Test Coordinators agree that he will, in double-blind fashion, try to identify "treated" and "untreated" CDs by their sound. Carefully, one CD from each matched pair is treated, and both are marked. The markings are completely obscured. Then, carefully and out of Unfed's sight, the CDs are played, pair by pair. Unfed takes his time and has the ability to control the CD player in order to pause, rewind, and repeat passages; he may also control the volume during every play. He listens for a while to ONE of the pair; out of his sight the CD is changed; he listens again for a while to the OTHER of the pair; and finally he decides and declares the state of each CD in the pair. The claims are tallied against the marks, and if he registers no false positives in all ten pairs, he "wins", and gets a million dollars.
There are many things wrong with the protocol: virtually the ENTIRE aspect of the way Unfed listens to the CDs. But to know that, we would have to have a background in how a person can listen to a CD: how a person can make judgments, and a reasonable means for himself to be able to do so. If we bias the test so that NO person could make a judgment, the test is moot.
The only part that is actually valid is SOME of the end: relating his hits to the CDs marked to indicate their "state".
But the requirement that he have no false positives (the dichotomy) is reached by faulty reduction.
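To put rough numbers on that objection (a minimal sketch; the ten-pair count comes from the example above, while the 80% figure for a "real but imperfect" listener is purely my own illustrative assumption):

    def p_all_correct(p_per_trial, n_trials=10):
        """Probability of getting every one of n independent trials right."""
        return p_per_trial ** n_trials

    print(p_all_correct(0.5))   # pure guesser: ~0.001 chance of passing
    print(p_all_correct(0.8))   # listener who really hears it 80% of the time: ~0.107

In other words, the no-false-positives bar almost never rewards a guesser, but it also fails a genuinely (if imperfectly) perceptive listener nearly nine times out of ten.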
We have oversimplified the test. And at the same time, we have overcomplicated it (as it is known by prior scientific study that human capabilities for cognition are limited by uncertainties that MUST be controlled for by practical means.)
The systematic errors made by the persons who have crafted the protocol include (i) tendencies to create complexity without providing properly limited controls; and (ii) a tendency to reduce to an ultimate logical dichotomy inappropriately, leading only to a foregone conclusion. One could assert that a lack of scientific perspective regarding human cognition factors, ignorance of proper phenomenon-related testing, and a bias demanding premature logical reduction have influenced a bad, unworkable design. Both rationalists and subjectivists have erred.
I would describe the test above as bearing evidence of *pseudoscientific methods* and *illogical reductions*.
We now move one further step down the continuum to a test that MAY be more conclusive...
Proposed Meaningful Protocol Leading to Meaningful Test -- and why it is not likely to be used by JREF!
Unfed has decided to use -- rather than a loose, sloppy method of listening to CDs and varying the volume level -- a scientifically-verified process to falsify his own beliefs, since he is open minded enough to allow for the possibility that he might err. Unfed subjects himself to a type of double blind testing that has been shown by scientific processes to yield results and resolutions of SIMILAR kinds of perceptual claims, and to be able to falsify them. (At the moment, based on my current understanding, I -- PianoTeacher -- would suggest that DB testing using the ABX methodology may be the best known procedure.)
All participants in the test agree to respect the data existing for scientific neural test criteria related to average statistical weighting of human cognitive uncertainties. No attempt is made to insist on absolutely no false positives. The administrators, who hold out a valuable prize, have to agree in advance where the cutoff point for false positives lies: above it, a preponderance of evidence indicates that the phenomenon has been detected; below it, the likelihood is that too many errors indicate randomness. A judgment is made about how to interpret the hits that fall into the range between "GO" and "NO GO".
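One conventional way to choose such a cutoff is a simple binomial criterion (a minimal sketch; the 16-trial count and the 5% significance level are my own illustrative assumptions, not anything negotiated in the actual claim):

    from math import comb

    def chance_of_at_least(k, n, p=0.5):
        """Probability of k or more hits in n trials if the listener is only guessing."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def cutoff(n, alpha=0.05):
        """Smallest hit count that a pure guesser reaches less often than alpha."""
        for k in range(n + 1):
            if chance_of_at_least(k, n) < alpha:
                return k

    print(cutoff(16))                    # 12 hits out of 16
    print(chance_of_at_least(12, 16))    # ~0.038, i.e. under the 5% threshold

The point is not the particular numbers, but that the pass/fail line is set with reference to known human fallibility, rather than demanding perfection.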
As you can see, the JREF test, a dichotomous one that has been fully reduced, is NOT the "meaningful test" I have described above. It is not likely that the reductionist logicians of JREF will allow a non-dichotomous though "scientifically realistic and practical" test. The reason for this is that THE JREF DICHOTOMY DOES NOT TAKE INTO CONSIDERATION THE SCIENTIFIC FACTORS FOR HUMAN COGNITIVE UNCERTAINTY.
The test described immediately above is actually considered meaningful by: (a) audio designers and analysts; (b) neurophysiologists, psychoacousticians, and neuropsychologists; (c) sociologists.
--------
We have moved along the continuum from Stupid Protocol/Pointless Test, to Meaningful Protocol/Meaningful Test, but we have NOT YET ARRIVED at Meaningful Protocol/JREF Test!
My comprehension of the next step to take is lacking here (I'm reminded of the cartoon in SciAm: a chalkboard full of complex equations culminates in an arrow, leading to the statement: "At this point a miracle occurs!")
I do know and understand how to do a dichotomous test, but not one that has a meaningful relation to a human cognitive judgment related to hearing the complex totality of musical sounds; at least not sounds that have been selected by the claimant BEFORE the test protocol is constructed!
Beleth made a powerful analytical contribution. She proposed a "lossy copying scheme" to prepare the second CD in any given pair; and her scheme, as suggested, was SO lossy that nobody could miss it.
With a matched pair of an original CD and a "Beleth lossy CD", having differences that all "hearing-equipped" persons can differentiate, the original plan of Unfed -- to require no false positives in ten successive tests -- has a REASONABLE likelihood of being achieved. But of course JREF will not bet a million dollars on that; it is not a falsification of a paranormal claim, because Beleth's lossy copying process is REAL, related to known technology, and perceptible by all; while the GSIC device is "unknown", has "secret" technology (vaguely suggested but not backed up by specific engineering documents), and no trained engineer can infer whether it is likely to work! Furthermore, there is no data to support GSIC-effect to a scientific certainty or even a rough likelihood; all we have is something generally described as "reviews" to document CLAIMS about it.
The Beleth-lossy analogy is not pertinent to the GSIC effect, but we can ADJUST it until it is indeed more pertinent. The lossiness can be controlled until the difference is "subtle", but that is also a matter of judgment. We run into Platonic absolutes. And as I showed in one of my last posts to the Audio Critic forum, there is an actual "classical physical uncertainty principle" in existence, with respect to the playback of ANY audio CD recording, due to the error correction algorithm and its unique solution for each playback experiment. This is in conflict with the Platonic absolute of the ultimately reduced "difference" whose subtlety is not a matter of judgment.
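For what it's worth, here is a minimal sketch of one way an adjustable degree of "lossiness" could be dialed in (Beleth did not specify her scheme at this level of detail; simple bit-depth reduction is only my stand-in for whatever degradation is actually chosen):

    import numpy as np

    def make_lossy_copy(samples, keep_bits):
        """Degrade 16-bit samples by discarding the lowest (16 - keep_bits) bits.

        keep_bits=4 is grossly audible to anyone; keep_bits=15 approaches "subtle".
        """
        if not 1 <= keep_bits <= 16:
            raise ValueError("keep_bits must be between 1 and 16")
        step = 2 ** (16 - keep_bits)                 # the coarser quantization step
        return ((samples.astype(np.int32) // step) * step).astype(np.int16)

The judgment of where along that dial a difference stops being "obvious" and becomes "subtle" is precisely the sort of judgment the fully reduced dichotomy pretends to do without.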
At this point, gentle readers, my mind starts to splinter. If you wish to look at my deconstruction of absolutes from complexities, see the last few posts I made to the Audio Critic forum. I don't think it is worth my time, in THIS essay, to create a logical table that goes one step at a time from the "Practical Test, allowed for by scientific human cognitive investigators" to the "JREF Test".
*******
My hypothesis, in a nutshell, is this: there is NO SIMPLE WAY to do that. We cannot make ONE LEAP from the complex situation I proposed in my third test example, to the dichotomous Randi test. Indeed, I foresee that the leaps may involve the allowance for complex processes (that muddy the waters of desired simplicity) after making many logical iterations and corrections to the protocol.
*******
Once again, per what I have called "The Diogenes Paradox", old PianoTeacher may be missing something! It is possible that I lack the ability to see what the steps might be.
The problem is: we must factor in an unbiased control process to account for human variability; and we must not set the bar too high for ANY HUMAN BEING ON THE PLANET; nor should it be lower than necessary to detect the alleged GSIC effect. If we fail at any of these, then indeed *the test is moot.*
We could indeed be looking at "A GSIC/JREF PARADOX": a claim that is (at least at present, and in the JREF context) untestable by neutral, dichotomous, and unweighted processes.
Conclusion - The Limited Utility of the JREF Challenge: a warning for applicants, and skeptics.
Modern investigators are drawn 'magnetically' to the inexorable logic of the dichotomous resolution.
Yet, for some thousands of years in human history, this known tool of logic was not considered to have any significant practical weight by workers in many fields.
The "resolution of two states" is a concept that we may trace back to antiquity. It was a favorite logical tool of Aristotelians. It was helpful in a world without instrumentation and Baconian science, where pure thought was the only way to come to grips with mysteries. Indeed, syllogistic reasoning is what ubiquitously survived from ancient Greek culture, while only recently have we possibly discovered an actual scientific tool that they may have used for calculations: arguably a crude analogue computing machine.
But practical engineering, developed more effectively by the Latins than the Greeks, used a different process: an evaluation of the continuum, NOT merely the resolution of two states, or contemplation of syllogisms.
This has been refined over the ages. The fluxions of Newton gave us the calculus. Powerful mathematics crafted the science of statistical analysis. A sort of golden age of rationalism using these techniques to investigate matter and energy managed to coexist with paranormal belief: indeed, many scientists of the late nineteenth century were ardent fundamentalist religious believers who had the conviction that "the tools of science were the gifts of God to reveal His work". Furthermore, actual physical scientists believed, until nearly the end of the 19th century, that a sort of PARANORMAL FIELD existed: ether. It was the only logical inference that could explain observed phenomena whose interactions and causes were otherwise invisible to existing instruments.
The first shudders that disturbed this comfortable smugness ("we finally have the tools to enable us to KNOW about things, and to make accurate PREDICTIONS about physical forces") arose when the ether postulate was falsified by Michelson and Morley; then Einstein and Dirac and their predecessors demonstrated that space and time had unforeseen properties and relationships.
But the two fundamental "attractors" that have caused many modern investigators and rationalists to return to the Aristotelian process to reduce and resolve a dichotomy, were quantum mechanics and computer science.
I perceive that "JREF-type Amateur Skeptical Rationalists" (at least the ones I have engaged with on the Audio Critic Forum) are VERY smugly satisfied and comfortable with "simple" tests that resolve fully-reduced dichotomies. Indeed, one of them, who called me "disgusting", said (paraphrasing) "there ain't no way that this GSIC can exist". His general reliance on, and veneration for, the resolution of a dichotomy, influenced by ardent skepticism, convinced him (I think) to overlook the actual complexities and uncertainties that REALLY are related to the testing of such a proposed weird and unstably-detectable ALLEGED effect.
Quantum mechanics tends to influence us to "respect the dichotomy", since it has shown that there are quantum transitions, not a continuum: an electron may only be HERE, or THERE. It does not "move slowly along an infinitely variable continuum" to go from one "place" to another. "Place", in the quantum sense, is more of a concrete abstraction than it is in the classical world. In classical physics, there are infinite "places" and, indeed, metricians know that "nothing real can occupy a precise place" because that entity continually jitters, its atomic boundaries fluctuating ceaselessly. In quantum theory, "place" is a real abstraction, and an electron is in "one place" or in "another place" -- unless we muddy the waters with the arguments about quantum superposition.
Quantum transitions are actually very real and important to the technology of the laser that creates the beam of light used as part of the means to write, and read, a CD. We indeed could not have "CD audio" without our knowledge of quantum mechanics; nor without our implementation of serial binary computers.
That introduces the next phenomenon: the technology of serial computation, ultimately reducible to 0's and 1's: another dichotomy.
So, modern amateur and professional scientists today are very biased by the "social environment and existing paradigms" that have been shaped by quantum mechanics and computing: they influence intellectual biases that validate the appreciation of the elegance of resolving a dichotomy.
On a social level, I perceive the JREF Challenge as the manifestation, in the amateur science community, of this paradigm and these biases.
I merely ask, though, that all participants not lose sight of "the continua", since neuropsychologists don't typically resolve to dichotomies in studying the functionality of human cognition. Ultimately, of course, perhaps somewhere in the brain there might be "gates" that are influenced by the firings of ONE out of TWO neurotransmitters. But working back from the observed human cognitive uncertainties toward this point, modern neuroscientists don't seem to be so sure of that; there remains much to be known about it. We seem to have only progressed to the state of knowledge in which we *infer* much about the fundamentals of cognition, merely from complicated evidence drawn from extremely narrow experiments and data acquisition.
What Wellfed (the first GSIC-effect claimant) does, when he "believes" (or, if we take him at his word, "knows") that he hears "subtle effects", is to engage mental processes burdened with high degrees of uncertainty, and finally arrive at a judgment that resolves his dichotomy: that he DOES hear the effect, and he infers that it exists. Those processes are, of course, susceptible to mistakes and self-delusion.
How, my friends, do we acknowledge those uncertainties in crafting a protocol that would resolve a dichotomy, IF THE EFFECT EXISTED? What if he DID hear it, but it was of the order of magnitude of the *real* and now-known flaws in early digital audio that many "reductionists" denied before they had devised better measurement techniques and theories? Under such a condition, that same protocol would resolve that it DID NOT EXIST -- though it DID!
As I see it, the crux of this matter is that the first GSIC-effect protocol could not be used to detect actual perceptions. Many argue that this was NOT REQUIRED. But, then how do you detect "if Wellfed hears something"? We end up in a vicious circle; you may start anywhere and go nowhere except around and around. "There is the hypothetical speculation, based on belief, that a certain subtle nuance exists; we use a very insensitive protocol, unable actually to resolve that nuance if real, to be used for the test process; and by definition we fail to confirm it."
Under this remarkably frustrating set of conditions, I'd argue -- in response to the "keep it simple" reductionists -- that to break the vicious circle, we need for them to propose a solution: replace the statement "we use a very insensitive protocol, unable actually to resolve that nuance if real" with something practical.
We have, at the time of this essay, NO SCIENTIFIC DATA about alleged GSIC effect. We only have vernacular comments, and speculative scientifically-informed inferences, about its reality or non-reality.
To test for GSIC-effect, with a million dollars riding on the attempt to verify it, we must be fair. (Or, must we?) I claim that a test with a predetermined conclusion, using clumsy pseudoscientific processes lacking proper controls, is basically unfair; ESPECIALLY if it is crafted with a logical predestination only to "conserve the million bucks". It MUST be a "neutral" test process. It must be able to falsify the claim; and it must be designed so that confirming the claim is not actually impossible.
And, I ask you further: by limiting yourselves TO THE TOOLS OF ARISTOTLE, what practical aspects of life can you improve?
The JREF Paradox
In my view, related to this test of claims that must be controlled via processes related to human aural cognition, the JREF Challenge modalities -- if they demand ultimate simplicity, rely on the claimant's naivete and lack of the professional scientific competence needed to propose a practical protocol, and are likely to set the bar too high -- act in totality merely to "conserve the million dollars."
If the claimant had accepted the best offers of the JREF administrators for a final proposed protocol, and the test had been done that way, I assert that there was no other possibility than for him to fail, because of the internal logical consequences of the construction of that test.
And, unfortunately, as I've shown, a "subtle effect" could actually exist -- and under the outrageously bad protocol, would be declared nonexistent, in error (yes, yes: even though we skeptics all KNOW it can't exist!)
So, in a social sense -- despite my own biases AGAINST the possibility of GSIC-effect -- I have to admit that I see the proposed test as a farce, ultimately merely acting to embarrass someone.
It would have been, in effect, a test of the limits of the claimant's intelligence, and also of the horizons of the test administrators and their moral willingness to allow a naif to proceed, in consideration of the likely chances for achieving "million dollar conservation".
But, the larger world would look on the consequences as being one data point in "the falsification of absurd or unlikely claims of some silly audiophiles". Yet, as I've asserted, since the test was moot, it would not have been able to quantify an existing effect.
For instance: with an equivalently bad, yet seemingly pertinent protocol, Werner Heisenberg would fail. Albert Einstein would fail. Newton might fail! Actual scientific progress is not achieved without allowing for human uncertainties, lab errors, problems with data reduction, practical ranges of variability (along with the means to prevent chicanery.) Furthermore: advanced theories lacking "simple" evidence cannot be tested by JREF...but dowsers can!
All who act to discourage me and insist that everything must be reduced to "simplicity" (while studying a complicated issue) are being driven by biases. Many have resisted my thought experiments that wonder about the alleged effect. I have looked at it from two perspectives: if it EXISTS, what could cause it? If it is NONSENSE, how could people come to believe in it? They consider an analysis of phenomena, or skills, to be irrelevant. And such analyses are irrelevant -- if the test is really only for determining some aspect of ignorance or foolishness.
This is why the academics who are often critiqued by James Randi in his Commentaries will NEVER apply. I am willing to consider that some of them may be intelligent, even honest. Being both, they won't submit to a "foolishness test", which is -- in fact -- what the Wellfed protocol would have engendered.
My logical analysis of the process -- by widening the lens a bit -- has given me an understanding of a negative social aspect of what I'd call "biased and uninformed amateur skeptical rationalism": that it is a dominance contest. One group of amateurs makes a fool of another amateur; all are either a bit ignorant, or deluded, or very anxious to get an ego-boost.
Nothing "real" is learned. No universally-useful tests are done; proven by the fact that despite Mr. Randi's heroic work for decades, he is still approached by dowsers! (I allow for the fact that they are stupid, and haven't read his website.)
The JREF test, if conducted with the first GSIC-effect claim protocol, would AND COULD -- I argue with an informed opinion -- have acted only to confirm the biases of certain hardcore uninformed amateur skeptical rationalists, and would have established that an amateur, not knowing how to be tested, would have allowed himself to fail -- and to be embarrassed.
It would appear -- if we take Kramer's evidence into account -- that the usual obfuscations, whining, one-sided changes in rules, refusal to negotiate, and lack of both a willingness to learn about oneself, and to understand proper scientific testing -- all added up to converge on a "cancelled claim." The claimant has another (some would argue pathological) point of view, and charges (my summary) "dishonesty; the JREF did not want a test and made it impossible!" (unlikely, since the test would have failed, conserving the million dollars and advancing an argument to favor rationalism and against silly paranormalism.)
The sad thing, for me, is that we still have NO DATA.
No selfless, honorable, semi-professional, responsible person has agreed to abide by rules; abide by best possible testing processes; and sort out his thoughts into a coherent plan to allow a test -- as yet. Furthermore, the first test proponent refused my suggestions for increasing or decreasing his certainty by using some personal test techniques (testing an ostensibly "spent" chip in his machine at home, by himself, and comparing the results with those he felt he was getting using an "energized" chip). Right there, "red flags" were raised; but one was not exactly sure why. Was it a reluctance to be "pressured by an admitted skeptic"? Was it a nagging self-doubt and a preference for staying confidently convinced? Was it a deceptive evasion? I could not tell...
In the present context, COULD THERE BE any such selfless, honest, abide-by-the-rules person who asserted he heard nuances to a degree of confidence (while still allowing for mistaken impressions, and with an open-hearted willingness to be refuted) who would take on the JREF Challenge? Unlikely. (Of course, you could find open-minded audiophiles to take another kind of test.)
The "simple, simple!" reductionists who have had their fun with me, tend (I'd assert) to pull the test protocol into the direction of "exposing idiots". Actually the JREF Challenge could be a powerful scientific tool, properly applied. The paradox would seem to be that "it will not be applied in any other kind of test".
In the total phenomenon of the first GSIC-effect claim to be offered for the JREF Challenge, I conclude that there had been *only* "a predestined and biased test for the existence of a degree of uninformed naivete". That MAY satisfy some of the Skeptical Rationalists on these forums.
I ask you: how much more convincing is necessary to establish the fact that there are uninformed, misled, or sneaky people in the world? May we not stipulate that?
The paradox that I discuss above also shows that many people, con artists and scientists alike, are (in effect) too intelligent to take the JREF Challenge. Therefore, skeptical rationalists who assert moral superiority to these people might consider that their "morals" are not confirmed by anyone's refusal to take the challenge, or to withdraw from one.
PianoTeacher
27 April 2005