• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

GSIC AUDIO

Timothy said:
And as long as you're tossing out *no* treated disks, you should also toss out *all* treated disks for the same reason.

1022:1

Absolutely not!

The only reason we toss out the "none are treated" result is that out of the 11 disks, they can't all be the same.

The only reason why it's okay to have a second cointoss if none of the test disks are treated is precisely because none of the test disks have been treated yet.

But the moment a test disk has been treated with the GSIC, the entire result of the cointoss must stand as it is, or you have to dump all the disks and start over from scratch.

This is why you cannot toss out the "all are treated" result.

Thankfully, on 11 disks where one is guaranteed to be an untreated control, you can't have all 11 disks treated, by definition, so this possibility doesn't need to be removed.

But if you add a 12th disk, a treated control, then you have both a treated and untreated control, and the requirement that not all disks are the same is satisfied regardless. It then doesn't actually matter what the outcome of the cointoss is.
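The arithmetic behind the 1022:1 figure quoted above can be sketched as follows (a minimal sketch, assuming ten test disks, each assigned treated/untreated by a fair coin toss, with only the "none treated" outcome re-tossed):

```python
n = 10                   # assumed number of coin-tossed test disks
total = 2 ** n           # 1024 equally likely treated/untreated patterns
valid = total - 1        # re-tossing "none treated" leaves 1023 patterns
# odds against naming the exact pattern by pure guessing:
print(f"{valid - 1}:1")  # prints "1022:1"
```

If "all treated" were also re-tossed, only 1022 patterns would remain and the odds would drop to 1021:1, which is why which outcomes get discarded matters to the stated odds.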
 
new drkitten said:
From a practical standpoint, this device is being marketed and sold with the claim that it will improve the sound of a CD, and not with the claim that it will improve the sound of particular kinds of CDs when used on specific kinds of high-end systems. Legally speaking, whenever you offer something for sale, there are two crucial implicit warranties -- "merchantability" and "fitness for purpose." Basically, the gadget has to do what the seller says it does, or the seller is at fault.

This is not a test of the seller's integrity ... personally I think the seller is a fraudulent scumbag, and I hope he is incarcerated for this scam.

This test is a scientific test of whether it works.

I therefore disapprove in principle of any test design where a fraudulent device that does nothing whatsoever cannot ever, even in principle, be proven not to work, because all tests return "inconclusive."

I didn't say no test ever. I'm saying that it's wrong to do a test where the fidelity of the test is insufficient to distinguish the results.

From an epistemological standpoint, the claim is that the chip works. If no one can detect any differences whatsoever, the chip does not work (under those circumstances). In which case another claimant may be able to produce another set of circumstances under which the chip works -- but that's another claim.

So I disapprove in principle of any test where a complete failure of the claimant to perform as claimed can be regarded as "inconclusive."

But no one said no one can hear differences. Mr. Anda said he heard differences. Other people claim to have heard differences. The differences are almost certainly tainted by knowledge of using the chip. *Those* are the people who should do the test, not someone who can't hear a difference.

Remember, I'm suggesting that the applicant is *told* which control CD is treated and which one is not. If the applicant *can't* distinguish between the two, there's no point in continuing the test. But if the applicant *claims* to be able to distinguish the two, then you can proceed with a valid test.

See the difference?

-Timothy
 
Originally posted by alfaniner:

Wouldn't this render the test very probably passable if for some reason they all wound up untreated?? (Granted, the probability of a coin flip coming up the same 10x in a row is the same probability needed for passing this test, right?)

Hmm. I think you're right.

Okay, then "all untreated" would have to be rejected in all cases, while "all treated" would still have to be accepted for the reason I point out above.

I think I still recommend both control disks, however, so that the applicant has a positive "difference" to compare against at all times.

Timothy, it's a JREF staple that applicants are given the opportunity to perform a "sighted" test, which they must sign off on, before they can proceed with the double-blind test.

This is, from the applicant's perspective, to confirm their "ability" to distinguish a known case in that specific environment. From JREF's position, it eliminates a great deal of wiggle room for excuses.
 
new drkitten said:
As stated in my previous post, if the applicant can't distinguish between the two controls, then the chip does not work.
I vehemently disagree.

If *no one* can distinguish, then it doesn't work.

However, we have people who claim they can.

That's like saying because I can't distinguish between a 2001 and a 2002 Pinot Noir, that there is no tastable difference between them.

That may be true, or it may mean that *I* can't distinguish the difference while other people can.

It would make *no* sense, then, to have me identify ten unlabelled glasses. The outcome is irrelevant.

But if someone who *claims* to be able to taste the difference is given the same test, the results are valid ... if they were unable to correctly identify the vintage, it would mean that their original premise was faulty.

You don't throw the additional variable into the mix of an applicant who hasn't made the same claim. Let's do the whole thing with 1000 tone-deaf applicants and then publish the result ... see the difference?

- Timothy
 
Moose said:
Timothy, it's a JREF staple that applicants are given the opportunity to perform a "sighted" test, which they must sign off on, before they can proceed with the double-blind test.
Thank you for reminding me of that ... that's precisely what I'm saying is necessary. If the "combative" applicant (one who's doing it without actually believing that it works) cannot honestly distinguish between the two controls, then the test has little value.

- Timothy
 
Timothy said:
Thank you for reminding me of that ... that's precisely what I'm saying is necessary. If the "combative" applicant (one who's doing it without actually believing that it works) cannot honestly distinguish between the two controls, then the test has little value.

- Timothy

Um, if the sincere applicant can't distinguish anything, then the test certainly has the obvious value of showing that he can't detect a difference under the particular test conditions.

But what we need are real controls related to known human hearing abilities.
 
Timothy said:
If the fidelity of the system (including the listener) is not good enough to distinguish between known treated and untreated disk, you can't run a test on unknown disks with the same system and claim that the chip failed.

Most tests involve a "dress rehearsal" where the applicant demonstrates that he can do what he says he can do when all the information is known. Dowsers know which bucket has the water or the Rolex in it, for instance. Then, once everyone is satisfied that he can do what he says he can do when he knows the answer ahead of time, he does the real test where he shows he can do what he says he can do without knowing the answer ahead of time.

This is my objection to having a skeptic like LA perform the test - since there is, in essence, no "dress rehearsal", the value of the final test is greatly diminished when the result is failure.

So yes, in this case the "dress rehearsal" would consist of Michael listening to the two controls and saying "I can definitely hear a difference between the two discs."

I'd also add these points to the protocol:
- Keep the untreated control as far away from the GSIC device as possible. Preferably, it would not ever be in the same room with it.
- Let the applicant treat the treated control him/herself.
 
My Overview of the First GSIC Test Protocol

The GSIC TEST - REDUCTIONISM, LOGICAL ERROR, AND PSEUDOSCIENCE: How do we KNOW something real about a complex physics-oriented phenomenon?

A Criticism of the Initial "Wellfed" Test Claim and Protocol.

Preliminary Remarks.

This is my first post to the new GSIC forum. Many of you who read the older AUDIO CRITIC forum encountered me and my posts. Some may not have found, in one of the earlier ones, my extended bio. In that forum, I asked if others would be willing to provide a statement about their general training or experience or interests or qualifications: background that would help others to see what kind of perspective they brought to the discussion. As far as I remember, no one did so. I might have encountered a hint, here and there; but otherwise one could only infer, by the language, "tone", and focus on specifics.

I have difficulties working *entirely* in a vacuum; and I am aware that others here often do not. Repeatedly, I was told that the investigation was "simple". It seemed to me that it was, actually, complex and multi-dimensional. My bio, below, will help you understand the experience that leads me to an informed opinion.

Concise bio of "PianoTeacher"

I am a male, almost 60, who teaches youngsters how to play the piano. I operate a home business doing this with my wife, a trained concert pianist and pedagogue. Before I retired and focused on my home business, I had a 30-year career related to audio. I have also been a computer programmer; product developer and tester for an optical company making astronomical devices; and electronic and audio engineer and sound recordist. I started my career as a classical music radio announcer in the early-60's, benefiting from the training I had in playing musical instruments and in learning repertoire. In college, I studied philosophy, music, biology and science, but eventually majored in communications. I bring to the table an interest in high fidelity, and a background as a test developer who has helped shape commercial products (including a series of industry-standard audio processors.)

I am an avid audiophile, and own nearly 10,000 classical CDs (and during my years in classical radio, ultimately 13,000 LPs.) I have also operated my own recording studio that evolved from 1975 to 1991. In addition to my professional work testing electronic and optical products (sometimes with very highly controlled tests), I have also participated in countless tests of audio technology and advanced optical products for astronomy, as an amateur enthusiast, though not since the late 1980's, when my interest began to be directed back to music.

I first heard about the alleged GSIC-effect by reading James Randi's commentary; and as an ardent Skeptical Rationalist myself, was attracted to the subject, which seemed to offer a fruitful field for trying to delve below the advertising hype and enthusiastic, vernacular "blather", to search for any potentially concrete facts.

-----

Below, I discuss some premises, explain an example of my own "bias correction", examine some speculative protocols and tests, and then discuss the modern paradigm that influences many scientific observers to be drawn to the "elegant concept of resolving a binary dichotomy".

What "Digital Audio Recording" Does

There is no such thing, at the level of a fully semantically-reduced declarative statement, as a process that digitally "records audio".

"Audio", simplified to sound pressures whose vibrations are in the frequency range of animals and humans, can only be accurately recorded by "a theoretically perfect analogue process" that can preserve, and replicate, the continuum of pressure variations (if we remain in the purely classical domain of physics.) Unfortunately no such perfect analogue recorder exists. All have, among other flaws: noise, distortion, speed variations, and spectral response and amplitude-range limits.

Devices that are called "digital audio recorders" are actually using an algorithm to create a MAP, whose dimensional coordinates save a simplified, quantized series of discrete values of data, ultimately reduced to either a 1 or a 0 in the smallest possible temporal subdivision in assembling a digital word. The map is then saved in a "digital carrier system"; and when digital audio playback occurs, the map coordinates are converted by a process that produces a continuously-varying voltage that may be used to energize a transducer that produces air pressure variations.

The complexity of the map has been adjusted, by its designers, according to two basic principles: (a) the scientific knowledge derived from the study of psychoacoustics, which has statistically quantified markers that represent useful parameters of a large population of individuals who were tested; (b) and psychological testing, which has provided data about "what people prefer" and how close the map's simulation of audio seems to be to the normal perception of NON-RECORDED audio.
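The "map" idea above can be illustrated concretely (a minimal sketch using the CD-standard parameters of 16 bits and 44.1 kHz; this is an illustration of quantized sampling in general, not any particular recorder's algorithm):

```python
import math

SAMPLE_RATE = 44100      # CD-standard samples per second
BITS = 16                # CD-standard word length
LEVELS = 2 ** BITS       # 65536 discrete amplitude values
STEP = 2.0 / (LEVELS - 1)

def quantize(x):
    """Snap a continuous amplitude in [-1, 1] to the nearest discrete level."""
    return round(x / STEP) * STEP

# one millisecond of a 440 Hz tone, reduced from a continuum to a list of numbers
samples = [quantize(math.sin(2 * math.pi * 440 * t / SAMPLE_RATE))
           for t in range(SAMPLE_RATE // 1000)]

# the map is inexact: each stored value may differ from the true pressure
# by up to half a quantization step
assert max(abs(quantize(x) - x) for x in (0.1234567, -0.9876543)) <= STEP / 2
```

The point of the sketch is that the stored values are never the signal itself; they are coordinates in a map whose resolution was fixed by the designers.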

We know now that the first promoters of the compact disk were actually pseudoscience propagandists. Many insisted that the medium was PERFECT. The assumption was that, because problematical variables of noise, distortion, pitch fluctuations, and response had been slightly improved over the performance of the best analogue recorders (irrespective of any new artifacts introduced), audio recording science had progressed from the state of being "messy" to the state of being "under perfect control".

(Actually, the original developers of digital recording knew from the start that "the map was too simple". It was, at the time, a best-fit, constrained by available technology, including the practicalities of affordable and available memory chips.)

This pseudoscientific rationalizing of the first commercial CD-exploiters and promoters was almost immediately exposed BY HIGH END AUDIOPHILES. Their perceptions of both (a) new artifacts; and (b) degradations of some parameters that were dealt with more effectively by existing best-case analogue recording, were important contributions to science. Their observations were put to the test of falsification; and what was learned by the public, and by audio corporations, was that THE MAP WAS TOO SIMPLE. Furthermore, it was discovered that the mapping was influenced by design flaws in the first practical commercial digital algorithms, DACs, and filters.

High end audiophiles with acute sensitivities were REALLY perceiving flaws, variables, and nuances that hard-nosed but limited audiometricians had denied. More advanced testing procedures confirmed the sensible observations of rational listeners; and the MAP WAS IMPROVED and adopted when technology and economics permitted it.

For instance: the standard home compact disk medium, nearly unchanged since 1982, still uses THE ORIGINAL MAP that had been refined for the 16-bit, 44.1 kHz sampling rate system, the industry standard for the introduction of the compact disk. But original music masters are recorded by a process that CREATES A BETTER MAP: using higher sampling rates, and digital quantizing to a resolution level achieved by as many as 24 bits, conventionally. Thus, most modern CDs are mastered at (say) 96/24, but have to be downconverted and simplified to 16/44. This downconversion is a highly complex process *developed by following processes of scientific psychoacoustical testing*. The conversion process is generally not linear, and factors in many variations in human hearing cognition, in order to create a newly-refined, simplified 16/44 map that tends to resemble the 96/24 map. New CD player designs also use refined circuits, especially advanced DACs and sophisticated filters (which render the discrete quantized variations in voltage amplitudes into smoothed analogue variations.)
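The word-length-reduction step of such a downconversion can be sketched like this (TPDF dither is one standard technique; a real mastering chain would also resample 96 kHz to 44.1 kHz and apply perceptually-weighted noise shaping, which is where the psychoacoustics enters):

```python
import random

STEP = 1 << 8  # one 16-bit quantization step, expressed in 24-bit sample units

def requantize_24_to_16(sample_24: int) -> int:
    """Reduce one 24-bit PCM sample to 16 bits with triangular (TPDF) dither,
    which decorrelates the quantization error from the signal."""
    # sum of two uniforms gives a triangular distribution spanning +/- one step
    dither = (random.random() + random.random() - 1.0) * STEP
    q = round((sample_24 + dither) / STEP)
    return max(-32768, min(32767, q))  # clamp to the signed 16-bit range

print(requantize_24_to_16(8388607))  # full-scale input clamps to 32767
```

Without the dither term, low-level signals would acquire correlated distortion; with it, the error becomes benign noise -- a design choice driven by exactly the hearing-cognition testing described above.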

At each step in the process, HUMAN HEARING COGNITION, tested by science, helped refine the design of both analogue and digital circuitry.

I might add that only in rare cases could the testing be reduced to mere binary dichotomies. Where the "digital recording process" is TRULY binary is where a 1 or a 0 is chosen to represent a single bit within a more complex digital word. We have, in effect, two funnels with large mouths and tiny terminations: sound goes into the large mouth of one; the small end only passes 1's and 0's, transmitted by a carrier into the small mouth of another funnel; from the large end emerge air pressure vibrations in our environment that *seem* to convince us that we are hearing an exceptionally close facsimile of "real sounds".

We can't use this construct, above, to record analogue audio; we must make significant changes and simplifications, returning from the complexly-quantized and highly controlled digital process, to the simpler analogue one; in fact, to a NON-ELECTRICAL, purely mechanical, one.

As we all know, Edison invented that process (which symbolically resembles my "two funnels" visual analogy), except that instead of passing 1's and 0's through the small ends of the funnels, he passed restricted continuous vibrations that were only crudely derived from soundwaves, and saved in a HIGHLY LOSSY carrier with huge quantities of noise and distortion, barely able to record the spectrum of intelligible speech. In the Edison process -- which remained fundamentally unchanged from 1877 to about 1925/6 -- the improvements were only marginal, and gradual: glacially slow to occur. By 1924, sound recording was STILL effected only by means of air vibrations and "mechanical wiggles". After 1925, we introduced electrical/analogue transducers, which improved various aspects of the system. It remained fundamentally fixed (except in lab experiments) until the introduction of commercial musical digital recording as early as 1972: when the MAP CONSTRUCT parameters were formalized in a rudimentary, but "satisfactory to some listeners" system.

Every advance in digital recording from 1972 to today, arose both from theoretical digital theory, and the pressure of the audiophile community for BETTER MAPS. They vote with their dollars (yen, pounds, marks...) and that economic stimulus moves an industry to provide us with better compact disks, and better players.

Yet, there are still limits, based on the first practical 16/44 system used for the compact disk carrier: including the minimalistic non-repeatability that is a consequence of the error correction system (discussed in one of my last posts to the AUDIO CRITIC forum.)

Finally: at the heart of the success of digital audio recording is the fundamental fact that an inaccurate symbolic map, full of identifiable flaws, works in concert with the limits of human aural cognition to manage, wonderfully, to make us all have the delightful enjoyment of seeming to experience music. (This also suggests a speculation that the map, as it stands now, would actually FAIL to replicate music, with a listener who had a much higher degree of evolutionary advancement and neuroperception!)

The Contributions of the Audiophile Community

I want no "JREF reductionist" here to have the *misapprehension* that "there is no useful utility in the aesthetic judgments of high end audiophiles". I'm not claiming that JREF participants do; but there are indeed comments heard all the time, and posted everywhere on the Net, from those who are quite comfortable with the technology they own, who do make such grouchy statements. Had they been heeded, we would have no improvements over the "glassy, hard, edgy sound" that analogue audiophiles correctly described, back in 1982-3.

Long before digital recording had been developed, a social norm existed in which audio developers gave value to the empirical experiences and opinions of audiophiles. This has been refined and expanded, and now today we see one of the most marvelous results of that relationship: superb modern digital sound mastering in high resolution, demonstrably superior to the first digital tapes from Denon in 1972. On the other hand, it has been shown that chicanery, charlatanism, exaggeration, propaganda, and intellectual sloppiness exists to some extent in the discussion and promotion of audio products (as in all other walks of life.)

To summarize: (a) digital recording/playback is a process of matching SYMBOLIC MAPS with expectations of non-deterministic human cognition; and (b) acute critics and judges of audio performance have made a scientific contribution to the development of audio, even through social pressure, discussion, and untested criticism.

Critique of Dogmatic Reductionism

The Contention between PianoTeacher and "Diogenes", "Grw8ight", et al.

The first and then consistent criticisms addressed to my contributions to the Audio Critic forum -- aside from the objections about their length -- focused on what many if not most other forum members viewed as my presenting irrelevancies. "This is simple!" was the repeated cry and assertion.

Finally, having covered all the background I wished others to consider -- what psychoacoustical testing is, and what it can do; the known variables in human neuroperception (including the very *real* incidences of acoustical hypertrophism in some individuals); the "real" nature of nuances that are important to the aesthetic judgment and appreciation of persons interested in art; and the highly complex and systematic processes of real analogue and digital audio technologies -- I returned at last to consider what IS, and is NOT, "simple".

I was finally prodded by "Diogenes" on the audio critic forum to re-examine my own perspective. Assuming that he was exactly right, "it was simple", and that I was precisely wrong -- and knowing that my own training was biasing me and preventing me from looking at the issue from the perspective of NON-musicians, NON-engineers, and only strict logical reductionists -- I tried a new technique, which I discussed in my last post to the Audio Critic forum ("I Correct Myself"). I had to create new scenarios of what I believed to be "fully reduced" and ultra-simple dichotomous issues that could be resolved in JREF-type testing, and THEN work backwards and adjust parameters until they fit the conditions of the first GSIC-Effect Claim.

I think I acquired a greater understanding of why almost everyone else was telling me it was just a very "simple" matter though I was uncomfortable with that concept, and that -- ultimately, reduced to a dichotomy -- "Diogenes" and others were correct: my own mind was still too concerned with the engineering and neurophysical aspects of any alleged effect. (In my defense: I did not see much comprehension of the variables that I thought relevant to the crafting of the test protocol.)

Now that I have been able to see this, I still perceive TWO, parallel, situations, not exactly ONE fully integrated and "simple" one.

A. There is indeed a "Diogeneseque" simplicity: a dichotomy that the JREF Challenge may help to resolve. It is so utterly self-evident that I had really considered it part of the fundamentals of what we were all doing here; the process had been refined; and I did not care to focus on it. Surely, as I understood at the very outset: either somebody actually *hears* GSIC-effect (inferring that it EXISTS), or *does not hear* it (not necessarily inferring that it does NOT EXIST.) That is the fully reduced dichotomy; I felt that I was stipulating this. But I was going beyond it: to examine the protocol, and HOW the consequences of any such alleged effect would be evaluated in order to resolve the claim.

What my examination of Diogenes' critique helped me to appreciate, was that the JREF Challenge test process *could indeed* operate independently of "audio", "hearing", and any one person's perceptions and judgments. I had already realized this; felt it was stipulated; but again: was focusing on theoretical possibilities of the "effect" (precisely how to falsify it.)

B. I do understand now why Diogenes, Grw8ight, and others were so impatient with me. But by trying to sweep the details of the myriad of issues related to any such alleged device, and human ability to perceive it, totally away, how can one craft a useful protocol to have a meaningful test?

So we have TWO parallel investigations: (I) if we can create a neutral context in which a fully reduced dichotomy is tested without possibilities of error, bias, or chicanery (stipulated); and (II) what meaningful, related process may be developed in the protocol, that will enable investigation I to be able to have a meaningful resolution.

While I don't believe I had failed to stipulate the need for (I) above, some others feel that I wasn't doing so. By re-examining per Diogenes, I hope I've cleared up with all readers what my comprehension of JREF principles happens to be. (And for all I know, some may still be unsatisfied.)

A Meaningful Protocol

I don't think that "ultimate reductionism" is easily achieved in this particular investigation, unless we start first at the complex, and then move to the simple, with intelligence and efficiency.

We should not jump from "complicated nuance perception claims" *in one step* to "simple dichotomy". The reason that this is a fallacy is this: we must use a meaningful protocol to enable the reduced dichotomy to be resolved.

My critics, who wanted to read nothing at all about audio and electronics, were precisely correct that the ultimate dichotomy was the final issue.

But what attracted me to the investigation was NOT the ultimate dichotomy, but the protocol. Creating THAT process is exactly what I have done professionally.

Let me work below from "a not meaningful protocol" to "a meaningful one" in a few discrete jumps. And, Rational Logicians: remember, not all of this information is meant for YOU. I stipulate that you already know about this. I want to relate it for potential claimants.

-------

A claimant named Unfed thinks he hears a sonic effect that he alleges to have been created by a "black box" based on no known technology. Unfed is neither engineer nor technologist so he's never examined the possibilities from that perspective. He only concerns himself with what he believes he hears, and even admits that it is pretty hard to detect under certain conditions. Sometimes it seems more concrete than at other times; but he's SURE he can actually detect it every time.


Stupid Protocol/Pointless Test

Unfed is shown a color chart while the allegedly treated CDs are played. He is required to tell us what colors are related to his impressions of the state of the CDs he is hearing. Are colors that are in the red-yellow-orange range of the spectrum related to "treated" or is it the spectral range of "green-blue"? The CDs are alternated, and the colors are tallied. The testers analyse the statistics. And a final arbitrator decides if Unfed was right, "since we all know that 'untreated CDs make you see red, not blue', he would have related reds to the original CDs, and bluish hues to the untreated ones." He didn't; ergo he failed the test.

This protocol (and test) is so preposterous that many of you will impatiently insist that I have wasted your time by describing it. But, this has close parallels with the "witch tests" of the medieval era. Many innocent people were tortured, or even put to death, after having been judged by similarly nonsensical processes (and nobody seemed to realize how awful the situation was! Yet, long-dead Aristotle himself could have shown them the errors of their ways.)

We now move a few points along the continuum toward reifiability...

Stupid Protocol/Partially Meaningful Test

After much painful negotiation, Unfed and the Test Coordinators agree that he will, in double-blind fashion, try to identify "treated" and "untreated" CDs by their sound. Carefully, half of matched pairs of the CDs are treated and marked. The markings are completely obscured. Then, carefully and out of Unfed's sight, the CDs are played, pair by pair. Unfed takes his time and has the ability to control the CD player in order to pause, rewind, and repeat passages; he may also control the volume during every play. He listens for a while to ONE of the pair; out of his sight the CD is changed; he listens again for a while to the OTHER of the pair; and finally he decides and declares the state of the CD. The claims are tallied against the marks, and if he registers no false positives in all ten pairs, he "wins", and gets a million dollars.

There are many things wrong with the protocol: virtually the ENTIRE aspect of the way Unfed listens to the CDs. But to know that, we would have to have a background in how a person can listen to a CD: how a person can make judgments, and a reasonable means for himself to be able to do so. If we bias the test so that NO person could make a judgment, the test is moot.

The only part that is actually valid is SOME of the end: relating his hits to the CDs marked to indicate their "state".

But the requirement that he have no false positives (the dichotomy) is reached by faulty reduction.

We have oversimplified the test. And at the same time, we have overcomplicated it (as it is known by prior scientific study that human capabilities for cognition are limited by uncertainties that MUST be controlled for by practical means.)

The systematic errors made by the persons who have crafted the protocol include (i) tendencies to create complexity without providing properly limited controls; and (ii) tendency to reduce to an ultimate logical dichotomy inappropriately, leading only to a foregone conclusion. One could assert that lack of scientific perspective regarding human cognition factors; ignorance of proper phenomenon-related testing; and bias demanding premature logical reduction, have influenced a bad, unworkable design. Both rationalists, and subjectivists, have erred.

I would describe the test above as bearing evidence of *pseudoscientific methods* and *illogical reductions*.

We now move one further step down the continuum to a test that MAY be more conclusive...


Proposed Meaningful Protocol Leading to Meaningful Test -- and why it is not likely to be used by JREF!

Unfed has decided to use -- rather than a loose, sloppy method of listening to CDs and varying the volume level -- a scientifically-verified process to falsify his own beliefs, since he is open minded enough to allow for the possibility that he might err. Unfed subjects himself to a type of double blind testing that has been shown by scientific processes to yield results and resolutions of SIMILAR kinds of perceptual claims, and to be able to falsify them. (At the moment, based on my current understanding, I -- PianoTeacher -- would suggest that DB testing using the ABX methodology may be the best known procedure.)

All participants in the test agree to respect the data existing for scientific neural test criteria related to average statistical weighting of human cognitive uncertainties. No attempt is made to insist on absolutely no false positives. The administrators, who hold out a valuable prize, have to agree on where the cutoff point for false positives lies: above that, a preponderance of evidence indicates that the phenomenon has been detected; below that, it is likely that too many errors indicate randomness. A judgment is made how to interpret the hits that fall into the range between "GO" and "NO GO".
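A cutoff of this kind can be computed from the binomial distribution (a sketch with illustrative numbers; the actual trial count and significance level would be whatever the negotiated protocol specifies):

```python
from math import comb

def chance_of_at_least(n, k):
    """Probability of k or more hits in n trials by pure guessing (p = 1/2)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

def cutoff(n, alpha=0.01):
    """Smallest hit count whose pure-guessing probability falls below alpha."""
    for k in range(n + 1):
        if chance_of_at_least(n, k) < alpha:
            return k

print(cutoff(16))  # hits needed out of 16 ABX trials at the 1% level -> 14
```

Note that the cutoff is explicitly NOT "zero errors": 14 of 16 leaves room for two misses while still making chance an implausible explanation, which is exactly the allowance for human cognitive uncertainty being argued for here.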

As you can see, the JREF test, a dichotomous one that has been fully reduced, is NOT the "meaningful test" I have described above. It is not likely that the reductionist logicians of JREF will allow a non-dichotomous though "scientifically realistic and practical" test. The reason for this is that THE JREF DICHOTOMY DOES NOT TAKE INTO CONSIDERATION THE SCIENTIFIC FACTORS FOR HUMAN COGNITIVE UNCERTAINTY.

The test described immediately above is actually considered meaningful by: (a) audio designers and analysts; (b) neurophysiologists, psychoacousticians, and neuropsychologists; (c) sociologists.

--------


We have moved along the continuum from Stupid Protocol/Pointless Test, to Meaningful Protocol/Meaningful Test, but we have NOT YET ARRIVED at Meaningful Protocol/JREF Test!

My comprehension of the next step to take is lacking here (I'm reminded of the cartoon in SciAm: a chalkboard full of complex equations culminates in an arrow, leading to the statement: "At this point a miracle occurs!")

I do know and understand how to do a dichotomous test, but not one that has a meaningful relation to a human cognitive judgment related to hearing the complex totality of musical sounds; at least not sounds that have been selected for by the claimant BEFORE the test protocol is constructed!

Beleth made a powerful analytical contribution. She proposed a "lossy copying scheme" to prepare the second CD in any given pair; and her scheme, as suggested, was SO lossy that nobody could miss it.

With a matched pair of an original CD and a "Beleth lossy CD", having differences that all "hearing-equipped" persons can differentiate, the original plan of Unfed to require no false positives in ten successive tests has a REASONABLE likelihood of being achieved. But of course JREF will not bet a million dollars on that; it is not a falsification of a paranormal claim. Beleth's lossy copying process is REAL, related to known technology, and perceived by all; while the GSIC device is "unknown", has "secret" technology (vaguely suggested but not backed up by specific engineering documents), and no trained engineer can infer whether it is likely to work! Furthermore, there is no data to support the GSIC effect to a scientific certainty or even a rough likelihood; all we have is something generally described as "reviews" to document CLAIMS about it.

The Beleth-lossy analogy is not pertinent to the GSIC effect, but we can ADJUST it until it is indeed more pertinent. The lossiness can be controlled until the difference is "subtle", but that is also a matter of judgment. We run into Platonic absolutes. And as I showed in one of my last posts to the Audio Critic forum, there is an actual "classical physical uncertainty principle" in existence, with respect to the playback of ANY audio CD recording, due to the error correction algorithm and its unique solution for each playback experiment. This is in conflict with the Platonic absolute of the ultimately reduced "difference" whose subtlety is not a matter of judgment.

At this point, gentle readers, my mind starts to splinter. If you wish to look at my deconstruction of absolutes from complexities, see the last few posts I made to the Audio Critic forum. I don't think it is worth my time, in THIS essay, to create a logical table that goes one step at a time from the "Practical Test, allowed for by scientific human cognitive investigators" to the "JREF Test".

*******
My hypothesis, in a nutshell, is this: there is NO SIMPLE WAY to do that. We cannot make ONE LEAP from the complex situation I proposed in my third test example to the dichotomous Randi test. Indeed, I foresee that the leaps may involve the allowance for complex processes (that muddy the waters of desired simplicity) after making many logical iterations and corrections to the protocol.
*******

Once again, per what I have called "The Diogenes Paradox", old PianoTeacher may be missing something! It is possible that I lack the ability to see what the steps might be.

The problem is: we must factor in an unbiased control process to consider human variability; and we must not set the bar too high for ANY HUMAN BEING ON THE PLANET; nor should it be lower than necessary to detect the alleged GSIC effect. If we've done that, then indeed *the test is moot.*

We could indeed be looking at "A GSIC/JREF PARADOX": a claim that is (at least at present, and in the JREF context) untestable by neutral, dichotomous, and unweighted processes.


Conclusion - The Limited Utility of the JREF Challenge: a warning for applicants, and skeptics.

Modern investigators are drawn 'magnetically' to the inexorable logic of the dichotomous resolution.

Yet, for some thousands of years in human history, this known tool of logic was not considered to have any significant practical weight by workers in many fields.

The "resolution of two states" is a concept that we may trace back to antiquity. It was a favorite logical tool of Aristotelians. It was helpful in a world without instrumentation and Baconian science, where pure thought was the only way to come to grips with mysteries. Indeed, syllogistic reasoning is what ubiquitously survived from ancient Greek culture, while only recently have we possibly discovered an actual scientific tool that they may have used for calculations: arguably a crude analogue computing machine.

But practical engineering, developed more effectively by the Latins than the Greeks, used a different process: an evaluation of the continuum, NOT merely the resolution of two states, or contemplation of syllogisms.

This has been refined over the ages. The fluxions of Newton gave us the calculus. Powerful mathematics crafted the science of statistical analysis. A sort of golden age of rationalism using these techniques to investigate matter and energy managed to coexist with paranormal belief: indeed, many scientists of the late nineteenth century were ardent fundamentalist religious believers who had the conviction that "the tools of science were the gifts of God to reveal His work". Furthermore, actual physical scientists believed, until nearly the end of the 19th century, that a sort of PARANORMAL FIELD existed: ether. It was the only logical inference that could explain observed phenomena whose interactions and causes were otherwise invisible to existing instruments.

The first shudders that disturbed this comfortable smugness ("we finally have the tools to enable us to KNOW about things, and to make accurate PREDICTIONS about physical forces") arose when the ether postulate was falsified by Michelson and Morley; then Einstein and Dirac and their predecessors demonstrated that space and time had unforeseen properties and relationships.

But the two fundamental "attractors" that have caused many modern investigators and rationalists to return to the Aristotelian process to reduce and resolve a dichotomy, were quantum mechanics and computer science.

I perceive that the viewpoints of "JREF-type Amateur Skeptical Rationalists" (at least the ones I have engaged with on the Audio Critic Forum) are VERY smugly satisfied and comfortable with "simple" tests that resolve fully-reduced dichotomies. Indeed, one of them, who called me "disgusting", said (paraphrasing) "there ain't no way that this GSIC can exist". His general reliance on, and veneration for, the resolution of a dichotomy, influenced by ardent skepticism, convinced him (I think) to overlook the actual complexities and uncertainties that REALLY are related to the testing of such a proposed weird and unstably-detectable ALLEGED effect.

Quantum mechanics tends to influence us to "respect the dichotomy" since it has shown that there are quantum transitions, not a continuum: an electron may only be HERE, or THERE. It does not "move slowly along an infinitely variable continuum" to go from one "place" to another. "Place", in the quantum sense, is more of an abstraction than it is in the classical world. In classical physics, there are infinite "places" and, indeed, metricians know that "nothing real can occupy a precise place" because that entity continually jitters, its atomic boundaries fluctuating ceaselessly. In quantum theory, "place" is a real abstraction, and an electron is in "one place" or in "another place" -- unless we muddy the waters with the arguments about quantum superposition.

Quantum transitions are actually very real and important to the technology of the laser that creates the beam of light used as part of the means to write, and read, a CD. We indeed could not have "CD audio" without our knowledge of quantum mechanics; nor without our implementation of serial binary computers.

That introduces the next phenomenon: the technology of serial computation, ultimately reducible to 0's and 1's: another dichotomy.

So, modern amateur and professional scientists are heavily biased by the "social environment and existing paradigms" that have been shaped by quantum mechanics and computing: these influence intellectual biases that validate the appreciation of the elegance of resolving a dichotomy.

On a social level, I perceive the JREF Challenge as the manifestation, in the amateur science community, of this paradigm and these biases.

I merely ask, though, that all participants not lose sight of "the continua", since neuropsychologists don't typically resolve to dichotomies in studying the functionality of human cognition. Ultimately, of course, perhaps somewhere in the brain there might be "gates" that are influenced by the firings of ONE out of TWO neurotransmitters. But working back from the observed human cognitive uncertainties toward this point, modern neuroscientists don't seem to be so sure of that; there remains much to be known about it. We seem to have only progressed to a state of knowledge in which we *infer* much about the fundamentals of cognition, merely from complicated evidence drawn from extremely narrow experiments and data acquisition.

What Wellfed (the first GSIC-effect claimant) does, when he "believes" (or, if we take him at his word, "knows") that he hears "subtle effects", is to engage mental processes, fraught with high degrees of uncertainty, and finally arrive at a judgment that resolves his dichotomy: that he DOES hear the effect, and he infers that it exists. Those processes are, of course, influenced by mistakes and self-delusion.

How, my friends, do we acknowledge those uncertainties in crafting a protocol that would resolve a dichotomy, IF THE EFFECT EXISTED? What if he DID hear it, but it was of the order of magnitude of the *real* and now-known flaws in early digital audio that many "reductionists" denied, before they had devised better measurement techniques and theories? Under such a condition, that same protocol would resolve that it DID NOT EXIST -- though it DID!

As I see it, the crux of this matter is that the first GSIC-effect protocol could not be used to detect actual perceptions. Many argue that this was NOT REQUIRED. But, then how do you detect "if Wellfed hears something"? We end up in a vicious circle; you may start anywhere and go nowhere except around and around. "There is the hypothetical speculation, based on belief, that a certain subtle nuance exists; we use a very insensitive protocol, unable actually to resolve that nuance if real, to be used for the test process; and by definition we fail to confirm it."

Under this remarkably frustrating set of conditions, I'd argue -- in response to the "keep it simple" reductionists -- that to break the vicious circle, we need for them to propose a solution: replace the statement "we use a very insensitive protocol, unable actually to resolve that nuance if real" with something practical.

We have, at the time of this essay, NO SCIENTIFIC DATA about the alleged GSIC effect. We only have vernacular comments, and speculative scientifically-informed inferences, about its reality or non-reality.

To test for GSIC-effect, with a million dollars riding on the attempt to verify it, we must be fair. (Or, must we?) I claim that a test with a predetermined conclusion, using clumsy pseudoscientific processes lacking proper controls, is basically unfair; ESPECIALLY if it is crafted with a logical predestination only to "conserve the million bucks". It MUST be a "neutral" test process. It must be able to falsify the claim, and be designed so that falsification is not actually impossible.

And, I ask you further: by limiting yourselves TO THE TOOLS OF ARISTOTLE, what practical aspects of life can you improve?


The JREF Paradox

In my view, for this test of claims that must be controlled via processes related to human aural cognition, the JREF Challenge modalities -- if demanding ultimate simplicity and relying on the claimant's naivete, lack of professional scientific competence to propose a practical protocol, and likelihood of setting the bar too high -- merely act in totality to "conserve the million dollars."

If the claimant had accepted the best offers of the JREF administrators for a final proposed protocol, and the test had been done that way, I assert that there was no other possibility than for him to fail, because of the internal logical consequences of the construction of that test.

And, unfortunately, as I've shown, a "subtle effect" could actually exist -- and under the outrageously bad protocol, would fail to be falsified without error (yes, yes: even though we skeptics all KNOW it can't exist!)

So, in a social sense -- despite my own biases AGAINST the possibility of GSIC-effect -- I have to admit that I see the proposed test as a farce, ultimately merely acting to embarrass someone.

It would have been, in effect, a test of the limits of the claimant's intelligence, and also of the horizons of the test administrators and their moral willingness to allow a naif to proceed, in consideration of the likely chances for achieving "million dollar conservation".

But, the larger world would look on the consequences as being one data point in "the falsification of absurd or unlikely claims of some silly audiophiles". Yet, as I've asserted, since the test was moot, it would not have been able to quantify an existing effect.

For instance: with an equivalently bad, yet seemingly pertinent protocol, Werner Heisenberg would fail. Albert Einstein would fail. Newton might fail! Actual scientific progress is not achieved without allowing for human uncertainties, lab errors, problems with data reduction, practical ranges of variability (along with the means to prevent chicanery.) Furthermore: advanced theories lacking "simple" evidence cannot be tested by JREF...but dowsers can!

All who act to discourage me and insist that everything must be reduced to "simplicity" (while studying a complicated issue) are being driven by biases. Many have resisted my thought experiments wondering about the alleged effect. I have looked at it from two perspectives: if it EXISTS, what could cause it? If it is NONSENSE, how could people come to believe in it? They consider an analysis of phenomena, or skills, to be irrelevant. And they are, if the test is really only for determining some aspect of ignorance or foolishness.

This is why the academics who are often critiqued by James Randi in his Commentaries will NEVER apply. I am willing to consider that some of them may be intelligent, even honest. Being both, they won't submit to a "foolishness test", which is -- in fact -- what the Wellfed protocol would have engendered.

My logical analysis of the process -- by widening the lens a bit -- has given me an understanding of a negative social aspect of what I'd call "biased and uninformed amateur skeptical rationalism": that it is a dominance contest. One group of amateurs makes a fool of another amateur; all are either a bit ignorant, or deluded, or very anxious to get an ego-boost.

Nothing "real" is learned. No universally-useful tests are done, as proven by the fact that despite Mr. Randi's heroic work for decades, he is still approached by dowsers! (I allow for the fact that they are stupid, and haven't read his website.)

The JREF test, if conducted with the first GSIC-effect claim protocol, would AND COULD -- I argue with an informed opinion -- have acted only to confirm the biases of certain hardcore uninformed amateur skeptical rationalists, and would have established that an amateur, not knowing how to be tested, would have allowed himself to fail -- and to be embarrassed.

It would appear -- if we take Kramer's evidence into account -- that the usual obfuscations, whining, one-sided changes in rules, refusal to negotiate, and lack of both a willingness to learn about oneself, and to understand proper scientific testing -- all added up to converge on a "cancelled claim." The claimant has another (some would argue pathological) point of view, and charges (my summary) "dishonesty; the JREF did not want a test and made it impossible!" (unlikely, since the test would have failed, conserving the million dollars and advancing an argument to favor rationalism and against silly paranormalism.)

The sad thing, for me, is that we still have NO DATA.

No selfless, honorable, semi-professional, responsible person has agreed to abide by rules; abide by best possible testing processes; and sort out his thoughts into a coherent plan to allow a test -- as yet. Furthermore, the first test proponent refused my suggestions for increasing or decreasing his certainty by using some personal test techniques (testing an ostensibly "spent" chip in his machine at home, by himself, and comparing those results with the ones he felt he was getting using an "energized" chip). Right there, "red flags" were raised; but one was not exactly sure why. Was it a reluctance to be "pressured by an admitted skeptic"? Was it a nagging self-doubt and preference for staying confidently convinced? Was it a deceptive evasion? I could not tell...

In the present context, COULD THERE BE any such selfless, honest, abide-by-the-rules person who asserted he heard nuances to a degree of confidence (while still allowing for mistaken impressions, and with an open-hearted willingness to be refuted) who would take on the JREF Challenge? Unlikely. (Of course, you could find open-minded audiophiles to take another kind of test.)

The "simple, simple!" reductionists who have had their fun with me, tend (I'd assert) to pull the test protocol into the direction of "exposing idiots". Actually the JREF Challenge could be a powerful scientific tool, properly applied. The paradox would seem to be that "it will not be applied in any other kind of test".

In the total phenomenon of the first GSIC-effect claim to be offered for the JREF Challenge, I conclude that there had been *only* "a predestined and biased test for the existence of a degree of uninformed naivete". That MAY satisfy some of the Skeptical Rationalists on these forums.

I ask you: how much more convincing is necessary to establish the fact that there are uninformed, misled, or sneaky people in the world? May we not stipulate that?

The paradox that I discuss above also shows that many people, con artists and scientists alike, are (in effect) too intelligent to take the JREF Challenge. Therefore, skeptical rationalists who assert moral superiority to these people might consider that their "morals" are not confirmed by anyone's refusal to take the challenge, or to withdraw from one.

PianoTeacher
27 April 2005
 
Piano Teacher, you offend my love for conciseness with your well-thought insights, honest admissions of being wrong, and genuine interest in helping out by padding it with so much unnecessary stuff. And by that, I mean, Newton? Quantum Mechanics? The hell?

All that and not a word about what ABX is or a link. We need the links like we're a bunch of old, rich, male Gentiles.

http://www.hydrogenaudio.org/forums/index.php?showtopic=16295&hl=ABX

In fact, it even mentions putting an amulet next to the CD player! *snicker* (Piano, I'm mostly kidding. I'm giving you crap. There's good-natured prodding in that post that I can't inflect over TEH INTARWEB.)

...In this kind of test, the listener has access to three sources labeled A, B, and X. A and B are the references. They are the audio source with and without the tweak. For example the wav file and the MP3 file. X is the mystery source. It can be A or B. The listener must guess it comparing it to A and B.

But if the listener says that X is A, and X actually is A, what does this prove?
Nothing, of course. If you flip a coin behind my back and I state that it's heads, and I'm right, it doesn't prove the existence of para-psychic abilities that allow me to see what's behind my back. This is just luck, nothing more!
That's why a statistical analysis is necessary.

Let's imagine that after the listener has given his answer, the test is run again, choosing X at random each time, 15 more times. If the listener gives the correct answer 16 times, what does it prove? Can it be luck?
Yes it can, and we can calculate the probability of it happening. For each trial, there is one chance out of two of getting the right answer, and 16 independent trials are run. The probability of getting everything correct by chance is then (1/2) to the power 16, that is 1/65536. In other words, if no difference is audible, the listener will get everything correct one time out of 65536 on average.
We can thus choose the number of trials according to the tweak tested, the goal being to get a chance-success probability lower than the likelihood that the tweak actually has an audible effect.
For example, if we compare two pairs of speakers, it is likely that they won't have the same sound. We can be content doing the test 7 times. There will be 1 chance out of 128 of getting a "false success". In statistics, a "false success" is called a "type I error". The more the test is repeated, the less likely type I errors are to happen.
Now, if we put an amulet beside a CD player, there is no reason that it changes the sound. We can then repeat the test 40 times. The probability of success by chance will then be about one out of a trillion (2 to the power 40). If it ever happens, there is necessarily an explanation: the listener hears the operator moving the amulet, or the operator always takes more time to launch the playback once the amulet is away, or maybe the listener perceives a brightness difference through his eyelids if it is a big dark amulet, or he can smell it when it is close to the player...
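The trial counts quoted above (7 for the speaker comparison, 40 for the amulet) can be reproduced with a few lines of standard-library Python. This is just a sketch of the arithmetic; the function name is my own:

```python
import math

def trials_needed(target_p: float) -> int:
    """Smallest number of two-alternative forced-choice trials n such
    that a perfect score by pure guessing has probability (1/2)**n
    no greater than target_p."""
    return math.ceil(math.log2(1.0 / target_p))

print(trials_needed(1 / 128))  # 7 trials for the speaker comparison
print(trials_needed(1e-12))    # 40 trials for the amulet
```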

Let p be the probability of getting a success by chance. It is generally admitted that a result whose p value is below 0.05 (one out of 20) should be seriously considered, and that p < 0.01 (one out of 100) is a very positive result. However, this must be considered according to the context. We saw that for very dubious tweaks, like the amulet, it is necessary to get a very small p value, because between the expected probability for the amulet to work (say one out of a billion, for example) and the probability for the test to succeed by chance (1 out of 100 is often chosen), the choice is obvious: it's the test that succeeded by chance!
Here's another example where numbers can fool us. If we test 20 cables, one by one, in order to know if they have an effect on the sound, and if we consider that p < 0.05 is a success, then in the case where no cable has any actual effect on the sound, since we run 20 tests, we should still expect on average one accidental success among the 20 tests! In this case we can absolutely not say that the cable affects the sound with a probability of 95%, even though p is below 5%, since this success was expected anyway. The test failed, that's all.
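The 20-cable arithmetic above checks out; here's a minimal sketch (the variable names are mine):

```python
# If no cable does anything, each of the 20 tests still "succeeds"
# by chance with probability alpha = 0.05.
alpha = 0.05
n_tests = 20

expected_false_successes = n_tests * alpha     # about 1 accidental success
p_at_least_one = 1 - (1 - alpha) ** n_tests    # chance of >= 1 false success

print(expected_false_successes)
print(round(p_at_least_one, 3))  # roughly 0.64
```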

But statistical analyses are not limited to simple powers of 2. If, for example, we get 14 right answers out of 16, what happens? Well, it is perfectly possible to calculate the probability of that happening, but mind that what we need here is not the probability of getting exactly 14/16, but the probability of getting 16/16, plus that of getting 15/16, plus that of getting 14/16.
An Excel table gives all the needed probabilities: http://www.kikeg.arrakis.es/winabx/bino_dist.zip . It is based on the binomial distribution.
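For those without Excel, the same tail probabilities can be computed directly with Python's standard library (`math.comb`, available since Python 3.8). The 14-out-of-16 case works out to 137/65536:

```python
from math import comb

def p_value(hits: int, trials: int) -> float:
    """Probability of getting `hits` or more correct answers out of
    `trials` by pure guessing (one-sided binomial tail, p = 1/2)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

print(p_value(14, 16))  # (120 + 16 + 1) / 65536 = 137/65536, about 0.0021
print(p_value(16, 16))  # 1/65536, the all-correct case from earlier
```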
...

There's more, including rule of thumb, but I don't know what the borderline is on copying a post and copyrights.

All right. So we've got the "dichotomous" Moose protocol and the ABX. From what Piano's saying and from the googling, it looks like the audiophile community puts a lot of stock in ABX testing. Although, from what others have said, it's not really the whole of the audiophile community we're out to convince, just some of it.
 
Splitting posts so we don't have long post after long post after long post.

Moose protocol with necessary changes and DR:
Dress rehearsal/Dry run - We take two identical CDs of Peter and the Wolf. One is treated in my sight, the other is not. Both are played and I say, "Yep. I hear a difference all right," or, "No. There's no difference." On the former, we continue on; on the latter, we just get a pizza or go home or something.

Moose-
You need eleven bit-by-bit identical disks. One of these disks is designated (and identified) as the control and is guaranteed to not have been treated.

The other ten are each either treated or not treated by a coin toss, with the outcome recorded on paper in a sealed envelope and on continuous videotape.

It's okay if a rep for the applicant is present, so long as this rep does not touch the equipment or the coin, nor contact the applicant in any way between that point and the time of the test. If any of these things happen, the test is invalidated.

The treatment machine is borrowed/provided by JREF, set once prior to the initial treatment, and not adjusted until the treatment is done.

This is done entirely out of sight of the applicant.

Once this is done, all "treaters" leave the area. One neutral, who was not present for the treatment, enters and transports the CDs to the room where the applicant awaits with their preferred equipment setup. (This can include packaging and mailing, in terms of a remote test.)

The applicant then receives all eleven CDs, and determines by any means they desire which CDs have been treated and which have not.
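One consequence of the coin-toss step worth keeping in mind: with ten independent fair tosses there is a 1-in-1024 chance that no disk at all gets treated and, absent the guaranteed control, the same chance that all of them are. A quick sketch of the arithmetic, with names of my own choosing:

```python
from math import comb

TOSSED_DISKS = 10  # the eleventh disk is the guaranteed-untreated control

def p_exactly_treated(k: int) -> float:
    """Probability that exactly k of the ten coin-tossed disks end up treated."""
    return comb(TOSSED_DISKS, k) / 2 ** TOSSED_DISKS

print(p_exactly_treated(0))             # 1/1024: the degenerate "none treated" draw
print(p_exactly_treated(TOSSED_DISKS))  # 1/1024: every toss lands "treat"
print(1 - 2 * p_exactly_treated(0))     # 1022/1024: some mix of both
```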


ABX method:
D.R. same as before.

We have three copies of the same CD. One (A or B) is treated out of my sight, one (B or A) is not, and the last one's (X's) status is determined by coin toss. There will be sixteen of these triplets. The status of all sixteen As, Bs, and Xs will be recorded and placed into a sealed envelope. The rest of the procedure follows the same as Moose's.
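A minimal sketch of how the sixteen sealed-envelope assignments and the scoring could look; the function names and the seeded RNG are my own illustration, not part of the protocol:

```python
import random

def make_abx_assignments(n_triplets: int = 16, seed=None) -> list:
    """For each triplet, a coin toss decides whether X is a copy of
    "A" or of "B": this is the sealed-envelope record."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_triplets)]

def score(answers, envelope) -> int:
    """Count the trials where the listener's call matches X's identity."""
    return sum(a == x for a, x in zip(answers, envelope))

envelope = make_abx_assignments(seed=42)
print(len(envelope))              # 16 triplets
print(score(envelope, envelope))  # 16: what a perfect listener would score
```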

Most importantly, on either procedure, we follow this rule from where I found the ABX info:

The p values given in the table linked above are valid only if the two following conditions are fulfilled:
-The listener must not know his results before the end of the test, except if the number of trials is decided before the test.
...otherwise, the listener would just have to look at his score after every answer, and decide to stop the test when, by chance, the p value goes low enough for him.

-The test must be being run for the first time. If it is not, all previous results must be summed up in order to get the final result.
Otherwise, one would just have to repeat the series of trials as many times as needed to get, by chance, a p value small enough.
Corollary: only give answers of which you are absolutely certain! If you have the slightest doubt, don't answer anything. Take your time. Make pauses. You can stop the test and go on another day, but never try to guess by "intuition". If you make some mistakes, you will never have the occasion to do the test again, because anyone will be able to accuse you of making the numbers tell what you want, by "starting again until it works".
Of course you can train yourself as many times as you wish, provided that you firmly decide beforehand that it will be a training session. If you get 50/50 during a training and then can't reproduce this result, too bad for you. The results of the training sessions must be thrown away whatever they are, and the results of the real test must be kept whatever they are.
Once again, if you take all the time needed, be it one week of effort for only one answer, in order to get a positive result at the first attempt, your success will be mathematically unquestionable! Only your hi-fi setup, or your blind test conditions, may be disputed. If, on the other hand, you rerun a test that once failed, because since then your hi-fi setup was improved, or there was too much noise the first time, you can be sure that there will be someone, relying on statistical laws, who will come and question your result. You will have done all this work in vain.
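The "never peek, never restart" rule above can be demonstrated by simulation. Below is a sketch (my own, standard library only) of a listener who hears nothing but checks the running p-value after every answer and stops the moment it dips below 0.05; across many simulated runs, the false-positive rate climbs well above the nominal 5%:

```python
import random
from math import comb

def tail_p(hits: int, trials: int) -> float:
    # One-sided binomial p-value under pure guessing (p = 1/2).
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

def peeking_listener(rng, max_trials: int = 50, alpha: float = 0.05) -> bool:
    """A listener with no real ability who looks at the p-value after
    every answer and stops as soon as it drops below alpha."""
    hits = 0
    for n in range(1, max_trials + 1):
        hits += rng.random() < 0.5       # each answer is a pure guess
        if tail_p(hits, n) < alpha:
            return True                  # declares a "significant" result
    return False

rng = random.Random(0)
runs = 2000
rate = sum(peeking_listener(rng) for _ in range(runs)) / runs
print(rate)  # well above 0.05, despite there being nothing to hear
```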

So much for "Protocol in a Day," but admit it, we're getting quite far, quite fast.
 
LostAngeles said:
I don't see a problem with doing a "dress rehersal" initially.
We'd have to do one, except that you will almost certainly come to an "I can't tell the difference" conclusion and, as you suggest, we can all go get a pizza. Which is nice if you're hungry but doesn't justify the whole setup rigamarole.
 
There is a fine art in knowing how to express oneself briefly and concisely and when to shut up.

I will illustrate. There. See?

6915 words (the length of Piano Teacher's last post) is way beyond the pale.

And don't we have a rule against referencing Aristotle, Newton and Einstein all in the same post? To prevent meltdown? Of our brains?
 
Beleth said:
We'd have to do one, except that you will almost certainly come to an "I can't tell the difference" conclusion and, as Moose suggests, we can all go get a pizza. Which is nice if you're hungry but doesn't justify the whole setup rigamarole.

Ah, the pizza's LostAngeles' idea, not mine. Not a bad idea, though.

Personally, in this case, I suspect LostAngeles is going to have to fib a bit. Rather than claim that she can hear a difference between the controls, the dry run will probably end with a signoff that LA's satisfied with the proceedings and is ready to make the attempt.
 
Sherman Bay said:
There is a fine art in knowing how to express oneself briefly and concisely and when to shut up.

I will illustrate. There. See?

Drawing on my fine command of language, I said nothing.
-- allegedly said by Mark Twain

6915 words (the length of Piano Teacher's last post) is way beyond the pale.

Twenty-two scrolls of my scroll wheel to skip, in fact.
 
LostAngeles said:
*hands BPSG a nit comb*

Someone has to do it. :D
That's BPSCG, to you, madame (so long as we're picking nits). :D

And my hair isn't thick enough any more that any nits have much of a place to hide... :(
I thought Wellfed's claim was that the GSIC made an audible difference that he could detect. Mine is that it makes an audible difference that I can detect.
Okay, then I withdraw my objection. When you stated on page 1 here that
So, yeah, if the GSIC chip does work, it'll be wicked f***ing pissah. I'm not counting on it, but hey, maybe there is a really good explanation for it. That'd be even neater.:p :)
(emphasis mine) I interpreted that as your saying you didn't think you could do it.
The difference is that he's an audiophile and I'm an average Jane. They're identical in that we're testing a product, not a person -- a product that can do something that doesn't have an apparent scientific explanation.
I'm going to pick another nit (there's meds for that, right?): Your claim is not a test of the product; unless I misunderstood earlier posts, it's already been established that it does nothing physically detectable to the CD. Your paranormal claim is that you can detect differences in the quality of a CD's sound anyway.

Is that a fair statement of your claim?
 
BPSCG said:
That's BPSCG, to you, madame (so long as we're picking nits). :D

And my hair isn't thick enough any more that any nits have much of a place to hide... :(
Okay, then I withdraw my objection. When you stated on page 1 here that (emphasis mine) I interpreted that as your saying you didn't think you could do it.
I'm going to pick another nit (there's meds for that, right?): Your claim is not a test of the product; unless I misunderstood earlier posts, it's already been established that it does nothing physically detectable to the CD. Your paranormal claim is that you can detect differences in the quality of a CD's sound anyway.

Is that a fair statement of your claim?


There are meds for nits. I've had them twice as a kid. *shudder* Stupid daycare.

I suppose I should have said something closer to, "I'm not counting my chickens before they hatch." It's not like I'm actually thinking that I can withdraw my Financial Aid and internship applications or anything.

I thought the product did do something to the CD, since it's supposed to make the change permanent. I can tell the difference between two different systems. I could tell the difference after I had a friend redo the setup with the same set of wires on a system years ago. I can tell that the sound coming out of our two TVs has a different quality. I can say, "Wow. Your car stereo sucks." I'm under the impression that that's normal. What we're looking for is whether the GSIC can make a change on the same system such that Average Jane can tell.
 
LostAngeles said:
I thought the product did do something to the CD, since it's supposed to make the change permanent.

According to the manufacturer, it does permanently change the CD - so you're correct in your thinking. Of course, subsequent testing showed no apparent detectable changes, but that's where the "paranormal" aspect comes in, doesn't it? :)
 
Moose said:
Personally, in this case, I suspect LostAngeles is going to have to fib a bit. Rather than claim that she can hear a difference between the controls, the dry run will probably end with a signoff that LA's satisfied with the proceedings and is ready to make the attempt.
But...

But that would be wrong.
 
Beleth said:
But...

But that would be wrong.

Yes, it would, and it could also be the result of self-delusion.

*sigh*

Ok, look. There is no way I can convince any of you that I will not fib during the DR, or that I won't sabotage the test, or that there won't be any anti-GSIC bias. I completely accept that, and I'm not going to groan or sigh every time it comes up. I'll just point you to this post now, rather than the old thread.

Because should the test fail, we're going to have this same problem of GSIC supporters making these same allegations. We can't negate this problem within the test, so we're just going to have to accept it.

Beleth, Timothy, BPSCG, Piano, I'm not at all upset with you guys for asking these kinds of questions and being nitty. Like I told BPSCG, someone has to do it. BPSCG is the nit-picker, Timothy can be the suspicious one and Beleth can be the more suspicious one, Piano can be the wise, rambling man, Lisa Simpson can be the evil witch (I remember your CD suggestions, woman.), KRAMER can be the hot but bubbleheaded daughter, and webfusion can be the wacky neighbor.

So long as I get to be the Emperor Palpatine type, we're good. :D

(because the Rules say nothing about using The Force to convince them you've passed the test.)
 
