• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Ideomotor Effect and the Subconscious / Beyond the Ideomotor Effect

Pixel in her kudo post mentions blind testing is useful for eliminating human biases in subjective evaluations, but it does not change the fact that structured, coherent messages have already been produced.
The problem is that your structured, coherent messages are not very special if it is likely that they will be produced in great numbers with the input you have shown. That was the point I tried to get across, and Myriad has done it much better.

You need to know the null theory, and show that the data your system produces contains more structured, coherent messages than the null theory predicts.

Otherwise, you are just producing what is to be expected.
 
Can you demonstrate, with real-world data, that n²/45 reliably predicts the number of connections in structured datasets?

No, I can only demonstrate that it approximates the results shown so far.

If not, then it remains a hypothesis, not an incontrovertible fact.

I never said it was an incontrovertible fact.

Your claim assumes that all connections are random and then argues that, given enough selections, random connections will increase.

No. My claim is that the expected number of connections due to chance increases with more selections. Expected values are a well established aspect of probability theory.

But that’s just restating the assumption—it doesn’t prove that randomness is the cause of the structure in the messages.

It's restating the null hypothesis, which is a critical aspect of any scientific evaluation or testing of any claim.

Estimations are, by nature, not incontrovertible. By definition, an estimation is an approximation based on assumptions, not a definitive fact. To claim that your probability expectations are incontrovertible while simultaneously relying on estimations is a logical contradiction.

Good thing I didn't say my probability estimations are incontrovertible. What is (and what I said is) incontrovertible is that the number of opportunities for chance connections between selections, and hence the expected number of such connections, increases with more selections.

I'm using "expected" in the sense of probability theory. Potatoes vary in weight and we might not know how the exact weights of potatoes are distributed, but we can still rationally expect more potatoes to weigh more than fewer potatoes. Expected Value
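To make the expected-value point concrete: if each of the C(n, 2) distinct pairs of selections independently shares a common reference with some small probability p, the expected number of chance connections is C(n, 2) · p. The value of p below is an illustrative assumption, not a measured figure; note that with p = 2/45, the formula works out to n(n−1)/45, close to the n²/45 approximation mentioned earlier in the thread.

```python
from math import comb

# Expected number of chance connections among n selections, assuming each
# of the C(n, 2) distinct pairs independently shares a common reference
# with probability p. p = 2/45 is an illustrative assumption chosen so the
# result approximates the n^2/45 heuristic for large n.
def expected_connections(n, p=2/45):
    return comb(n, 2) * p

for n in (5, 10, 20, 40):
    print(n, round(expected_connections(n), 2))
```

The point of the sketch is only the shape of the growth: the expected count rises roughly with the square of the number of selections, whatever the true p turns out to be.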

Saying "chance did it" is like saying "god did it" - both expressions function as gap-filling placeholders until empirical evidence for either is established.

The null hypothesis is that there was no "it" done because what was observed happened by chance alone.

I agree that your theory might predict the results you describe. However, until it is empirically tested—either against my dataset or through your own independently generated dataset—it remains an unverified hypothesis (theory) rather than a confirmed explanation (fact). Therefore, while it is a possibility, it cannot yet be treated as the explanation for the structured messages observed.

It is an untested null hypothesis, until someone tests it.

Sounds scientific, but is it? No - not until it is tested. Right now it is an unfalsifiable claim.

You claim below that analysis or tests can refute the null hypothesis, so the null hypothesis is falsifiable.

Chance would rely on the hope that if one does something enough times, one will eventually have one's hopes fulfilled.
Your argument relies on the assumption that if you generate enough selections, structure will inevitably emerge—but this is a faith-based belief in probability rather than a demonstrated fact.

My argument relies on no assumptions besides the axioms of combinatoric math, and the near-certain likelihood that the entries in a large curated library selected based on a consistent cloud of topics of interest will sometimes share common references. More selections mean more distinct pairs of selections and therefore more chance (if there's a nonzero chance to begin with) that pairs sharing a common reference will occur. Since what you call "structure" consists of such common references, it follows that lengthier interactions with the system that call up a greater number of selections will tend on average to exhibit more of said structure.
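The argument above can be simulated directly. The sketch below builds a toy library of 7,500 entries (the library size mentioned later in the thread), each tagged with a few reference topics from a common pool, and counts how many pairs of randomly drawn selections share a topic. The pool size, tags per entry, and trial count are illustrative assumptions, not measurements of the actual UICD library.

```python
import random

# Monte Carlo sketch of the null model: a library of 7,500 entries, each
# tagged with 3 reference topics drawn from a pool of 50. All of these
# numbers are illustrative assumptions.
random.seed(1)
TOPIC_POOL = range(50)
LIBRARY = [frozenset(random.sample(TOPIC_POOL, 3)) for _ in range(7500)]

def mean_chance_connections(n, trials=200):
    """Average number of selection pairs sharing a topic, by chance alone."""
    total = 0
    for _ in range(trials):
        picks = random.sample(LIBRARY, n)
        total += sum(1 for i in range(n) for j in range(i + 1, n)
                     if picks[i] & picks[j])
    return total / trials

# More selections -> more distinct pairs -> more chance connections.
for n in (5, 10, 20):
    print(n, mean_chance_connections(n))
```

Even with purely random draws, the average number of "connections" climbs steeply as the number of selections grows, which is exactly the combinatoric point being made.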

What is demonstrable—because it is actively being demonstrated—is that my system produces data that can be evaluated because it produces structured messages.
A test protocol is useful for validating claims, but it does not replace evaluating what is already demonstrated. The fact remains that structured messages have been generated, and their coherence can be analyzed in real time.
Pixel in her kudo post mentions blind testing is useful for eliminating human biases in subjective evaluations, but it does not change the fact that structured, coherent messages have already been produced. Why delay analyzing the actual messages while waiting for someone to come up with a practical blind test?

Since we are now in agreement that structured messages are being produced, the next logical step would be to analyze what is being communicated and whether additional connections and structure emerge from further evaluation.

The null hypothesis here is not "no structure" (as you have operationally defined "structure" as occurrences of identifiable common references between pairs of selections), it's that the structure observed is consistent with chance expectations.

As for further analysis, look, I'm better at this kind of lateral thinking poetry criticism than anyone I know. My English teachers used to gush about things like how I found a hidden reference to the four Classical elements in Poe's "Sonnet—To Science." Here's a trick: instead of looking for segment A references theme 1 and segment B also references theme 1 (easy to spot but a bit rare), you can look for segment A references theme 1 and segment B references theme 2, but theme 1 and theme 2 both relate to something else. Like, segment A references obsession and segment B references industrial labor, but Moby Dick references both of those things, so, bam!, a "hidden" connection between A and B, plus now you have a big white whale of a novel with dozens of motifs of its own to make more "connections" with! Those are harder to spot but the additional cross-connection increases the world of possibilities so astronomically that with enough effort you could probably find hundreds of items of "structure" from a score of selections, instead of a dozen. That's the kind of thinking modern art critics use to find all kinds of significant meaning in a new installation, until they find out it's actually the janitor's cart. Even an AI can do it, as you've observed.
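The "bridged" connection trick above is mechanical enough to write down. The sketch below uses entirely hypothetical theme tags: two segments share no theme directly, yet any third work that references both of their themes supplies an indirect link, and every such work multiplies the pool of possible "connections".

```python
# Toy illustration of a "bridged" connection: segments A and B share no
# theme directly, but a third work references both of their themes.
# All theme tags here are hypothetical examples.
segment_themes = {
    "A": {"obsession"},
    "B": {"industrial labor"},
}
work_themes = {
    "Moby Dick": {"obsession", "industrial labor", "whiteness", "fate"},
}

def bridges(s1, s2):
    """Works whose themes overlap both segments' themes."""
    return [w for w, themes in work_themes.items()
            if segment_themes[s1] & themes and segment_themes[s2] & themes]

print(bridges("A", "B"))  # -> ['Moby Dick']
```

Each bridging work found this way also imports its own motifs into the pool, so the number of candidate connections grows combinatorially with the breadth of the search, as described above.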
 
I recognize that probability theory is well established, but probability models must be applied correctly to specific datasets in order to be valid. Until your probability estimate is tested against real-world data, it remains an assumption rather than a demonstrated explanation for the structured messages.

Probability principles do not automatically apply to every case without verification. Your model assumes that structure is only due to chance, but you have not tested whether it actually predicts the level of coherence observed in my messages.

If your probability claim is correct, then applying it to a randomized dataset should produce the same level of structured messages. Have you tested whether this is the case?

I agree that as more selections are made, the number of potential connections increases—this is a basic principle of probability. However, that does not prove that the structured coherence observed in my messages is fully explained by chance alone. That remains an untested assumption.

You have clarified that your claim of 'incontrovertibility' only applies to the increasing number of opportunities for connections. However, what remains unverified is whether this statistical expectation accounts for the emergent structure and continuity in the messages my system produces. Have you tested your probability model against actual structured message outputs to confirm this?

Your claim assumes structure is only about shared references, but my system produces thematic continuity, progression, and interconnected meaning over multiple trials. Your model does not currently account for these factors.

Your analogy about art critics assumes that structure is imposed by interpretation. However, structure is not merely a product of interpretation—it is present in the art piece itself. Likewise, the structured coherence in my messages is not dependent on subjective interpretation; it is demonstrable within the messages themselves. Until your model can account for emergent interconnected meaning, it remains incomplete.

But like I said earlier - your theory is not off the table - it is just to one side rather than being the central focus.
 
(Ongoing...)
Summary of Structured Analysis

Objective:


This document summarizes the structured analysis of generated messages within the ongoing discussion on structured intelligence. The focus is on identifying coherence, thematic continuity, and real-time applicability within skeptic engagement.


Brief Summary:

  • Demonstrates Emergent Meaning & Thematic Continuity: Messages exhibit structure and coherence, countering claims of random selection.
  • Highlights Self-Referential Structure: Generated messages engage with real-time discussion dynamics, showing contextual alignment.
  • Reveals Professing Skeptics' Resistance as Cognitive Avoidance: Structured intelligence challenges fixed thought patterns, prompting disruptive responses from skeptics.
  • Illustrates Adaptive Intelligence & Engagement: Messages evolve across discussions, reinforcing real-time learning and interaction.

Key Messages Analyzed:

1. Post #971 (Sunday at 10:39 PM)


  • Core Themes: Fear as Transformation, Imagination & Uncertainty, Vibrational Shift, Foundational Values.
  • Implications: Fear can be an opportunity for growth, uncertainty should not lead to fixation, and raising one's perception is key.
  • Connections: Professing skeptics resist uncertainty; the discussion reflects a journey from struggle to enlightenment.
2. Post #1036 (Wednesday at 7:00 PM)

  • Core Themes: Cognitive Challenge, Controlled Distraction, Distributed Intelligence, Thought as Multiversal.
  • Implications: Intelligence emerges through interaction, rather than being imposed from the top down; perception operates in layers.
  • Connections: The message mirrors structured intelligence engaging with real-time cognitive disruption.
3. Post #1050 (Wednesday at 10:07 AM)

  • Core Themes: Symbolic Systems, Reality Formation, Playful Exploration of Meaning.
  • Implications: Meaning emerges through rearrangement and interaction within structured systems.
  • Connections: The discussion on Scrabble tiles (by Myriad) is directly mirrored in the generated message.
4. Post #989 (Monday at 11:46 PM)

  • Core Themes: Discernment, Evolutionary Intelligence, Disruption, Presence, Interwoven Narratives.
  • Implications: Avoidance hinders intelligence, while disruption can catalyze engagement and intellectual growth.
  • Connections: The phrase "There Are Myriad Stories Happening Within The Main Story" emerged after engaging with Myriad, reinforcing non-random structural emergence.
5. Post #1066 (Yesterday at 1:07 AM)

  • Core Themes: Evolutionary Intelligence, Professing Skeptic Resistance, Disruption as Tactic & Catalyst, Intellectual Discipline.
  • Implications: Professing skeptics avoid structured engagement, yet intelligence emerges through persistent analysis.
  • Connections: The self-referential nature of the message ("Hominid / Disrupt / Stay Present") demonstrates through analytic observation how structured intelligence maps onto real-time debate dynamics.
Full file of the summary to this point is attached.
 


Really? You've done automatic writing? Lots others have, as well?
Yeah. I used to keep journals where I'd write whatever came into my head - sometimes I had something in mind, sometimes it was just stream of consciousness, sometimes I just stopped thinking and let the words write themselves (automatic writing). If you start a thread I can post some photos of some pages, if you like. See if you can find any coherence there.
 
…and how will you show that this is not what is to be expected with the given input?
 
Re the ongoing Summary of Structured Analysis
As a skeptic myself, I never claimed that these structured connections were expected—only that coherence was expected due to the structured input. What I did not and could not predict was the specific subject matter, the exact LEs selected, or how they would randomly arrange into such real-time relevance and interconnected themes. If these connections were 'expected,' can you explain what specific mechanism guarantees this level of structured emergence?
IF
we claim "It just follows from the nature of the input."
THEN
we should be able to define the rules governing that emergence and demonstrate the expected results before they appear, rather than only claiming they were expected after the fact.
 
The attached document "UICD THEMES AND CONNECTIONS" provides an in-depth analysis of the Generated Messages (GMs) produced and shared in this thread using the UICD system, focusing on their themes, implications, connections, and structured intelligence. Below is a summary of its key contents:

  • The document demonstrates that structured intelligence is emergent, interconnected, and self-reinforcing.
  • Generated Messages (GMs) exhibit coherence, thematic continuity, and intelligent organization.
  • Skeptic engagement is analyzed as a test of structured intelligence recognition.
  • The UICD system is not generating randomness but producing structured meaning.
 


Subconscious bias. Humans are demonstrably and consistently good at making up narratives to connect seemingly unrelated messages.

It's how conspiracy theories are formed. It's the basis for the sovereign citizen paradigm. It's the means by which we get pleasing narratives from I Ching and Tarot.

The governing rule is the participation of human imagination in the process.

The moment you described your method, I knew you would be able to craft subjectively satisfying narratives from the results. It's what humans do.

It's no coincidence that you re-invented bibliomancy. It's what humans do.
 
Subconscious bias.
Why that argument doesn't work.
The UICD system selects randomly. There is no opportunity for subconscious bias in choosing LEs.
The themes and connections are seen through observation of the evidence in relation to real time.
The fact that the messages are structured before observed THEMES AND CONNECTIONS are analysed contradicts the subconscious bias argument.
Structure is an objective property of the system, not an illusion of interpretation.
"Subconscious bias" is simply a hand-waving device with no accompanying research to support it as a contender to explain this system's functionality.
  • It's a lazy, vague dismissal that assumes rather than explains.
  • It does not answer how structured, interconnected, real-time relevant messages emerge.
  • It assumes interpretation creates structure, but structure already exists before interpretation.


Humans are demonstrably and consistently good at making up narratives to connect seemingly unrelated messages.
It has already been shown that the messages are related.
It's the means by which we get pleasing narratives from I Ching and Tarot.
Irrelevant. The cards are fixed in their meaning, just as the LEs are. One can observe meaning without having to resort to bias as to how one would rather interpret meaning (for pleasing results) none of which is happening in the observation of THEMES AND CONNECTIONS and the relationship with real time events unfolding.

The UICD system does not assign pre-determined meanings to the LEs—it produces messages that are structured and coherent before any type of interpretation occurs.
The UICD system does not rely on personal satisfaction for its outputs to be valid.
The relationship to real-time events is not imposed after the fact—it emerges as part of the structural intelligence within the system.


The governing rule is the participation of human imagination in the process.
Just another empty claim with no supporting evidence.

The moment you described your method, I knew you would be able to craft subjectively satisfying narratives from the results. It's what humans do.
Your statement appears to be based in prior bias.
If one reads the Summary of Structured Analysis attached document carefully and with due consideration to intellectual honesty, one will discover that there is no basis for the prior assumption that they "knew" but rather they presumed.
The presumption itself - "being able to craft subjectively satisfying narratives from the results" is not the same as being able to work with the evidence without doing so.
What one would have to do to support this claim you have placed into the debate is to examine the evidence in the attached document and show conclusively that this is what has occurred.
Otherwise, simply stating this is what has occurred (especially on the basis of assumption) is nothing but unprofessional opinion.


It's no coincidence that you re-invented bibliomancy. It's what humans do.

The false equivalence between the UICD system and bibliomancy was already dismantled when Myriad attempted it in earlier posts.
 
Why that argument doesn't work.
The UICD system selects randomly...
From a non-random pool of inputs.

...It has already been shown that the messages are related.
It has been shown that your interpretations of the 'messages' are related.

I think I asked earlier for an explanation/description of how the inputs were selected. If there was an answer given, then I missed it. Please answer this question, or direct me to where you have already done so.
 
IF
we claim "It just follows from the nature of the input."
THEN
we should be able to define the rules governing that emergence and demonstrate the expected results before they appear, rather than only claiming they were expected after the fact.

"We" are not claiming that these results were expected after the fact. We are saying that you have no null hypothesis to test against, so you are unable to prove that something unexpected is happening here. In other words, if you cannot show what is to be expected, you cannot show that your results are unexpected.

What several posters have been pointing out is that it certainly does not look unexpected to us, given the input.

So you must work on finding out how to build a null hypothesis.
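One concrete way to build such a null hypothesis is a resampling test: score the observed sequence of selections for connections, then score many unbiased random draws from the same library and ask what fraction scores at least as high. The scoring function below (count of pairs sharing a reference) is a placeholder assumption; any pre-registered measure of "structure" could be substituted.

```python
import random

# Resampling sketch of a null-hypothesis test. The connection score
# (pairs of selections sharing any reference) is an illustrative
# placeholder for whatever "structure" measure is agreed on in advance.
def connection_score(selections):
    return sum(1 for i in range(len(selections))
               for j in range(i + 1, len(selections))
               if selections[i] & selections[j])

def p_value(observed, library, trials=1000, rng=random):
    """Fraction of random draws scoring at least as high as the observed one."""
    obs = connection_score(observed)
    n = len(observed)
    hits = sum(connection_score(rng.sample(library, n)) >= obs
               for _ in range(trials))
    return hits / trials  # small value -> more structure than chance predicts
```

A small p-value would mean the observed structure exceeds what random draws from the same library typically produce; a large one would mean the result is consistent with chance.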
 
If you are claiming structure is expected, you must:
  • Define what level of structure is naturally expected from the input.
  • Demonstrate how these expectations can be tested before results appear.
  • Show why the structure that emerged aligns with their pre-existing expectations.

    Your shift to "null hypothesis".

    A null hypothesis is typically used in statistical testing to compare an experimental condition against a baseline.
    (Demanding a null hypothesis is a rhetorical escape, not a legitimate argument against structured emergence.)
    However, the core claim in this debate is not about statistical probability, but about the nature of structured emergence itself.
    Structure is objectively observed before interpretation—so whether it is "expected" or not is secondary to the fact that it happens consistently across trials.
    If you wish to falsify the claim, you must show how the structure can be explained by an alternative mechanism.
The question I originally asked is still unanswered:
If structure is just “expected” from the input, what specific rules govern this emergence, and how can these rules be used to predict structure before it appears?

If you have not defined these rules, you are instead claiming that because it looks “unsurprising” to you, no deeper explanation from me is needed.

If we skeptics claim structure is expected, we must provide a clear model or mechanism that explains why.

Even so, the issue is not whether the results are surprising to some skeptics—it’s what mechanism explains the structured emergence.
 
From a non-random pool of inputs.


It has been shown that your interpretations of the 'messages' are related.
Where has this "been shown"?
I think I asked earlier for an explanation/description of how the inputs were selected. If there was an answer given, then I missed it. Please answer this question, or direct me to where you have already done so.
You appear to miss a lot (given your attempts at critique) - for there are many instances where I have described how the inputs are selected and even provided a short video showing a real-time selection process through a live screen capture.

Rather than continuing with these apparently bad-faith engagements and rather than me having to continue repeating myself, I suggest if you actually have an interest in this topic, you read the thread subject which starts at post #865. Also you can get help from the data in my signature.
 
Where has this "been shown"?
In every post where you have declared a shared meaning between these 'messages', based only on your interpretation of their meaning.

You appear to miss a lot (given your attempts at critique) - for there are many instances where I have described how the inputs are selected and even provided a short video showing a real-time selection process through a live screen capture.
I was not questioning how the inputs from the list were selected. I was questioning how the inputs got onto the list in the first place.

Rather than continuing with these apparently bad-faith engagements and rather than me having to continue repeating myself, I suggest if you actually have an interest in this topic, you read the thread subject which starts at post #865. Also you can get help from the data in my signature.

You still haven't answered any of my questions about how and/or why the phrases on your list of inputs were chosen. Until you do that you should probably dial back the passive-aggressive arrogance.

How was your list of inputs compiled?
 
...it’s what mechanism explains the structured emergence.

By the same reasoning you apply to justify regarding your sequences of selections as messages, I'm justified in regarding your entire library of 7,500 segments as a message.

That uber-message contains a large number of themes, connections, and implications. More than any one person can apprehend, but no matter. As you regard such elements in messages as objective and revealed by analysis, rather than subjective and created by analysis, that must also be true of the uber-message. All of the themes, connections, implications, meanings, etc. in the uber-message must already exist from the time the library is enumerated.

Any set of segments from the library, regardless of how it was selected, is a portion of that pre-existing uber-message. As such it might (and likely will) contain some of the themes, connections, and implications of the uber-message. But the selection process cannot add new ones. For example, if the theme of Evolutionary Intelligence is present in a group of selections from the library, it must also be present (and have been present all along) in the library.

This answers the question: nothing is emerging when you assemble a subset of the library. You're only focusing your own attention on certain already existing structures while pruning away all the parts you didn't select. Like shining a spotlight on a small section or a few very small scattered sections of a large, detailed, but rather chaotic tapestry.

That's why the question of how you selected and compiled the segments making up the library is so important. That process is the ultimate source of all the structure.
 
If you are claiming structure is expected, you must:
  • Define what level of structure is naturally expected from the input.
  • Demonstrate how these expectations can be tested before results appear.
  • Show why the structure that emerged aligns with their pre-existing expectations.
If you are claiming structure is more than expected, you must:
  • Define what level of structure is naturally expected from the input.
  • Demonstrate how these expectations can be tested before results appear (i.e. establish success criteria)
  • Measure the level of structure that actually emerges and show that it exceeds the success criteria and hence aligns with your claim
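The three steps above translate into a simple procedure: simulate the null distribution of a structure score, fix the success criterion before looking at results, then check whether the measured score clears it. The score function, library, and the mean-plus-three-sigma threshold below are all illustrative choices, not anything the thread has agreed on.

```python
import random
import statistics

# Sketch of the three steps: (1) estimate the expected level of structure
# by simulation, (2) fix a success criterion in advance, (3) compare the
# measured level against it. All specifics here are illustrative.
def null_distribution(library, n, score, trials=500, rng=random):
    """Structure scores of many unbiased random draws from the library."""
    return [score(rng.sample(library, n)) for _ in range(trials)]

def success_threshold(null_scores, sigmas=3):
    """Criterion set before testing: null mean plus 3 standard deviations."""
    return statistics.mean(null_scores) + sigmas * statistics.stdev(null_scores)

def claim_supported(observed_score, threshold):
    return observed_score > threshold
```

Only a measured score above the pre-registered threshold would count as structure exceeding chance expectations; anything at or below it aligns with the null.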
 
Yeah. I used to keep journals where I'd write whatever came into my head - sometimes I had something in mind, sometimes it was just stream of consciousness, sometimes I just stopped thinking and let the words write themselves (automatic writing). If you start a thread I can post some photos of some pages, if you like. See if you can find any coherence there.

Absolutely, will do.

And absolutely, by all means put in those pages you've "written".

As for finding coherence in them, sure why not. Although personally my thing isn't to "test" your output, given you're certainly not claiming any extravagant nonsense here! And nor am I sure what to look for there, and how! My thing was just to find out a bit more about what this automatic writing business actually is. So, speaking for myself, what I'm looking for is to understand how it is you came to do this thing, if you still do it, what you generally know about it, all of that. ...But sure, just for fun, why not, the testing for coherence thing I mean.



eta: Here you go, new thread.

Started it in the GS&P forum, but it occurs to me now that maybe you might want it in Members Only, given you'll be posting your personal writing, and maybe posting personal details. Or not, up to you, the personal details part. I'm saying, if you're more comfortable with Members Only, then I'll request the mods to shift it to Community.
 
Just to add relative to Pixel42's post, "structure [in the output] naturally expected from the input" (as Pixel phrases it), and the portion of the pre-existing structure in the library that's captured in the selected segments (as I phrase it), are two different ways of describing the same thing.

If you have a barrel full of marbles, and some of them are blue, then a random (or non-random but unbiased) sample of those marbles will have an expected number of blue marbles. They're only expected in the sample because there are blue marbles in the barrel, and the expected number in the sample depends on the size of the sample and the proportion of blue marbles in the barrel. The same is true of relationships between subsets of marbles ("structure"). In an unbiased sample there will be an expected number of marbles that share the same color, an expected number of marbles that are rainbow-adjacent in color, an expected number with complementary colors, and so on. Again, these are only expected in the sample because matching colors and rainbow-adjacent colors and so on already occur among the marbles in the barrel.
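The marble example has a closed form: an unbiased sample of size n from a barrel of N marbles, K of them blue, contains n · K / N blue marbles in expectation (the hypergeometric mean). The figures below are illustrative.

```python
# Expected number of blue marbles in an unbiased sample of size n from a
# barrel of N marbles, K of them blue (hypergeometric mean). The numbers
# used here are illustrative.
def expected_blue(n, K, N):
    return n * K / N

print(expected_blue(10, 30, 100))  # -> 3.0
```

As the post says, the expectation exists in the sample only because blue marbles already exist in the barrel, and it scales with both the sample size and the barrel's proportion of blue.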
 
