Indeed. Still, my problem is not that patterns have a clear ontological status, but that there are more patterns than those on which we focus our attention. For instance, our humanity (read: our need for survival) conditions which patterns are relevant for us and which are not.
That's true in everyday life. But there is "pure" research where knowledge is pursued for its own sake.
So, for instance, I believe the whole categorization of what an object is (ontologically speaking) is contaminated with these anthropocentric assumptions (it is solid, occupies space, has a color, etc.). It can be argued that objects are in constant flux with their surroundings, so there is no clear boundary at which to draw a line between what we call "this object" and what we call "other objects".
It can be, and has been, argued — by Heraclitus, who said change is constant.
At the other pre-socratic extreme, Parmenides said change is impossible.
Plato reconciles them by saying change is constant, but that there are durable patterns within change which themselves don't change. He calls these patterns Forms or Ideas, and because they don't change, he says they are more real than the objects of sense, which do.
Then his star student Aristotle disagrees, and away we go...
A way to do this is to take time (as it appears to us) out of the equation. For example, leaving a camera's shutter open for a 40-second exposure will show a very different shape for some objects (those which move in the scene). Why should our predetermined time window (about 20 msec, IIRC) be preferred over a 40-second one for defining the shape of moving objects?
It shouldn't be, when we can better our senses. That's why science is the preferred standard for objectivity. Instruments extend our senses (and minds: "instrumentality"). They don't "blink" (unless they're on the blink): when they have to, their sensory buffers reset faster than ours. In a sense, of course, the limitations of the instrument are another sort of phenomenological bracket, which experimenters notate as significant error (measurement = X ± x).
Agreed, but as in the example above, we register only some particular, isolated variables. It's like the parable of the blind men asking what an elephant is: experiments are designed to show us a particular face, but that face reflects us.
Well, we will always be prisoners of our phenomenological cages, of the limits of our senses; but so long as we're not after absolute knowledge, getting as close as we do in science — obtaining reliable working knowledge of the properties of objects — makes a good working definition of "objectivity".
Rather than blind men, science is more like men with X-ray vision, etc., examining the elephant, then examining each other's examinations, then re-examining, and re-examining, and...
Because science doesn't deal in absolutes, it's never-ending: the "blind" men are always building better instruments, and, like gnats, they can never leave the poor elephant alone.
To put it another way: I worked at a newspaper years ago, and they used to say that journalism was about stating facts, not giving interpretations (fools). A photograph was held up as an example of a tool for presenting just the fact, without "contaminating" it with opinion. The thing is, a photograph implies the particular POV of the photographer. Another way to say it: the photographer is also in the picture, "contaminating" it.
Sure, a photograph is a choice by the photographer, made according to what she feels is relevant to the story. So the photograph is an 'objective' record of what was in the camera's eye at that moment, and also of where the photographer chose to point the camera, etc. Readers should be sensitive to bias; you're right.