Beausoleil
A typical rock is made up of many minerals. Above a temperature (the closure temperature) that depends on both the mineral and the decay product, decay products can migrate out of the mineral as they form.
So for instance, if you have a rock with mica and K-feldspar and examine it in the K-Ar system, you might expect that the chronometer in the K-feldspar has withstood heating events that reset the chronometer in the mica: the rock got hot enough for argon to diffuse out of the mica but not hot enough for argon to diffuse out of the feldspar. If one is dealing with an igneous rock, one looks for several minerals that record the same age in order to date its formation - plotting the parent/daughter isotope ratios of those minerals and checking that they fall on a single line is the essence of the isochron method of dating.
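To make the isochron idea concrete, here is a minimal sketch in Python. The 87Rb decay constant is the standard textbook value; the mineral measurements are invented purely for illustration:

```python
# Rb-Sr isochron sketch: each mineral gives one (87Rb/86Sr, 87Sr/86Sr)
# point. Cogenetic minerals fall on a line whose slope m = exp(lambda*t) - 1
# and whose intercept is the rock's initial 87Sr/86Sr ratio.
import numpy as np

LAMBDA_RB87 = 1.42e-11  # decay constant of 87Rb, per year

# Hypothetical measurements from four minerals in one igneous rock
rb86 = np.array([0.5, 1.2, 2.8, 4.1])          # 87Rb/86Sr
sr86 = np.array([0.712, 0.726, 0.758, 0.784])  # 87Sr/86Sr

slope, intercept = np.polyfit(rb86, sr86, 1)
age_yr = np.log1p(slope) / LAMBDA_RB87

print(f"initial 87Sr/86Sr = {intercept:.3f}")
print(f"isochron age = {age_yr / 1e9:.2f} billion years")
```

With these made-up numbers the four points lie exactly on a line of slope 0.02, giving an age of about 1.4 billion years; with real data the scatter about the line is itself a check that the minerals shared a common origin.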
The age of a rock is a loose concept anyway. You examine the rock and see what processes have affected it. Then you examine the isotope systems of the various minerals and interpret the data in the light of what you learnt about its processing.
There are effects that change decay rates, but they are not encountered in terrestrial rock samples. For instance, stellar interiors are hot enough to excite nuclei into metastable states that have different lifetimes against beta decay. Of course, those temperatures are far higher than any rock can withstand, and in isotope dating any decay that took place before the rock formed has no effect anyway. Another example: fully ionised atoms (nuclei stripped of all their electrons) cannot undergo electron capture, the mode by which 40K decays to 40Ar.
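As an aside, the branching of 40K is exactly why the K-Ar age equation has the form it does: 40K decays both by beta-minus to 40Ca and by electron capture to 40Ar, so only the electron-capture fraction of the decays shows up as argon. A sketch, using standard decay constants and a made-up Ar/K ratio:

```python
# K-Ar age from the measured radiogenic 40Ar/40K ratio. Only the
# electron-capture branch of 40K produces 40Ar, hence the ratio of
# decay constants inside the logarithm.
import math

LAMBDA_TOTAL = 5.543e-10  # total 40K decay constant, per year
LAMBDA_EC = 0.581e-10     # electron-capture branch (to 40Ar), per year

def k_ar_age(ar40_over_k40: float) -> float:
    """Age in years from the radiogenic 40Ar/40K ratio."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40) / LAMBDA_TOTAL

# Hypothetical ratio, purely for illustration
print(f"{k_ar_age(0.05) / 1e9:.2f} billion years")  # ~0.70
```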
This emphasises the problem I mentioned above: the conditions that speed up beta decay do not speed up electron capture or alpha decay. Geochronology uses all three sorts of decay, so how could we get consistent ages if decay rates had altered?
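A toy calculation (numbers entirely made up) shows why this matters. In the small-lambda*t regime the accumulated daughter product scales with rate times time, so a past speed-up of one decay mode would shift the apparent ages of that mode's chronometers while leaving the others alone:

```python
# Toy consistency check: suppose beta decay had once run twice as fast
# while alpha decay was untouched. The apparent age read off with
# today's rates would inherit the speed-up factor (small-lambda*t
# approximation; all numbers hypothetical).
TRUE_AGE = 1.0e9    # years, made up
BETA_SPEEDUP = 2.0  # hypothetical past enhancement of beta decay

rb_sr_apparent = TRUE_AGE * BETA_SPEEDUP  # Rb-Sr uses beta decay
u_pb_apparent = TRUE_AGE                  # U-Pb chains are mostly alpha

print(f"Rb-Sr would read {rb_sr_apparent / 1e9:.0f} billion years, "
      f"U-Pb would read {u_pb_apparent / 1e9:.0f} billion years")
```

The observed agreement between beta, electron-capture, and alpha chronometers on the same rocks is therefore itself a tight constraint on any claimed change in decay rates.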