Wednesday, May 23, 2007

Occam's Razor

It may be refreshing at this point to review the insight of the medieval philosopher William of Occam.

"One should not increase, beyond what is necessary, the number of entities required to explain anything"

Occam's razor is a logical principle attributed to the medieval philosopher William of Occam (or Ockham). The principle states that one should not make more assumptions than the minimum needed. This principle is often called the principle of parsimony. It underlies all scientific modelling and theory building. It admonishes us to choose from a set of otherwise equivalent models of a given phenomenon the simplest one. In any given model, Occam's razor helps us to "shave off" those concepts, variables or constructs that are not really needed to explain the phenomenon. By doing that, developing the model will become much easier, and there is less chance of introducing inconsistencies, ambiguities and redundancies.
Though the principle may seem rather trivial, it is essential for model building because of what is known as the "underdetermination of theories by data". For a given set of observations or data, there is always an infinite number of possible models explaining those same data. This is because a model normally represents an infinite number of possible cases, of which the observed cases are only a finite subset. The non-observed cases are inferred by postulating general rules covering both actual and potential observations.
For example, through two data points in a diagram you can always draw a straight line, and induce that all further observations will lie on that line. However, you could also draw an infinite variety of the most complicated curves passing through those same two points, and these curves would fit the empirical data just as well. Only Occam's razor would in this case guide you in choosing the "straight" (i.e. linear) relation as best candidate model. A similar reasoning can be made for n data points lying in any kind of distribution.
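As a minimal sketch of this underdetermination (written here in Python with NumPy; the two data points and the alternative curve are purely hypothetical illustrations):

import numpy as np

# Two hypothetical observations
x_obs = np.array([0.0, 1.0])
y_obs = np.array([0.0, 1.0])

# Candidate 1: the straight line through both points
line = np.poly1d(np.polyfit(x_obs, y_obs, 1))

# Candidate 2: a more complicated curve that also passes exactly
# through the same two points
def curve(x):
    return np.sin(0.5 * np.pi * x) ** 3

# Both candidates reproduce the observations perfectly ...
print(line(x_obs))   # [0. 1.]
print(curve(x_obs))  # [0. 1.]

# ... but they disagree on unobserved points, so the data alone
# cannot choose between them; Occam's razor picks the line.
x_new = np.array([0.5, 2.0])
print(line(x_new))   # [0.5 2. ]
print(curve(x_new))  # [0.354 0.   ] (approximately)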
Occam's razor is especially important for universal models such as the ones developed in General Systems Theory, mathematics or philosophy, because there the subject domain is of an unlimited complexity. If one starts with too complicated foundations for a theory that potentially encompasses the universe, the chances of getting any manageable model are very slim indeed. Moreover, the principle is sometimes the only remaining guideline when entering domains of such a high level of abstraction that no concrete tests or observations can decide between rival models. In mathematical modelling of systems, the principle can be made more concrete in the form of the principle of uncertainty maximization: from your data, induce that model which minimizes the number of additional assumptions.
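One common way to make "uncertainty maximization" concrete, offered here only as an interpretation of the phrase above, is the maximum-entropy rule: among all probability assignments consistent with the data constraints, choose the one with the largest entropy, since it builds in the fewest extra assumptions:

$$ p^{*} = \arg\max_{p}\Big(-\sum_i p_i \ln p_i\Big) \quad \text{subject to} \quad \sum_i p_i = 1,\;\; \sum_i p_i f_k(x_i) = \bar{f}_k . $$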
This principle is part of epistemology, and can be motivated by the requirement of maximal simplicity of cognitive models. However, its significance might be extended to metaphysics if it is interpreted as saying that simpler models are more likely to be correct than complex ones, in other words, that "nature" prefers simplicity.

It will be awfully redundant at this point to reiterate that the theory of dipole gravity doesn't have any assumptions other than the ones general relativity itself is based on. With such a minimal number of assumptions, the range of cosmological problems it touches and provides answers to is truly remarkable. It can be wrong only if general relativity itself is wrong. This perspective gives us a compelling reason to test the predictions of dipole gravity in a terrestrial experiment as soon as possible.
http://dipoleantigravity.blogspot.com/2007/04/alternative-method-of-detecting-dipole.html

Monday, May 21, 2007

Further Digression on the Mechanical Universe

The following discussion is not entirely about gravity, so it is debatable whether it belongs in a blog devoted to dipole gravity and gravity in general, but there is enough relevance to justify including it here rather than putting it in a separate category.

The topic here is still the tachyonic neutrinos. For the gravitational phenomena, we strictly confined our focus to the kinematic elastic interaction of the tachyonic neutrinos with the baryonic matter particles. In this article, however, we will focus on the non-kinematic interaction with the tachyonic neutrinos. This interaction is not totally kinematic, in the sense that there is a weak-interaction effect between the neutrinos and the electrons, muons and tauons (the leptons). It is not a high-energy interaction either, because no new particles are generated from it as in deep inelastic scattering. Nor is it exactly a billiard-ball style collision, since the interaction can still be effective over a finite distance even though there may be no direct contact between the particles involved. There can also be a vector effect, as in the case of electrons traveling in a magnetic field. The effective cross section can therefore be much larger than that of the kinematic elastic collision. For example, there are unconfirmed speculations that neutrinos may themselves be magnetic monopoles. If that is the case, a fast-moving magnetic monopole will create a circular electric field along its path, exactly as a moving electron creates a magnetic field around its own path. Electrons could then be affected by the fast-moving neutrinos at a relatively long distance from their path, which would make the interaction cross section very large. With so many fast-moving tachyonic neutrinos, space would be like a bubble bath of electric fields popping in and out, barely managing to cancel one another and so avoid creating any net electric field in the local spacetime, by virtue of the neutrinos' homogeneous presence and the isotropy of their motion.
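For reference, the analogy invoked above can be written out explicitly (a standard textbook relation and its electromagnetic dual; a proportionality sign is used for the monopole because the prefactor depends on the unit convention chosen for magnetic charge). A slowly moving electric charge $q_e$ and a slowly moving magnetic charge $q_m$ produce fields with the same circular geometry around the particle's path:

$$ \mathbf{B}_{\text{charge}} = \frac{\mu_0}{4\pi}\,\frac{q_e\,\mathbf{v}\times\hat{\mathbf{r}}}{r^2}, \qquad \mathbf{E}_{\text{monopole}} \propto -\,q_m\,\frac{\mathbf{v}\times\hat{\mathbf{r}}}{r^2}, $$

both falling off as $1/r^{2}$ rather than being confined to a contact-like range.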

The situation we are going to imagine in this picture is a single hydrogen atom bombarded by the etheric tachyonic neutrinos. The proton will largely be able to maintain its position because of its larger mass compared to the electron, although its precise position will still not be absolutely determined, depending on the strength of the cross sections involved. The position of the electron, however, will be very uncertain. If the tachyonic neutrinos have enough energy, a high enough number density and a strong enough cross section, they will collectively manage to disturb the electron's position frequently and strongly enough to keep it separated from the proton and maintain the Bohr radius. Despite the extremely small physical size of the electron, the weak-interaction coefficient, stronger than that of the kinematic interaction, will be enough to keep the electron afloat from the proton by about a Bohr radius in the ground state. If this is the case, the Planck constant h becomes a function of the number density, the mean velocity and the weak-interaction cross section between the tachyonic neutrinos and the elementary particles.
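For reference, the dependence of the atom's size on Planck's constant is the standard textbook relation (not a result of this model):

$$ a_0 = \frac{4\pi\varepsilon_0\hbar^{2}}{m_e e^{2}} \approx 5.29\times10^{-11}\ \mathrm{m}, $$

so if $\hbar$ were itself set by the number density, mean velocity and cross section of the background neutrinos, as suggested above, the size of the atom would track those quantities through $a_0 \propto \hbar^{2}$.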

The uncertainty principle in quantum mechanics is caused by this statistical nature of the background tachyonic neutrinos interacting with the elementary particles. For example, quantum mechanics predicts that a particle placed at the origin x = 0 with initial velocity V = 0 has less and less chance of being found at the same place as time goes by. This is a clear violation of Newtonian mechanics. It can only mean that there must be some particles causing this uncertainty in the elementary particles without disturbing their statistical overall average momentum and position. This points to the key properties of the background particles: their flux must be isotropic and homogeneous. So, even when the statistical effect is fully applied, the overall averaged initial location and momentum should remain the same. The question of where the electron is after a while, at time t = t0, has no meaning in quantum mechanics; we simply don't know. The most probable location where the electron will be found is still x = 0, but the probability of finding it there will be spread out so thinly throughout space that it becomes meaningless to ask exactly where the electron is located. In a space where tachyonic neutrinos are prevalent, this is exactly what will happen to elementary particles like electrons.
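The standard quantum-mechanical statement of this spreading, quoted here only as the textbook result being referred to, is that a free Gaussian wave packet of mass m and initial width $\sigma_0$ broadens as

$$ \sigma(t) = \sigma_0\sqrt{1 + \left(\frac{\hbar t}{2 m \sigma_0^{2}}\right)^{2}}, $$

so the probability of finding the particle near x = 0 decreases with time even though the mean position and mean momentum remain zero.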

Now we see how the two phenomena, gravitation and quantum mechanics, seemingly unrelated to each other, have the same origin. All the predictions of quantum mechanics will remain valid; the only difference is that we now know it is an excellent phenomenology, because while it worked and predicted nature so well, it still did not provide a clear answer to the mechanically inquisitive human mind. As Einstein put it, "God does not play dice." We humans want to know, in mechanical terms, why something works if it really works, and what is behind it. "Trust me, it has worked a million times in the past and it will work in the future, so ask no more questions about its validity" would not be enough.

So, there is a possibility that inside stars, where the tachyonic neutrinos cannot penetrate in sufficient numbers, the Bohr radius of the hydrogen atoms could be much smaller than it would be in open space (assuming the hydrogen atoms could manage to remain individual atoms inside the star), because the tachyonic neutrinos would disturb the electron's position far less frequently, and so would not keep it far enough from the proton to maintain the same Bohr radius it would have in open space. Since light cannot penetrate a bulk object, it will be hard to tell whether the light we detect includes quanta generated in the core of the star. Most of the visible light we observe comes from the surface of the stars, so observing that light will not provide a clear answer to this question.

Another interesting question is what would happen if all the tachyonic neutrinos simply disappeared. All matter and stars would collapse into tiny dots in the universe, and the energy released would be so large that the big bang would be an insignificant event by comparison. On the other hand, there would be no gravity, so the universe would become a murky soup of clouds: dark, dull, lifeless, a meaningless emptiness spread over all of space, if there were still anything left that could be called "space".

In fact, the meaning of space itself would become vague and uncertain, because the pressure of the tachyonic gas seems to define the space we observe, much as the pressurized air molecules inside a balloon define the space of the balloon itself. In this sense, the tachyonic neutrinos, rather than the material particles we observe and live among, are the main building blocks of the universe. The material structures (stars, galaxies and so on) are like sea sponges floating in the deep water of the tachyonic neutrino gas spread throughout the universe.