Setterfield Light Speed Research

 

Introduction
Before 1941
Setterfield's Work
Other Physical Processes if c is not constant
Are changes in light speed real?
Why is c not measured as changing now?
On the Measurement of Time, and the Velocity of Light
Regarding Work on light speed by some others
-- Moffat
-- Albrecht and Magueijo
-- Davies
Supportive and Explanatory Essays by others
Does current technology change the results?

The attitude as of 2011

Lambert Dolphin's page of collected relevant links and abstracts

 

Introduction: In 1987, Barry Setterfield and Trevor Norman submitted, by request, to Stanford Research Institute International a white paper (meant for internal discussion only) regarding changing values for the speed of light. The paper was then published under the name The Atomic Constants, Light and Time, by Flinders University in South Australia. The paper caused a bit of an uproar in some circles, as educational establishments and general science had, for some years, been teaching that the speed of light was constant. However, the data showed otherwise. When the data and their handling were challenged, Lambert Dolphin, a physicist, and Alan Montgomery, a statistician, set out to find out whether Setterfield and Norman had handled the data correctly. Their several reports are linked below. Since that time, a number of other researchers have dealt with the issue of a possible change in the speed of light, and some of their material is also linked below. Barry Setterfield has continued his research through the years, with curiosity about where the data might lead. His papers here on this website are a result of that research, which continues.

A brief history of the speed of light research was written by Helen Setterfield in July of 2002 for Koinonia House, and a complete survey of the history of the speed of light experiments is available on this website.

Before 1941

Question:  Was this material about the speed of light changing talked about before?

Setterfield: Between 1880 and 1941 there were over 50 articles in the journal Nature alone addressing the topic of the decline in the actual measured values of lightspeed (c). For example, in 1931, after listing the four most recent determinations of c, De Bray commented in Nature: "If the velocity of light is constant, how is it that, invariably, new determinations give values which are lower than the last one obtained ...? There are twenty-two coincidences in favour of a decrease of the velocity of light, while there is not a single one against it" (his emphasis). The interest was world-wide, and included French, English, American, German and Russian scientists. In addition, these discussions included some consideration of the fate of the newly developing concept of relativity if c were not a constant.

The whole discussion was brought to a close in August of 1941 by Professor R. T. Birge in an article dealing with the changing values of the atomic constants "With special reference to the speed of light" as the title stated. Birge's first paragraph raised many questions. In part it read: "This article is being written upon request, and at this time upon request.... Any belief in a change in the physical constants of nature is contrary to the spirit of science" (his emphasis) [Reports on Progress in Physics (Vol. 8, pp.90-100, 1941)]. Although this article effectively closed the whole discussion, the data trend continued. This was documented in our 1987 Report.

Please see Table A in the 1987 Report. These statistics illustrate the fact that, in a situation where c was measured as changing, it was nonetheless true that at a given date (1882), three different methods of measuring c obtained the same result to within 5 km/s. In other words, these methods were giving consistent results. This is an important point. It was picked up by Newcomb in 1886 when he stated in Nature that the results obtained around 1740 by the two methods employed then gave consistent results, but those results gave a value for c that was about 1% higher than in his own day. Later, in 1941, history repeated itself. In that year Birge commented on the results that were obtained in the mid-1800’s by the variety of methods employed then. He acknowledged that "these older results are entirely consistent among themselves, but their average is nearly 100 km/s greater than that given by the eight more recent results." What cannot be denied is that there was a systematic drop in the values of c obtained by all methods. Even Dorsey, who was totally opposed to any variation in c, was forced to concede this point. He stated "As is well known to those acquainted with the several determinations of the velocity of light, the definitive values successively reported…have, in general, decreased monotonously from Cornu’s 300.4 megametres per second in 1874 to Anderson’s 299.776 in 1940…"

 

Comment: Perhaps he [Birge] had in mind the idea that c was varying, just by itself, without being given a dynamics from its own term in the physical action S. That idea would violate the time-translation invariance, and likely the space-translation invariance also, of physical law. In addition to blaspheming the uniformity of nature (which could evoke the "contrary to the spirit of science" remark), it would destroy the conservation of energy and momentum. That is one reason why I so strongly urge that any such theory be presented formally as a relativistic classical field theory, in which Noether's theorem(s) ensure energy and momentum and angular momentum conservation.

Setterfield:  First of all, you will note in the 1987 Report that there is a consistent trend in seven atomic quantities on the basis of conservation of energy, as shown in Table 24.

The issue of relativity is discussed in the report in the section entitled The Speed of Light and Relativity.

Here is the point. All methods of measuring c have given consistent results at any given time, but the values of c have dropped with time. This can be illustrated by the results from Pulkova Observatory alone where the value of c was obtained from the aberration method first used by Bradley. The Pulkova Observatory results were obtained using the same equipment on the same location by experienced observers over a long period of time. The observational errors remained unchanged, as did the inherent accuracy of the equipment. Nevertheless, from 1750 to 1935 the value of c obtained from this observatory dropped by more than 750 km/s. When all the results are in, each method individually revealed the drop in c, as did all methods lumped together. These results are backed up by changes in other atomic constants that are associated with c through the conservation of energy. One reviewer of the 1987 Report, who had a preference for the constancy of atomic quantities, noted that instrumental resolution "may in part explain the trend in the figures, but I admit that such an explanation does not appear to be quantitatively adequate." The Pulkova results prove that this is not the cause of the trend. Indeed, when the professor of statistics at Flinders University examined the data in the Report, he felt that a prima facie case existed for c decay and asked us to prepare a seminar for the Maths Department. In all the subsequent discussion and turmoil that the 1987 Report engendered, he stood by this assessment.

 

 Question: How small are the differences in the speed of light that we are talking about, since instrumentation became accurate?

Setterfield: The difference in the speed of light that we are talking about since instrumentation became accurate (about 1700) is about 3,000+ kilometers per second; that is, about 1% higher than today's value. Some comments from the journals may help here:

From 1882 to 1883, Professor Simon Newcomb measured the speed of light in a series of definitive experiments. At the same time Albert Michelson had independently performed a series of experiments to determine the speed of light, and Nyren had determined the speed of light by the aberration method. The value obtained by these three experiments was 299,854 +/- 5 kilometers per second. In other words, they were in agreement to within 5 km/s. In 1886, Professor Simon Newcomb admitted that the definitive values accepted in the early 1700's were 1% higher than in his own day. [Nature, 13 May, 1886, pp 29-32]

Interestingly, history repeated itself. In 1941, Professor R.T. Birge, in looking over the most recently obtained values for the speed of light, commented that the measured values of c from the 1880's "are entirely consistent among themselves, but their average is nearly 100 km/s greater than the eight most recent values." [Report on Progress in Physics, vol. 8. pp 90-101, 1941]

So the difference was noticeable.

However, you raise the validity of the early measured values of lightspeed by Roemer and Bradley. Let us take Bradley's aberration measurements first. The same method using the same equipment was being employed at Pulkova observatory from about 1780 until about 1940. The data collected by this aberration method from Pulkova using the same equipment showed a consistent decline which amounted to 890 km/s over a period of 160 years. This is far larger than the error in measurement at Pulkova which averaged around 150 km/s. As far as Bradley's measurements themselves were concerned, I have listed the measurements he made on 24 stars over a period of 28 years, along with the reworkings of 5 different authorities. These are discussed in detail in the 1987 Report "The Atomic Constants, Light and Time" and the associated Tables 2 and 3. The result was 300,650 km/s, just 858 km/s above the present value. This final result omitted the re-workings of Busch, which would have increased the mean value to 1632 km/s above the present value, but all that detail is documented in the Report. Before the critics say that the figures are hopelessly wrong, it is respectfully suggested that they check the original data for themselves and point out where the experts that re-worked the data have gone wrong.

As far as Roemer is concerned, some misleading statements by one recent authority have led to conflicting claims. Observations by Cassini gave the earth orbit radius delay for light travel as 7 minutes 5 seconds. Roemer gave it as 11 minutes from a selection of data. Halley noted that Roemer's figure for the delay was too large, while Cassini's was too small. Newton listed the delay as being between 7 and 8 minutes in 1704. Since then, observations using Roemer's method all gave results higher than today's value. Delambre, from 1000 observations around 1700, gave the delay as 493.2 seconds, which gives a value of 303,320 km/s. Martin in 1759 gave the value as 493.0 seconds. Glasenapp in 1861 gave it as 498.57 seconds, Sampson in 1876.5 gave it as 498.64 seconds, and Harvard gave it as 498.79 +/- 0.02 seconds, which translates into 299,921 +/- 13 km/s. Accordingly, while Roemer's exact value is debatable, the other documented values by this method all indicate that c was higher then. The details about all of this are available in the Report.
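To make the arithmetic behind these delay figures explicit, here is a minimal sketch (in Python) of the conversion being used. It assumes the modern value of the astronomical unit, which is not stated in the text above; the delay values are the ones quoted in the preceding paragraph.

```python
# Roemer-style conversion: the quoted delays are the light travel time across the
# radius of the earth's orbit, so c = (1 AU) / delay.
# The astronomical unit below is the modern reference value (an assumption here).

AU_KM = 149_597_870.7   # astronomical unit in km

def c_from_delay(delay_seconds):
    """Speed of light (km/s) implied by an orbit-radius delay in seconds."""
    return AU_KM / delay_seconds

for observer, delay_s in [("Delambre, ~1700", 493.2),
                          ("Glasenapp, 1861", 498.57),
                          ("Sampson, 1876.5", 498.64),
                          ("Harvard", 498.79)]:
    print(f"{observer}: {c_from_delay(delay_s):,.0f} km/s")

# Delambre's 493.2 s gives ~303,300 km/s and Harvard's 498.79 s gives ~299,920 km/s,
# matching the figures quoted above to within rounding (the exact numbers depend on
# the adopted value of the astronomical unit).
```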

 

Setterfield's Work

Before 1987, Setterfield had been pressured by the editor of Ex Nihilo, a creation publication, to publish some of his material before he was ready. As his research continued, however, some of his ideas changed. The first major work was Atomic Constants, Light and Time, written in 1987.

It was attacked primarily on statistical grounds, and defended by Alan Montgomery, a professional statistician, and Lambert Dolphin, the physicist who had originally requested the paper. A more complete and sophisticated analysis of the Setterfield work was done by Montgomery himself. Malcolm Bowden also took the time to deal with some of the challenges. Other challenges were responded to in published materials and private correspondence for five years following that work, at which time Setterfield resumed his research.

During the 1990s, Tifft's work on the redshift, along with the challenges and further research concerning it, was being debated. This material turned out to have strong implications for the Setterfield research. These implications can be seen in Is the Universe Static or Expanding?; Reviewing the Zero Point Energy; and Quantized Redshifts and the Zero Point Energy.

As Setterfield began to realize that the changing speed of light was simply the 'child' of the Zero Point Energy, which was also affecting other atomic 'constants,' he also became aware of current work in plasma physics and what is being called the 'electric universe.' Although the other researchers in these areas are primarily not creationists, their work strongly supports the Genesis account of creation and, in combination with the changing Zero Point Energy, a very recent creation. A number of papers in the Research segment of this website give the technical progress being made here, while Setterfield Simplified and the Bible study Genesis 1-11 both give lay explanations of much of this material.

Through the years literally hundreds of questions have been emailed to the Setterfields and they have attempted to answer them all. Many of the questions and answers are here in the Discussion Section of the website.

Other Physical Processes if the Velocity of Light is not Constant.

(Barry Setterfield)

Seven Relevant Basic Features

1. Photon energies are proportional to [1/c^2].
2. Photon fluxes from emitters are directly proportional to c.
3. Photons travel at the speed of c.
4. From 1 to 3 this means that the total energy flux from any emitter is invariant with decreasing c, that is, [(1/c^2) x c x c]. This includes stars and the radioactive decay of elements etc. (A brief numerical check of this invariance appears after this list.)
5. Atomic particles will travel at a rate proportional to c.
6. There is an additional quantisation of atomic phenomena brought about by a full quantum ± of energy available to the atom. This occurs every time there is a change in light-speed by ± 60 times its present value.
7. A harmonisation of the situation with regards to both atomic and macroscopic masses results from the new theory, and a quantisation factor is involved.
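As flagged in point 4, here is a minimal numerical check (in Python) of the claimed invariance. It simply multiplies together the three scalings listed in points 1 to 3; the normalisations are arbitrary and nothing here depends on the underlying model.

```python
# Check of point 4: photon energy ~ 1/c^2 (point 1), photon flux ~ c (point 2) and
# photon speed ~ c (point 3) multiply to give a total energy flux independent of c.

C_NOW = 299_792.458  # km/s, present value of c

def energy_flux_ratio(c_then):
    """Total emitted energy flux at an earlier value of c, relative to today."""
    k = c_then / C_NOW
    photon_energy = 1.0 / k**2   # point 1
    photon_flux   = k            # point 2
    photon_speed  = k            # point 3
    return photon_energy * photon_flux * photon_speed

for factor in (1.0, 10.0, 1.0e6):
    print(factor, energy_flux_ratio(factor * C_NOW))   # 1.0 in every case (up to rounding)
```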

Results From Those Seven Features

A). From 2, as photosynthesis depends upon the number of photons received, it inevitably means that photosynthetic processes were more efficient with higher c values. This leads to the conclusions stated originally.

B). As radiation rates are proportional to c from 2, it inevitably follows that magma pools, e.g., on the moon, will cool more quickly. Note that A and B are built-in features of the theory that need no other maths or physics.

C). From 6 and 7, the coefficient of diffusion will vary up to 60 times its current value within a full quantum interval. In other words there is an upper maximum to diffusion efficiencies. Otherwise the original conclusions still stand.

D). In a similar way to C, and following on from 6 and 7, the coefficient of viscosity will vary down to 1/60 times its current value within the full quantum interval. This implies a lower minimum value for viscosities. Within that constraint, the original conclusions hold.

E). In a way similar to C and D, and again resulting from 6 and 7, critical velocities for laminar flow will vary up to 60 times that pertaining now, within the full quantum interval. The original conclusions then hold within that constraint.

F). As the cyclic time for each quantum interval was extremely short initially, it follows that it is appropriate to use an average value in C, D, and E, instead of the maximum: that is, about 30. As c tapered down to its present value, a long time has been spent on the lower portion of a quantum change, with near-minimum values for C and E, and near-maximum values for D. These facts result in the effects originally elucidated.

 

Are changes in light speed real?

Two questions were originally posed to Malcolm Bowden in the U.K.

Dear Professor Bowden,

I am presently writing a book entitled, Origin of the Human Species. In one of the later chapters, I am dealing with dating problems, comparing the apparent biblical time frame to that of the standard theory of evolution. I have been aware of Norman and Setterfield's work for some time now and have included a section on c decay and its implications. I just came across your article supporting c decay on the web.

Personally, I am sympathetic to c decay as a solution to my own predilections supporting Adam and Eve! But good science must stand on its own feet. I had included a line citing M.E.J. Gheury de Bray's 1934 findings: I cite particularly his reference to two bits of data: c in 1926 was 299,796 k/s plus or minus 4 k/s and in 1933 was 299,774 k/s plus or minus 1 or 2 k/s. I have deleted the above line because I discovered the present established speed of light to be 299,792.458 k/s.

How can de Bray get a reading in 1933 some 18 k/s below TODAY's reading? Can you help me with this?

Dennis Bonnette, Ph.D.
Chairman,
Philosophy Department
Niagara University, NY 14109

Dear Professor Bowden,

I just came across the following website which I bring to your attention: (URL was garbled) What I found disturbing was a claimed refutation of c decay very near the bottom of this extensive document. It was based on the effect of elevated light speeds in distant celestial objects with respect to our observation here on Earth. The author claims that we would see the objects in "slow motion" because of the c decay as the light rays approached Earth. He claims this is refuted by the consistency of motion of pulsars and other regular objects at great distances. (I hope I have described the issue well enough for you to know what I am talking about or for you to be able to find the text in question on the website.) How would you respond to this claim against c decay? I was most impressed in your own website by the "common sense" argument about the expected distribution of errors as science refines the constant value of a true constant. But I grasp just enough of the abovementioned author's argument to be concerned about it. Your comment would be much appreciated.

Sincerely,
Dennis Bonnette

Response from Malcolm Bowden:

Dear Dr. Bonnette,

Thank you for your email with CDK queries. May I say that I am not a professor! I am a qualified Engineer - Civil and Structural. I regret that I am not a physicist and therefore can have great difficulty in answering technical questions such as you have posed. I have, however, forwarded your queries on to Lambert Dolphin and Barry Setterfield for their far better input. I have given your email address to them so they might reply direct. Lambert Dolphin has a huge website which has much on CDK on it. You will find it at http://ldolphin.org or via my website. You may find your answer there anyway, but it might take some finding in this large site.

Regarding the de Bray quote, one of the difficulties is what the shape of the curve is. Barry often refers to the Bible verse about "the heavens being stretched out", and one curve is a close fit to a damped oscillation curve as it decreased. This means it may have overshot the present speed, risen above it, and then decreased to the present level. Looking at the graph in my book "True Science Agrees with the Bible", which should be the same as the website one, I see that there is a cluster of readings lower than the present speed between 1930 and 1940, and all have very short error bars. None, however, are as much as 18 k/s lower. I am wondering what this reading might be. It is probably one on my graph that has been corrected in some way. This suggests to me that there was another "overshoot" at that time before it settled down to the present speed. I am forwarding this to Barry and Lambert to see if they have a better explanation than this.

Regarding the pulses from quasars, this is technical but I agree that there seems to be a problem here. If c is fast, then pulses sent out at 1 sec intervals would travel through space and their speed would gradually slow down. This means that they would arrive at the earth at very long intervals between them. I do not know if there is any easy answer to this so am appealing to the other experts again. One aspect is that, on looking into the ways of measuring distance in astronomy, I am very surprised at how much assumptions play a part. It is far from being an exact science! The question is, can we be sure that they are at vast distances? If they are closer than we think, the light would reach us quite quickly and the problem would be solved because there would not have been vast time periods for the light to slow down.

Certainly, it is the fall in the general readings that is very convincing to me, but there are many ramifications. I may say that, whenever I have a technical query like these two, Barry has always answered it to my satisfaction - so I have great confidence in his reply to this also. Sorry that I cannot be of greater help but I am awaiting the replies of the greater experts!

Yours in anticipation,
Malcolm Bowden.

Response from Barry Setterfield:

Dear Dr. Bonnette,

Yes! There is a "low point" in the measured values of the speed of light, c, in the period 1928-1940. This comprised five different determinations of c by four different experimenters. By 1947 the "low point" was over. The standard establishment view on the problem was that there was a systematic error in the apparatus used. In view of the fact that 4 out of the five values were obtained by Kerr Cells, that explanation MAY have some validity as it involved light going through a polarising liquid. However, the later versions of Kerr Cells were called Geodimeters, and when they were introduced the "low point" was no longer in evidence. But there is another possible explanation.

Malcolm Bowden is absolutely correct when he says that an oscillation is involved in the cDK curve. Take the illustration of a child on a swing. When the swing is pushed, it is responding to a forcing function which may have any period. When the pushes cease, the swing settles down and finally oscillates at its own natural frequency. The complete behaviour of the swing is described mathematically by the two functions, namely the forcing function and the natural oscillation. The same thing is happening with the speed of light. Recent work undergoing peer review at the moment indicates that the general overall function is an exponential decay with a natural period of oscillation imposed upon it. The oscillation has in fact bottomed out in the recent past because evidence from associated constants suggests that c is on the increase again. The oscillation reached its peak around 700 AD.
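By way of illustration only, the following sketch (in Python) shows the general shape being described: an overall exponential decline (the "forced" part) with a smaller damped oscillation (the "natural" part) superimposed. Every parameter in it is an arbitrary placeholder chosen to show the form of such a curve; none of the numbers come from the measured c data or from any published fit.

```python
import math

def illustrative_c(t_years, c_now=299_792.458):
    """Toy curve: exponential decay toward today's value plus a damped oscillation.
    Amplitudes, time constants and period are placeholders, not fitted values."""
    decay       = 0.010 * math.exp(-t_years / 800.0)
    oscillation = 0.001 * math.exp(-t_years / 2000.0) * math.cos(2.0 * math.pi * t_years / 1200.0)
    return c_now * (1.0 + decay + oscillation)

for t in (0, 300, 600, 900, 1200):          # years after an arbitrary starting epoch
    print(t, round(illustrative_c(t), 1))   # settles toward c_now with a ripple
```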

As for the pulsar problem, most pulsars that have been found are in our own galaxy or within our Local Group of galaxies where the change in c is small. The resultant slow motion effect is therefore going to be minimal, particularly as there is such a wide range in pulsar spin rates. What you described as the potential problem was the expected CHANGE in pulsar spin rates due to dropping values of c in transit. When the calculations are done, the effect is certainly minuscule for Local Group objects.

If I can be of further assistance do not hesitate to let me know.

Comment:  It has been suggested that what may have been discovered is not a change in the value of c over the past 100 years, but rather "a secular change in the index of refraction of the atmosphere" due to the industrial revolution.

Setterfield: This issue was discussed in the literature when c was actually measured as varying. In Nature, page 892 for June 13, 1931, V. S. Vrkljan answered this question in some detail. The kernel of what he had to say is this: "...a simple calculation shows that within the last fifty years the index of refraction [of the atmosphere] should have increased by some [6.7 x 10^-4] in order to produce the observed decrease [in c] of 200 km/sec. According to Landolt-Bornstein (Physikalisch-chemische Tabellen, vol. ii, 1923, p. 959, I Erganzungsband ..."

Tracking or intellectual phase locking and cDK

In another newsgroup posting on this topic, it was suggested that the decay in c might be due merely to "tracking" or intellectual phase locking. This process is described as one in which the values of a physical constant become locked around a canonical value obtained by some expert in the field. Because of the high regard for the expert, other, lesser experimenters will tailor their results to be in agreement with the value obtained by the expert. As a result, other experiments to determine the value of the constant will only slowly converge to the correct value.

Although this charge may be levelled at some high school and first year university students, it is an accusation of intellectual dishonesty when brought into the arena of the cDK measurements. First, there was a continuing discussion in the scientific literature as to why the measured values of c were decreasing with time. It was a recognised phenomenon. In October of 1944, N. E. Dorsey summarised the situation. He admitted that the idea of c decay had "called forth many papers." He went on to state that "As is well-known to those acquainted with the several determinations of the velocity of light, the definitive values successively reported ... have, in general, decreased monotonously from Cornu's 300.4 megametres per second in 1874 to Anderson's 299.776 in 1940 ..." Dorsey strenuously searched for an explanation from the journals that the various experimenters had kept of their determinations. All he could do was to extend the error limits and hope that this covered the problem. In Nature for April 4, 1931, Gheury de Bray commented: "If the velocity of light is constant, how is it that, INVARIABLY, new determinations give values which are lower than the last one obtained. ... There are twenty-two coincidences in favour of a decrease of the velocity of light, while there is not a single one against it." (his emphasis).

 In order to show the true situation, one only has to look at the three different experiments that were running concurrently in 1882. There was no collusion between the experimenters either during the experiments or prior to publication of their results. What happened? In 1882.7 Newcomb produced a value of 299,860 km/s. In 1882.8 Michelson produced a value of 299,853 km/s. Finally in 1883, Nyren obtained a value of 299,850 km/s. These three independent authorities produced results that were consistent to within 10 km/sec. This is not intellectual phase locking or tracking; these are consistent yet independent results from three different recognised authorities. Nor is this a unique phenomenon. Newcomb himself noted that those working independently around 1740 obtained results that were broadly in agreement, but reluctantly concluded that they indicated c was about 1% higher than in his own time. In 1941 history repeated itself when Birge made a parallel statement while writing about the c values obtained by Newcomb, Michelson and others around 1880. Birge was forced to concede that "...these older results are entirely consistent among themselves, but their average is nearly 100 km/s greater than that given by the eight more recent results."

 In view of the fact that these experimenters were not lesser scientists, but were themselves the big names in the field, they had no canonical value to uphold. They were themselves the authorities trying to determine what was happening to a capricious "constant". The figures from Michelson tell the story here. His first determination in 1879 gave a value of 299,910 km/s. His second in 1883 gave a result of 299,853 km/s. In 1924 he obtained a value of 299,802 km/s while in 1927 it was 299,798 km/s. This is not intellectual phase locking. Nor is it typical of a normal distribution about a fixed value. What usually happens when a fixed constant is measured is that the variety of experiments give results that are scattered about a fixed point. Instead, when all the c results are in, there is indeed a scatter; yet that scatter is not about a fixed point, but about a declining curve. It is a phenomenon that intellectual phase locking cannot adequately explain. If Dorsey, Birge or Newcomb could have explained it that way, we would certainly have heard about it in the scientific literature of the time. (May 20, 1999)

Why is c not measured as changing now?

Question: Since about 1960 the speed of light has been measured with tremendous precision with no observed change. Proponents of cDK usually reply that we have redefined units of measurement in such a way that when the modern methods of measurement are used the change in c disappears because of cancellation. Has anyone attempted to remeasure c by "old fashioned" methods? It would seem to me that redoing the classic measurements could settle this issue, at least to my satisfaction. This would provide a new baseline of at least four decades, and probably much more.

Setterfield: The problem with current methods of light-speed measurements (mainly laser) is that both wavelengths [W] and frequency [F] are measured to give c as the equation reads [c = FW].  If you have followed the discussion well, you will be aware that, within a quantum interval, wavelengths are invariant with any change in c.  This means that it is the frequency of light that varies lock-step with c. Unfortunately, atomic frequencies also vary lock-step with c, so that when laser frequencies are measured with atomic clocks no difference will be found.
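A toy version of this cancellation, under the assumptions just stated (wavelength fixed within the quantum interval, light frequency and atomic clock run-rate both proportional to c), is sketched below in Python. It is only the algebra of c = F x W, not a description of actual laboratory procedure.

```python
# Why a laser measurement of c = frequency x wavelength cannot register the proposed change:
# the frequency is counted against an atomic clock whose tick rate is assumed to scale with c
# in exactly the same way as the light frequency does.

def measured_c(k):
    """Apparent c (arbitrary units) when the 'true' c is k times today's value."""
    wavelength = 1.0                 # invariant within a quantum interval (assumption above)
    light_frequency = k * 1.0        # cycles per dynamical second, scales with c
    clock_rate = k                   # atomic seconds per dynamical second, also scales with c
    counted_frequency = light_frequency / clock_rate   # what the laboratory actually records
    return counted_frequency * wavelength

print(measured_c(0.5), measured_c(1.0), measured_c(2.0))   # identical readings: 1.0 each time
```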

The way out of this is to use some experimental method where this problem is avoided. It has been suggested that the Roemer method may be used. This method uses eclipse times of Jupiter's inner satellite Io. Indeed it has been investigated by Eugene Chaffin. Although many things can be said about his investigation (and they may be appropriate at a later date), there are a couple of outstanding problems which confront all investigators using that method. Chaffin pointed out that perturbations by Saturn, and resonance between Io, Europa, and Ganymede, are definitely affecting the result, and a large number of parameters therefore need investigation. Even after that has been done, there remains inherent within the observations themselves a standard deviation ranging from about 30 to 40 seconds. This means the results will have an intrinsic error of up to 24,000 km/s. Upon reflection, all that can be said is that this method is too inaccurate to give anything more than a ball-park figure for c, which Roemer to his credit did, despite the opposition. It therefore seems unwise to dismiss the cDK proposition on the basis of one of the least precise methods of c measurement, as the notice brought to our attention proposes. This leaves a variety of other methods to investigate.
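For concreteness, the 24,000 km/s figure follows from simple error propagation: the fractional error in c equals the fractional error in the timing. A minimal check, assuming an orbit-radius delay of roughly 498.6 seconds (consistent with the Roemer-method results quoted earlier):

```python
# delta_c ~ c * delta_t / t for the Io eclipse (Roemer) method.

C_NOW   = 299_792.458   # km/s
DELAY_S = 498.6         # approximate orbit-radius light delay in seconds (assumed)

for delta_t in (30.0, 40.0):                       # quoted timing scatter in seconds
    print(delta_t, "s ->", round(C_NOW * delta_t / DELAY_S), "km/s")
# 30 s gives roughly 18,000 km/s and 40 s roughly 24,000 km/s of intrinsic error.
```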

However, that is not the only way of determining what is happening to c. There are a number of other physical constants which are c-dependent that overcome the problem with the use of atomic clocks. One of these is the quantised Hall resistance, now called the von Klitzing constant. Another might be the gyromagnetic ratio. A further method is to compare dynamical intervals (for example, using Lunar radar or laser ranging) with atomic intervals. These and other similar quantities give an indication that c may have bottomed out around 1980 and is slowly increasing again. Indeed, atomic clock comparisons with historical data can be used to determine the behaviour of c way back beyond 1675 AD when Roemer made the first determination. These data seem to indicate that c reached a maximum around 700 AD (very approximately). The data from the redshift paper imply that this oscillation is superimposed on an exponential decline in the value of c from the early days of the cosmos. The material on our website detailing the history of speed of light research covers this.

 

Comment: I was thinking more in terms of a Fizeau device, which is what I assumed was used by Newcomb and the others mentioned in your earlier comments.

Setterfield: Your suggestion is a good one. Either the toothed wheel or the rotating mirror experiments should give a value for c that is free from the problems associated with the atomic clock/frequency blockage of modern methods, and the shortcomings of the Roemer method. The toothed wheel requires a rather long base-line to get accurate results as shown by the experiments themselves. However, given that limitation, an interesting feature may be commented upon. Cornu in 1874.8 and Perrotin in 1901.4 essentially used the same equipment. The Cornu mean is 299,945 km/s while the Perrotin mean is 299,887 km/s. This is a drop of 58 km/s in 26.6 years measured by the same equipment.

The rotating mirror experiments also required a long base-line, but the light path could be folded in various ways. Michelson in 1924 chose a method that combined the best features of both the rotating mirror and toothed wheel: it was the polygonal mirror. In the 1924 series, Michelson used an octagonal mirror. Just over two years later, in 1926.5 he decided to use a variety of polygons in a second series of experiments. The glass octagon gave 299,799 km/s; the steel octagon 299,797 km/s; a 12 faced prism had 299,798 km/s; a 12 faced steel prism gave 299,798 km/s; and a 16 faced glass prism resulted in a value of 299,798 km/s. In other words all the polygons were in agreement to within +/- 1 km/s and about 1,600 individual experiments had been performed. That is a rather impressive result. However, despite the internal accuracy to within 1 km/s, these results are still nearly 6.5 km/s above the currently accepted value.
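For reference, the relations by which these two classical instruments yield c are sketched below in Python. The formulas are the standard textbook ones; the baselines plugged in (roughly Fizeau's 8.6 km Paris line and Michelson's 35 km Mount Wilson line) are assumptions used only for illustration, while the 720 teeth, 25.2 turns per second and 528 turns per second figures are the ones mentioned later in this discussion.

```python
# Toothed wheel (Fizeau type): the returning beam is at full brightness again when the wheel
# advances a whole tooth-plus-gap period during the round trip, i.e. t = order / (teeth * rev_per_s).
def c_toothed_wheel(baseline_m, teeth, rev_per_s, order=1):
    transit_time = order / (teeth * rev_per_s)
    return 2.0 * baseline_m / transit_time

# Rotating polygon (Michelson type): the image is undisplaced when the mirror turns exactly
# one face (1/n of a revolution) during the round trip.
def c_polygonal_mirror(baseline_m, faces, rev_per_s):
    transit_time = 1.0 / (faces * rev_per_s)
    return 2.0 * baseline_m / transit_time

print(c_toothed_wheel(baseline_m=8_633, teeth=720, rev_per_s=25.2))       # ~3.13e8 m/s
print(c_polygonal_mirror(baseline_m=35_400, faces=8, rev_per_s=528))      # ~2.99e8 m/s
```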

To my way of thinking, this polygonal mirror method would probably be the best option for a new determination of c. On the other hand, perhaps Newcomb's or Michelson's apparatus from earlier determinations may still be held in a museum display somewhere. Modern results from such apparatus would certainly arouse interest. Thanks for the helpful suggestion.

 

Questions: Are you saying that 'c' is no longer decaying? (assuming the velocity of light has decreased exponentially to nearly zero at the present time.) Do you mean "nearly zero" compared to what it may have been at the time of creation?

Setterfield: The exponential decay has an oscillation superimposed upon it. The oscillation only became prominent as the exponential declined. The oscillation appears to have bottomed out about 1980 or thereabouts. If that is the case (and we need more data to determine this exactly) then light-speed should be starting to increase again. The minimum value for light-speed was about 0.6 times its present value. This is as close to 'zero' as it came.

 

Question: I have been thinking about the speed of light's exponential decrease, and it seems to me that soon there will be a point in time when the speed of light will stop decreasing altogether, and either become a constant or maybe it will bounce back a little bit, like the recoil of a rubber band as it's let go. 

What do you think of this idea?  And can you work out what year the speed of light would reach a zero point of deceleration, based on your deceleration curve?

Setterfield:  First of all, the decrease in the speed of light is not strictly exponential, although it does follow a well-defined mathematical curve.  There is an extremely fast drop at first, which then tapers off similar to a Lorentzian curve.  Will it ever cease decreasing?  Yes, possibly.  But, as you mention, there is a recoil effect which we have picked up in the redshift measurements and which has also been picked up in recent light speed measurements.  The minimum light speed appeared to be reached, actually, around 1980, and there is evidence of a slight increase since then.  You can see evidence of this in the last two graphs here.

When you check the dates for both Planck’s Constant and the mass graph, you will see a slight change about 1980.  Although the speed of light graph does not go this far in time, as I was just showing Birge’s accepted values, it would show the same slight change in direction at this time as well.  The change you do see in the light speed graph around 1940 appears to be somewhat anomalous, as this same ‘burp’ can be seen during the same years in the lower two graphs as well. 

On the Measurement of Time, and the Velocity of Light

Setterfield:  Several questions have been raised  which deserve a reply.

 First, the matter of timing and clocks. In 1820 a committee of French scientists recommended that day-lengths throughout the year be averaged, to what is called the mean solar day. The second was then defined as 1/86,400 of this mean solar day. This definition was accepted by most countries and supplied science with an internationally accepted standard of time. This definition was used right up until 1956. In that year it was decided that the dynamical definition of a second be changed to become 1/31,556,925.97474 of the earth's orbital period that began at noon on the 1st January 1900. Note that this definition of the second ensured that the second remained the same length of time as it had always been right from its earlier definition in 1820. This definition continued until 1967 when atomic time became standard. The point to note is that in 1967 one second on the atomic clock was DEFINED as being equal to the length of the dynamical second, even though the atomic clock is based on electron transitions. Interestingly, the vast majority of c measurements were made in the period 1820 to 1967 when the actual length of the second had not changed. Therefore, the decline in c during that period cannot be attributed to changes in the definition of a second.

However, changes in atomic clock rates affecting the measured value for c will certainly occur post 1967. In actual fact, the phasing-in period for this new system was not complete until January 1, 1972. It is important to note that dynamical or orbital time is still used by astronomers. However, the atomic clock which astronomers now use to measure this has leap-seconds added periodically to synchronise the two clocks. The International Earth Rotation Service (IERS) regulates this procedure. Since January 1st, 1972, until January 1st, 1999, exactly 32 leap seconds have been added to keep the two clocks synchronised. There are a number of explanations as to why this one-sided procedure has been necessary. Most have to do with changes in the earth's rotational period. However, a contributory cause MAY be the change in light-speed, and the consequent change in run-rate of the atomic clock. If it is accepted that it is the run-rate of the atomic clock which has changed by these 32 seconds in 27 years, then this corresponds to a change in light-speed of [32/(8.52032 x 10^8)] c = (3.7557 x 10^-8) c, or close to 11.26 metres/second.

The question then becomes, "Is this a likely possibility?" Many scientists would probably say no. However, Lunar and planetary orbital periods, which comprise the dynamical clock, have been compared with atomic clocks from 1955 to 1981 by Van Flandern and others. Assessing the evidence in 1981 Van Flandern noted that "the number of atomic seconds in a dynamical interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing down with respect to dynamical phenomena." (Precision Measurements and Fundamental Constants II, pp. 625-627, National Bureau of Standards (US) Special Publication 617, 1984). Even if these results are controversial, Van Flandern's research at least establishes the principle on which the former comments were made.

 Note here that, given the relationship between c and the atomic clock, it can be said that the atomic clock is extraordinarily PRECISE as it can measure down to less than one part in 10 billion. However, even if it is precise, it may not be ACCURATE as its run-rate will vary with c. Thus a distinction has to be made between precision and accuracy when talking about atomic clocks.

 Finally, there were some concerns about timing devices used on any future experiments to determine c by the older methods. Basically, all that is needed is an accurate counter that can measure the number of revolutions of a toothed wheel or a polygonal prism precisely enough in a one second period while light travels over a measured distance. Obviously the higher the number of teeth or mirror faces the more accurate the result. Fizeau in 1849 had a wheel with 720 teeth that rotated at 25.2 turns per second. In 1924, Michelson rotated an octagonal mirror at 528 turns per second. We should be able to do better than both of those now and minimise any errors. The measurement of the second could be done with accurate clocks from the mid-50's or early 60's. This procedure would probably overcome most of the problems that John foresees in such an experiment. If John has continuing problems, please let us know.

Further Comments on Time Measurements and c: In 1820 a committee of French scientists recommended that day lengths throughout the year be averaged, to what is called the Mean Solar Day. The second was then defined as 1/86,400 of this mean solar day. This supplied science with an internationally accepted standard of time. This definition was used right up to 1956. In that year it was decided that the definition of the second be changed to become 1/31,556,925.97474 of the earth's orbital period that began at noon on 1st January 1900. This definition continued until 1967 when atomic time became standard. In 1883 clocks in each town and city were set to their local mean solar noon, so every individual city had its own local time. It was the vast American railroad system that caused a change in that. On 11th October 1883, a General Time Convention of the railways divided the United States into four time zones, each of which would observe uniform time, with a difference of precisely one hour from one zone to another. The following year, an international conference in Washington extended this system to cover the whole earth.

The key point to note here is that the vast majority of c measurements were made during the period 1820 to 1956. During that period there was a measured change in the value of c from about 299,990 km/s down to 299,792 km/s, a drop of the order of 200 km/s in 136 years. The question is what component of that may be attributable to changes in the length of the second since the rate of rotation of the earth is involved in the existing definition. It is here that the International Earth Rotation Service (IERS) comes into the picture. Since 1st January 1972 until 1st January 1999, exactly 32 leap seconds have been added to keep Co-ordinated Universal Time (UTC) synchronised with International Atomic Time (TAI) as a result of changes in the earth's rotation rate. Let us assume that these 32 leap seconds in 27 years represent a good average rate for the changes over the whole period of 136 years from 1820 to 1956. This rate corresponds to an average change in measured light-speed of [32/(8.52023 x 10^8)] c = (3.7557 x 10^-8) c, or close to 11.26 metres per second in one year. As 136 years are involved at this rate we find that [11.26 x 136 = 1531] metres per second or 1.53 km/s over the full 136 years. This is less than 1/100th of the observed change in that period. As a result it can be stated (as I think Froome and Essen did in their book "The Velocity of Light and Radio Waves") that limitations on the definition of the second did not impair the measurement of c during that period ending in 1956.
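The arithmetic in the preceding paragraph can be spelled out as follows; the only added assumption is the standard year length of 31,556,926 seconds, consistent with the definition quoted earlier.

```python
# 32 accumulated leap seconds over the 27 years 1972-1999, read (as the text does) as a
# run-rate offset of the atomic clock, and then extrapolated over the 136 years 1820-1956.

C_M_PER_S     = 299_792_458.0
seconds_27_yr = 27 * 31_556_926              # about 8.52 x 10^8 s
fractional    = 32 / seconds_27_yr           # about 3.7557 x 10^-8
rate_m_per_s  = fractional * C_M_PER_S       # about 11.26 m/s, the figure quoted above
total_136_yr  = rate_m_per_s * 136           # about 1,531 m/s, i.e. roughly 1.53 km/s

print(f"{fractional:.4e}  {rate_m_per_s:.2f} m/s  {total_136_yr / 1000:.2f} km/s")
```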

 Therefore, if measurements of c were done with modern equivalents of rotating mirrors, toothed wheels or polygonal prisms, and the measurements of seconds were done with accurate equipment from the 1950's, a good comparison of c values should be obtained. Note, however, that the distance that the light beam travels over should be measured by equipment made prior to October 1983. At that time c was declared a universal constant (299,792.458 km/s) and, as such, was used to re-define the metre in those terms.

As a result of the new definitions from 1983, a change in c would also mean a change in the length of the new metre compared with the old. However, this process will only give the variation in c from the change-over date of 1983. By contrast, use of some of the old experimental techniques measuring c will allow direct comparisons back to at least the early 1900's and perhaps earlier. In a similar way, comparisons between orbital and atomic clocks should pick up variations in c. As pointed out before, this latter technique has in fact been demonstrated to register changes in the run-rate of the atomic clock compared with the orbital clock by Van Flandern in the period 1955 to 1981.

By way of further information, the metre was originally introduced into France on the 22nd of June, 1799, and enforced by law on the 22nd of December 1799. This "Metre of the Archives" was the distance between the end faces of a platinum bar. From September 1889 until 1960 the metre was defined as the distance between two engraved lines on a platinum-iridium bar held at the International Bureau of Weights and Measures in Sevres, France. This more recent platinum-iridium standard of 1889 is specifically stated to have reproduced the old metre within the accuracy then possible, namely about one part in a million. Then in 1960, the metre was re-defined in terms of the wavelength of a krypton 86 transition. The accuracy of lasers had rendered a new definition necessary by 1983. It can therefore be stated that from about 1800 up to 1960 there was no essential change in the length of the metre. It was during that time that c was measured as varying. As a consequence, the observed variation in c can have nothing to do with variations in the standard metre.

Question: Barry points out that for obvious reasons no change in the speed of light has been noticed since the redefinition of time in terms of the speed of light a few decades ago. However, the new definition of time should cause a noticeable drift from ephemeris time due to the alleged changing speed of light. I'm not aware of any such drift. Ephemeris time should be independent of the speed of light. Before atomic standards were adopted, crystal clocks had documented the irregular difference between ephemeris time and time defined by the rotation of the earth. Has Barry investigated this?

Setterfield: On the thesis being presented here, the run-rate of atomic clocks is proportional to 'c'. In other words, when 'c' was higher, atomic clocks ticked more rapidly. By contrast, it can be shown that dynamical, orbital or ephemeris time is independent of 'c' and so is not affected by the 'c' decay process. Kovalevsky has pointed out that if the two clock rates were different, "then Planck's constant as well as atomic frequencies would drift" [J. Kovalevsky, Metrologia 1:4 (1965), 169].

Such changes have been noted. At the same time as 'c' was measured as decreasing, there was a steady increase in the measured value of Planck's constant, 'h', as outlined in the 1987 Report by Norman and Setterfield. However, the measured value of 'hc' has been shown to be constant throughout astronomical time. Therefore it must be concluded from these measurements that 'h' is proportional to 1/c precisely. As far as different clock rates are concerned, the data are also important. During the interval 1955 to 1981 Van Flandern examined data from lunar laser ranging using atomic clocks and compared them with dynamical data. He concluded that: "the number of atomic seconds in a dynamical interval is becoming fewer. Presumably, if the result has any generality to it, this means that atomic phenomena are slowing down with respect to dynamical phenomena" [T. C. Van Flandern, in 'Precision Measurements and Fundamental Constants II,' (B. N. Taylor and W. D. Phillips, Eds.), NBS (US), Special Publication 617 (1984), 625]. These results establish the general principle being outlined here. Van Flandern also made one further point as a consequence of these results. He stated that "Assumptions such as the constancy of the velocity of light ... may be true only in one set of units (atomic or dynamical), but not the other" [op. cit.]. This is the kernel of what has already been said above. Since the run-rate of the atomic clock is proportional to 'c', it becomes apparent that 'c' will always be a constant in terms of atomic time. Van Flandern's measurements, coupled with the measured behaviour of 'c', and other associated 'constants', indicate that the decay rate of 'c' was flattening out to a minimum which seemed to be attained around 1980. Whether or not this is the final minimum is a matter for decision by future measurements.

But let me explain the situation this way. The astronomical, geological, and archaeological data indicate that there is a ripple or oscillation associated with the main decay pattern for 'c'. In many physical systems, the complete response to the processes acting comprises two parts: the particular or forced response, and the complementary, free, or natural response. The forced response gives the main decay pattern, while the free response often gives an oscillation or ripple superimposed on the main pattern. The decay in 'c' is behaving in a very similar way to these classical systems.

There are three scenarios currently undergoing analysis. One is similar to that depicted by E. A. Karlow in American Journal of Physics 62:7 (1994), 634, where there is a ripple on the decay pattern that results in "flat points", following which the drop is resumed. The second and third scenarios are both presented by J. J. D'Azzo and C. H. Houpis "Feedback Control System Analysis and Synthesis" International Student Edition, p.258, McGraw-Hill Kogakusha, 1966. In Fig. 8-5 one option is that the decay with its ripple may bottom out abruptly and stay constant thereafter. The other is that oscillation may continue with a slight rise in the value of the quantity after each of the minima. Note that for 'c' behaviour, the inverse of the curves in Fig. 8-5 is required. All three options describe the behaviour of 'c' rather well up to this juncture. However, further observations are needed to finally settle which sort of curve is being followed.

Work on light speed by others

Light Speed and the Early Cosmos (Barry Setterfield, January 24, 2002)

The issue of light-speed in the early cosmos is one which has received some attention in several peer-reviewed journals. Starting in December 1987, the Russian physicist V. S. Troitskii from the Radiophysical Research Institute in Gorky published a twenty-two page analysis in Astrophysics and Space Science regarding the problems cosmologists faced with the early universe. He looked at a possible solution if it was accepted that light-speed continuously decreased over the lifetime of the cosmos, and the associated atomic constants varied synchronously. He suggested that, at the origin of the cosmos, light may have traveled at 10^10 times its current speed. He concluded that the cosmos was static and not expanding.

In 1993, J. W. Moffat of the University of Toronto, Canada, had two articles published in the International Journal of Modern Physics D.  He suggested that there was a high value for 'c' during the earliest moments of the formation of the cosmos, following which it rapidly dropped to its present value. Then, in January 1999, a paper in Physical Review D by Andreas Albrecht and Joao Magueijo, entitled "A Time Varying Speed Of Light As A Solution To Cosmological Puzzles" received a great deal of attention. These authors demonstrated that a number of serious problems facing cosmologists could be solved by a very high initial speed of light.

Like Moffat before them, Albrecht and Magueijo isolated their high initial light-speed and its proposed dramatic drop to the current speed to a very limited time during the formation of the cosmos. However, in the same issue of Physical Review D there appeared a paper by John D. Barrow, Professor of Mathematical Sciences at the University of Cambridge. He took this concept one step further by proposing that the speed of light has dropped from the value proposed by Albrecht and Magueijo down to its current value over the lifetime of the universe.

An article in New Scientist for July 24, 1999, summarised these proposals in the first sentence. "Call it heresy, but all the big cosmological problems will simply melt away, if you break one rule, says John D. Barrow - the rule that says the speed of light never varies." Interestingly, the initial speed of light proposed by Albrecht, Magueijo and Barrow is 10^60 times its current speed. In contrast, the redshift data give a far less dramatic result. The most distant object seen in the Hubble Space Telescope has a redshift, 'z', of 10. This gives an original speed of light more in line with Troitskii's proposal, and considerably more conservative than the Barrow, Albrecht and Magueijo estimate. This lower, more conservative estimate is also in line with the 1987 Norman-Setterfield Report.

 

Moffat

Speed Of Light May Not Be Constant, Physicist Suggests

A University of Toronto professor believes that one of the most sacrosanct rules of 20th-century science -- that the speed of light has always been the same -- is wrong. Ever since Einstein proposed his special theory of relativity in 1905, physicists have accepted as a fundamental principle that the speed of light -- 300 million metres per second -- is a constant and that nothing has, or can, travel faster. John Moffat of the physics department disagrees - light once travelled much faster than it does today, he believes.

Recent theory and observations about the origins of the universe would appear to back up his belief. For instance, theories of the origin of the universe -- the "Big Bang" -- suggest that very early in the universe's development, its edges were farther apart than light, moving at a constant speed, could possibly have travelled in that time. To explain this, scientists have focused on strange, unknown and as-yet-undiscovered forms of matter that produce gravity that repels objects.

Moffat's theory - that the speed of light at the beginning of time was much faster than it is now - provides an answer to some of these cosmology problems. "It is easier for me to question Einstein's theory than it is to assume there is some kind of strange, exotic matter around me in my kitchen." His theory could also help explain astronomers' discovery last year that the universe's expansion is accelerating. Moffat's paper, co-authored with former U of T researcher Michael Clayton, appeared in a recent edition of the journal Physics Letters.

 

Albrecht and Magueijo

Questions: I'm not a scientist, although I have some math and science background, and I am only just beginning to look into this discussion and may be asking a stupid question. Apology stated. I am a fellow believer and view Genesis as the ultimate "Theory" for which we need to find proof (not that we need to defend God, but it seems a reasonable part of our witness). That said, I want to review the emerging theories somewhat objectively. I noticed in Setterfield's paper there was reference to conservation of energy, E = mc^2. I found it interesting that the VSL theory put forth by Magueijo doesn't require that, but seems to say energy will not be conserved with time. Has Setterfield (or yourself) reviewed the work of Magueijo? Perhaps he has discovered something important. Ultimately any theory has to make sense on an earth populated by humans to be of any use to us. Does this consideration require conservation of energy in the Setterfield theory? Or, is it possible that energy may not be conserved over the history of the universe?

Setterfield: Yes, we certainly have considered Albrecht and Magueijo's paper and are aware of what he is proposing. His paper is basically theoretical and has very little observational backing for it. By contrast, my papers are strictly based on observational evidence. This requires that there is conservation of energy. The observational basis for these proposals also reveals that there is a series of energy jumps occurring at discrete intervals throughout time as more energy becomes available to the atom. Importantly, it should be noted that this energy was initially invested in the vacuum during its expansion, and has become progressively available as the tension in the fabric of space has relaxed over time thus converting potential energy into the kinetic energy utilised by the atom. The atom can only access this energy once a certain threshold has been reached, and hence it occurs in a series of jumps. This has given rise to the quantised redshift that occurs in space.

Thus observational evidence agrees with the conservation approach rather than Magueijo's approach.
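To make the threshold idea concrete, here is a minimal sketch in Python. It is purely schematic -- the numbers are invented and it is not taken from the equations of the papers -- but it shows how an energy supply that grows smoothly would still only become usable by the atom in discrete jumps once each threshold is crossed.

import math

def accessible_energy(available, threshold):
    # The atom can only draw on whole multiples of the threshold energy,
    # so a smoothly growing supply becomes usable in discrete jumps.
    return math.floor(available / threshold) * threshold

# Hypothetical numbers, purely for illustration.
threshold = 5.0
for available in (3.0, 7.5, 12.0, 26.0):
    print(available, "available ->", accessible_energy(available, threshold), "usable")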

see also Einstein's Biggest Blunder

Davies in Nature, August 2002

Setterfield: On Thursday 8th August 2002, a burst of Press publicity accompanied the publication of a paper in the prestigious scientific journal Nature. That article was authored by Professor Paul Davies, of Sydney's Macquarie University, and by two astrophysicists from the University of New South Wales, Dr. Charles Lineweaver, and graduate student Tamara Davis. The paper suggested that the speed of light was much higher in the past and had dropped over the lifetime of the universe. These conclusions were reached as a result of observations made in 1999 by University of New South Wales astronomer Dr. John Webb and the more recent observations of one of his PhD students, Michael Murphy. These observations indicated a slight shift in the position of the dark lines that appear in the rainbow spectrum of metallic atoms deep in space when compared with their expected positions. Because there are a number of factors to disentangle statistically, and because the effect is small (about 1 part in 100,000), there remains some doubt as to the validity of the primary conclusion, let alone the suspected causes of the effect.

The actual physical quantity that the observations are targeting is the fine structure constant. This constant links together four other atomic quantities, namely the speed of light, the electronic charge, Planck's constant and the electric property of free space called the permittivity. It is possible that any one of these quantities may be varying, or that there is synchronous variation between some or all of the components that make up the fine structure constant. Thus, one possible explanation for the observed effect is that the speed of light was higher the further back in time we look. But it is not the only explanation. Other possible explanations include a change in the value of the charge on the electron. However, the paper by Davies et al. rejects this possibility on the basis of what was expected to occur with black holes at the frontiers of the cosmos. Until I have seen a copy of the paper by Davies et al. I do not know if they have eliminated all other options.
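For reference, the conventional expression for the fine structure constant (the standard textbook form, not a formula quoted from the Davies et al. paper) ties these four quantities together as

\alpha \;=\; \frac{e^{2}}{2\,\varepsilon_{0}\,h\,c}

where e is the electronic charge, \varepsilon_{0} the permittivity of free space, h Planck's constant and c the speed of light. A shift in the measured value of alpha could therefore reflect a change in any one of these, or a synchronized change in several of them.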

However, since a major paper by Andreas Albrecht and Jao Magueijo in 1999, and another one by John Barrow in the same issue of Physical Review D, the speed of light has come under increasing scrutiny as a physical quantity that may be varying. These scientists are saying that if lightspeed was significantly higher at the inception of the cosmos (about 10^60 times higher) then a number of astronomical problems can be readily resolved. Paul Davies' statements echo that, and he, like Barrow, considers that lightspeed has declined over the history of the universe. By contrast, Albrecht and Magueijo confined the lightspeed change to the earliest moments of the Big Bang and had it drop to its present value immediately afterwards. In that sense, this recent work is consolidating the belief that the drop in lightspeed has extended over the whole history of the universe. This is the position that the variable lightspeed (Vc) research has advocated since the early 1980's.

The cause of the change in the speed of light was postulated by Lineweaver to be connected to a change in the structure of the vacuum. Our work has shown he was correct, and Reviewing the Zero Point Energy elucidates this. Because there is an intrinsic energy in every cubic centimetre of the vacuum, this energy may manifest as virtual particle pairs, like electron/positron pairs, that flit in and out of existence. As a photon of light travels through the vacuum, it hits a virtual particle, is absorbed, and then shortly afterwards is re-emitted. This process, while fast, still takes a finite time to occur. Thus, a photon of light is like a runner going over hurdles: the more hurdles over a set distance on the track, the longer it takes the runner to reach the destination. Thus, if the energy content of space increased with time, more virtual particles would manifest per unit distance, and light would take longer to reach its destination.
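As a rough illustration of this "runner over hurdles" picture, the short Python sketch below computes an effective light speed when each virtual-particle encounter adds a small delay. The encounter densities and the delay value are invented purely for illustration; they are not figures taken from the ZPE papers.

C_VACUUM = 299792458.0  # metres per second, the currently adopted value of c

def effective_speed(encounters_per_metre, delay_per_encounter):
    # Average speed over one metre of path when every virtual-particle
    # encounter adds a fixed absorption/re-emission delay (in seconds).
    travel_time = 1.0 / C_VACUUM + encounters_per_metre * delay_per_encounter
    return 1.0 / travel_time

# Hypothetical values: a denser vacuum (more "hurdles") gives a slower effective speed.
for encounters in (0.0, 1.0e6, 2.0e6):
    print(encounters, "encounters/m ->", round(effective_speed(encounters, 1.0e-16)), "m/s")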

Much was made of the potential problems that lightspeed changes would cause to Einstein's theory of Relativity. This matter has been discussed ever since Albrecht and Magueijo's paper in 1999. However, the Vc work examined this issue back in the 1980's and concluded that Einstein's work would remain basically valid provided that energy is conserved in the process. This necessarily involves changes in a number of other constants, as Trevor Norman and I outlined in the 1987 Report, The Atomic Constants, Light and Time, from SRI International and Flinders University. These matters are also discussed in further detail in the 2001 paper. I also had the opportunity to briefly pursue the issue of observed changes to other atomic constants with Prof. Albrecht in March 2002. He admitted that his proposal had problems with the observations of some constants. I mentioned that these problems could be overcome if energy was conserved in the process. He stated that they had looked at that but decided that they could not achieve all the effects they wanted if energy was conserved, and so abandoned that position. Albrecht and Magueijo attempted to largely avoid these problems by isolating lightspeed changes to the earliest moments of the Big Bang. However, these more recent results are tending to confirm that the change has been occurring over the lifetime of the universe. Consequently, the issue of changing atomic constants must be opened again, and the validity of Einstein's equations linked in with it.

In summary, the scientific community is coming to believe that a drop in lightspeed has occurred over the lifetime of the cosmos from some initial value near 10^60 times its current speed. The Vc research has indicated that lightspeed has been dropping over the life of the universe from a maximum value around 10^11 times its current value. This is a more conservative estimate than others are proposing. The actual cause of the change in lightspeed is suspected by both secular scientists and those involved in the Vc research as being related to changes in the structure of the vacuum. Finally, Einstein's equations have been called into question. However, they can be shown to be basically correct provided that energy is conserved in the process of c variation, but some other atomic constants will vary synchronously in this case. Those other constants, which have been shown to be varying in this way from observational data, were examined in the 1987 Report and the 2001 paper and support the Vc position.

 10th August 2002.

Supportive and Explanatory Essays by others

Speed of Light Slowing Down?   by Chuck Missler

Expanded explanation and implications regarding a changing c  by Lambert Dolphin

Reports of the Death of Speed of Light Decay are Premature, by Malcolm Bowden

 

Does current technology change the results?

Comment: One argument that is often used against older data being used to support CDK is that improved techniques and technology have boosted accuracy over the years, and thus older data is less accurate. I have often wondered if anyone has reproduced those older measurements using similar techniques and technology. The reason I suggest this is that I have been led to understand that older measurements tend to disproportionately indicate a higher c (i.e., older values are not centered around today's accepted value as one would expect). This makes me think that if this tendency is due to errors in measurement from less accurate techniques and/or technology, then the disproportionately higher c values may have resulted from a systematic error or bias that should be reproducible today.

In other words, if c has remained constant throughout history, then those older techniques and technologies should still show the same bias toward higher values of c. On the other hand, if c has decreased over time, then those older techniques and technologies should now produce values of c that are scattered around today's accepted value.


Response from Lambert Dolphin:
Alan Montgomery, a Canadian government statistician, studied this possibility carefully when we sorted all our original data. The decrease shows up regardless of the method of measuring c. The decreases in c observed have always been within error-bar limits. If one sorts the data by decade, the systematic decreases are still evident. As explained in our papers, the published data has to be scrutinized to be sure that a given measurement does not appear in the master data list more than once. The full set of all the data is online, along with an Excel table, should any of you wish to do your own analysis. Alan went to a lot of extra trouble for his paper presented at the Pittsburgh conference in 1995. We tried to anticipate every possible criticism of our methods and played the devil's advocate back and forth for at least a year to make sure we had not overlooked anything.
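For anyone wanting to repeat this kind of check on the published table, a minimal sketch of the by-decade grouping might look like the following. The file name and column names are assumptions made for the example; the actual spreadsheet layout may differ and should be adjusted to match.

import csv
from collections import defaultdict

# Group the historical c determinations by decade and print the mean value per
# decade, to see whether the averages drift downward over time.
by_decade = defaultdict(list)
with open("speed_of_light_measurements.csv", newline="") as f:
    for row in csv.DictReader(f):
        decade = (int(row["year"]) // 10) * 10
        by_decade[decade].append(float(row["c_km_s"]))

for decade in sorted(by_decade):
    values = by_decade[decade]
    mean = sum(values) / len(values)
    print(decade, round(mean, 3), "km/s, from", len(values), "determinations")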

The speed of light was dropping rapidly when the first measurements were made in the late 1600s. The rate of change in c is now very slow, but still measurable by comparing the run rate of atomic clocks with that of gravity clocks.
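Stated schematically, on the assumption (made here only for illustration of the clock comparison, not as a quotation from the papers) that atomic clocks tick at a rate proportional to c while gravitational (orbital) clocks do not, then over the same dynamical interval

\frac{\Delta t_{\mathrm{atomic}}}{\Delta t_{\mathrm{orbital}}} \;\propto\; c

so any residual change in c would show up as a slow drift between the two kinds of clock.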

An easy way to start thinking about this whole issue is to recall that the speed of light in any medium is equal to one over the square root of mu times epsilon, where mu and epsilon represent the magnetic permeability and electrical permittivity of the medium (from Maxwell's equations). If empty space were truly empty, c would always have the same value in vacuum. It is now evident both theoretically and experimentally that empty space has a residual energy level known as the ZPE (Zero-Point Energy). An increase in the ZPE of space over time would account for the observed decrease in c. Light slows down in transparent media like glass or water, so if the properties of the vacuum have changed over time, c would be expected to change. The surprise is in a new model for the nature of the vacuum. There is no requirement in the Law of Moses for the speed of light to be a fixed quantity forever.
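In symbols, the standard relation from Maxwell's equations is

c \;=\; \frac{1}{\sqrt{\mu\,\varepsilon}}

so anything that changes the effective permeability mu or permittivity epsilon of the vacuum changes the speed at which light travels through it.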

The topic of the ZPE is currently a subject of vigorous study in modern physics and cosmology. You’ll find a lot more on this on Barry’s web site.

Response from Barry:
The entire history of the speed of light experiments is a project I undertook some years ago, in part when I was trying to find out whether human or instrumental error was involved in the measured changes of c.  What I found was that when the same instruments were used at the same laboratories, and often by the same people, the downward trend in c was still remarkable.  The clearest example of this is the Pulkova Observatory measurements.  All of this can be found in History of the Speed of Light Experiments.  Please note it is in four parts.

 

The Attitude as of 2011

Question: Since I am not part of the scientific community that studies things like the speed of light, I wanted to ask:
What is the current attitude in the scientific community toward the changing speed of light and the quantized redshift? What response is being given (pro or con) to the data?

Setterfield: Thank you for your question. The attitude of the general science community to the changing speed of light is two-fold. There are those who deny any possibility of a change. Then there are others, like Jao Magueijo, Andreas Albrecht, John Barrow or John Webb, who admit the possibility and have studied it ... in a sort of way. Unfortunately, the way they have decided to handle it is by taking what they call a minimalist position. That is, they only vary the minimum number of constants that they can. In doing this, they have ignored the fact that anything that changes one constant, like the Zero Point Energy (ZPE), will have changed others as well. Thus the speed of light changes, linked with the ZPE changes, will also change Planck's constant, h, as well as atomic masses, atomic time, and the permittivity and permeability of the vacuum, etc. What they have done is to see what variation of the speed of light can be found in distant astronomical objects using the Fine Structure Constant, sometimes simply called "alpha".

As it turns out, alpha is made up of the speed of light, Planck's constant, the permittivity of space and the square of the electronic charge. Our research has shown that these constants are combined in alpha in such a way that all their individual variations mutually cancel out. So, on our work, alpha should be absolutely constant (except in strong gravitational fields). Instead, as astronomers have looked at the value of alpha in distant galaxies, they find a very small variation: less than one part in 10,000. They then attribute this change to a change in the speed of light alone, which is the only constant they are varying in this minimalist position. Therefore, this is the limit to which they can go with lightspeed variation.
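The cancellation requirement can be stated explicitly using the conventional expression for alpha (the particular variation scheme itself is set out in the 1987 Report and the 2001 paper, not here). Since, for a fixed electronic charge e,

\alpha \;\propto\; \frac{1}{\varepsilon_{0}\,h\,c}

alpha remains constant precisely when the product of the permittivity, Planck's constant and the speed of light stays fixed, however the individual factors vary.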

A few years ago I saw Dr. Albrecht at Davis. I suggested that he might like to vary other constants, including those in alpha, in such a way that energy is conserved in the whole process. (That way, the changes in the data we have would have been supported.) Albrecht responded that they had looked at the possibility of energy conservation in this process. However, he stated that "if we did that, we could not achieve all that we wanted to achieve with our approach." Sad. But that is the way it goes. Their whole process is theory driven, not data driven.

As far as the quantized redshift is concerned, papers have been published with data showing the quantization out to the frontiers of the cosmos ... even recently. However, most astronomers reject their findings. The argument goes that the quantization is the result of too small a data set: increase the number of data points, and the effect will go away. This has always been their argument. Data sets have been increased, and the effect is still apparent. There is never going to be enough data to satisfy the skeptics. There is a reason for this. If they admit the redshift is quantized, then that means the universe is not expanding -- or else is expanding in jumps. Both alternatives are a death blow to the Big Bang, and most cosmological models are completely dependent upon it. It would mean a major re-think in astronomy and a huge about-face in public. In a word, it is unthinkable. So the quantized redshift is ignored and treated as if it does not exist.

That basically is the situation on both topics that you raised. I hope that gives you an overview as to why these propositions are usually rejected.