
To learn more about AIP Conference Proceedings, including the Conference Proceedings Series, please visit the webpage

http://proceedings.aip.org/proceedings

COSMOLOGY AND GRAVITATION

XIIIth Brazilian School of Cosmology and Gravitation

Mangaratiba, Rio de Janeiro, Brazil, 20 July - 2 August 2008

EDITORS

Mario Novello
Instituto de Cosmologia, Relatividade e Astrofísica (ICRA/CBPF), Rio de Janeiro, Brazil

Santiago E. Perez Bergliaffa
Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil

SPONSORING ORGANIZATIONS

Fundação de Amparo à Pesquisa do Rio de Janeiro (FAPERJ)
Centro Brasileiro de Pesquisas Físicas (CBPF)
Instituto de Cosmologia, Relatividade e Astrofísica (ICRA-Br)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Financiadora de Estudos e Projetos (FINEP)
International Center for Relativistic Astrophysics Network (ICRANet)

AMERICAN INSTITUTE OF PHYSICS, Melville, New York, 2009

AIP CONFERENCE PROCEEDINGS, VOLUME 1132


ALTERNATIVE COSMOLOGIES

J.V. Narlikar

Inter-University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune University Campus, Pune 411 007, INDIA

PACS: cosmology, cosmogony, high energy phenomena, alternatives to big bang models

ALTERNATIVES TO FRIEDMANN COSMOLOGIES

In 1922-24, when Friedmann produced the expanding universe solutions of Einstein's equations, his work went largely unnoticed. Subsequent to Hubble's discovery of the nebular redshifts, however, cosmologists recognized these models as the simplest starting point for discussing their subject. Physicists, on the other hand, considered these attempts naive and speculative, and so they did not pay much attention to George Gamow's very seminal work on the early universe.

The turning point for cosmology came, however, in 1965 with the discovery of the microwave background radiation (MBR). The MBR seemed to confirm the early universe scenario and, taken together with the extended validity of Hubble's law obtained with bigger and better telescopes, laid a solid foundation for cosmology as a branch of physics. By the mid-1970s a considerable body of physicists had begun to take Friedmann cosmology seriously, more so after they realized that big bang cosmology provides a setting, the only setting known so far, for testing very high energy physics and the grand unification programme. Cosmologists also looked to particle physics for an understanding of the primary origin of matter. Thus the subject of astroparticle physics has grown out of the joint speculations of big bang cosmologists and high energy particle physicists.

To what extent is Friedmann cosmology a correct theory of the origin and the large scale structure of the universe? While the majority of today's cosmologists would put their money on the Friedmann models, there have been a few 'agnostics' from time to time who were not satisfied with them. Their criticism of the standard model makes the following points:

1. It is highly speculative.
2. It uses untested physics.
3. It talks of early epochs that can never be seen.

And out of their efforts have emerged alternative theories of cosmology.

These theories have not been worked through to the depth that Friedmann cosmology can boast of. This is hardly surprising, considering the very limited number of people who have worked on them. Nevertheless they contain different perspectives and are worth taking a look at, if only because they might offer a resolution of some of the outstanding


problems that the Friedmann cosmology has been unable to solve. In these lectures, we describe a few such theories, in particular those based on the following concepts:

(1) Mach's Principle
(2) The Large Numbers Hypothesis
(3) Modified Newtonian Dynamics
(4) Creation of Matter.

MACH'S PRINCIPLE

There are two ways of measuring the Earth's spin about its polar axis. By observing the rising and setting of stars, the astronomer can determine the period of one revolution of the Earth around its axis: the period of 23h 56m 4.1s. The second method employs a Foucault pendulum, whose plane of swing gradually rotates around a vertical axis as the pendulum swings. Knowing the latitude of the place where the pendulum is located, it is possible to calculate the Earth's spin period. The two methods give the same answer.
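As a quick arithmetic aside (my illustration, not part of the original lectures): the quoted period is the sidereal day, which follows from the solar day once one notes that in a year the Earth makes one extra rotation relative to the stars.

    # Sidereal day from the solar day: in one year the Earth turns once
    # more relative to the stars than the number of solar days.
    solar_day = 86400.0              # seconds
    days_per_year = 365.25           # solar days in a year
    sidereal_day = solar_day * days_per_year / (days_per_year + 1.0)
    h, rem = divmod(sidereal_day, 3600)
    m, s = divmod(rem, 60)
    print(f"{int(h)}h {int(m)}m {s:.1f}s")   # -> 23h 56m 4.1s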

At first sight this does not seem surprising. If we are measuring the same quantity, we should get the same answer regardless of the method used. Closer examination, however, reveals why the issue is non-trivial. The two methods are based on different assumptions.

The first method measures the Earth's spin period against the background of distant stars, while the second employs standard Newtonian mechanics in a spinning frame of reference. In the latter case, we take note of how Newton's laws of motion get modified when their consequences are measured in a frame of reference spinning relative to the 'absolute space' in which these laws were first stated by Newton.

Thus, implicit in the assumption that equates the two methods is the coincidence of absolute space with the background of distant stars. It was Ernst Mach (1893), in the nineteenth century, who pointed out that this coincidence is nontrivial. He read something deeper into it, arguing that the postulate of absolute space, which allows one to write down the laws of motion and arrive at the concept of inertia, is somehow intimately related to the background of distant parts of the universe. We will analyse Mach's argument further.

When expressed in the framework of the absolute space, Newton's second law of motion takes the familiar form

P = m f.  (1)

This law states that a body of mass m subjected to an external force P experiences an acceleration f. Let us denote by Σ the coordinate system in which P and f are measured.

Newton was well aware that his second law has the simple form (1) only with respect to Σ and those frames that are in uniform motion relative to Σ. If we choose another frame Σ′ that has an acceleration a relative to Σ, the law of motion measured in Σ′ becomes

P′ = P − ma = m f′.  (2)

Although (2) outwardly looks the same as (1), with f′ the acceleration of the body in Σ′, something new has entered into the force term. This is the term −ma, which has nothing to do with the external force but depends solely on the mass m of the body and the acceleration a of the reference frame relative to the absolute space. Realizing this aspect of the additional force in (2), Newton termed it the "inertial force". As this name implies, the additional force is proportional to the inertial mass of the body. Newton discussed this force at length in his Principia, citing the example of a rotating water-filled bucket.

In this experiment, a water filled bucket is suspended from a ceiling by a rope. The rope is given a twist and let go. The bucket begins to spin and the water in the bucket also spins with it. It is observed that the surface of the water dips in the centre and rises at the boundary. Newton argued that spin introduces an absolute effect on the water surface, arising from inertial forces: thus one can give an unambiguous meaning to absolute space as a reference frame in which the water surface in the bucket is flat.

According to Mach, the Newtonian discussion was incomplete in the sense that the existence of the absolute space was postulated arbitrarily and in an abstract manner. Why does Σ have a special status in that it does not require the inertial force? How can one physically identify Σ without recourse to the second law of motion, which is based on it?

Mach argued that the answers to these questions were contained in the observation of the distant parts of the universe. It is the universe that provides a background reference frame that can be identified with Newton's frame Σ. Instead of saying that it is an accident that the Earth's rotation velocity relative to Σ agrees with that relative to the distant parts of the universe, Mach took it as proof that the distant parts of the universe somehow enter into the formulation of the local laws of mechanics.

One way this could happen is by a direct connection between the property of inertia and the existence of the universal background. To see this point of view, imagine a single body in an otherwise empty universe. In the absence of any forces (1) becomes

m f = 0.  (3)

What does this equation imply? Following Newton we would conclude from (3) that f = 0; that is, the body moves with uniform velocity. But we now no longer have a background against which to measure velocities. Thus f = 0 has no operational significance.

Rather, the lack of any tangible background for measuring motion suggests that f should be completely indeterminate. And it is not difficult to see that such a conclusion follows naturally provided we opt for the alternative deduction, also possible from (3) that

m = 0.  (4)

In other words, the measure of inertia depends on the existence of the background in such a way that in the absence of the background the measure vanishes! This aspect introduces a new feature into mechanics not considered by Newton. The Newtonian view that inertia is the property of matter has to be augmented to the statement that inertia is the property of matter as well as of the background provided by the rest of the universe.

This general idea is known as Mach's principle.

Such a Machian viewpoint not only modifies local mechanics, but it also introduces new elements into cosmology. For, except in a universe following the perfect cosmological principle¹, there is no basis now for assuming that particle masses would necessarily stay fixed in an evolving universe. This is the reason for considering cosmological models anew from the Machian viewpoint. Presented here are some instances of how different physicists have given quantitative expression to Mach's principle and arrived at new cosmological models.

THE BRANS-DICKE THEORY OF GRAVITY

In 1961 C. Brans and R.H. Dicke provided an interesting alternative to general relativity based on Mach's principle. To understand the reasons leading to their field equations, we first note that the concept of a variable inertial mass itself leads to a problem of interpretation. For, how do we compare masses at two different points in spacetime?

Masses are measured in certain units, such as the masses of elementary particles, which are themselves subject to this change! We need an independent unit of mass against which an increase or decrease of a particle mass can be measured. Such a unit is provided by gravity, by the so-called Planck mass:

m_P = (ħc/G)^{1/2}.  (5)

Thus the dimensionless quantity

χ = m/m_P = m (G/ħc)^{1/2},  (6)

measured at different spacetime points, can tell us whether masses m are changing.

Or alternatively, if we insist on using mass units that are the same everywhere, a change of χ would tell us that the gravitational constant G is changing.² This is the conclusion Brans and Dicke (1961) arrived at in their approach to Mach's principle. They looked for a framework in which the gravitational constant G arises from the structure of the universe, so that a changing G could be looked upon as the Machian consequence of a changing universe.
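For orientation (an added numerical aside, using standard CGS values), one can evaluate the Planck mass (5) and the ratio (6), e.g. for the proton:

    # Planck mass (5) and the dimensionless ratio chi (6) for the proton, CGS units.
    hbar = 1.055e-27   # erg s
    c    = 3.0e10      # cm/s
    G    = 6.67e-8     # cm^3 g^-1 s^-2
    m_p  = 1.673e-24   # proton mass, g

    m_planck = (hbar * c / G) ** 0.5
    chi = m_p / m_planck
    print(f"m_P ~ {m_planck:.2e} g, chi(proton) ~ {chi:.1e}")
    # -> m_P ~ 2.18e-05 g, chi(proton) ~ 7.7e-20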

D.W. Sciama (1953) had given general arguments leading to a relationship between G and the large-scale structure of the universe. We come across one example of such a relation in the Friedmann cosmologies:

G ≈ 3H₀²/(4πρ₀).  (7)

¹ We will define it later; but it forces the conclusion that the large scale properties of the universe do not change with time.

² We could of course assume that ħ and c also change. However, by keeping ħ and c constant we follow the principle of least modification of existing theories. Thus special relativity and quantum theory are unaffected if we keep ħ and c fixed.

If we write R₀ = c/H₀ as a characteristic length of the universe and M₀ = 4πρ₀R₀³/3 as the characteristic mass of the universe, then the above relation becomes

1/G ≈ M₀/(R₀c²) ~ Σ m/(rc²).  (8)

Given a dynamic coupling between inertia and gravity, a relation of the above type is expected to hold. Brans and Dicke took this relation as one that determines G⁻¹ from a linear superposition of inertial contributions m/rc², the typical one being from a mass m at a distance r from the point where G is measured. Since m/r is a solution of a scalar wave equation with a point source of strength m, Brans and Dicke postulated that G behaves as the reciprocal of a scalar field φ:

G ~ 1/φ,  (9)

where φ is expected to satisfy a scalar wave equation whose source is all the matter in the universe.

THE ACTION PRINCIPLE

The intuitive concepts described above are contained in the Brans-Dicke action principle, which may be written in the form

𝒜 = ∫_V [ (c³/16π)(φR − ω φ_iφ^i/φ) + L ] √(−g) d⁴x.  (10)

Notice first that the coefficient of R is c³φ/16π instead of c³/16πG as in the Einstein-Hilbert action. The reason for this lies in the anticipated behaviour of G as just given. The second term, with φ^i = ∂φ/∂x_i, ensures that φ will satisfy a wave equation, while the third term includes, through a Lagrangian density L, all the matter (and energy) present in the spacetime region V; this matter part of the action leads to the energy-momentum tensor T_ik of matter through the usual relation. ω is a coupling constant.

The variation of 𝒜 for small changes of g_ik leads to the field equations

R_ik − (1/2)g_ik R = −(8π/φc⁴)T_ik − (ω/φ²)(φ_iφ_k − (1/2)g_ik φ_lφ^l) − (1/φ)(φ_{;ik} − g_ik □φ).  (11)

Similarly, the variation of φ leads to the following equation for φ:

(2ω/φ)□φ − (ω/φ²)φ_iφ^i + R = 0.  (12)


This latter equation can be simplified by substituting for R from the contracted form of (11). We finally get

□φ = 8πT/((3 + 2ω)c⁴),  (13)

where T is the trace of T^i_k. Thus (13) leads to the anticipated scalar wave equation for φ with sources in matter, □ being the wave operator.

Because it contains a scalar field φ in addition to the metric tensor g_ik, the Brans-Dicke theory is often referred to as the scalar-tensor theory of gravitation. Notice that the theory approaches general relativity as ω → ∞. The solar system tests of this theory have led to a large lower limit on ω, e.g., ω > 1000.

COSMOLOGICAL SOLUTIONS OF THE BRANS-DICKE EQUATIONS

In analogy with the Friedmann models, we will consider only the homogeneous and isotropic cosmological models in the Brans-Dicke theory. Accordingly we start with the Robertson-Walker line element and the energy tensor for a perfect fluid; φ is now a function of the cosmic time only. Thus (with c = 1) the field equations become

2S̈/S + (Ṡ² + k)/S² = −(8π/φ)p − (ω/2)(φ̇/φ)² − 2(Ṡ/S)(φ̇/φ) − φ̈/φ,  (14)

(Ṡ² + k)/S² = (8π/3φ)ε − (Ṡ/S)(φ̇/φ) + (ω/6)(φ̇/φ)²,  (15)

ε̇ + 3(Ṡ/S)(ε + p) = 0.  (16)

In addition, we have the field equation for φ:

d(φ̇S³)/dt = [8π/(3 + 2ω)](ε − 3p)S³.  (17)

We anticipate that big bang solutions will emerge from these equations and set the big bang epoch at t = 0. Then the integral of (17) is

φ̇S³ = [8π/(3 + 2ω)] ∫₀ᵗ (ε − 3p)S³ dt′ + C,  (18)

where C is a constant. Two types of solutions are obtained, depending on whether C = 0 or C ≠ 0.

(i) C = 0

We will consider a simple example of this type, with k = 0, p = 0, and ε = ρc². This solution is therefore analogous to the Einstein-de Sitter model of general relativity. Write

S = S₀ (t/t₀)^A,  φ = φ₀ (t/t₀)^B,  (19)

so that ρ ∝ t^{−3A}, and the field equations give

A = (2ω + 2)/(3ω + 4),  B = 2/(3ω + 4),  (20)

and

φ₀ = 4πρ₀t₀² (3ω + 4)/(2ω + 3).  (21)

[Figure 1 plots log S and log G against log t.]

FIGURE 1. The temporal behaviour of the scale factor S and gravitational constant G plotted on a log-log plot.

The temporal behaviour of S and G (∝ φ⁻¹) is illustrated in Figure 1. It can be verified that as ω → ∞ this solution tends to the Einstein-de Sitter model.
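That limit is easy to see numerically; the following sketch (added by me) evaluates the exponents (20) for increasing ω (the Einstein-de Sitter model corresponds to A = 2/3, B = 0):

    # Brans-Dicke dust exponents (20): S ~ t^A, phi ~ t^B, G ~ t^(-B).
    def exponents(omega):
        A = (2 * omega + 2) / (3 * omega + 4)
        B = 2 / (3 * omega + 4)
        return A, B

    for omega in (1, 10, 100, 1000):
        A, B = exponents(omega)
        print(f"omega={omega:5d}: A={A:.4f} (-> 2/3), B={B:.5f} (-> 0)")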

An analogue of the radiation model can be obtained in this theory. H. Nariai (1968) obtained solutions for p = nε with n in the range 0 < n < 1/3.

(ii) C ≠ 0

In this case the φ-terms dominate the dynamics of the universe in the early stages. Thus for small enough t we have

φ̇S³ ≈ C,  (22)

both for the case of dust and for that of radiation. For our power-law solutions (19) in the case p = 0, we have at small enough t

3A + B = 1.  (23)

In the case of a radiation-dominated universe, p = ε/3, and we can again try a solution of the form (19) to get, as t → 0,

A² = −AB + (ω/6)B².  (24)

Taking into account (22), we can solve (23) and (24) to get

B = [1 ± 3(2ω/3 + 1)^{1/2}] / (3ω + 4).

The upper sign holds when C > 0 and the lower sign when C < 0. For C > 0, φ → 0 as S → 0, while for C < 0, φ → ∞ as S → 0. These conclusions hold irrespective of the value of k or of the equation of state, since at small values of S the dynamics of the universe is controlled by the φ-term (see Figure 1).
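The algebra can be checked symbolically; a minimal sketch (mine, assuming (23) and (24) in the forms reconstructed above) recovers the quoted expression for B:

    # Verify: 3A + B = 1 (23) together with A^2 = -A*B + omega*B^2/6 (24)
    # gives B = (1 +/- 3*sqrt(2*omega/3 + 1)) / (3*omega + 4).
    import sympy as sp

    A, B, w = sp.symbols('A B omega')
    sols = sp.solve([3*A + B - 1, A**2 + A*B - w*B**2/6], [A, B], dict=True)
    for s in sols:
        print(sp.simplify(s[B]))
    # both roots are equivalent to (1 +/- sqrt(6*omega + 9))/(3*omega + 4)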

THE VARIATION OF G

Since G ∝ φ⁻¹, a time-dependent φ will mean a time-dependent gravitational constant. As seen from the earlier equations, we have for C = 0

Ġ/G = −φ̇/φ = −B/t ≈ −H/(ω + 1).  (25)

Thus |Ġ/G| is of the order of Hubble's constant unless ω is large, and its sign indicates that the gravitational constant should decrease with time.

However, for a large enough |C|, the φ-dominated solutions differ significantly from the matter-dominated ones even at the present epochs. In this case, for C large and negative, we can have G increasing with time even at relatively recent epochs.
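Putting numbers into (25) (an added illustration; the H₀ value and the sample ω values are my assumptions), the predicted drift of G is far below the Hubble rate once ω is large, which is how such models stay within observational bounds on |Ġ/G| of roughly 10⁻¹²-10⁻¹³ per year:

    # |Gdot/G| ~ H/(omega+1) from (25), for the C = 0 dust solution.
    H0_per_yr = 7.2e-11            # assumed H0 ~ 70 km/s/Mpc, expressed per year
    for omega in (10, 1000, 40000):
        print(f"omega={omega:6d}: |Gdot/G| ~ {H0_per_yr/(omega+1):.1e} per yr")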

INFLATION IN BRANS-DICKE COSMOLOGIES

Because of its relative simplicity of formulation and interpretation of observable results, the Brans-Dicke cosmology has been studied in the 'very early universe' phase also.

V.B. Johri and C. Mathiazhagan (1984) were the first to consider an inflationary phase in this cosmology. The problem of bubble nucleation and coalescence faced by Guth's inflationary model represented the difficulty of what has been known as the graceful exit from the inflationary phase into the Friedmann radiation-dominated phase.

La and Steinhardt had considered the Brans-Dicke framework to generate an 'extended inflation'. The phrase 'extended' arises because the expansion is not exponential but of a power-law type. The idea seemed to solve the graceful exit problem but ran into trouble because of the distortions it produced in the cosmic microwave background, distortions that were unacceptably high. Undeterred by these setbacks, the inflation enthusiasts

explored a variation on the Brans-Dicke theme by adding higher order couplings of the scalar field with gravity. This led to the notion of 'hyper-extended inflation'. However, none of these ideas seem to have received much following in later years.

To sum up, the Brans-Dicke theory had generated considerable interest as an alternative theory of gravity; but with the solar system tests giving values very close to the predictions of general relativity with greater and greater accuracy, the ω-parameter that distinguished it from general relativity had to be larger and larger, thus making it more and more indistinguishable from general relativity, at least on the scale of the solar system. On the cosmological front the theory has given solutions different from standard Friedmann cosmology, but these differences do not seem to have impressed theoreticians sufficiently for them to undertake detailed studies of the cosmogony of the universe from the very early epochs. For large ω, as shown by equation (25), the rate of change of G is expected to be small compared to H, and this is consistent with present measurements of G and Ġ.

DIRAC AND THE LARGE NUMBERS HYPOTHESIS

Dimensionless constants in physics play an important role in understanding natural phenomena. For example, the fine structure constant α = e²/ħc ≈ 1/137 conveys an impression of the strength of electrodynamics as a basic interaction. Given e, G, and the masses of the proton and the electron, m_p and m_e, we can construct another dimensionless constant (that is, a constant with no units):

e²/(G m_p m_e) = 2.3 × 10³⁹ ~ 10⁴⁰.  (26)

This constant measures the relative strength of the electrical and the gravitational forces between the electron and the proton. Like the fine structure constant α = e²/ħc, this constant reflects an intrinsic property of nature. However, unlike α, the constant in (26) is enormously large! Why such a large number?

Perhaps the appearance of a large dimensionless constant might be dismissed as some quirk on the part of nature. The mystery deepens, however, if we consider another dimensionless number. This is the ratio of the length scale associated with the universe, c/H₀, and the length associated with the electron, e²/m_ec². This ratio is

(c/H₀) / (e²/m_ec²) = m_ec³/(e²H₀) = 3.7 × 10⁴⁰.  (27)

Not only do we have another large dimensionless number in (27), but it is of the same order as that in (26).

We can generate another large number of special significance out of particle physics and cosmology. Assuming the closure density ρ_c = 3H₀²/8πG, let us calculate the number of particles in a Euclidean sphere of radius c/H₀, the mass of each particle being m_p. The answer is

N = (4π/3)(c/H₀)³ ρ_c / m_p = c³/(2 m_p G H₀) ≈ 10⁸⁰.  (28)

Thus, taking N as a standard, we see that the large dimensionless numbers of (26) and (27) are both of the order of N^{1/2}.

Reactions among physicists have varied as to the significance of all these numbers.

Some dismiss them as a coincidence with the rejoinder: "So what?" Others have read deep significance into these relationships. The latter class includes such distinguished physicists as A.S. Eddington and P.A.M. Dirac.

Dirac (1937) pointed out that the relationships (27) and (28) contain the Hubble constant H₀, and therefore the magnitudes computed in these formulae vary with epoch in the standard Friedmann model. If so, the near equality of (26) and (27) has to be a coincidence of the present epoch in the universe, unless the constant (26) also varies in such a way as to maintain the state of near equality with (27) at all epochs.

With this proviso, the equality of (26) and (27) is not coincidental but is characteristic of the universe at all epochs. The proviso also implies that at least one of the so-called constants involved in (26), namely e, m_p, m_e, and G, must vary with the epoch.

THE LNH

This proviso was generalized by Dirac to what he called the Large Numbers Hypothesis (LNH). To understand this hypothesis we rewrite the ratio (27) as that between the time scale associated with the universe, T₀ = H₀⁻¹, and the time taken by light to travel a distance of the order of the classical electron radius, t_e = e²/m_ec³. The LNH then states that any large number that at the present epoch is expressible in the form (T₀/t_e)^k, where k is of order unity, varies with the epoch t as (t/t_e)^k, with a constant of proportionality of order unity.

Applied to (26), therefore, the LNH implies that the ratio e²/Gm_pm_e must vary as (t/t_e). Dirac made the distinction between e, m_e, m_p on one side and G on the other, in the sense that the former are atomic (microscopic) quantities while G has macroscopic significance. In the Machian cosmologies, G is in fact related to the large-scale structure of the universe. Dirac therefore assumed that if we use "atomic units" that always maintain fixed values for atomic quantities, then t_e will be constant and G ∝ t⁻¹. That is, in terms of atomic time units the gravitational constant must vary with the epoch t, with |Ġ/G| ~ H.

Dirac (1973, 1974) wrote papers in which quantitative models based on the LNH were worked out. One consequence is that new matter is created continuously. In additive matter creation the rate of creation is proportional to volume, whereas in multiplicative creation the rate is proportional to the existing mass in the volume. The variation of G in most of these models is, however, much faster than observed.


MODIFIED NEWTONIAN DYNAMICS (MOND)

Observational motivation: rotation curves of galaxies show that the typical orbital velocity V satisfies the Newtonian equation

GM/R² = V²/R  ⇒  V² ∝ GM/R.  (29)

However, we get V ≈ constant at radial distances R about 2-3 times the visible size of the galaxy. What does it mean?

a) Conventional conclusion: M = M(R), with M(R) ∝ R. M(R) increasing with R implies the existence of dark matter far beyond the visible extent of the galaxy. A further assumption made by big bang cosmologists is that this dark matter is mostly nonbaryonic.

b) Alternative conclusion : It was proposed by Milgrom (1983) that at sufficiently low accelerations, Newton's second law of motion gets modified. This idea is called MOdified Newtonian Dynamics or MOND.

THE MOND ALTERNATIVE

The MOND prescription is briefly stated as follows. Let the true gravitational acceleration be g and the Newtonian gravitational acceleration be g_N. Then define

μ(|g|/a₀) g = g_N,  (30)

where μ(x) = 1 for x ≫ 1 and μ(x) = x for x ≪ 1.

For example, for a galaxy of mass M = 10¹¹ M_⊙, the Newtonian acceleration at 10 kpc towards the centre is

g_N = GM/R² = (6.7 × 10⁻⁸ × 2 × 10⁴⁴)/(3 × 10²²)² ≈ 1.5 × 10⁻⁸ cm s⁻²,

which is of the order of a₀. For |g| < a₀, μ(x) = x, and from (30) we get

g²/a₀ = g_N = GM/R²  ⇒  g = (GMa₀)^{1/2}/R,  (31)

V² = gR  ⇒  V = (GMa₀)^{1/4}.  (32)

The Tully-Fisher relation (TFR), as observed empirically, implies

L_{H band} ∝ V⁴.


If L/M ≈ constant for the class of galaxies chosen,

M ∝ V⁴.  (33)

MOND gives V⁴ = GMa₀. Thus MOND provides a natural explanation of the TFR.

In practice one needs to model each galaxy as a disc + halo combination. Matching of the halo parameters with those of the disc is then needed to get the observed rotation curves. This demands fine tuning.

To relate to L_H ∝ V⁴ (the TFR), note that L_H comes from the visible matter while V comes from the halo. Halo-disc interactive models thus look like fine tuning: such models require dissipational collapse of gas into the potential well of the halo, and it is then hard to expect a sharp Tully-Fisher relation. The explanation of the TFR is therefore claimed as a success by MOND.

Other successes of MOND

(i) One expects discrepancies with Newtonian mechanics in galaxies with low surface mass density (of order a₀/G or below). This is found in dwarf spirals.

(ii) MOND predicts the detailed shape of the rotation curve from the observed matter, gas + stars + dust, using a single parameter, the mass-to-light ratio.

(iii) Does MOND explain 'excess' of kinetic energy in clusters?

Yes. Sanders (2003) has argued that this is the case. But in some doubtful cases this requires the clusters to contain much more baryonic matter, hitherto undiscovered.

MOND & COSMOLOGY

MOND was not proposed as a formal theory but as an empirical rule. To make it into a relativistic theory with an action principle, Bekenstein (2004) proposed TeVeS (the Tensor-Vector-Scalar theory).

The tensor part is of course provided by the metric. This theory:

1. Agrees with solar system tests,
2. Agrees with gravitational lensing observations without dark matter,
3. Needs no superluminal propagation, and
4. Can construct cosmological models.

Sanders' (2008) comments on this theory are as follows.

In some sense the relativistic MOND does require dark matter, e.g.:

1. CDM dark matter potential wells are needed at recombination to explain the first two peaks of the angular power spectrum of the MBR.

2. The rebrightening of SNIa at z > 1 requires matter domination over vacuum energy, with Ω_CDM = 0.25.

3. Numerical coincidences cannot be avoided. For example, the result


a₀ ~ cH₀  (34)

is not explained by the TeVeS theory.
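Numerically (an added check, with round values assumed for c, H₀ and Milgrom's a₀), the coincidence (34) reads:

    # The coincidence (34): a0 is of order c*H0.
    import math
    c  = 3.0e10          # cm/s
    H0 = 2.3e-18         # s^-1 (assumed)
    a0 = 1.2e-8          # cm s^-2
    print(f"c*H0 = {c*H0:.1e} cm/s^2, a0/(c*H0) = {a0/(c*H0):.2f}, "
          f"c*H0/(2*pi) = {c*H0/(2*math.pi):.1e}")
    # a0 comes out close to c*H0/(2*pi)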

Is MOND motivated by cosmology? The possible effect of the universe on local particles recalls Machian ideas, as in the Brans-Dicke theory. In Sanders' version one talks of a preferred-frame cosmology, the preferred frame being the 'cosmological rest frame'. It should be noted that the vector field is not invariant under Lorentz transformations.

Although MOND is an interesting concept, it is not clear why it is to be preferred to the dark matter alternative.

The three approaches (Mach's Principle, the Large Numbers Hypothesis and MOND) have been presented here in a somewhat sketchy form, largely because of the shortage of time needed for a more comprehensive presentation. I will, however, now turn to an alternative cosmology with whose genesis I have been involved. This cosmology will be presented in somewhat greater detail.

EVOLUTION OF COSMOLOGY IN THE 20TH CENTURY

The standard cosmological model accepted by the majority at present is centered on the big bang, which involves the creation of matter and energy in an initial explosion.

Since we have overwhelming evidence that the universe is expanding, the only alternative to this picture appears to be the classical steady-state cosmology of Bondi, Gold and Hoyle (Bondi and Gold 1948; Hoyle 1948), or a model in which the universe is cyclic with an oscillation period which can be estimated from observation. In this latter class of models the bounce at a finite minimum of the scale factor is produced by a negative energy scalar field. Long ago, Hoyle and Narlikar (1964) emphasized the fact that such a scalar field will produce models which oscillate between finite ranges of scale. In the 1960s theoretical physicists shied away from scalar fields, and more so from those involving negative energy. Later, Narlikar and Padmanabhan (1985) discussed how the scalar creation field helps resolve the problems of singularity, flatness and horizon in cosmology.

It now appears that the popularity of inflation and the so-called new physics of the 1980s have changed the 1960s mind-set. Thus Steinhardt and Turok (2002) introduced a negative potential energy field and used it to cause a bounce from a non-singular high density state. It is unfortunate that they did not cite the earlier work of Hoyle and Narlikar, which had pioneered the concept of a non-singular bounce through the agency of a negative energy field at a time when the physics community was hostile to these ideas. Such a field is required to ensure that matter creation does not violate the law of conservation of matter and energy.

Following the discovery of the expansion of the universe by Hubble in 1929, practically all of the theoretical models considered were of the Friedmann type, until the proposal by Bondi, Gold and Hoyle in 1948 of the classical steady state model. Bondi and Gold arrived at this model by postulating the perfect cosmological principle. This principle not only required the universe to be homogeneous and isotropic in space, but also unchanging in time. Thus, statistically, all physical observables in such a universe should remain constant in time. A classical test of this model lay in the fact that, as distinct from all of the big bang models, it predicted that the universe must be accelerating


(cf. Hoyle and Sandage, 1956). For many years it was claimed that the observations indicated that the universe is decelerating, and that this finding disproved the steady state model. Not until much later was it conceded that it was really not possible to determine the deceleration parameter by the classical methods then being used. Gunn and Oke (1975) were the first to highlight the observational uncertainties associated with this test. Of course many other arguments were used against the classical steady state model (for a discussion of the history see Hoyle, Burbidge and Narlikar 2000, Chapters 7 and 8). But starting in 1998, studies of the redshift-apparent magnitude relation for supernovae of Type Ia showed that the universe is apparently accelerating (Riess et al. 1998; Perlmutter et al. 1999). The normal, and indeed the proper, way to proceed after this result was obtained would have been at least to acknowledge that, despite the difficulties associated with the steady state model, this model had all along been advocating an accelerating universe.

It is worth mentioning that McCrea (1951) was the first to introduce vacuum-related stresses with equation of state p = −ρc² in the context of the steady state theory. Later, Gliner (1970) discussed how a vacuum-like state of the medium can serve as the original (non-singular) state of a Friedmann model.

The introduction of dark energy is typical of the way the standard cosmology has developed: a new assumption is introduced specifically to sustain the model against some new observation. Thus, when the amount of dark matter proved to be too high to sustain the primordial origin of deuterium, the assumption was introduced that most of the dark matter has to be non-baryonic. Further assumptions about this dark matter became necessary, e.g., cold, hot, warm, to sustain the structure formation scenarios. The assumption of inflation was introduced to get rid of the horizon and flatness problems and to do away with an embarrassingly high density of relic magnetic monopoles. As far as dark energy is concerned, until 1998 the general attitude towards the cosmological constant was typically as summarized by Longair at the Beijing cosmology symposium: "None of the observations to date require the cosmological constant" (Longair 1987). Yet, when the supernovae observations could not be fitted without this constant, it came back with a vengeance as dark energy.

Although the popularity of the cosmological constant and dark energy picked up in the late 1990s, there had been earlier attempts at extending the Friedmann models to include the effects of vacuum energy. A review of these models, vis-à-vis observations, may be found in the article by Carroll and Press (1992).

We concede that with the assumptions of dark energy, non-baryonic dark matter, inflation, etc., an overall self-consistent picture has been provided within the framework of the standard model. One demonstration of this convergence to self-consistency is seen from a comparison of a review of the values of the cosmological parameters of the standard model by Bagla et al. (1996) with the present values. Except for the evidence from high redshift supernovae in favour of an accelerating universe, which came 2-3 years later than the above review, there is an overall consistency of the picture within the last decade or so, including a firmer belief in the flat (Ω = 1) model with narrower error bars.

Nevertheless, we would also like to emphasize that the inputs required in fundamental physics through these assumptions have so far no experimental checks from laboratory physics. Moreover, an epoch-dependent scenario providing self-consistency checks (e.g. MBR anisotropies, or the cluster baryon fraction as a function of redshift) does not meet the criterion of 'repeatability of a scientific experiment'. We contrast this situation with that in stellar evolution, where stars of different masses constitute repeated experimental checks on the theoretical stellar models, thus improving their credibility.

FIGURE 2. The oscillatory + expanding scale factor of the QSSC.

Given the speculative nature of our understanding of the universe, a sceptic of the standard model is justified in exploring an alternative avenue wherein the observed features of the universe are explained with fewer speculative assumptions. We review here the progress of such an alternative model, known as the Quasi-Steady State Cosmology.

THE QUASI-STEADY STATE COSMOLOGY (QSSC)

In this model creation of matter is brought in as a physical phenomenon, and a negative kinetic energy scalar field is required to ensure that it does not violate the law of conservation of matter and energy. A simple approach based on Mach's principle leads naturally to such a field within the curved spacetime of general relativity. The resulting field equations have the two simplest types of solutions for a homogeneous and isotropic universe: (i) those in which the universe oscillates but there is no creation of matter, and (ii) those in which the universe steadily expands with a constant value of H₀, driven by continuous creation of matter. The simplest model including features of both these solutions is the Quasi-Steady State Cosmology (QSSC), first proposed by Hoyle, Burbidge and Narlikar (1993). It has the scale factor in the form

S(t) = exp(t/P) {1 + η cos θ(t)},  θ(t) ≈ 2πt/Q,  (35)

where P is the long-term 'steady state' time scale of expansion while Q is the period of a single oscillation. The function θ(t) satisfies a known differential equation but can be approximated by the linear function 2πt/Q. The scale factor is plotted in Figure 2. Note that it is essential for the universe to have a long-term expansion, for a universe that has only oscillations without long-term expansion would run into problems like the Olbers paradox. It is also a challenge in such a model to avoid running into 'heat death' through a steady increase of entropy from one cycle to the next. These difficulties are avoided if there is creation of new matter at the start of each oscillation, as happens in the QSSC, and also if the universe has a steady long-term expansion in addition to the oscillations.

New matter in such a case is of low entropy and the event horizon ensures a constant entropy within as the universe expands.


CONSIDERATIONS OF COSMOGONY

Before I describe this cosmological model, I want to indicate the importance of the observed behavior of the galaxies (the observed cosmogony) in this approach.

Now that theoretical cosmologists have begun to look with favor on the concepts of scalar negative energy fields and the creation process, they have taken the position that this subject can only be investigated by working out models based on classical approaches of high energy physics and their effects on the global scale. In all of the discussions of what is called precision cosmology there is no discussion of the remarkable phenomena which have been found in the comparatively nearby universe, showing that galaxies themselves can eject what may become new galaxies. I believe that only when we really understand how individual galaxies and clusters etc. have formed, evolve, and die (if they ever do) shall we really understand the overall cosmology of the universe. As was mentioned earlier, the method currently used in the standard model is to suppose that initial quantum fluctuations were present at an unobservable epoch in the early universe, and then to try to mimic the building of galaxies using numerical methods, invoking the dominance of non-baryonic matter and dark energy, for which there is no independent evidence.

In one sense I believe that the deficiency of the current standard approach is already obvious. The model is based on only some parts of the observational data. These are: all of the details of the microwave background, the abundances of the light elements, the observed dimming of distant supernovae, and the large scale distribution of the observed galaxies. This has led to the conclusion that most of the mass-energy making up the universe has properties which are completely unknown to physics. This is hardly a rational position, since it depends heavily on the belief that all of the laws of physics known to us today can be extrapolated back to scales and epochs where nothing is really testable; and that there is nothing new to be learned.

In spite of this, a very persuasive case has been made that all of the observational parameters can be fitted together to develop what is now becoming widely accepted as a new standard model, the so-called ΛCDM model (Spergel et al. 2003). There have been some publications casting doubt on this model, particularly as far as the reality of dark energy and cold dark matter are concerned (Meyers et al. 2004; Blanchard et al. 2003). It is usual to dismiss them as controversial and to argue that a few dissenting ideas on the periphery of a generally accepted paradigm are but natural. However, it is unfortunately the case that a large fraction of our understanding of the extragalactic universe is being based on the belief (for which there is no independent evidence) that there was a beginning and an inflationary phase, and that the seeds of galaxies all originate from that very early phase.

I believe that an alternative approach should be considered and tested by observers and theorists alike. In this scheme the major themes are (1) that the universe is cyclic and there was no initial big bang, and (2) all of the observational evidence should be used to test the model. As we shall show, this not only includes the observations which are used in the current standard model, but also the properties and interactions of galaxies and QSOs which are present in the local (z < 0.1) universe.

Possibly the most perceptive astronomer in recent history was Viktor Ambartsumian, the famous Armenian theorist. Starting in the 1950s and 1960s (Ambartsumian 1965), he stressed the role of explosions in the universe, arguing that the associations of galaxies (groups, clusters, etc.) showed a tendency to expand with far larger kinetic energy than is expected by assuming that the gravitational virial condition holds.

Since these phenomena appear on the extragalactic scale and involve quasi-stellar objects, active galaxies, powerful radio sources, and clusters and groups of galaxies at all redshifts, we believe they must have an intimate connection with cosmology. Indeed, if one looks at standard cosmology, there too the paradigm centers on the 'big bang', which is itself an explosive creation of matter and energy. In the big bang scenario the origin of all of these phenomena is ultimately attributed to a single origin in the very early universe. No connection has been considered by the standard cosmologists between this primordial event and the mini-creation events (MCEs, hereafter) that Ambartsumian talked about. In fact, the QSOs and AGN are commonly ascribed to supermassive black holes as 'prime movers'. In this interpretation the only connection with cosmology is that it must be argued that the central black holes are a result of the processes of galaxy formation in the early universe.

In the QSSC we have been trying to relate such mini-creation events (MCEs) directly to the large scale dynamics of the universe. It can be shown that the dynamics of the universe is governed by the frequency and power of the MCEs, and there is a two-way feedback between the two. That is, the universe expands when there is a large MCE activity and contracts when the activity is switched off. Likewise, the MCE activity is large when the density of the universe is relatively large and negligible when the density is relatively small. In short, the universe oscillates between states of finite maximum and minimum densities as do the creation phases in the MCEs.

This, then, is the model called the quasi-steady state cosmology, or QSSC in brief. The model was motivated partly by Ambartsumian's ideas and partly by the growing number of explosive phenomena that are being discovered in extragalactic astronomy. In the following sections I discuss the cosmological model and then turn to the various phenomena which are beginning to help us understand the basic cosmogony.

GRAVITATIONAL EQUATIONS WITH CREATION OF MATTER

The mathematical framework for our cosmological model has been discussed by Hoyle, Burbidge and Narlikar (1995; HBN hereafter), and we outline briefly its salient features. To begin with, it is a theory that is derived from an action principle based on Mach's Principle, and assumes that the inertia of matter owes its origin to other matter in the universe. This leads to a theoretical framework wider than general relativity, as it includes terms relating to inertia and creation of matter. These are explained in the Appendix, and I use the results derived there in the following discussion.

Thus the equations of general relativity are replaced in this theory by

R_ik − (1/2)g_ik R + λg_ik = −8πG [ T_ik − f (C_iC_k − (1/4)g_ik C_lC^l) ],  (36)

with the coupling constant f, a positive quantity defined in terms of the fundamental constants ħ and G through the lifetime τ introduced below.  (37)

[I have taken the speed of light c = 1.] Here τ = ħ/m_P is the characteristic lifetime of a Planck particle with mass m_P = (3ħ/8πG)^{1/2}. The gradient of C with respect to the spacetime coordinates x^i (i = 0, 1, 2, 3) is denoted by C_i. Although (37) defines f in terms of the fundamental constants, it is convenient to keep its identity on the right-hand side of Einstein's equations, since there we can compare the C-field energy tensor directly with the matter tensor. Note that because of the positive f, the C-field has negative kinetic energy. Also, as pointed out in the Appendix, the cosmological constant λ is negative in this theory.

The question now arises of why astrophysical observation suggests that the creation of matter occurs in some places but not in others. For creation to occur at the points A₀, B₀, ..., it is necessary classically that the action should not change (i.e. it should remain stationary) with respect to small changes in the spacetime positions of these points, which can be shown to require

C_i(A₀)C^i(A₀) = C_i(B₀)C^i(B₀) = ⋯ = m_P².  (38)

This is in general not the case: in general the magnitude of C_i(X)C^i(X) is much less than m_P². However, as one approaches closer and closer to the surface of a massive compact body, C_i(X)C^i(X) is increased by a general relativistic time dilatation factor, whereas m_P stays fixed.

This suggests that we should look for regions of strong gravitational field, such as those near collapsed massive objects. In general relativistic astrophysics such objects are none other than black holes, formed from gravitational collapse. Theorems by Penrose, Hawking and others (see Hawking and Ellis 1973) have shown that, provided certain positive energy conditions are met, a compact object undergoes gravitational collapse to a spacetime singularity. Such objects become black holes before the singularity is reached. However, in the present case the negative energy of the C-field intervenes in such a way as to violate the above energy conditions. What happens to such a collapsing object containing a C-field apart from ordinary matter? It can be shown (Narlikar et al. 2006) that such an object does not become a black hole. Instead, the collapse of the object is halted and the object bounces back, thanks to the effect of the C-field. The C-field strength grows as the object shrinks, and so its repulsive effect ultimately dominates over gravity. This is why the object bounces. We will refer to such an object as a compact massive object (CMO) or a near-black hole (NBH).

Thus, such an object, after bouncing at a minimum radius, will expand; as its radius increases the strength of the C-field falls, while at small radii the size of the object increases rapidly. This expansion therefore resembles an explosion.

It is worth stressing here that even in classical general relativity the external observer never lives long enough to observe the collapsing object enter the horizon. Thus all claims to have observed black holes in X-ray sources or galactic nuclei really establish the existence of compact massive objects, and as such they are consistent with the NBH concept. A spinning NBH, for example, can be approximated by the Kerr solution limited to the region outside the horizon (in an NBH there is no horizon). In cases where the C-field has not grown to the level required for creation of matter, an NBH will behave very much like a Kerr black hole.

The theory would profit most from a quantum description of the creation process. The difficulty, however, is that Planck particles are defined as those for which the Compton wavelength and the gravitational radius are essentially the same, which means that, unlike other quantum processes, flat spacetime cannot be used in the formulation of the theory. A gravitational disturbance is necessarily involved, and the ideal location for triggering creation is near a CMO. A C-field boson far away from a compact object of mass M may not be energetic enough to trigger the creation of a Planck particle. On falling into the strong gravitational field of a sufficiently compact object, however, the boson energy is multiplied by a factor (1 − 2GM/r)^{−1/2} for a local Schwarzschild metric.

Bosons then multiply up in a cascade, one makes two, two makes four, ..., as in the discharge of a laser, with particle production multiplying up similarly and with negative pressure effects ultimately blowing the system apart. This is the explosive event that we earlier referred to as a mini-creation event (MCE). Unlike the big bang, however, the dynamics of this phenomenon is well defined and non-singular. For a detailed discussion of the role of an NBH as well as the mode of its formation, see Hoyle et al. (2000), pp. 244-249.

While still qualitative, we shall show that this view agrees well with the empirical facts of observational astrophysics. For, as mentioned in the previous section, we do see several explosive phenomena in the universe, such as jets from radio sources, gamma ray bursts, X-ray bursters, QSOs and active galactic nuclei. Generally it is assumed that a black hole plays the lead role in such an event by somehow converting a fraction of its huge gravitational energy into the large kinetic energy of the 'burst'. In actuality, we do not see the infalling matter that is the signature of a black hole; rather, one sees outgoing matter and radiation, which agrees very well with the explosive picture presented above.

COSMOLOGICAL MODELS

The qualitative picture described above is too difficult and complex to admit an exact solution of the field equations (36). The problem is analogous to that in standard cosmology, where a universe with inhomogeneity on the scale of galaxies, clusters, superclusters, etc., as well as containing dark matter and radiation, is impossible to describe exactly by a general relativistic solution. In such a case one starts with simplified approximations, as in the models of Friedmann and Lemaitre, and then puts in specific details as perturbations. The two phases of the radiation-dominated and matter-dominated universe likewise reflect approximations implying that in the early stages relativistic particles and photons dominated the expansion of the universe, whereas in the later stages it was the non-relativistic matter, or dust, that played the major role in the dynamics of the universe.

In the same spirit we approach the above cosmology by a mathematical idealization of a homogeneous and isotropic universe in which there are regularly phased epochs when the MCEs were active and matter creation took place, while between two consecutive epochs there was no creation (the MCEs lying dormant). We will refer to these two situations as creative and non-creative modes. In the homogeneous universe assumed here the C-field will be a function of cosmic time only. We will be interested in the matter-dominated analogues of the standard models since, as we shall see, the analogue of the radiation-dominated state never arises except locally in each MCE, where, however, it remains less intense than the C-field. In this approximation, the increase or decrease of the scale factor S(t) of the universe indicates an average smoothed-out effect of the MCEs as they are turned on or off. The following discussion is based on the work of Sachs et al. (1996).

We write the field equations (36) for the Robertson-Walker line element, with S(t) as scale factor and k as curvature parameter, and for matter in the form of dust, when they reduce to essentially two independent equations:

2S̈/S + (Ṡ² + k)/S² = 3λ + 2πGfĊ²,  (39)

3(Ṡ² + k)/S² = 3λ + 8πGρ − 6πGfĊ²,  (40)

where we have set the speed of light c = 1 and the density of dust is given by ρ. From these equations we get the conservation law in the form of an identity:

(d/dS){S³(8πGρ − 6πGfĊ²)} = 3S² · 2πGfĊ².  (41)

This law incorporates the "creative" as well as the "non-creative" modes. We will discuss both, in that order.

The creative mode

This has

T^{ik}_{;k} ≠ 0,  (42)

which, in terms of our simplified model, becomes

(d/dt)(ρS³) ≠ 0.  (43)

For the case k = 0, we get a simple steady-state de Sitter-type solution with

Ċ = m_P,  S ∝ exp(t/P),  (44)

and from (39) and (40) we get

ρ = fm_P²,  1/P² = λ + (2πG/3)fm_P².  (45)

Since λ < 0, we expect that

λ > −(2πG/3)ρ,  (46)

but we will defer the determination of P until after we have looked at the non-creative solutions. Although Sachs et al. (1996) have discussed all the cases, we will concentrate on the simplest one of flat space, k = 0.

The rate of creation of matter is given by

J = (1/S³) d(ρS³)/dt = 3ρ/P.  (47)

As will be seen in the quasi-steady state case, this rate of creation is an overall average made up of a large number of small events. Further, since the creation activity has ups and downs, we expect J to denote some sort of temporal average. This will become clearer after we consider the non-creative mode and then link it to the creative one.

The non-creative mode

In this case T^{ik}_{;k} = 0 and we get a different set of solutions. The conservation of matter alone gives

ρ ∝ 1/S³,  (48)

while the C-field energy density, which behaves like (negative) radiation and declines as 1/S⁴, gives

Ċ ∝ 1/S².  (49)

Therefore, equation (40) gives

(Ṡ/S)² = λ + A/S³ − B/S⁴ − k/S²,  (50)

where A and B are positive constants arising from the constants of proportionality in (48) and (49). We now find that the exact solution of (50) in the case k = 0 is given by

S = S̄[1 + η cos θ(t)],  (51)

where η is a parameter and the function θ(t) is given by the first-order differential equation that results on substituting (51) into (50).  (52)

Here S̄ is a constant and the parameter η satisfies the condition |η| < 1. Thus the scale factor never becomes zero, and the model oscillates between the finite scale limits

S_min = S̄(1 − η) ≤ S ≤ S̄(1 + η) = S_max.  (53)

The density of matter and the C-field energy density are then given in terms of λ, G and the constants A and B (equations (54) and (55)), while the period of oscillation Q is likewise fixed by these parameters (equation (56)). The oscillatory solution can be approximated by a simpler sinusoidal solution with the same period:

S ≈ S̄[1 + η cos(2πt/Q)].  (57)

Thus the function θ(t) is approximately proportional to t.

Notice that there is considerable similarity between the oscillatory solution obtained here and that discussed by Steinhardt and Turok (2002) in the context of a scalar field arising from a phase transition. The bounce at a finite minimum of the scale factor is produced in both cosmologies through a negative energy scalar field. As we pointed out in the introduction, Hoyle and Narlikar (1964) [see also Narlikar (1973)] emphasized the fact that such a scalar field can produce models which oscillate between finite ranges of scale. In the Hoyle-Narlikar paper cited above, Ċ ∝ 1/S³, as opposed to (49), exactly as assumed by Steinhardt and Turok (2002) 38 years later. This is because, instead of the trace-free energy tensor of equation (36) here, Hoyle and Narlikar had used the standard scalar field tensor

T_ik = f (C_iC_k − (1/2)g_ik C_lC^l).  (58)

Far from being dismissed as physically unrealistic, negative kinetic energy fields like the C-field are gaining popularity. Recent works by Rubano and Scudellaro (2004), Sami and Toporensky (2004), and Singh et al. (2003), who refer to the earlier work by Hoyle and Narlikar (1964), have adapted the same ideas to describe phantom matter and the cosmological constant. In these works solutions of vacuum field equations with a cosmological constant are interpreted as a steady state in which matter or entropy is being continuously created. Barrow et al. (2004), who obtain bouncing models similar to ours, refer to the paper by Hoyle and Narlikar (1963) where the C-field idea was proposed in the context of the steady state theory.

The Quasi-Steady State Solution

The quasi-steady state cosmology is described by a combination of the creative and the non-creative modes. For this, the general procedure to be followed is to look for a composite solution of the form

S(t) = exp(t/P) {1 + η cos θ(t)},  (59)

wherein P ≫ Q. Thus over a period Q, as given by (56), the universe is essentially in a non-creative mode. However, at regular instants separated by the period Q it has an injection of new matter at such a rate as to preserve an average rate of creation over the period P, as given by J in (47). It is most likely that these epochs of creation are those of the minimum value of the scale factor during an oscillation, when the level of the C-field background is the highest. There is a sharp drop at a typical minimum, but S(t) is a continuous curve with a zero derivative at S = S_min.

Suppose that matter creation takes place at the minimum value S = S_min, and that N particles are created per unit volume with mass m₀. Then the extra density added at this epoch in the creative mode is

Δρ = N m₀.  (60)

After one cycle the volume of space expands by a factor exp(3Q/P), and to restore the density to its original value we should have

(ρ + Δρ) exp(−3Q/P) = ρ,  i.e.,  Δρ/ρ ≈ 3Q/P.  (61)

The C-field strength likewise takes a jump at creation and declines over the following cycle by the factor exp(−4Q/P). Thus the requirement of "steady state" from cycle to cycle tells us that the change in the strength of Ċ² must be

ΔĊ² = (4Q/P)Ċ².  (62)

The above result is seen to be consistent with (40) when we take note of the conservation law (41). A little manipulation of this equation gives us

(3f/4)(1/S³)[d(Ċ²S³)/dt + Ċ²ṠS²] = (1/S³) d(ρS³)/dt.  (63)

However, the right-hand side is just the rate of creation of matter per unit volume. Since from (61) and (62) we have

ΔĊ²/Ċ² = (4/3) Δρ/ρ,  (64)

and from the creative-mode solution (45) we have ρ = fĊ², we see that the jump conditions (61) and (62) are consistent with (63).

To summarize, we find that the composite solution properly reflects the quasi-steady state character of the cosmology: while each cycle of duration $Q$ is exactly a repeat of the preceding one, over a long time scale the universe expands with the de Sitter expansion factor $\exp(t/P)$. The two time scales $P$ and $Q$ of the model thus turn out to be related to the coupling constants and the parameters $\Lambda, f, G, \eta$ of the field equations. Further progress on the theoretical problem can be made after we understand the quantum theory of creation by the C-field.


These solutions contain a sufficient number of arbitrary constants to assure us that they are generic, once we make the simplification that the universe obeys the Weyl postulate and the cosmological principle. The composite solution can be seen as an illustration of how a non-creative mode can be joined with the creative mode. More possibilities may exist of combining the two within the given framework. We have, however, followed the simplicity argument (also used in the standard big bang cosmology) to limit our present choice to the composite solution described here. HBN have used (59), or its approximation

$$S(t) = \exp(t/P)\left\{1 + \eta \cos\frac{2\pi t}{Q}\right\}, \qquad (65)$$

to work out the observable features of the QSSC, which we shall highlight next.

THE ASTROPHYSICAL PICTURE

Cosmological Parameters

Coming next to a physical interpretation of these mathematical solutions, we can visualize the above model in terms of the following values of its parameters:

$$P = 20Q, \quad Q = 5\times 10^{10}\,{\rm yrs}, \quad \eta = 0.811, \quad \Lambda = -0.358\times 10^{-56}\,({\rm cm})^{-2}. \qquad (66)$$

To fix ideas, we have taken the maximum redshift $z_{\max} = 5$, so that the scale factor at the present epoch $S_0$ is determined from the relation $S_0 = S(1-\eta)(1+z_{\max})$. This set of parameters has been used in recent papers on the QSSC (Narlikar et al. 2002, 2003). For this model the ratio of maximum to minimum scale factor in any oscillation is around 9.6. These parametric values are not uniquely chosen; they are rather indicative of the magnitudes that may describe the real universe. For example, $z_{\max}$ could be as high as 10 without placing any strain on the model. The various observational tests seek to place constraints on these values. Can the model quantified by the above parameters cope with such tests? If it does, we will know that the QSSC provides a realistic and viable alternative to the big bang.
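As a consistency check (ours, using the parameter values in (66)), the quoted ratio of maximum to minimum scale factor follows, to leading order in $Q/P$, from $(1+\eta)/(1-\eta)$:

```python
eta = 0.811
z_max = 5.0

# Ratio of maximum to minimum scale factor in one oscillation, neglecting
# the slow exp(t/P) envelope over a single cycle (P = 20 Q >> Q):
print((1.0 + eta) / (1.0 - eta))     # ~9.58, the 'around 9.6' quoted above

# Present-epoch scale factor from S0 = S (1 - eta)(1 + z_max), with the
# normalizing scale S set to 1:
print((1.0 - eta) * (1.0 + z_max))   # ~1.13
```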

The Radiation Background

As far as the origin and nature of the MBR is concerned, we use a fact that is always ignored by standard cosmologists. If we suppose that most of the $^4$He found in our own and external galaxies (about 24% of the hydrogen by mass) was synthesized by hydrogen burning in stars, the energy released amounts to about $4.37 \times 10^{-13}\,{\rm erg\,cm^{-3}}$. This is almost exactly equal to the energy density of the microwave background radiation with $T = 2.74\,$K. For proponents of the standard model this has to be dismissed as a coincidence, but for the QSSC it is a powerful argument in favor of the hypothesis that the microwave radiation at the level detected is relic starlight from previous oscillations in the QSSC which has been thermalized (Hoyle et al. 1994). Of course, this coincidence loses its significance in the standard big bang cosmology, where the MBR temperature is epoch-dependent.
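The coincidence is simple to check: a blackbody at 2.74 K has energy density $aT^4$, with the radiation constant $a = 4\sigma/c \simeq 7.57\times 10^{-15}\,{\rm erg\,cm^{-3}\,K^{-4}}$. A minimal sketch (ours):

```python
a_rad = 7.5657e-15      # radiation constant 4*sigma/c, erg cm^-3 K^-4
T = 2.74                # MBR temperature, K

u_blackbody = a_rad * T**4
print(f"{u_blackbody:.3e} erg/cm^3")    # ~4.26e-13 erg/cm^3

# Compare with the ~4.37e-13 erg/cm^3 released in burning hydrogen to the
# observed helium abundance, as quoted above: equal to within a few percent.
```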

It is then natural to suppose that the other light isotopes, namely D, $^3$He, $^6$Li, $^7$Li, $^9$Be, $^{10}$B and $^{11}$B, were produced by stellar processes. It has been shown (cf. Burbidge and Hoyle 1998) that both spallation and stellar flares (for $^2$D) on the surfaces of stars can explain the measured abundances. Thus all of the isotopes are ultimately a result of stellar nucleosynthesis (Burbidge et al. 1957; Burbidge and Hoyle 1998).

This option raises a problem, however. If we simply extrapolate our understanding of stellar nucleosynthesis, we find it hard to explain the relatively low metallicity of stars in our Galaxy. This is still an unsolved problem. We believe, but have not yet established, that the initial mass function of the stars where the elements are made may be dominated by stars which are only able to eject their outer shells, while all of the heavy elements are contained in the cores, which simply collapse into NBHs. Using theory we can construct a mass function which will lead to the right answer (we think), but this has not yet been done. Of course, our handwaving in this area is no better than all of the speculations that are being made in the conventional approach when it comes to the "first" stars.

The theory succeeds in relating the intensity and temperature of the MBR to the stellar burning activity in each cycle, the result emphasizing the causal relationship between the radiation background and the nuclear abundances. But how is the background thermalized?

The metallic whisker-shaped grains condensed from supernova ejecta have been shown to effectively thermalize the relic starlight (Hoyle et al. 1994, 2000). It has also been demonstrated that inhomogeneities on the observed scale result from the radiation from clusters, groups of galaxies, etc., thermalized at the minimum of the last oscillation (Narlikar et al. 2003). By using a toy model for these sources, it has been shown that the resulting angular power spectrum gives a satisfactory fit to the data compiled by Podariu et al. (2001) for the band power spectrum of the MBR temperature inhomogeneities. Extending that work further, we show in the following that the model is also consistent with the first- and third-year observations of the Wilkinson Microwave Anisotropy Probe (WMAP) (Page et al. 2003; Spergel et al. 2006).

Following Narlikar et al. (2003) we model the inhomogeneity of the MBR temperature as a set of small disc-shaped spots, randomly distributed on a unit sphere. The spots may be either 'top hat' type or 'Gaussian' type. In the former case they have sharp boundaries, whereas in the latter case they taper outwards. We assume the former for clusters, and the latter for the galaxies, or groups of galaxies, and also for the curvature effect. This is because the clusters will tend to have rather sharp boundaries, whereas in the other cases such sharp limits do not exist. The resultant inhomogeneity of the MBR thus arises from a superposition of random spots of three characteristic sizes corresponding to the three effects: the curvature effect at the last minimum of the scale factor, clusters, and groups of galaxies. This is given by a 7-parameter model of the angular power spectrum.
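To make the spot-superposition picture concrete, here is a minimal toy sketch (our construction, not the fitted 7-parameter model of Narlikar et al. 2003): random discs with sharp ('top hat') and tapering ('Gaussian') profiles are laid on a pixelized sphere with the healpy library, and the angular power spectrum of the resulting map is read off. The spot counts, angular sizes, and amplitudes below are purely illustrative.

```python
import numpy as np
import healpy as hp

nside = 64
npix = hp.nside2npix(nside)
sky = np.zeros(npix)
rng = np.random.default_rng(1)

def random_center():
    """Uniformly random unit vector on the sphere."""
    return hp.ang2vec(np.arccos(rng.uniform(-1.0, 1.0)),
                      rng.uniform(0.0, 2.0 * np.pi))

def add_tophat_spots(sky, n, radius, amp):
    """'Top hat' spots: constant amplitude inside a sharp disc boundary (clusters)."""
    for _ in range(n):
        sky[hp.query_disc(nside, random_center(), radius)] += amp * rng.choice([-1, 1])

def add_gaussian_spots(sky, n, sigma, amp):
    """'Gaussian' spots: amplitude tapering outward (groups, curvature effect)."""
    vecs = np.column_stack(hp.pix2vec(nside, np.arange(npix)))
    for _ in range(n):
        ang = np.arccos(np.clip(vecs @ random_center(), -1.0, 1.0))
        sky += amp * rng.choice([-1, 1]) * np.exp(-0.5 * (ang / sigma) ** 2)

add_tophat_spots(sky, 500, np.radians(0.3), 1.0)    # illustrative cluster spots
add_gaussian_spots(sky, 200, np.radians(1.0), 0.5)  # illustrative group spots

cl = hp.anafast(sky)    # angular power spectrum C_l of the toy map
print(cl[:10])
```

Fitting such a model to data would then amount to adjusting the spot parameters until the computed $C_\ell$ matches the observed band powers.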

