Could Dark Energy be a “Cosmic Gravity Background”


Could dark energy (the accelerating expansion of the universe) be the result of gravity from the dense past universe "pulling" outward from all directions? The universe theoretically started out extremely dense and hot, and we can see the light reaching us as the cosmic microwave background. I wonder if that same region and denser portions beyond are exerting a gravitational effect that is stretching out everything within toward it, effectively being a cosmic gravity background. If so, is this the explanation for dark energy?

There is in fact a cosmic gravitational wave background. These waves are expected to be stochastic, having originated in the early universe (much earlier than the cosmic microwave background). Random fluctuations were subsequently stretched during inflation, making them observable over many wavelengths. A good and reasonably up-to-date introduction I read is Lasky et al. (2016), which gives some of the most recent constraints on the parameters of the frequency spectrum $\Omega_{\text{GW}}(f)$. The great thing about the wide spectrum is that a number of different tools (pulsar timing arrays, interferometers like LIGO, etc.) can be used to constrain different parts of it, both directly and indirectly.

That said, dark energy is almost certainly not a result of the gravitational wave background. First, there's no theoretical mechanism to explain how such an effect would arise. Second, the background is probably not strong enough to have an influence on the current expansion of the universe (see the graph here to see how weak it is). Finally, dark energy is a relatively recent phenomenon, arising several billion years ago. The background was around long before then, and yet had no such effect.

Could dark energy (the accelerating expansion of the universe) be the result of gravity from the dense past universe "pulling" outward from all directions?

No, because gravity alters the motion of light and matter through space. It doesn't make space fall down in some Chicken-Little fashion. You may hear that it does, as per the waterfall analogy, but it doesn't. In a similar vein, the "reversed" gravity that you're suggesting doesn't make space fall up, as it were.

The universe theoretically started out extremely dense and hot, and we can see the light reaching us as the cosmic microwave background.

I think most people are fairly happy with the CMBR.

I wonder if that same region and denser portions beyond are exerting a gravitational effect that is stretching out everything within toward it, effectively being a cosmic gravity background.

It's good to think for yourself, and it's good to wonder. But I have to say I think this idea is a non-starter. We have galaxies receding faster than light (see this paper). That means it's space expanding, not gravity pulling.

If so, is this the explanation for dark energy?

Sorry, no. If I were you I'd have a look at vacuum energy.

Sidereal Observer

Spooky Action at a Distance

Einstein’s “spooky action at a distance” has made it into the popular press, including publications such as the New York Times, The Atlantic, and The Economist. For journalists, the experiments related to “spooky action” are significant for their technological potential. The theory is quantum mechanics, and it is vital to our high-tech digital world. This is the branch of physics that deals with the behavior of atoms and electrons. It is useful for designing the semiconductors in our computers, cell phones, digital cameras, lasers, etc.

Entanglement is the term physicists use today. “Spooky action” doesn’t sound quite right. When two particles are entangled, they have a certain connection that makes them appear to act in unison no matter how far apart they are. According to the math, they could be in different galaxies and still appear to be holding hands.

To check this out, John Bell worked out the math for measuring the spins of entangled electrons with spins in opposite directions. Bell wrote an amusing paper called “Bertlmann’s Socks and the Nature of Reality” about his results. His friend Bertlmann always wore socks of different colors. So if you see one pink sock, then you know the other is not pink. This is not spooky action at a distance; it is the way Dr. Bertlmann got dressed that morning.

Entangled electrons are different. If you measure one spin up, then you know the other is spin down. Common sense tells us that’s the way the electrons started out. Bell used that common sense idea to derive an equation (actually, an inequality) for measuring spins at different angles. He found that the common sense result is different from what we calculate from quantum mechanics.

Applications for entangled particles have been proposed for cryptography. So far, we don’t see a way to use them for faster-than-light communication, but we can always speculate about what we may find if we understand the phenomenon better. After all, George Washington would have thought cell phones were impossible. The theory of electromagnetism was developed in the mid-1800’s, while Washington died in 1799. Even the camera part of the cell phone would have baffled him. We have a lot of paintings of Washington, but I don’t think he ever saw a camera.

Photons are easier to work with than electrons, so the experiments to test Bell’s inequalities have been done with entangled photons. Entangled photons have the same polarization, whereas entangled electrons have opposite spins. The easiest way to understand this, I believe, is to realize that both “up” and “down” are the same for a polarizer. So the Bertlmann’s socks analogy does not work quite the same way for photons: we can think instead about my own socks, which always match.

The experimental results indicate that quantum mechanics is right and common sense is wrong. For more details, see “How I Understand Bell’s Theorem”.

Posted in Physics and Math | Comments Off on Spooky Action at a Distance

Entanglement and Superluminal Communication

When the New Horizons spacecraft visited Pluto, it had to operate fairly independently. That’s because it takes about 5 hours for light to travel from Earth to Pluto, or from Pluto back to Earth. We communicate with spacecraft by radio signals, which are part of the electromagnetic spectrum and thus travel at the speed of light.

Let’s imagine trying to “drive” the New Horizons probe by remote control from Earth. Suppose we have a big screen displaying photos sent back from the spacecraft to show where it is. But actually, what we see is the view from 5 hours ago. Now, if we want to turn left, or speed up or slow down, we send a signal that will take another 5 hours to reach the spacecraft. Clearly, NASA had to plan ahead.
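The 5-hour figure is easy to sanity-check. A minimal sketch, where the Earth-Pluto distance of roughly 33 AU (near the flyby geometry) is an assumed round number:

```python
# Light travel time from Earth to Pluto (assumed distance of
# roughly 33 AU; the actual distance varies over Pluto's orbit).
AU_KM = 1.496e8          # kilometers per astronomical unit
C_KM_S = 299_792.458     # speed of light in km/s

distance_km = 33 * AU_KM
one_way_hours = distance_km / C_KM_S / 3600
print(round(one_way_hours, 1), "hours one way")  # about 4.6 hours
```

Since the distance varies, "about 5 hours" is a fair round number, and a round trip for a command-and-response takes twice that.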

Entangled photons appear to communicate with each other instantaneously over any distance. See “How I Understand Bell’s Theorem”. Einstein called this phenomenon “spooky action at a distance”. Many people have asked whether we could use this spooky action to speed up the process of communicating with our spacecraft.

I believe this could work IF WE HAD A TIME MACHINE to repeat the Aspect experiment with the same sequence of photon pairs and different settings of the polarizers. However, if we had a time machine, then we wouldn’t need anything so complicated as entangled photons. We could just go back in time far enough to send a conventional signal.

Without the time machine, we just see a random sequence of polarizations at each end of the Aspect experiment. To get any information, we have to compare the sequences at the two ends. That requires communication by conventional signals.


Introduction to Bell’s Theorem: Physics for Fun and Philosophy

Everybody has heard of Einstein’s theory of relativity and E = mc². This is one of the most fascinating parts of physics because it shows us our universe has a lot more in it than meets the eye. Space can warp, time can slow down or speed up, and things can look very different from a different reference frame.

Einstein won the Nobel Prize for his explanation of the photoelectric effect, part of a theory that is even more fascinating: quantum mechanics. This area is less well known to the public, but it’s both weirder and more practical than relativity. Quantum mechanics is the basic theory for all our electronic technology, as well as our understanding of organic chemistry and molecular biology. It’s what we know about our universe at the smallest scales of the particles that make up everything. The math of quantum theory is well established, because we can use it to engineer products. The philosophy is something else.

Einstein is often quoted as saying “God does not play dice.” This is a comment about one of the weird things in quantum mechanics: it tells us the fundamental building blocks of our universe do some things at random. We can calculate statistical results, but the individual particles do not seem to have an independent reality.

Einstein thought there was something wrong here. In one paper, Einstein, Podolsky, and Rosen (EPR) described a thought experiment for measuring the position and momentum of a pair of particles with a special connection, which we now call entanglement. Einstein hoped to show that quantum mechanics has some deficiencies, so that we would need a more advanced theory to get rid of the weird stuff. This paper led to remarkable results.

John Bell adapted Einstein’s ideas to make an experimental test more practical. As it turns out, the weird stuff in quantum mechanics appears to be correct.

A graduate level discussion of Bell’s Theorem can be found in the text by Arno Bohm and Mark Loewe. The primary reference for this article is Jim Baggott’s book, The Meaning of Quantum Mechanics, which describes both the math and the philosophical implications of Bell’s Theorem in greater detail. My goal is to simplify the description so that more people can appreciate how fascinating our world really is.

The experiment to test Bell’s theorem starts with polarized light. Some types of sunglasses and camera filters are based on polarization. In classical electromagnetic theory, light is a wave with an oscillating electric field. Vertically polarized light keeps its electric field vertical, while horizontally polarized light keeps its electric field horizontal. Only vertically polarized light passes through a vertical polarizer, and only horizontally polarized light passes through a horizontal polarizer. For a polarizer at some other angle, some vertically polarized light will pass through and some will be blocked. The amount of light passed depends on the angle between the polarized light and the polarization filter.

More expensive polarization analyzers can split the light into two beams, with vertically polarized light in one beam and horizontally polarized light in the other beam.
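The intensity rule just described is Malus’s law, I = I₀ cos²θ. A minimal sketch of it (the function name is my own):

```python
import math

def malus_transmission(theta_deg):
    """Fraction of intensity a polarizer passes when rotated
    theta degrees from the light's polarization axis (Malus's
    law: transmitted fraction = cos^2 of the angle)."""
    return math.cos(math.radians(theta_deg)) ** 2

# An aligned polarizer passes everything, a crossed one nothing,
# and a 45-degree polarizer passes half the intensity.
for angle in (0, 45, 90):
    print(angle, round(malus_transmission(angle), 3))
```

This cos² dependence is the classical formula the quantum experiments below will be compared against.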

In quantum mechanics, light consists of photons. It is possible to do an experiment to produce a pair of photons that have a special connection in their polarization. This special connection, or entanglement, appears to let them communicate instantaneously with each other, no matter how far they are separated. Also it enables us to check whether they are “born” with a specific polarization, or whether something unexpected happens when they are measured.

See “How I Understand Bell’s Theorem” for the results.


What Is Light?

Physics gives us two descriptions of light, and both of them sound easy to understand until we put them together. The classical theory of light is an electromagnetic wave, similar to the waves we can see on the surface of a pond if we throw in a rock. The quantum theory of light is a particle, called a photon, which is more similar to the rock than to the water wave.

Sometimes the quantum picture is easier to understand. For example, I learned about nuclear magnetic resonance (the basis of MRI medical imaging) in chemistry class before I encountered it in physics class. A hydrogen atom in a magnetic field acts like a tiny magnet that can be aligned with the field or against the field. These two orientations have different energies. A photon of radio frequency electromagnetic energy (part of the spectrum of light) can deliver a packet of energy to the atom, just right to flip it from the lower energy to the higher one. The classical description of nuclear magnetic resonance is more complicated, involving precession of the magnetic moment at the Larmor frequency. I never understood that, since it was so much easier to think in terms of photons.
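A rough photon-picture calculation, as a sketch. The 1.5 T field strength and the proton gyromagnetic ratio of about 42.58 MHz per tesla are assumed textbook values typical of clinical MRI, not from the text above:

```python
# Spin-flip photon for a proton in a 1.5 T magnetic field.
GAMMA_MHZ_PER_T = 42.577   # proton gyromagnetic ratio (assumed value)
B_FIELD_T = 1.5            # assumed MRI field strength, tesla
H_PLANCK = 6.626e-34       # Planck constant, J*s

freq_hz = GAMMA_MHZ_PER_T * 1e6 * B_FIELD_T  # resonance frequency
photon_energy_j = H_PLANCK * freq_hz         # photon energy, E = h*f

print(round(freq_hz / 1e6, 1), "MHz")  # ~63.9 MHz, a radio frequency
```

Each such photon carries exactly the energy gap between the aligned and anti-aligned orientations, which is what makes the quantum picture so convenient here.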

To understand how eyeglasses, camera lenses, and other optical equipment work, we need the classical theory. Picture the water waves around the rock that fell in the pond. At any one spot, the surface of the water moves up and down. The crests and troughs of the wave move outward from the spot where the rock was dropped. If we replace the water height by the magnitude of an electric vector, then we have a picture of a light wave in two dimensions.

Polaroid sunglasses take advantage of another feature of light. If the electric vector is going up and down vertically, the light wave is vertically polarized. It is also possible for the electric vector to oscillate horizontally (unlike the water wave), and this light is horizontally polarized. Most light that we consider “glare” is horizontally polarized, so the polarized sunglasses filter it out and deliver vertically polarized light to our eyes.

There is another possibility for light polarization. The direction of oscillation for the electric field can rotate around the direction the wave is traveling. In this case, we have circular polarization.

In quantum theory, photons have circular polarization because they carry angular momentum. Or at least they have circular polarization when they are “born”, as when they are emitted from atoms making a transition to a lower energy state. So at first glance, I would expect an individual photon to be able to pass through a vertical polarizer and then a horizontal polarizer, if we place the two polarizers at an appropriate distance to allow the electric field vector to rotate 90 degrees (or 90 degrees plus any number of complete turns). That’s not what we see experimentally. After a photon passes through a vertical polarizer, then it can pass through any number of perfect vertical polarizers at any distance, but not through any horizontal polarizers at any distance.

Now let’s take a look at entangled photons. “Entangled” means they are sort of stuck together in a mathematical sense. The ones I want to consider are two photons emitted from the same atom, but traveling in opposite directions. They are both born with circular polarizations. The electric vector of one rotates in the opposite direction from that of the other. Let’s suppose we measure the polarization of each photon at some distance from its source. At first glance, I would expect we could choose distances so that one photon is vertically polarized and the other horizontally polarized. Again, that’s not what we see experimentally. The polarizations of the two photons always match.

The mathematical description of these entangled photons is a bit complicated, so I’ll just give a reference. The result from quantum mechanics is that the polarizations should always match. Any time we align the polarizers in the same direction on both sides of the source, we will get the same results for both photons. (The really interesting part comes when we rotate one polarizer with respect to the one on the other side of the source. I’ll cover this in “How I Understand Bell’s Theorem”.) This is not what we expect for classical waves with circular polarization, which are never described by the math of entanglement.

If we picture light as a collection of particles, then we get oddities such as a particle going through two slits at once, when we have never observed half a photon. If we picture light as a wave, then the wave has to transform into a particle when it interacts. We sometimes say light travels as a wave and interacts as a particle. This gives us the right answers, but it leaves out a key point: the mechanism for changing from wave to particle. Usually we expect physics to explain how things happen. We still have more work to do in describing light.

The Meaning of Quantum Theory, Jim Baggott, Oxford University Press, 1992.


How I Understand Bell’s Theorem

Here’s the idealized version of the experiment. Take one calcium atom, and hit it with two laser beams of particular wavelengths to produce a particular high-energy electronic state. According to quantum mechanics, it should emit two (entangled) photons in opposite directions. Look for polarized photons.

In practical terms, the experiment is done with a vacuum chamber and a beam of calcium atoms from a pinhole in a hot oven. The detector system records vertical and horizontal photons on opposite sides of the atomic beam. The result is a random sequence of vertical and horizontal photon pairs, the same on both sides, within the accuracy of the measuring system.

Vertical and horizontal are arbitrary directions here. The “vertical” axis of the polarizer can be set at any angle to the vertical axis of the lab. As long as the polarizers on both sides of the atomic beam are set at the same angle, both detectors produce the same results. The interesting results come when the polarizers are set at different angles.

I made a random series of zeros and ones by looking at the digits of pi. Odd digits get one, even digits get zero. Suppose we do the entangled photon experiment with both polarizers set at 0 degrees to the lab vertical. Let’s say “1” represents a vertical photon detection, and “0” represents a horizontal photon detection. The sequence of photons detected could look like this:

0 Degrees:
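A sequence of the kind described can be generated with a short script. This is a sketch: the digits available from Python’s float representation of pi stand in for a longer digit source, with odd digits mapped to 1 (vertical) and even digits to 0 (horizontal), as described above:

```python
import math

# Digits of pi from the float representation (a real run would
# use many more digits from a table).
pi_digits = [int(ch) for ch in str(math.pi) if ch.isdigit()]

# Odd digit -> 1 (vertical detection), even digit -> 0 (horizontal).
bits = [d % 2 for d in pi_digits]
print(bits)
```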

Now for an idealized experiment, let’s suppose we can go back in time and repeat it, with one change. Rotate the right polarizer to +22.5 degrees from the lab vertical. From classical electromagnetism, we expect approximately 85% of the light to go through both polarizers the same way. Here we have

Amplitude = cos (22.5 degrees) = 0.924
Intensity = Amplitude squared = 0.854

In quantum mechanics, this means the results of the polarization detectors on opposite sides will agree approximately 85% of the time, and disagree approximately 15% of the time. Which pairs will disagree? We don’t know; all we can calculate is the statistical result, as in playing dice. We assume the left polarization analyzer gives the same results as it did the first time, because what we do to the right polarizer should not affect what happens on the left.

Let’s say the agreements are “hits” and the disagreements are “misses”. Here’s a possible sequence for the right detector in our second run:

+22.5 Degrees:

Now let’s go back in time and reset the polarizers again. Let’s put the right polarization analyzer back to 0 degrees, and move the left polarization analyzer to -22.5 degrees. As in Run #2, we should have approximately 15% misses between the two detectors. Again, these misses happen at random. Here’s a possible sequence for the left detector of our third run:

-22.5 Degrees:

And now for the weird part. Let’s go back in time for one final run, and set the right polarization analyzer at +22.5 degrees and the left one at -22.5 degrees. Comparing the two sequences for these angles, we see that there should be at most 30% misses. Sometimes the misses from Run #2 and the misses from Run #3 occur for the same photon pair, so the number of misses in Run #4 can be less than 30%. It can’t be more. So at least 70% of the photon detections should match when the polarizers are at an angle of 45 degrees to each other.

Now let’s return to reality, where we can’t do time travel. We can still take a lot of data and do a careful mathematical analysis. The statistical results are the same as the results in our time travel idealized experiment: if we get 15% misses when the polarizers are 22.5 degrees apart, then we should get no more than 30% misses when the polarizers are 45 degrees apart.

What actually happens when the polarizers are set at an angle of 45 degrees to each other? They agree only half the time. In classical electromagnetism, we have:

Amplitude = cos (45 degrees) = 0.707 = square root of 1/2
Intensity = Amplitude squared = 0.50
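The arithmetic of the whole argument fits in a few lines. This is a sketch: the miss rate is sin²θ, which is just 1 - cos²θ from the intensity formula above:

```python
import math

def miss_fraction(theta_deg):
    """Disagreement rate predicted by quantum mechanics when the
    two polarizers are theta degrees apart: sin^2(theta)."""
    return math.sin(math.radians(theta_deg)) ** 2

qm_misses_22 = miss_fraction(22.5)   # ~0.146, the "15% misses"
bell_bound_45 = 2 * qm_misses_22     # local-realist cap: at most double
qm_misses_45 = miss_fraction(45)     # quantum prediction: 0.5

# The quantum prediction at 45 degrees exceeds the Bell bound.
print(round(qm_misses_22, 3), round(bell_bound_45, 3),
      round(qm_misses_45, 3))
```

The common-sense bound says the misses at 45 degrees can be at most the sum of the misses from the two 22.5-degree rotations, yet quantum mechanics (and experiment) gives 50%, well over that limit.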

What happened here? Bell concluded there is something wrong with the very reasonable assumptions used to calculate the maximum number of misses when we rotate both polarization analyzers from the vertical. The photons appear to know how the polarizer is set on the opposite side of the setup.

In principle, this experiment could be done halfway between the Milky Way and the Andromeda galaxy, 2.5 million lightyears away. One polarization analyzer could be in each galaxy. So we don’t expect one photon to know what’s happening when its entangled partner gets detected.

We can’t use entangled particles to send messages faster than light, because all we see in the experiment is a random series of vertical and horizontal polarizations. Let’s say Alice is receiving photons in the Andromeda galaxy, while Bob is receiving them here at home. Bell’s theorem implies that the way Alice sets her polarizers affects what Bob measures, and vice versa, but each of them still sees only a random sequence; no message can be read out without comparing the two records by conventional signals.

Entangled photons may have applications in cryptography. As long as Alice and Bob can communicate by ordinary signals, they can compare results and see whether an eavesdropper has intruded into their entangled photon observations. However, they can’t send information using the entangled photons alone, whatever they do with their polarizers.

Photons are different from us. Entangled photons appear to be able to communicate with each other instantaneously over arbitrarily large distances.

There is no simple answer to why these results come out the way they do. As long as we consider light to be a classical wave, the cosine squared formula is fairly easy to understand. The conceptual difficulty comes when we see that light acts like a stream of photons. A similar problem arises in the description of an interferometer, where the photon appears to take two different paths at the same time. A classical wave can always be split into two parts, but not a photon. We would not expect an electron to be split into halves and then recombined.

As in Einstein’s theory of relativity, we see our universe has a lot more in it than meets the eye.

Quantum Mechanics: Foundations and Applications, Second edition, Chapter XIII.3, “Bell’s Inequalities, Hidden Variables, and the EPR Paradox”, Arno Bohm and Mark Loewe, Springer-Verlag, 1986.

The Meaning of Quantum Theory, Jim Baggott, Oxford University Press, 1992.


Star Wars in a Parallel Universe

The popular Star Wars movie series starts off by telling us the story happened long ago, in a galaxy far away. Most of the characters look and act remarkably similar to people here on earth. Does that make any sense?

Remarkably, modern physics says yes. There can be other people just like us in places where we can never contact them. Brian Greene described nine versions of what we may call “parallel universes” in his book The Hidden Reality. Parallel means they are similar but do not interact, like parallel lines that never meet no matter how far extended.

How Far Away Is that Star Wars Galaxy?

The word universe originally meant everything there is. The prefix “uni” means one, as in the words unit, unique, united, uniform, unicorn, unicycle, unilateral, unisex, etc. If everything is included in the universe, then there can only be one. However, physics has expanded our ideas of what “everything” means. Now sometimes we speak of our universe as everything accessible to us. Originally the word “world” also meant everything, and sometimes we still use it that way. Other times, a “world” means a planet, and we recognize many other planets. Hypothetically, there can be other universes containing everything accessible to their inhabitants, just as complete as ours. Greene and others use the term “multiverse” to include all universes.

Greene’s first version of the multiverse is the one we see in Star Wars. As we look out into space with powerful telescopes, we observe billions of galaxies. They’re distributed uniformly in every direction, so if we take a chunk of space containing a million galaxies, it will look pretty much the same as any other chunk of space the same size. We don’t know how far space goes; there may be an end to our universe out there somewhere, or it may be infinite. So how many galaxies would it take to give us a reasonable chance of finding a Star Wars galaxy?

Let’s assume space is not just amazingly big, but infinite. Then it has an infinite number of chunks the size of everything we can observe in our universe. The size of our observable universe is limited by the distance light can travel in the time since the universe began in the Big Bang. Light is fast compared to things we encounter in everyday life, but it’s a slowpoke when we consider distances on a cosmic scale. It takes 2.5 million years just to cross the distance from the Andromeda galaxy to us.

Think about our observable universe as a sphere embedded in the much larger, infinite universe. Now imagine a bunch of other spheres the same size as our universe, scattered throughout infinite space, and separated by distances larger than the diameter of each sphere. This would be similar to the “plum pudding” model of the atom proposed by J. J. Thomson in 1904. At that time, scientists knew atoms contained negatively charged electrons plus positively charged something to balance the charges. The plum pudding model was an early guess about the structure of the atom. I’ve never actually seen any plum pudding, so I think more in terms of an unsliced loaf of raisin bread. Let’s scale this model up really, really, big. Each plum or raisin now represents a region of space the size of our observable universe. We assume that our region of space is not special: beyond the limits of what we can see, the entire universe goes on and on forever with pretty much the same density of matter. So in our analogy, the raisin or plum areas are not different from the intervening pudding or bread material; they’re just arbitrarily chosen regions that are too far apart to allow any interaction with each other. Greene calls this the quilted multiverse; he thinks of it more as the three-dimensional analog of a patchwork quilt.

Greene explains there is only a finite number of different ways for the matter in each plum region to be arranged. It is amazingly big, but still finite, and infinity is infinitely bigger than any finite number. Thus, in an infinite universe, there should be an unlimited number of copies of each possible arrangement. That includes copies of us!

By extrapolation, if every possible arrangement of matter is equally likely, then there will be an infinite number of regions of space with the Star Wars characters. This strikes me as a truly outlandish picture of reality, but some of the other multiverse ideas are even stranger.

Even Farther Out

I’m more familiar with the many worlds interpretation of quantum mechanics, developed by Hugh Everett III in 1957. I’ll describe the details in the chapter, Wine Tasting for Physics Insights. It provides basically the same conclusion: anything that can happen does happen somewhere, in some parallel universe.

Then it gets even weirder. Everything that can happen is determined by the laws of physics, along with the values for physical constants such as the mass of the electron, etc. What if we have a spectrum of parallel universes with different physics? For each variation, there could be an infinite multiverse where everything that can happen does happen. The seven other versions of multiverse in Greene’s book deal with these possibilities.

Ideas about parallel universes are highly speculative, but the physicists who came up with them are not crackpots. Their motivation is not just a vivid imagination, although I’m sure that helps. They are serious scientists trying to solve serious problems. We would like for physics theory to cover everything consistently, but so far that’s not the case; I’ll say more about that later. For now, let’s take a break from the mind-boggling size of these multiverses.

There is a much older proposal for something like parallel universes. The 17th century philosopher Rene Descartes proposed a spiritual domain of reality distinct from the physical domain. Humans participate in both, so they’re not quite the same as parallel universes that can never meet. They’re similar in that ideas have been proposed for a spiritual domain just as interesting as the physical, and operating independently for the most part. Descartes’ idea is called dualism, or sometimes Cartesian dualism. You may have heard of Cartesian coordinates in math class; they’re named after the same Descartes, who was a mathematician as well as a philosopher.

From a physics perspective, Cartesian dualism is just as speculative as multiverse ideas, but Descartes was not a crackpot any more than our modern physicists are. He was trying to solve serious problems such as how consciousness and free will can exist in a world governed by laws of physics. His idea involves only one somewhat parallel universe, as opposed to an infinite number.

May the Force Be with You

Here I want to share a story about string theory, a research area that motivates several multiverse ideas. The popular level book called Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law, by Peter Woit, describes problems in this venture. One day I took my copy to read while I got the oil changed in my car. When it was time to pay the bill, I put the book on the counter while I fished out my credit card. The auto mechanic said, “I thought they just couldn’t prove string theory; I didn’t think it was wrong.” I was totally amazed. How many auto mechanics know that much about string theory?

Now back to the motivation for physicists to speculate about parallel universes. First, they’re not perfectly parallel: our theoreticians hope to find some place where these other universes meet our own, so that we can check them out experimentally. Second, we have a long scientific tradition of discovering how everything is bigger than what we thought. There are other planets, other solar systems, and other galaxies. Why stop there?

Most important, physicists like theories that cover everything. We love the principle of conservation of energy, because as far as we understand, it applies to everything: particle physics, chemistry, biology, geology, engineering, astronomy, cosmology, etc. Our theories of the forces of nature are not in such good shape. Electromagnetism and the nuclear forces fit together nicely in quantum theory, but gravity is an oddball. Theoreticians are working to develop a theory of quantum gravity, and string theory is part of this project.

Electricity, magnetism, and light were considered to be unrelated phenomena until the mid 19th century. The development of electromagnetic theory showed these are all manifestations of the same force, and led to remarkable advances in technology. Today’s physicists hope to unite all the forces of nature into a single theory.

In Star Wars, The Force is the spiritual domain of life. It can be used for good or evil, but it’s never overlooked. The characters in the story have a unified theory of The Force plus the forces of nature, or what we can see as science and religion. I expect we will eventually develop a unified view of reality also.

I realize some people think science and religion have to be in conflict, because different people have different ideas about the two subjects. However, scientists find just as many conflicts within their own fields. In my own graduate research, I got involved in a bitter dispute about the rate constant for precipitation of silver bromide, which is not even that interesting. Peter Woit is a theoretical physicist himself, and his book Not Even Wrong is quite an insult to some of his fellow physicists. The title of the book implies the theory is so poorly developed that it doesn’t even deserve to be judged right or wrong. Many of our leading theoretical physicists, including Nobel Laureates, have been working on string theory for decades, so I’m sure they do not agree with Woit.

I don’t know whether any of the far out multiverse ideas is correct, or whether the Star Wars characters exist in one or more of them. I do know we make progress in physics by considering lots and lots of ideas that sound far out. Ideas with spiritual significance, such as Cartesian duality, are certainly tamer than some physics proposals. If you’re willing to consider an infinite multiverse as real, then you might as well consider eternal life in heaven as a real possibility.

Posted in Physics and Math | Comments Off on Star Wars in a Parallel Universe

What Einstein Did for Us

Everybody knows about Albert Einstein. He was a famous physicist who developed the theory of relativity, among other things. If you’re not a theoretical physicist, you may wonder, what does all this mean for us?

Curiosity is Cool

Einstein’s work, like most theoretical physics, includes complicated math that most people don’t want to mess with. When you hear about who won the Nobel Prize for physics this year, chances are you won’t understand what the winners did or why they won. Einstein’s work is unique because most of us do know something about it. His most famous equation, E = mc², says that mass can be converted to energy, and vice versa. This is how we explain atomic bombs. In the next chapter, I’ll give examples of how it works.
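As a quick back-of-the-envelope sketch of what the equation says (the one-gram figure here is just an illustration, not a number from Einstein):

```python
# Energy released if one gram of mass were fully converted to energy,
# using Einstein's E = m * c**2.
c = 299_792_458.0          # speed of light in m/s (exact, by definition)
m = 0.001                  # one gram, expressed in kilograms

E = m * c**2               # energy in joules
print(f"E = {E:.3e} J")    # roughly 9e13 joules
```

That is on the order of a twenty-kiloton explosion from a single gram, which is why nuclear processes dwarf chemical ones.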

There are many popular books on relativity, with pictures of rockets moving near the speed of light. We don’t have such technology, of course, but we can still be curious about what would happen if we did. The speed of light is very fast, approximately 300,000 kilometers per second (186,000 miles per second). Einstein’s work shows that we can never travel faster than that. So at maximum speed, it would take us about 4 years to reach the star nearest to the sun, 100,000 years to cross our Milky Way galaxy, and 2.5 million years to reach the Andromeda galaxy. Does that mean interstellar travel is forever beyond our reach? Well, no, Einstein’s work also shows us time slows down for fast movers.

I would love to travel back in time and tell George Washington about sending photos home from my vacation using the free wi-fi at McDonald’s. Our first president died before the theory of electromagnetic waves was developed, so he would have no idea how wi-fi could work. Also, he never saw a camera; as far as I can find in the history of photography, his portraits were painted by hand.

Einstein’s curiosity went far beyond possibilities that were technically feasible during his lifetime. He was never afraid to ask what would happen if we could do something far out: travel on a light beam, see individual atoms, etc. He claimed to be fairly ordinary in terms of mathematical ability (for a physicist, anyway). His genius sprang from his imagination and curiosity. We can all take him as a good example.

Truth is Stranger than Fiction

Relativity theory tells us what you see is not always what you get. In everyday life, we look around and see things moving through space, and nothing ever happens to empty space. In relativity theory, space can shrink or warp due to gravitational fields or motion of objects. In everyday life, it’s obvious that time marches on at the same pace forever. Einstein showed us time slows down in some situations.

There are a lot of jokes about relativity as a matter of perception: an hour in the dentist’s office, for example, feels much longer than an hour with a very attractive romantic partner. But it’s not just perception: in relativity theory, time really does slow down under some conditions. Space gets changed by the same factors that slow down time. To see relativistic effects, we need either very precise measurements, very strong gravity, or very fast objects, moving at near the speed of light.
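For the curious, the slow-down factor has a simple formula, 1/√(1 − v²/c²). A small sketch (the speeds are arbitrary examples) shows why we never notice it in everyday life:

```python
import math

def gamma(v_frac):
    """Lorentz factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_frac**2)

# Time dilation is negligible at everyday speeds and dramatic near c.
for v in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {v:.2f} c  ->  clocks run slow by a factor of {gamma(v):.3f}")
```

At 10% of light speed the factor is only about 1.005, but at 99% it is about 7.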

Relativity theory stretches the imagination. Science fiction writers love it. Their stories take us far beyond the space and time accessible in our ordinary lives, and such fictional travels are more entertaining when they have grains of truth in them. Relativity lifts limitations on what is possible. In the future, our technology may take us to places that are stranger than fiction. In the present, our imagination can take us beyond the limitations of what we can physically do.

Mistakes Are OK

Dark energy is one of the hot topics in astrophysics today. The universe is expanding, according to observations of other galaxies. Galaxies form clusters that are tied together by their gravity, but every cluster is rushing away from every other cluster. Theoreticians interpret this to mean the space between the galactic clusters is expanding. For many years, physicists expected the rate of expansion to slow down because of gravity. Current data, however, indicate the rate of expansion is speeding up. This was the topic for the 2011 Nobel Prize in physics, awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess.

So what is pushing the galaxies apart faster and faster? Whatever it is, theoreticians have named it dark energy. The word “dark” indicates two things: first, we can’t see it, and second, we’re in the dark metaphorically about what it is.

When Einstein was working on his theory of general relativity, he thought the universe had always been the same size and would always stay that way. No expansion or contraction. That’s what everyone thought. Einstein’s equations told him, however, that something would have to balance the attraction of gravity, so he put in a term he called the cosmological constant. Later he found out the universe is expanding (and could be slowing down from gravity), so he took the cosmological constant out. Einstein is widely quoted as saying that constant was his greatest blunder.

Today, the concept of dark energy is basically the same as Einstein’s cosmological constant. So what Einstein thought was a mistake turned out to be right, and what he thought was right (an unchanging universe) turned out to be a mistake.

I think physicists, more than any other scientists, have to deal with new data that changes the picture. When we look at theories from the past, we see a lot of ideas that are wrong in terms of fitting the data we have today. We can be pretty sure that some of our ideas today are wrong; we just don’t know which ones.

Einstein’s very public mistake involving the cosmological constant is a perfect illustration of the value of risky ideas. Yes, we’re going to make mistakes when we venture into new territory. We can always change our minds.

And Then There Are the Practical Applications

There are some technological applications of relativity. We looked at atomic bombs and E = mc² above. Another example is the global positioning system (GPS), which requires very precise clocks in the satellites. They’re in orbit high above earth, where the gravitational field is significantly weaker than at earth’s surface, so relativistic effects have to be included in the calculations. Then there’s chemistry. Relativity theory is necessary for understanding the spin property of electrons, and this is a step toward understanding materials at the atomic and molecular level. Medical researchers and engineers need the results.
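To get a rough feel for the size of the GPS corrections, here is a sketch using approximate textbook constants and first-order formulas (the exact numbers are illustrative assumptions):

```python
import math

# Rough estimate of the daily clock offsets for a GPS satellite.
G_M = 3.986e14        # Earth's gravitational parameter GM, m^3/s^2
c   = 2.998e8         # speed of light, m/s
R_e = 6.371e6         # Earth's mean radius, m
r   = 2.656e7         # GPS orbital radius (~20,200 km altitude), m
day = 86400.0         # seconds per day

v  = math.sqrt(G_M / r)               # circular orbital speed, ~3.9 km/s
sr = -(v**2) / (2 * c**2)             # special relativity: satellite clock runs slow
gr = (G_M / c**2) * (1/R_e - 1/r)     # general relativity: satellite clock runs fast

print(f"velocity effect:      {sr * day * 1e6:+.1f} microseconds/day")
print(f"gravitational effect: {gr * day * 1e6:+.1f} microseconds/day")
print(f"net:                  {(sr + gr) * day * 1e6:+.1f} microseconds/day")
```

The net drift comes out to roughly 38 microseconds per day; left uncorrected, at the speed of light that would translate into position errors of several kilometers per day.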

However, Einstein did something else that has much more relevance to our lives, both technologically and spiritually. In 1905, he wrote one of the pivotal papers on quantum mechanics, which is the branch of physics that deals with the fundamental particles that make up everything. This was the work that earned him the Nobel Prize. In the following chapters, we’ll look more closely at why quantum mechanics is even weirder, and even more significant to us.

Posted in Physics and Math | Comments Off on What Einstein Did for Us

Heisenberg May Have Been Here

Heisenberg’s uncertainty principle is a key part of quantum theory. The English name is a translation of the German “Unbestimmtheitsprinzip”. Most undergraduate physics students learn the uncertainty principle in terms of limits on experimental accuracy. When we make a measurement, we disturb the system. For microscopic systems, such as electrons and photons, this disturbance is significant. This part is true, but it’s not the whole story. Unbestimmt could be more appropriately translated as indeterminate. According to Bohr and his Copenhagen colleagues, the properties of microscopic systems do not exist until they are measured.

The standard example is position and momentum. We can measure one with arbitrary accuracy, but not both. This is true partly because when we measure one property, we disturb the system enough to change the other one. However, according to the Copenhagen interpretation, the system does not really have a position or a momentum until we make the measurement.

Einstein said that can’t be right. Of course the electron has a position and a momentum at every instant, he claimed; we just have a limitation on measuring them. He considered a pair of entangled electrons. In principle, we could measure the position of one, and the momentum of the other. Entangled particles have to match, in a quantum mathematical sense. So Einstein, along with colleagues Podolsky and Rosen, wrote a paper (the EPR paper) explaining how each electron must have a definite position and momentum, in violation of the uncertainty principle. Here’s a link to the EPR paper:

It is not practical to measure the position and momentum of an electron precisely enough to see whether quantum uncertainty (or indeterminism) can be violated. However, with some extra math, John Bell showed how two entangled particles with spin could be tested. Here’s a link to his paper:

As it turns out, the experiments favor Bohr’s interpretation. Particle properties are not just uncertain, they appear to be indeterminate. For details, see my article on entanglement:

Posted in Physics and Math | Comments Off on Heisenberg May Have Been Here

Wine Tasting for Physics Insights

Wine tasting has a lot to offer the physics community. Here’s how it works.

Think about how you would describe the taste of a particular food or drink. Flavors and aromas are a challenge to communicate, partly because we can distinguish so many of them. One possibility is to describe an unknown flavor in terms of more familiar flavors. “Tastes like chicken” works for a lot of meats. Kiwifruits taste like strawberries, melons, and bananas, according to one food website; the same site characterizes coriander in terms of citrus peel and sage.

Math uses the same idea. Have you ever wondered what a calculator does when you push the function keys? It calculates a series expansion. This is another way to get something you don’t know from a list of things you do know.
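As a sketch of the idea, here is sin(x) computed roughly the way a calculator does it, by summing its Taylor series term by term:

```python
import math

def sin_series(x, terms=10):
    """Approximate sin(x) by summing its Taylor series:
    x - x^3/3! + x^5/5! - ...  (building the unknown from known pieces)."""
    total = 0.0
    for n in range(terms):
        total += (-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
    return total

x = 1.2
print(sin_series(x), math.sin(x))  # the two values agree closely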

Wine flavors are the best example I know of flavor series expansions. There is an enormous number of wines, each with its own flavor, according to people who appreciate this sort of thing. Wikipedia provides a table of different grape varieties and the most common descriptors used for each one. These descriptors include dozens of fruits and spices, along with a number of flavors I would never have expected in wine: tobacco, farmyard, pencil shavings, petrol, and cat pee. I’m not a wine aficianado, so the descriptions don’t mean a whole heck of a lot to me. However, I see the wine tasting culture is popular, and these descriptions do mean something to a lot of people. In the Wikipedia article, I read Merlot has flavors of black cherry, plums, and tomato, while Chardonnay has flavors of butter, melon, pineapple, and vanilla. I like this approach.

Physics aficianados describe electrons using an approach similar to that used by wine tasters. Our electronic devices have a complicated variety of electron states, as food and wine have a complicated variety of flavors. In college, we learn a bunch of relatively simple mathematical functions, analogous to the familiar flavors such as apple, cherry, tomato, etc. Then we use the simple functions to build up more complicated functions.

Wave functions are special mathematical items used in quantum mechanics. The name comes from some of their characteristics, which are similar to water waves. Simple wave functions represent situations where nothing is happening. One example would be a single hydrogen atom in outer space, in its lowest energy state. As long as it’s not disturbed, it’s very stable. Now, for the interesting stuff, such as what goes on in our electronic devices, the wave functions are more complicated.

Writing a wave function is like describing the taste of a particular wine. In most cases, it’s very complicated. So physicists write a complicated wave function as a sum of simpler, more familiar ones.

In quantum mechanics, we assume that everything has a wave function. A molecule, made of many atoms, has a wave function. I have a wave function. You have a wave function. The entire planet earth has a wave function. The universe has a wave function. We assume in principle that we can calculate what’s going to happen by solving the Schrodinger equation to see how the wave function changes with time. In practical terms, we can solve the equation only for very simple cases, such as an individual atom or a perfect crystal. But we assume the Schrodinger equation applies to all wave functions, whether we can solve it or not.

In classical mechanics, we use Newton’s laws to calculate what’s going to happen. Physicists can do an excellent job of predicting eclipses, for example. It would be too complicated to solve the equations for the entire world, so we have just approximate predictions about the weather, earthquakes, and many other processes. As far as we understand, the Schrodinger equation gives the same results as Newton’s laws for most processes that involve a large number of particles. It gives different results for small systems such as atoms and molecules.

There is one major exception to the Schrodinger equation. It does not apply to a measurement process. Theoretically, it works for everything else that ever happens in the universe, but it doesn’t work for a measurement on even a single electron. This is what “collapse of the wave function” is about.

Let’s go back to the wine tasting analogy. Suppose, for the moment, everyone agrees that Merlot tastes like black cherries plus plus plums plus tomato. To be more specific, let’s say Merlot is 40% black cherry plus 40% plum plus 30% tomato. Wine tasting is in the domain of classical mechanics and Newton’s laws. Both people and wine are composed of enormous numbers of atoms and molecules. We use the term macroscopic to describe things (or people) in this domain. Individual atoms are designated as microscopic. Now in the macroscopic world, measurement processes obey the same laws of physics as any other processes. Tasting wine is one type of measurement. Everybody who tastes the same wine should get the same results.

In the microscopic domain of Schrodinger’s equation, things are different. Pretend wine tasting worked that way. Then if we had 100 people taste a Merlot, it would taste like black cherry for 40 of them, like plum for another 40, and like tomato for the remaining 30. You can pour the wine into a different container, or transport it to another city, or do anything that does not count as a measurement, and it keeps all of its flavors. But once you taste it, you get only one flavor. And you never know which flavor you’re going to get all you know is that if a large number of people taste the Merlot, then 40% get black cherry, 40% get plum, and 30% get tomato.

Collapse of the wave function is totally different from what we see in ordinary life. Erwin Schrodinger himself, who won the Nobel Prize for his work on quantum mechanics, had trouble understanding how nature can work that way. He wrote a famous story about a cat in a box with some radioactive material. There are many types of atoms that naturally decay into other elements by radioactive emission of a particle. If we write the wave function for a radioactive atom, we need a series expansion with two terms: the original and the decayed states. For a lump of uranium containing many atoms, the series expansion must include a term for all the atoms in their original state, plus a term for one decay, plus a term for two decays, and so on. When we make a measurement, the wave function collapses to just one term that corresponds to the correct number of decays.

Schrodinger imagined a device that would detect one radioactive decay and break a vial of poison, which would kill the cat. He wrote the article in German, and the usual translation denotes his contraption as a “diabolical device”. I read one other translation that calls it an “infernal machine”. I always get a laugh out of the second translation, because I think in terms of infernal machine when my computer crashes.

The wave function for radioactive material must contain terms for 50% original and 50% with one decay if there is a 50% chance of measuring one decay after a specified time. Schrodinger asked us to imagine sealing the cat in the box with the radioactive material and the diabolical device for the specified time. So for his equation to apply to the entire universe, the wave function for the box and its contents must have two terms: one for original radioactive material and live cat, plus one for decayed material and dead cat. Then when we open the box and make a measurement, that is, we look to see whether the cat is dead or alive, the wave function collapses to one or the other.

Schrodinger was not happy about the series expansion with live cat and dead cat. That’s not something we see in ordinary life. In the wine tasting analogy, Merlot is not really a combination of cherries, plums, and tomatoes. It just tastes that way. The flavors are in the human mind. And Merlot certainly does not collapse into one of these fruits when someone tastes it.

The collapse of the wave function is part of quantum mechanics because it is so successful in analyzing what happens to microscopic systems. In practical terms, this is part of the deal that makes our electronic devices work. In philosophical terms, it is a mystery.

Ever since I first heard about it in a college physics class, I’ve been wondering how the wave function collapses. I expected to see pages and pages of math in some textbook for graduate school deriving the collapse, and I wondered how much sense I would be able to make of it. But then I took the break from school and read the popular books, and learned there is no math at all! We can calculate probabilities for the results, as in the analogy of wine tasting, but there is no calculation that shows how we get from the series expansion to just one term.

I’ve seen science cartoons where the professor is writing mathematical equations all over a chalkboard, and one line in the middle of his work says, “Then a miracle occurs.” That’s how the collapse of the wave function works. The Schrodinger equation supposedly describes everything that ever happens, until a measurement is made. Then a miracle occurs and the wave function collapses. Then the Schrodinger equation picks back up again and describes everything that happens until the next measurement.

As it turns out, I’m not the only one wondering how this happens. From the earliest days of quantum mechanics, theoretical physicists have searched for a good explanation. Einstein objected that quantum mechanics must not be complete. He thought the series expansion was just an approximation, and there should be some way to calculate exactly the results of a measurement. “God does not play dice” is his well-known about quantum measurement. However, decades of careful experiments have demonstrated agreement with the collapse of the wave function. As far as we can tell, God really does play dice at the microscopic level.

Another Nobel laureate in physics, Eugene Wigner, speculated that the wave function collapses when the measurement results enter the mind of a conscious observer. This idea is most similar to the wine tasting analogy. I think it has a lot of value, but it’s not at all popular in the physics community. The main objection is that a lot of quantum processes happen in the universe, and most of them do not have any conscious observers looking. So do macroscopic objects such as stars and galaxies exist as series expansions until someone looks at them? Generally we think not. We think something or someone collapses the wave functions for large items. Maybe God can collapse the wave functions for processes when we’re not looking, but then I might think God would collapse all the wave functions, including the one for the cat in Schrodinger’s box. Why would human observers make any difference?

The standard explanation of wave function collapse is part of the Copenhagen interpretation, developed primarily by physicist Neils Bohr. He described the measurement process as an irreversible act of amplification. In his view, the wave function of the radioactive atom collapses as soon as the emitted particle interacts with the macroscopic detector. The cat, box, and human observer are all extraneous. All you need to collapse the wave function is interaction with something macroscopic. In practical terms, Bohr drew a line between microscopic and macroscopic.

Physicist David Mermin characterized the Copenhagen interpretation as “Shut up and calculate.” Bohr never told us how the wave function collapses, just when. Separating macroscopic and microscopic is a great way to do engineering without worrying about philosophy. We can’t possibly solve the Schrodinger equation for a particle detector, or for a cat, or for the vast majority of macroscopic objects. However, we can do a lot of calculations that are useful in the microscopic domains of electronic and chemical technologies. Most physicists, as well as engineers, are happy to do the practical calculations and forget about the philosophical implications, such as whether God plays dice.

Most, but not all. Physicists tend to be enormously curious. We want to know how and why things happen the way they do. So the quantum measurement problem never goes away.

The Copenhagen interpretation leaves important questions unanswered. What is the boundary between microscopic and macroscopic? One atom is clearly microscopic. Also two, and three, and so on. But at some point, “and so on” doesn’t work any more a sufficiently large number of atoms is macroscopic. So what exactly does macroscopic mean? And why do macroscopic objects collapse wave functions? How do they do it?

In 1957, Hugh Everett III published another interpretation of quantum measurement. His paper describes a reality where wave functions never collapse instead, the universe splits into branches. When a measurement occurs, each possibility in the series expansion happens in its own branch, or parallel universe. This sounds like the stuff of science fiction, but Everett was perfectly serious. So he removed the problem of God playing dice with the universe, but he added an enormous number of universes to the total reality. For obvious reasons, his work is called the Many Worlds interpretation.

I find Many Worlds interesting, but I think it still leaves the important questions unanswered. When does a system become big enough to qualify as macroscopic? And why do macroscopic objects split the universe when a wave function interacts with them? How do they do it?

Everett’s paper has some math, but the important idea is parallel universes that do not interact with each other. There is no calculation of how to produce or detect such universes. So for the moment, we have no way to check whether he’s right.

Maybe we should go taste some wine.

Posted in Physics and Math | Comments Off on Wine Tasting for Physics Insights

Twin Paradox

I think the most important paradox in relativity is the question of whether there is a fixed space, as we always assume in our common sense view of the universe. Special relativity deals with constant velocities (no acceleration), and it tells us there is no fixed space. Every non-accelerating reference frame is equivalent. We can only measure velocities relative to another reference frame. General relativity deals with acceleration. OK, acceleration relative to what?

Let’s call the twins Alice and Bob. Bob is traveling in a spaceship at high (constant) speed, let’s say 90% of the speed of light relative to Alice. Special relativity tells us that Alice will measure Bob’s clocks running slow, and also Bob’s spaceship will be shorter in the direction of travel. Now Alice can be on Earth, or on a space station in the middle of the galaxy, or anywhere else. Her reference frame is also non accelerating (to a good approximation: if she’s on Earth, then there is a small acceleration as Earth orbits the sun). Special relativity tells us Bob will see Alice traveling away from him at 90% of the speed of light, and her clocks running slow, and everything in her reference frame shorter in the direction of travel.

Of course they can’t both really be slower and shorter than the other. So let’s get Bob to turn around and come back. Relativity calculations show that Bob really will end up younger than Alice.

Dave asked, will Bob also be shorter?

As I learned the twin paradox, an important question was: why can’t we assume Alice turned around and came back? Then she would be younger than Bob. The answer to this is that Bob has to undergo accelerations to turn around and come back.

Larry presented an excellent explanation of why Bob comes back younger but not shorter: “Basically, age can differ for the twins because age is a path integral over changing frames rather than a simple time coordinate difference in one frame. The everyday analog is an accumulated odometer reading, which is a path integral over changing directions. Odometer reading will not match the as-the-crow-flies difference in coordinates.”

Now let’s say Alice and Bob started on a space station out in deep space, outside the galaxy, and outside of any galactic cluster. Einstein said there is no way to measure an absolute velocity. As long as there is no acceleration, any observer can say “my reference frame is at rest and the other person is moving”. So when Bob feels an acceleration, how does that work? Maybe he accelerates relative to the total mass in the rest of the universe. Then why can’t he have a velocity relative to the total mass of the universe?

Now on to Marty’s question. General relativity describes gravity in terms of curved spacetime. So is it really a force? Theoretical physicists have been working for decades to develop a quantum theory of gravity, in analogy to the quantum theories of electromagnetism, the strong nuclear force, and the weak nuclear force. So far they have not been successful. Quantum gravity would include a graviton particle, in analogy with photons, W and Z bosons, and gluons. No gravitons have yet been observed. Many physicists are sure that all four forces have to fit together in a single theory, but so far it’s not working. Are these folks barking up the wrong tree? If so, maybe Marty has an idea why.

I certainly do not have all the answers to gravity, relativity, and spacetime, and I’m pretty sure no one else does either. So philosophers, keep asking the great questions!

Could Dark Energy be a &ldquoCosmic Gravity Background&rdquo - Astronomy

Number of Followers: 1

Open Access journal
ISSN (Online) 2075-1680
Published by MDPI [233 journals]

  • Axioms, Vol. 9, Pages 113: Comprehensive Criteria for the Extrema in
    Entropy Production Rate for Heat Transfer in the Linear Region of Extended
    Thermodynamics Framework

    • Authors:George D. Verros
      First page: 113
      Abstract: In this work comprehensive criteria for detecting the extrema in entropy production rate for heat transfer by conduction in a uniform body under a constant volume in the linear region of Extended Thermodynamics Framework are developed. These criteria are based on calculating the time derivative of entropy production rate with the aid of well-established engineering principles, such as the local heat transfer coefficients. By using these coefficients, the temperature gradient is replaced by the difference of this quantity. It is believed that the result of this work could be used to further elucidate irreversible processes.
      Citation: Axioms
      PubDate: 2020-10-08
      DOI: 10.3390/axioms9040113
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Somayeh Nemati, Delfim F. M. Torres
      First page: 114
      Abstract: We propose two efficient numerical approaches for solving variable-order fractional optimal control-affine problems. The variable-order fractional derivative is considered in the Caputo sense, which together with the Riemann&ndashLiouville integral operator is used in our new techniques. An accurate operational matrix of variable-order fractional integration for Bernoulli polynomials is introduced. Our methods proceed as follows. First, a specific approximation of the differentiation order of the state function is considered, in terms of Bernoulli polynomials. Such approximation, together with the initial conditions, help us to obtain some approximations for the other existing functions in the dynamical control-affine system. Using these approximations, and the Gauss&mdashLegendre integration formula, the problem is reduced to a system of nonlinear algebraic equations. Some error bounds are then given for the approximate optimal state and control functions, which allow us to obtain an error bound for the approximate value of the performance index. We end by solving some test problems, which demonstrate the high accuracy of our results.
      Citation: Axioms
      PubDate: 2020-10-13
      DOI: 10.3390/axioms9040114
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Nopparat Wairojjana, Nuttapol Pakkaranang, Habib ur Rehman, Nattawut Pholasa, Tiwabhorn Khanpanuk
      First page: 115
      Abstract: A number of applications from mathematical programmings, such as minimax problems, penalization methods and fixed-point problems can be formulated as a variational inequality model. Most of the techniques used to solve such problems involve iterative algorithms, and that is why, in this paper, we introduce a new extragradient-like method to solve the problems of variational inequalities in real Hilbert space involving pseudomonotone operators. The method has a clear advantage because of a variable stepsize formula that is revised on each iteration based on the previous iterations. The key advantage of the method is that it works without the prior knowledge of the Lipschitz constant. Strong convergence of the method is proved under mild conditions. Several numerical experiments are reported to show the numerical behaviour of the method.
      Citation: Axioms
      PubDate: 2020-10-13
      DOI: 10.3390/axioms9040115
      Issue No:Vol. 9, No. 4 (2020)
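To make the adaptive-stepsize idea concrete, here is a toy projected extragradient iteration on a box in R², with a hypothetical strongly monotone operator F and the common update λ ← min(λ, μ‖x − y‖/‖F(x) − F(y)‖). This is a generic finite-dimensional sketch, not the authors' Hilbert-space method:

```python
import math

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n
    return [min(max(xi, lo), hi) for xi in x]

def F(x):
    # Hypothetical strongly monotone operator: F(x) = x - x*,
    # so the variational-inequality solution on [0, 1]^2 is x* = (0.3, 0.7).
    return [x[0] - 0.3, x[1] - 0.7]

def extragradient(x, lam=1.0, mu=0.9, iters=200):
    for _ in range(iters):
        fx = F(x)
        y = project_box([xi - lam * fi for xi, fi in zip(x, fx)])
        fy = F(y)
        x_next = project_box([xi - lam * fi for xi, fi in zip(x, fy)])
        # Stepsize revised from observed iterates, so no prior
        # knowledge of the Lipschitz constant is needed.
        denom = math.dist(fx, fy)
        if denom > 1e-12:
            lam = min(lam, mu * math.dist(x, y) / denom)
        x = x_next
    return x

solution = extragradient([0.0, 0.0])
```

The iterate converges to the box-constrained solution (0.3, 0.7) without the Lipschitz constant of F ever being supplied.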

    • Authors:Nipon Waiyaworn, Kamsing Nonlaopon, Somsak Orankitjaroen
      First page: 116
      Abstract: In this paper, we present the distributional solutions of the modified spherical Bessel differential equation t²y″(t) + 2ty′(t) − [t² + ν(ν+1)]y(t) = 0 and the linear differential equations of the form t²y″(t) + 3ty′(t) − (t² + ν² − 1)y(t) = 0, where ν ∈ ℕ ∪ {0} and t ∈ ℝ. We find that the distributional solutions, in the form of a finite series of the Dirac delta function and its derivatives, depend on the values of ν. The results of several examples are also presented.
      Citation: Axioms
      PubDate: 2020-10-13
      DOI: 10.3390/axioms9040116
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Miguel Vivas-Cortez, Artion Kashuri, Rozana Liko, Jorge Eliecer Hernández Hernández
      First page: 117
      Abstract: In this paper, the authors analyse and study some recent publications about integral inequalities related to generalized convex functions of several variables and the use of extended fractional integrals. In particular, they establish a new Hermite–Hadamard inequality for generalized coordinate ϕ-convex functions via an extension of the Riemann–Liouville fractional integral. Furthermore, an interesting identity for functions with two variables is obtained, and with the use of it, some new extensions of trapezium-type inequalities using Raina's special function via generalized coordinate ϕ-convex functions are developed. Various special cases have been studied. At the end, a brief conclusion is given as well.
      Citation: Axioms
      PubDate: 2020-10-15
      DOI: 10.3390/axioms9040117
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Nopparat Wairojjana, Mudasir Younis, Habib ur Rehman, Nuttapol Pakkaranang, Nattawut Pholasa
      First page: 118
      Abstract: Variational inequality theory is an effective tool for engineering, economics, transport and mathematical optimization. Some of the approaches used to resolve variational inequalities usually involve iterative techniques. In this article, we introduce a new modified viscosity-type extragradient method to solve monotone variational inequality problems in real Hilbert space. The strong convergence of the method is established without knowledge of the operator's Lipschitz constant. Rigorous numerical studies relate our newly designed method to the current state of the art on several practical test problems.
      Citation: Axioms
      PubDate: 2020-10-15
      DOI: 10.3390/axioms9040118
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Nallapu Vijender, Vasileios Drakopoulos
      First page: 119
      Abstract: In this article, firstly, an overview of affine fractal interpolation functions using a suitable iterated function system is presented and, secondly, the construction of Bernstein affine fractal interpolation functions in two and three dimensions is introduced. Moreover, the convergence of the proposed Bernstein affine fractal interpolation functions towards the data generating function does not require any condition on the scaling factors. Consequently, the proposed Bernstein affine fractal interpolation functions possess irregularity at any stage of convergence towards the data generating function.
      Citation: Axioms
      PubDate: 2020-10-18
      DOI: 10.3390/axioms9040119
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Salvatore Triolo
      First page: 120
      Abstract: In this paper, we analyze local spectral properties of operators R, S and RS which satisfy the operator equations R^n S R^n = R^j and S^n R S^n = S^j for the same integers j ≥ n ≥ 0. We also continue to study the relationship between the local spectral properties of an operator R and the local spectral properties of S. Thus, we investigate the transmission of some local spectral properties from R to S and we illustrate our results with an example. The theory is exemplified in some cases.
      Citation: Axioms
      PubDate: 2020-10-19
      DOI: 10.3390/axioms9040120
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Tursun K. Yuldashev, Erkinjon T. Karimov
      First page: 121
      Abstract: The questions of the unique solvability of an inverse boundary value problem for a mixed-type integro-differential equation with Caputo operators of different fractional orders and spectral parameters are considered. The mixed-type integro-differential equation with respect to the main unknown function is an inhomogeneous partial integro-differential equation of fractional order in both the positive and negative parts of the multidimensional rectangular domain under consideration. With respect to the redefinition functions, this mixed-type equation is a nonlinear Fredholm-type integral equation. The orders of the fractional Caputo operators are smaller in the positive part of the domain than in the negative part. Using the method of Fourier series, two countable systems of ordinary fractional integro-differential equations with degenerate kernels and different orders of integro-differentiation are obtained. Furthermore, a method of degenerate kernels is used. In order to determine the arbitrary integration constants, a linear system of functional-algebraic equations is obtained. From the solvability condition of this system, the regular and irregular values of the spectral parameters are calculated. The solution of the inverse problem under consideration is obtained in the form of Fourier series. The unique solvability of the problem for regular values of the spectral parameters is proved. In the proof of the convergence of the Fourier series, certain properties of the Mittag–Leffler function of two variables, the Cauchy–Schwarz inequality and the Bessel inequality are used. We also study the continuous dependence of the solution of the problem on small parameters for regular values of the spectral parameters. The existence and uniqueness of the redefinition functions are justified by solving two countable systems of nonlinear integral equations. The results are formulated as a theorem.
      Citation: Axioms
      PubDate: 2020-10-20
      DOI: 10.3390/axioms9040121
      Issue No:Vol. 9, No. 4 (2020)

    • Authors:Hasan S. Panigoro, Agus Suryanto, Wuryansari Muharini Kusumawinahyu, Isnani Darti
      First page: 122
      Abstract: Harvesting management is developed to protect biological resources from over-exploitation, such as over-harvesting and trapping. In this article, we consider a predator–prey interaction that follows the fractional-order Rosenzweig–MacArthur model, where the predator is harvested according to a threshold harvesting policy (THP). The THP is applied to maintain the existence of the population in the prey–predator mechanism. We first consider the Rosenzweig–MacArthur model using the Caputo fractional-order derivative (that is, the operator with the power-law kernel) and perform some dynamical analysis such as the existence and uniqueness, non-negativity, boundedness, local stability, global stability, and the existence of a Hopf bifurcation. We then reconsider the same model involving the Atangana–Baleanu fractional derivative with the Mittag–Leffler kernel in the Caputo sense (ABC). The existence and uniqueness of the solution of the model with the ABC operator are established. We also explore the dynamics of the model with both fractional derivative operators numerically and confirm the theoretical findings. In particular, it is shown that the models with both the Caputo operator and the ABC operator undergo a Hopf bifurcation that can be controlled by the conversion rate of consumed prey into predator births or by the order of the fractional derivative. However, the bifurcation point of the model with the Caputo operator is different from that of the model with the ABC operator.
      Citation: Axioms
      PubDate: 2020-10-22
      DOI: 10.3390/axioms9040122
      Issue No:Vol. 9, No. 4 (2020)
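For readers unfamiliar with the underlying model, the classical integer-order Rosenzweig–MacArthur system (without the threshold harvesting or fractional operators of the paper) can be simulated in a few lines; the parameter values below are illustrative assumptions only:

```python
def rosenzweig_macarthur(x, y, t_end=20.0, dt=0.01,
                         r=1.0, K=1.0, a=5.0, h=1.0, e=0.5, m=0.4):
    """Forward-Euler integration of the classical Rosenzweig-MacArthur
    predator-prey model with a Holling type II functional response.
    All parameter values are illustrative assumptions."""
    for _ in range(int(t_end / dt)):
        response = a * x / (1.0 + a * h * x)       # Holling type II
        dx = r * x * (1.0 - x / K) - response * y  # logistic prey - predation
        dy = e * response * y - m * y              # conversion - mortality
        x, y = x + dt * dx, y + dt * dy
    return x, y

prey, predator = rosenzweig_macarthur(0.5, 0.2)
```

With these parameters both populations stay positive and bounded; the fractional-order and harvested variants analysed in the paper modify the derivative operator and add a threshold term to dy.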

    • Authors:Mohamed Tahar Kadaoui Abbassi, Noura Amri
      First page: 72
      Abstract: In this paper, we study natural paracontact magnetic trajectories in the unit tangent bundle, i.e., those that are associated to g-natural paracontact metric structures. We characterize slant natural paracontact magnetic trajectories as those satisfying a certain conservation law. Restricting to two-dimensional base manifolds of constant Gaussian curvature and to Kaluza–Klein type metrics on their unit tangent bundles, we give a full classification of natural paracontact slant magnetic trajectories (and geodesics).
      Citation: Axioms
      PubDate: 2020-06-30
      DOI: 10.3390/axioms9030072
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Diego Caratelli, Pierpaolo Natalini, Paolo Emilio Ricci
      First page: 73
      Abstract: After recalling the most important properties of the Bell polynomials, we show how to approximate a positive compact operator by a suitable matrix. Then, we derive a representation formula for functions of the obtained matrix, which can be considered as an approximate value for the functions of the corresponding operator.
      Citation: Axioms
      PubDate: 2020-06-30
      DOI: 10.3390/axioms9030073
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Ilya Boykov, Vladimir Roudnev, Alla Boykova
      First page: 74
      Abstract: We propose an iterative projection method for solving linear and nonlinear hypersingular integral equations with non-Riemann-integrable functions on the right-hand sides. We investigate hypersingular integral equations with second order singularities. Today, hypersingular integral equations of this type are widely used in physics and technology. The convergence of the proposed method is based on the Lyapunov stability theory of solutions of ordinary differential equation systems. The advantage of the method for linear equations lies in the simplicity of verifying unique solvability of the approximate equation system in terms of the operator logarithmic norm. This makes it possible to estimate the norm of the inverse matrix for the approximating system. The advantage of the method for nonlinear equations is that neither the existence nor the invertibility of the nonlinear operator derivative is required. Examples are given illustrating the effectiveness of the proposed method.
      Citation: Axioms
      PubDate: 2020-07-01
      DOI: 10.3390/axioms9030074
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Osama Moaaz, Hamida Mahjoub, Ali Muhib
      First page: 75
      Abstract: In this paper, we are interested in studying the periodic behavior of solutions of nonlinear difference equations. We used a new method to find the necessary and sufficient conditions for the existence of periodic solutions. Through examples, we compare the results of this method with the usual method.
      Citation: Axioms
      PubDate: 2020-07-01
      DOI: 10.3390/axioms9030075
      Issue No:Vol. 9, No. 3 (2020)
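A standard concrete instance of a difference equation with periodic solutions (not taken from the paper) is the Lyness equation x_{k+1} = (x_k + 1)/x_{k−1}, whose positive solutions are 5-periodic; exact rational arithmetic makes the period easy to verify:

```python
from fractions import Fraction

def lyness_orbit(x0, x1, n):
    """Iterate the Lyness equation x_{k+1} = (x_k + 1) / x_{k-1},
    whose positive solutions are known to be periodic with period 5."""
    orbit = [Fraction(x0), Fraction(x1)]
    for _ in range(n):
        orbit.append((orbit[-1] + 1) / orbit[-2])
    return orbit

orbit = lyness_orbit(1, 2, 10)  # 1, 2, 3, 2, 1, 1, 2, 3, 2, 1, ...
```

Using Fraction avoids floating-point drift, so the 5-periodicity holds exactly rather than approximately.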

    • Authors:Senee Suwandee, Arumona Edward Arumona, Kanad Ray, Phichai Youplao, Preecha Yupapin
      First page: 76
      Abstract: We have proposed that human life is formed on a space and time function relationship basis, which is distorted after fertilization in the ovum, from which growth is generated by a space–time distortion against the universe's gravity. A space–time distortion's reduction can be managed by space and time separation, which is known as mindfulness. A space–time distortion in human cells is configured by a polariton traveling in a gold grating film, which can be employed to investigate mindfulness characteristics. Mindfulness is the steady state of the time function of energy after the separation. Energy levels of mindfulness based on polariton aspects are categorized by a quantum number (n), which can be reduced to a two-level system called a Rabi oscillation by a successive filtering method. We have assumed a cell space–time distortion can be reduced to reach the original state, which is the stopping state. Mindfulness with a certain frequency energy level of n = 2 was achieved. Several techniques in the practice of mindfulness based on successive filtering, called meditation, are given and explained, where the required levels of the mindfulness state can be achieved. The criteria of the proposed method are a low energy level (n) and high frequency (f) outputs, which can be applied to achieving a working performance improvement.
      Citation: Axioms
      PubDate: 2020-07-08
      DOI: 10.3390/axioms9030076
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Sanjib Biswas, Dragan Pamucar
      First page: 77
      Abstract: Facility location is one of the critical strategic decisions for any organization. It not only carries the organization's identity but also connects the point of origin and the point of consumption. In the case of higher educational institutions, specifically B-Schools, location is one of the primary concerns for potential students and their parents while selecting an institution for pursuing higher education. There has been a plethora of research investigating the factors influencing B-School selection decision-making. However, location as a standalone factor has not been widely studied. This paper aims to explore various location selection criteria from the viewpoint of the candidates who aspire to enroll in B-Schools. We apply an integrated group decision-making framework combining pivot pairwise relative criteria importance assessment (PIPRECIA) and level-based weight assessment (LBWA), in which a group of student counselors, admission executives, and educators from India participated. The factors which influence the location decision are identified through qualitative opinion analysis. The results show that connectivity and commutation are the dominant issues.
      Citation: Axioms
      PubDate: 2020-07-08
      DOI: 10.3390/axioms9030077
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Olga Grigorenko, Juan Jose Miñana, Alexander Šostak, Oscar Valero
      First page: 78
      Abstract: We present an alternative approach to the concept of a fuzzy (pseudo)metric using t-conorms instead of t-norms, and call them t-conorm based fuzzy (pseudo)metrics, or just CB-fuzzy (pseudo)metrics. We develop the basics of the theory of CB-fuzzy (pseudo)metrics and compare them with "classic" fuzzy (pseudo)metrics. A method for constructing CB-fuzzy (pseudo)metrics from ordinary metrics is elaborated, and the topology induced by CB-fuzzy (pseudo)metrics is studied. We establish interrelations between CB-fuzzy metrics and modulars, and in the process of this study, a particular role of the Hamacher t-(co)norm in the theory of CB-fuzzy metrics is revealed. Finally, an intuitionistic version of a CB-fuzzy metric is introduced and applied in order to emphasize the roles of t-norms and a t-conorm in this context.
      Citation: Axioms
      PubDate: 2020-07-08
      DOI: 10.3390/axioms9030078
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:G. Muhiuddin, D. Al-Kadi, M. Balamurugan
      First page: 79
      Abstract: The notion of anti-intuitionistic fuzzy soft a-ideals of BCI-algebras is introduced and several related properties are investigated. Furthermore, the operations, namely AND, extended intersection, restricted intersection, and union on anti-intuitionistic fuzzy soft a-ideals are discussed. Finally, characterizations of anti-intuitionistic fuzzy soft a-ideals of BCI-algebras are given.
      Citation: Axioms
      PubDate: 2020-07-08
      DOI: 10.3390/axioms9030079
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Abdukomil Risbekovich Khashimov, Dana Smetanová
      First page: 80
      Abstract: The paper is devoted to solutions of third order pseudo-elliptic type equations. Energy estimates for solutions of the equations, taking into account the character of the transformation of the body form, were established by using an analog of the Saint-Venant principle. As a consequence of this estimate, uniqueness theorems were obtained for solutions of the first boundary value problem for third order equations in unlimited domains. The energy estimates are illustrated on two examples.
      Citation: Axioms
      PubDate: 2020-07-16
      DOI: 10.3390/axioms9030080
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Maksim V. Kukushkin
      First page: 81
      Abstract: In this paper, we continue our study of the Abel equation with the right-hand side belonging to the Lebesgue weighted space. We have improved the previously known result: the existence and uniqueness theorem formulated in terms of the Jacobi series coefficients, which gives us an opportunity to find and classify a solution by virtue of an asymptotic of some relation containing the Jacobi series coefficients of the right-hand side. The main results are the following: the conditions imposed on the parameters, under which the Abel equation has a unique solution represented by the series, are formulated; the relationship between the values of the parameters and the solution smoothness is established. The independence between one of the parameters and the smoothness of the solution is proved.
      Citation: Axioms
      PubDate: 2020-07-17
      DOI: 10.3390/axioms9030081
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Fateme Ghomanjani, Stanford Shateyi
      First page: 82
      Abstract: An effective algorithm for solving the quadratic Riccati differential equation (QRDE), multipantograph delay differential equations (MPDDEs), and optimal control systems (OCSs) with pantograph delays is presented in this paper. This technique is based on Genocchi polynomials (GPs). The properties of Genocchi polynomials are stated, and operational matrices of the derivative are constructed. A collocation method based on this operational matrix is used. The findings show that the technique is accurate and simple to use.
      Citation: Axioms
      PubDate: 2020-07-18
      DOI: 10.3390/axioms9030082
      Issue No:Vol. 9, No. 3 (2020)
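The collocation idea behind such operational-matrix methods can be sketched on a toy problem. The example below uses a plain monomial basis (not the Genocchi polynomials of the paper) to solve y′ = −y, y(0) = 1 on [0, 1], reducing the ODE to a small linear algebraic system:

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def collocate(N=6):
    """Approximate y' = -y, y(0) = 1 on [0, 1] by y(t) = sum c_k t^k,
    enforcing the ODE residual at N - 1 collocation points plus the
    initial condition, which yields an N x N algebraic system."""
    pts = [(i + 1) / N for i in range(N - 1)]
    A = [[1.0] + [0.0] * (N - 1)]    # initial condition: c_0 = 1
    b = [1.0]
    for t in pts:                    # residual y'(t) + y(t) = 0
        A.append([(k * t**(k - 1) if k else 0.0) + t**k for k in range(N)])
        b.append(0.0)
    return solve_linear(A, b)

coeffs = collocate()
y_at_1 = sum(coeffs)   # y(1) = sum of coefficients; exact value is e^{-1}
```

A degree-5 polynomial already reproduces e^{−1} to a few decimal places; richer bases such as Genocchi polynomials and their operational matrices play the same role at higher accuracy.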

    • Authors:David W. Pravica, Njinasoa Randriampiry, Michael J. Spurr
      First page: 83
      Abstract: A family of Schwartz functions W(t) are interpreted as eigensolutions of MADEs in the sense that W^(δ)(t) = E W(q^γ t), where the eigenvalue E ∈ ℝ is independent of the advancing parameter q &gt; 1. The parameters δ, γ ∈ ℕ are characteristics of the MADE. Some issues, which are related to corresponding q-advanced PDEs, are also explored. In the limit q → 1⁺ we show convergence of MADE eigenfunctions to solutions of ODEs, which involve only simple exponentials and trigonometric functions. The limit eigenfunctions (q = 1⁺) are not Schwartz, thus convergence is only uniform in t ∈ ℝ on compact sets. An asymptotic analysis is provided for MADEs which indicates how to extend solutions in a neighborhood of the origin t = 0. Finally, an expanded table of Fourier transforms is provided that includes Schwartz solutions to MADEs.
      Citation: Axioms
      PubDate: 2020-07-21
      DOI: 10.3390/axioms9030083
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Sopo Pkhakadze, Hans Tompits
      First page: 84
      Abstract: Default logic is one of the basic formalisms for nonmonotonic reasoning, a well-established area from logic-based artificial intelligence dealing with the representation of rational conclusions, which are characterised by the feature that the inference process may require to retract prior conclusions given additional premisses. This nonmonotonic aspect is in contrast to valid inference relations, which are monotonic. Although nonmonotonic reasoning has been extensively studied in the literature, only few works exist dealing with a proper proof theory for specific logics. In this paper, we introduce sequent-type calculi for two variants of default logic, viz., on the one hand, for three-valued default logic due to Radzikowska, and on the other hand, for disjunctive default logic, due to Gelfond, Lifschitz, Przymusinska, and Truszczyński. The first variant of default logic employs Łukasiewicz's three-valued logic as the underlying base logic and the second variant generalises defaults by allowing a selection of consequents in defaults. Both versions have been introduced to address certain representational shortcomings of standard default logic. The calculi we introduce axiomatise brave reasoning for these versions of default logic, which is the task of determining whether a given formula is contained in some extension of a given default theory. Our approach follows the sequent method first introduced in the context of nonmonotonic reasoning by Bonatti, which employs a rejection calculus for axiomatising invalid formulas, taking care of expressing the consistency condition of defaults.
      Citation: Axioms
      PubDate: 2020-07-21
      DOI: 10.3390/axioms9030084
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Kyung-Tae Kang, Seok-Zun Song, Eun Hwan Roh, Young Bae Jun
      First page: 85
      Abstract: The notion of hybrid ideals in BCK/BCI-algebras is introduced, and related properties are investigated. Characterizations of hybrid ideals are discussed. Relations between hybrid ideals and hybrid subalgebras are considered. Based on a hybrid structure, properties of special sets are investigated, and conditions for the special sets to be ideals are displayed.
      Citation: Axioms
      PubDate: 2020-07-23
      DOI: 10.3390/axioms9030085
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Alexander Yeliseev
      First page: 86
      Abstract: An asymptotic solution of the linear Cauchy problem in the presence of a "weak" turning point for the limit operator is constructed by the regularization method of S. A. Lomov. The main singularities of this problem are written out explicitly. Estimates are given for ε that characterize the behavior of the singularities as ε → 0. The asymptotic convergence of the regularized series is proven. The results are illustrated by an example. Bibliography: six titles.
      Citation: Axioms
      PubDate: 2020-07-23
      DOI: 10.3390/axioms9030086
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Julio César Hernández Arzusa
      First page: 87
      Abstract: In this paper, we give conditions under which a commutative topological semigroup can be embedded algebraically and topologically into a compact topological Abelian group. We prove that every feebly compact regular first countable cancellative commutative topological semigroup with open shifts is a topological group, as well as every connected locally compact Hausdorff cancellative commutative topological monoid with open shifts. Finally, we use these results to give sufficient conditions on a commutative topological semigroup that guarantee it to have countable cellularity.
      Citation: Axioms
      PubDate: 2020-07-24
      DOI: 10.3390/axioms9030087
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:David Levin
      First page: 88
      Abstract: In some applications, one is interested in reconstructing a function f from its Fourier series coefficients. The problem is that the Fourier series is slowly convergent if the function is non-periodic or non-smooth. In this paper, we suggest a method for deriving a high order approximation to f using a Padé-like method. Namely, we do this by fitting some Fourier coefficients of the approximant to the given Fourier coefficients of f. Given the Fourier series coefficients of a function on a rectangular domain in R^d, assuming the function is piecewise smooth, we approximate the function by piecewise high order spline functions. First, the singularity structure of the function is identified. For example, in the 2D case, we find a high accuracy approximation to the curves separating the smooth segments of f. Secondly, we simultaneously find the approximations of all the different segments of f. We start by developing and demonstrating a high accuracy algorithm for the 1D case, and we use this algorithm to step up to the multidimensional case.
      Citation: Axioms
      PubDate: 2020-07-24
      DOI: 10.3390/axioms9030088
      Issue No:Vol. 9, No. 3 (2020)
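The slow convergence that motivates the paper is easy to demonstrate: for the non-periodic function f(x) = x on (−π, π), the Fourier sine coefficients decay only like 1/n, so partial sums converge slowly away from the endpoints:

```python
import math

def partial_sum(x, N):
    """N-term Fourier sine series of f(x) = x on (-pi, pi),
    with coefficients b_n = 2 * (-1)**(n + 1) / n."""
    return sum(2.0 * (-1) ** (n + 1) / n * math.sin(n * x)
               for n in range(1, N + 1))

err_10 = abs(partial_sum(1.0, 10) - 1.0)    # O(1/N) pointwise decay
err_100 = abs(partial_sum(1.0, 100) - 1.0)
```

Even at x = 1, far from the jump at ±π, ten terms leave an error near 0.1; acceleration schemes such as the Padé-like fitting described above aim to beat this slow decay.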

    • Authors:Konstantinos Kalimeris, Athanassios S. Fokas
      First page: 89
      Abstract: Using the unified transform, also known as the Fokas method, we analyse the modified Helmholtz equation in the regular hexagon with symmetric Dirichlet boundary conditions; namely, the boundary value problem where the trace of the solution is given by the same function on each side of the hexagon. We show that if this function is odd, then this problem can be solved in closed form; numerical verification is also provided.
      Citation: Axioms
      PubDate: 2020-07-28
      DOI: 10.3390/axioms9030089
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Kazuki Yamaga
      First page: 90
      Abstract: It is known that, in quantum theory, measurements may suppress the Hamiltonian dynamics of a system. A famous example is the 'Quantum Zeno Effect': if one performs M measurements, asking whether the system is in the same state as at the initial time, up to a fixed measurement time t, then the survival probability tends to 1 in the limit M → ∞. This is the case for fixed measurement time t. It is known that, if the measurement time is taken to infinity at an appropriate scaling, the 'Quantum Zeno Effect' does not occur and the effect of the Hamiltonian dynamics emerges. In the present paper, we consider long-time repeated measurements and the dynamics of quantum many-body systems in the scaling where the effects of measurements and dynamics are balanced. We show that a stochastic process, called the symmetric simple exclusion process (SSEP), is obtained from repeated long-time measurements of the configuration of particles in finite lattice fermion systems. The emerging stochastic process is independent of the potential and interaction of the underlying Hamiltonian of the system.
      Citation: Axioms
      PubDate: 2020-07-29
      DOI: 10.3390/axioms9030090
      Issue No:Vol. 9, No. 3 (2020)
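A minimal discrete-time caricature of the SSEP (the classical stochastic process itself, not the measurement-induced derivation in the paper) just exchanges occupancies across randomly chosen bonds of a ring, conserving the particle number:

```python
import random

def ssep_step(config, rng):
    """Attempt one exchange across a uniformly random bond of a ring.
    The exchange is symmetric, and it only changes the configuration
    when exactly one of the two sites is occupied (exclusion rule)."""
    i = rng.randrange(len(config))
    j = (i + 1) % len(config)
    config[i], config[j] = config[j], config[i]

rng = random.Random(0)
config = [1, 1, 1, 0, 0, 0, 0, 0]   # 3 particles on an 8-site ring
for _ in range(1000):
    ssep_step(config, rng)
```

Because each update is a swap, the occupancies stay in {0, 1} and the particle number is conserved exactly, which is the defining invariant of the exclusion process.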

    • Authors:Romeo Pascone, Cathryn Callahan
      First page: 91
      Abstract: A novel method for generating and providing quadrature solutions to families of linear, second-order, ordinary differential equations is presented in this paper. It is based upon a comparison of control system feedback diagrams: one representing the system and equation under study, and a second equalized to it and providing solutions. The resulting Riccati equation connection between them is utilized to generate and solve groups of equations parameterized by arbitrary functions and constants. This method also leads to a formal solution mechanism for all second-order linear differential equations involving an infinite series of integrals of each equation's Schwarzian derivative. The practicality of this mechanism is strongly dependent on the series rates of convergence and the allowed regions for convergence. The feedback diagram method developed is shown to be equivalent to a comparable method based on the differential equation's normal form and to another relying upon the grouping of terms for a reduction of the equation order, while augmenting their results. Applications are also made to the Helmholtz equation.
      Citation: Axioms
      PubDate: 2020-07-29
      DOI: 10.3390/axioms9030091
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Shaima M. Dsouza, Tittu Mathew Varghese, P. R. Budarapu, S. Natarajan
      First page: 92
      Abstract: A non-intrusive approach coupled with a non-uniform rational B-splines based isogeometric finite element method is proposed here. The developed methodology was employed to study the stochastic static bending and free vibration characteristics of functionally graded material plates with inherent material randomness. A first order shear deformation theory with an artificial shear correction factor was used for spatial discretization. The output randomness is represented by polynomial chaos expansion. The robustness and accuracy of the framework were demonstrated by comparing the results with Monte Carlo simulations. A systematic parametric study was carried out to bring out the sensitivity of the input randomness on the stochastic output response using Sobol' indices. Functionally graded plates made of Aluminium (Al) and Zirconium Oxide (ZrO2) were considered in all the numerical examples.
      Citation: Axioms
      PubDate: 2020-07-30
      DOI: 10.3390/axioms9030092
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Thomas Ernst
      First page: 93
      Abstract: The Horn–Karlsson approach to find convergence regions is applied to find convergence regions for triple q-hypergeometric functions. It turns out that the convergence regions are significantly increased in the q-case; just as for q-Appell and q-Lauricella functions, additions are replaced by Ward q-additions. Mostly referring to Krishna Srivastava 1956, we give q-integral representations for these functions.
      Citation: Axioms
      PubDate: 2020-07-31
      DOI: 10.3390/axioms9030093
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:José Luis Carmona Jiménez, Marco Castrillón López
      First page: 94
      Abstract: We study the reduction procedure applied to pseudo-Kähler manifolds by a one-dimensional Lie group acting by isometries and preserving the complex tensor. We endow the quotient manifold with an almost contact metric structure. We use this fact to connect pseudo-Kähler homogeneous structures with almost contact metric homogeneous structures. This relation will have consequences in the class of the almost contact manifold. Indeed, if we choose a pseudo-Kähler homogeneous structure of linear type, then the reduced, almost contact homogeneous structure is of linear type and the reduced manifold is of type C5 ⊕ C6 ⊕ C12 of the Chinea-González classification.
      Citation: Axioms
      PubDate: 2020-08-01
      DOI: 10.3390/axioms9030094
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Yazid Gouari, Zoubir Dahmani, Shan E. Farooq, Farooq Ahmad
      First page: 95
      Abstract: A coupled system of singular fractional differential equations involving the Riemann–Liouville integral and the Caputo derivative is considered in this paper. The question of existence and uniqueness of solutions is studied using the Banach contraction principle. Furthermore, the question of existence of at least one solution is discussed. At the end, an illustrative example is given in detail.
      Citation: Axioms
      PubDate: 2020-08-02
      DOI: 10.3390/axioms9030095
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Omar Bazighifan, Rami Ahmad El-Nabulsi, Osama Moaaz
      First page: 96
      Abstract: The aim of this work is to study the oscillatory behavior of solutions of even-order neutral nonlinear differential equations. By using the Riccati substitution, a new oscillation condition is obtained which ensures that all solutions of the studied equation are oscillatory. The obtained results complement the well-known oscillation results in the literature. Some examples are given to illustrate the applicability of the obtained results.
      Citation: Axioms
      PubDate: 2020-08-12
      DOI: 10.3390/axioms9030096
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:P. Njionou Sadjang, S. Mboutngam
      First page: 97
      Abstract: In this paper, we introduce a fractional q-extension of the q-differential operator D_{q^{-1}} and prove some of its main properties. Next, fractional q-extensions of some classical q-orthogonal polynomials are introduced, and some of the main properties of the newly defined functions are given. Finally, a fractional q-difference equation of Gaussian type is introduced and solved by means of the power series method.
      Citation: Axioms
      PubDate: 2020-08-12
      DOI: 10.3390/axioms9030097
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Maria Letizia Guerra, Laerte Sorini
      First page: 98
      Abstract: Value at Risk (VaR) has become a crucial measure for decision making in risk management over the last thirty years, and many estimation methodologies address finding the measure that best accounts for the unremovable uncertainty of real financial markets. One possible and promising way to include uncertainty is to refer to the mathematics of fuzzy numbers and its rigorous methodologies, which offer flexible ways to read and interpret properties of real data that may arise in many areas. The paper aims to show the effectiveness of two distinguished models for accounting for uncertainty in VaR computation. Initially, following a non-parametric approach, we apply the fuzzy-transform approximation function to smooth data by capturing fundamental patterns before computing VaR. As a second model, we apply the Average Cumulative Function (ACF) to deduce the quantile function at point p as the potential loss VaRp for a fixed time horizon for 100p% of the values. In both cases a comparison is conducted with respect to the identification of VaR through historical simulation: twelve years of daily S&P 500 index returns are considered, and a back-testing procedure is applied to verify the number of bad VaR forecasts in each methodology. Despite the preliminary nature of the research, we point out that VaR estimation, when modelling uncertainty through fuzzy numbers, outperforms the traditional VaR in the sense that it is closest to the right amount of capital to allocate in order to cover future losses in normal market conditions.
      Citation: Axioms
      PubDate: 2020-08-12
      DOI: 10.3390/axioms9030098
      Issue No:Vol. 9, No. 3 (2020)
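The historical-simulation baseline that the abstract above compares against can be sketched in a few lines (synthetic Gaussian returns here stand in for the S&P 500 data used in the paper): VaR at confidence level p is just the p-quantile of the empirical loss distribution.

```python
import random

def var_historical(returns, p=0.99):
    """Historical-simulation Value at Risk: the p-quantile of losses,
    i.e. the loss level exceeded on only a (1 - p) fraction of days."""
    losses = sorted(-r for r in returns)   # a loss is a negated return
    idx = min(int(p * len(losses)), len(losses) - 1)
    return losses[idx]

random.seed(0)
# Hypothetical daily returns standing in for twelve years of index data.
rets = [random.gauss(0.0005, 0.01) for _ in range(3000)]
var_99 = var_historical(rets, p=0.99)   # 99% daily loss threshold
```

The fuzzy approaches discussed in the paper replace the raw empirical quantile with a smoothed or interval-valued estimate; the back-test then counts days whose realized loss exceeds the forecast threshold.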

    • Authors:Nopparat Wairojjana, Habib ur Rehman, Ioannis K. Argyros, Nuttapol Pakkaranang
      First page: 99
      Abstract: Several methods have been put forward to solve equilibrium problems, among which the two-step extragradient method is very useful and significant. In this article, we propose a new extragradient-like method to evaluate the numerical solution of pseudomonotone equilibrium problems in real Hilbert spaces. This method uses a non-monotone stepsize technique based on local bifunction values and Lipschitz-type constants. Furthermore, we establish the weak convergence theorem for the suggested method and provide applications of our results. Finally, several experimental results are reported to illustrate the performance of the proposed method.
      Citation: Axioms
      PubDate: 2020-08-17
      DOI: 10.3390/axioms9030099
      Issue No:Vol. 9, No. 3 (2020)
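The extragradient idea behind methods like the one above (take a trial step, then re-evaluate the operator at the trial point for the actual update) can be sketched on the classic toy saddle problem min over x, max over y of x*y, where plain gradient descent-ascent spirals outward but the extragradient step contracts. This is only an illustration of the two-step mechanism, not the paper's equilibrium algorithm.

```python
def extragradient_saddle(x, y, step=0.1, iters=2000):
    """Extragradient iteration for min_x max_y f(x, y) = x * y.
    A trial (extrapolation) step is taken first, and the gradient is
    re-evaluated at the trial point for the corrected update."""
    for _ in range(iters):
        # gradients of x*y: df/dx = y, df/dy = x
        x_half = x - step * y            # trial descent step in x
        y_half = y + step * x            # trial ascent step in y
        x, y = x - step * y_half, y + step * x_half   # corrected step
    return x, y

x_star, y_star = extragradient_saddle(1.0, 1.0)  # contracts toward (0, 0)
```

The stepsize here is fixed; the paper's contribution is precisely a variable stepsize rule driven by local bifunction values rather than a known Lipschitz constant.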

    • Authors:Henrique Antunes, Walter Carnielli, Andreas Kapsner, Abilio Rodrigues
      First page: 100
      Abstract: In this paper, we propose Kripke-style models for the logics of evidence and truth LETJ and LETF. These logics extend, respectively, Nelson's logic N4 and the logic of first-degree entailment (FDE) with a classicality operator ∘ that recovers classical logic for formulas in its scope. According to the intended interpretation here proposed, these models represent a database that receives information as time passes, and such information can be positive, negative, non-reliable, or reliable, while a formula ∘A means that the information about A, either positive or negative, is reliable. This proposal is in line with the interpretation of N4 and FDE as information-based logics, but adds to the four scenarios expressed by them two new scenarios: reliable (or conclusive) information (i) for the truth and (ii) for the falsity of a given proposition.
      Citation: Axioms
      PubDate: 2020-08-19
      DOI: 10.3390/axioms9030100
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Nopparat Wairojjana, Habib ur Rehman, Manuel De la Sen, Nuttapol Pakkaranang
      First page: 101
      Abstract: A plethora of applications from mathematical programming, such as minimax problems, penalization, and fixed-point problems, to mention a few, can be framed as equilibrium problems. Most of the techniques for solving such problems involve iterative methods; that is why, in this paper, we introduce a new extragradient-like method to solve equilibrium problems in real Hilbert spaces with a Lipschitz-type condition on the bifunction. The advantage of the method is a variable stepsize formula that is updated at each iteration based on the previous iterations. The method also operates without prior knowledge of the Lipschitz-type constants. The weak convergence of the method is established under mild conditions on the bifunction. As applications, fixed-point theorems involving strict pseudocontractions and results for pseudomonotone variational inequalities are studied. We report various numerical results to show the numerical behaviour of the proposed method and compare it with existing ones.
      Citation: Axioms
      PubDate: 2020-08-31
      DOI: 10.3390/axioms9030101
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Pradip Debnath, Hari Mohan Srivastava
      First page: 102
      Abstract: In this paper, we study a problem of global optimization using common best proximity points of a pair of multivalued mappings. First, we introduce a multivalued Banach-type contractive pair of mappings and establish criteria for the existence of their common best proximity point. Next, we put forward the concept of a multivalued Kannan-type contractive pair and also the concept of the weak Δ-property to determine the existence of a common best proximity point for such a pair of maps.
      Citation: Axioms
      PubDate: 2020-09-07
      DOI: 10.3390/axioms9030102
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Chinda Chaichuay, Atid Kangtunyakarn
      First page: 103
      Abstract: There are many methods for finding a common solution of a system of variational inequalities, a split equilibrium problem, and a hierarchical fixed-point problem in the setting of real Hilbert spaces, and strong convergence theorems have been proved for them. Many split feasibility problems arise in real Hilbert spaces. The open problem is to prove a strong convergence theorem in three Hilbert spaces with a method different from the existing ones. In this research, a new split variational inequality in three Hilbert spaces is proposed. Important tools used to solve classical problems are developed. A convergence theorem for finding a common element of the set of solutions of such problems and the sets of fixed points of discontinuous mappings is proved.
      Citation: Axioms
      PubDate: 2020-09-07
      DOI: 10.3390/axioms9030103
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Irvanizam Irvanizam, Nawar Nabila Zi, Rahma Zuhra, Amrusi Amrusi, Hizir Sofyan
      First page: 104
      Abstract: In this manuscript, we extend the traditional multi-attributive border approximation area comparison (MABAC) method for multiple-criteria group decision-making (MCGDM) with triangular fuzzy neutrosophic numbers (TFNNs) to propose the TFNNs-MABAC method. In the proposed method, we utilize TFNNs to express the values of criteria for each alternative in MCGDM problems. First, we briefly review the basic concept of TFNNs and describe some of its corresponding operation laws, the score and accuracy functions, and the normalized Hamming distance. We then review two aggregation operators of TFNNs. Afterward, we combine the traditional MABAC method with the triangular fuzzy neutrosophic evaluation and provide a sequence of calculation procedures of the TFNNs-MABAC method. After comparing it with some TFNNs aggregation operators and another method, the results showed that our extended MABAC method can not only effectively handle conflicting attributes, but also practically deal with incomplete and indeterminate information in MCGDM problems. Therefore, the extended MABAC method is more effective and reasonable. Finally, an investment selection problem is demonstrated as a practical case to verify the reasonability of our MABAC method.
      Citation: Axioms
      PubDate: 2020-09-10
      DOI: 10.3390/axioms9030104
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Meryeme El Harrak, Ahmed Hajji
      First page: 105
      Abstract: In the present paper, we propose a common fixed point theorem for three commuting mappings via a new contractive condition which generalizes fixed point theorems of Darbo, Hajji, and Aghajani et al. An application is also given to illustrate our main result. Moreover, several consequences are derived, which are generalizations of Darbo's fixed point theorem and a result of Hajji.
      Citation: Axioms
      PubDate: 2020-09-11
      DOI: 10.3390/axioms9030105
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Abdessamad Dehaj, Mohamed Guessous
      First page: 106
      Abstract: We give a geometrical proof of Komlós' theorem for sequences of random variables with values in a super-reflexive Banach space. Our approach is inspired by the elementary proof given by Guessous in 1996 for the Hilbert case and uses some geometric properties of smooth spaces.
      Citation: Axioms
      PubDate: 2020-09-11
      DOI: 10.3390/axioms9030106
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Kyoung Lee, Seok-Zun Song, Young Jun
      First page: 107
      Abstract: The notions of (quasi, pseudo) star-shaped sets are introduced, and several related properties are investigated. Characterizations of (quasi) star-shaped sets are considered. The translation of (quasi, pseudo) star-shaped sets is discussed. Unions and intersections of quasi star-shaped sets are studied. Conditions for a quasi (or pseudo) star-shaped set to be a star-shaped set are provided.
      Citation: Axioms
      PubDate: 2020-09-12
      DOI: 10.3390/axioms9030107
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Alex Citkin, Urszula Wybraniec-Skardowska
      First page: 108
      Abstract: Since its inception, logic has studied the acceptable rules of reasoning, the rules that allow us to pass from certain statements, serving as premises or assumptions, to a statement taken as a conclusion [...]
      Citation: Axioms
      PubDate: 2020-09-13
      DOI: 10.3390/axioms9030108
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Omar Benslimane, Ahmed Aberqi, Jaouad Bennouna
      First page: 109
      Abstract: The purpose of this work is to prove the existence and uniqueness of solutions of a class of nonlinear unilateral elliptic problems (P) in an arbitrary domain, governed by a low-order term and non-polynomial growth described by an N-tuple of N-functions satisfying the Δ2-condition. The source term is merely integrable.
      Citation: Axioms
      PubDate: 2020-09-17
      DOI: 10.3390/axioms9030109
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Kifayat Ullah, Junaid Ahmad, Manuel de la Sen
      First page: 110
      Abstract: The purpose of this research work is to prove some weak and strong convergence results for maps satisfying the (E)-condition through the three-step Thakur (J. Inequal. Appl. 2014, 2014:328) iterative process in Banach spaces. We also present a new example of maps satisfying the (E)-condition, and prove that their three-step Thakur iterative process is more efficient than the other well-known three-step iterative processes. At the end of the paper, we apply our results to finding solutions of split feasibility problems. The presented research work updates some of the results in the current literature.
      Citation: Axioms
      PubDate: 2020-09-17
      DOI: 10.3390/axioms9030110
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:George Tsintsifas
      First page: 111
      Abstract: The paper concerns inequalities between fundamental quantities such as area, perimeter, diameter, and width for convex plane figures.
      Citation: Axioms
      PubDate: 2020-09-17
      DOI: 10.3390/axioms9030111
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Noureddine Sabiri, Mohamed Guessous
      First page: 112
      Abstract: Let (Ω, F, μ) be a complete probability space, E a separable Banach space, and E′ the topological dual vector space of E. We present some compactness results in L^1_{E′}[E], the Banach space of weak*-scalarly integrable E′-valued functions. We also extend the classical theorem of Komlós to bounded sequences in L^1_{E′}[E].
      Citation: Axioms
      PubDate: 2020-09-22
      DOI: 10.3390/axioms9030112
      Issue No:Vol. 9, No. 3 (2020)

    • Authors:Janusz Ciuciura
      First page: 35
      Abstract: A logic is called explosive if its consequence relation validates the so-called principle of ex contradictione sequitur quodlibet. A logic is called paraconsistent so long as it is not explosive. Sette's calculus P1 is widely recognized as one of the most important paraconsistent calculi. It is not surprising then that the calculus was a starting point for many research studies on paraconsistency. Fernández–Coniglio's hierarchy of paraconsistent systems is a good example of such an approach. The hierarchy is presented in Newton da Costa's style. Therefore, the law of non-contradiction plays the main role in its negative axioms. The principle of ex contradictione sequitur quodlibet has been marginalized: it does not play any leading role in the hierarchy. The objective of this paper is to present an alternative axiomatization for the hierarchy. The main idea behind it is to focus explicitly on the (in)validity of the principle of ex contradictione sequitur quodlibet. This makes the hierarchy less complex and more transparent, especially from the viewpoint of paraconsistency.
      Citation: Axioms
      PubDate: 2020-03-30
      DOI: 10.3390/axioms9020035
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Mujahid Abbas, Fatemeh Lael, Naeem Saleem
      First page: 36
      Abstract: In this paper we introduce the concepts of ψ-contraction and monotone ψ-contraction correspondence in fuzzy b-metric spaces and obtain fixed point results for these contractive mappings. The obtained results generalize some existing ones in fuzzy metric spaces and fuzzy b-metric spaces. Further, we address an open problem in b-metric and fuzzy b-metric spaces. To elaborate on the results obtained herein, we provide an example that shows their usability.
      Citation: Axioms
      PubDate: 2020-03-31
      DOI: 10.3390/axioms9020036
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Donal O’Regan
      First page: 37
      Abstract: This paper considers the topological transversality theorem for general multivalued maps which have selections in a given class of maps.
      Citation: Axioms
      PubDate: 2020-04-10
      DOI: 10.3390/axioms9020037
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Marcin Bartkowiak, Aleksandra Rutkowska
      First page: 38
      Abstract: In a real market, the quantity of information and recommendations is constantly increasing. However, recommendations are often in linguistic form, and no recommendation is based on a single piece of information. Predictions of individuals and their confidence can vary greatly. Thus, a problem arises concerning different (disjoint or partially coherent) vague opinions of various experts or information from multiple sources. In this paper, we introduce extensions of the Black–Litterman model with linguistically expressed views from different experts/many sources. The study focuses on empirical analysis of the results of the proposed fuzzy approach. In the presented modification, every expert presents an opinion about particular assets as intervals, and then an experton is built for each asset. In the portfolio optimization, we use aggregated views represented by an interval, which is the mean value of the experton built on the particular views. In an empirical study, we built and tested 10,000 portfolios based on recommendations from EquityRT made by 14–49 experts monthly between November 2017 and June 2019 for the 29 biggest companies from the US market and different sectors. The annual average return from the portfolios is 9.5–11.8%, depending on the width of the intervals and additional constraints. This approach allows people to formulate intuitive views and to combine the opinions of a group of experts.
      Citation: Axioms
      PubDate: 2020-04-10
      DOI: 10.3390/axioms9020038
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Omar Bazighifan, Feliz Minhos, Osama Moaaz
      First page: 39
      Abstract: Some new sufficient conditions are established for the oscillation of fourth-order neutral differential equations with continuously distributed delay of the form (r(t)[(Nx)‴(t)]^α)′ + ∫_a^b q(t, ϑ) x^β(δ(t, ϑ)) dϑ = 0, where t ≥ t₀ and Nx(t) := x(t) + p(t) x(φ(t)). An example is provided to show the importance of these results.
      Citation: Axioms
      PubDate: 2020-04-11
      DOI: 10.3390/axioms9020039
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Florin F. Nichita
      First page: 40
      Abstract: In January 2019, MDPI published a book titled Hopf Algebras, Quantum Groups and Yang–Baxter Equations, based on a successful special issue [...]
      Citation: Axioms
      PubDate: 2020-04-13
      DOI: 10.3390/axioms9020040
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:J.-Martín Castro-Manzano
      First page: 41
      Abstract: The concept of distribution is a concept within traditional logic that has been fundamental for the syntactic development of Sommers and Englebretsen's term functor logic, a logic that recovers the term syntax of traditional logic. The issue here, however, is that the semantic counterpart of distribution for this logic is still in the making. Consequently, given this disparity between syntax and semantics, in this contribution we adapt some ideas of term functor logic tableaux to develop models of distribution, thus providing some alternative formal semantics to help close this breach.
      Citation: Axioms
      PubDate: 2020-04-17
      DOI: 10.3390/axioms9020041
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Rabha W. Ibrahim, Rafida M. Elobaid, Suzan J. Obaiys
      First page: 42
      Abstract: A class of Briot–Bouquet differential equations is a magnificent part of investigating the geometric behaviors of analytic functions, using the subordination and superordination concepts. In this work, we aim to formulate a new differential operator with complex connections (coefficients) in the open unit disk and generalize a class of Briot–Bouquet differential equations (BBDEs). We study and generalize new classes of analytic functions based on the new differential operator. Consequently, we define a linear operator with applications.
      Citation: Axioms
      PubDate: 2020-04-21
      DOI: 10.3390/axioms9020042
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Serena Doria, Radko Mesiar, Adam Šeliga
      First page: 43
      Abstract: Coherent lower previsions generalize expected values and are defined on the class of all real random variables on a finite non-empty set. Well-known constructions of coherent lower previsions by means of lower probabilities, or by means of super-modular capacity-based Choquet integrals, do not cover this important class of functionals on real random variables. In this paper, a new approach to the construction of coherent lower previsions acting on a finite space is proposed, exemplified, and studied. It is based on special decomposition integrals recently introduced by Even and Lehrer; in our case the considered decomposition systems are single collections, and the integrals are thus called collection integrals. In the special case when these integrals, defined for non-negative random variables only, are shift-invariant, we extend them to the class of all real random variables, thus obtaining so-called super-additive integrals. Our proposed construction can then be seen as a normalized super-additive integral. We discuss and exemplify several particular cases, for example, when collections determine a coherent lower prevision for any monotone set function. For some particular collections, only particular set functions can be considered for our construction. Conjugate coherent upper previsions are also considered.
      Citation: Axioms
      PubDate: 2020-04-23
      DOI: 10.3390/axioms9020043
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Subramanian Muthaiah, Dumitru Baleanu
      First page: 44
      Abstract: This article deals with the existence and uniqueness of solutions for a new class of boundary value problems (BVPs) involving nonlinear fractional differential equations (FDEs), inclusions, and boundary conditions involving the generalized fractional integral. The nonlinearity relies on the unknown function and its lower-order fractional derivatives. We use fixed-point theorems with single-valued and multi-valued maps to obtain the desired results, and the main results are well explained with the support of illustrations. We also address some variants of the problem.
      Citation: Axioms
      PubDate: 2020-04-25
      DOI: 10.3390/axioms9020044
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Tursun K. Yuldashev
      First page: 45
      Abstract: The questions of solvability of a nonlocal inverse boundary value problem for a mixed pseudohyperbolic-pseudoelliptic integro-differential equation with spectral parameters are considered. Using the method of Fourier series, a system of countable systems of ordinary integro-differential equations is obtained. To determine the arbitrary integration constants, a system of algebraic equations is obtained. From this system, regular and irregular values of the spectral parameters are calculated. The unique solvability of the inverse boundary value problem for regular values of the spectral parameters is proved. For irregular values of the spectral parameters, a criterion for the existence of an infinite set of solutions of the inverse boundary value problem is established. The results are formulated as a theorem.
      Citation: Axioms
      PubDate: 2020-04-27
      DOI: 10.3390/axioms9020045
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Juan Pedro Lucanera, Laura Fabregat-Aibar, Valeria Scherger, Hernán Vigier
      First page: 46
      Abstract: The paper aims to identify which variables related to capital structure theory predict business failure in the Spanish construction sector during the subprime crisis. An artificial neural network (ANN) approach based on Self-Organizing Maps (SOM) is proposed, which allows one to cluster between the groups of default and active firms. The similarities and differences between the main features in each group determine the variables that explain the potential failure of the analyzed firms. The network tests whether the factors that explain leverage, such as profitability, growth opportunities, size of the company, risk, asset structure, and age of the firm, can be suitable to predict business failure. The sample is formed by 152 construction firms (76 default and 76 active) in the Spanish market. The results show that the SOM correctly predicts 97.4% of firms in the construction sector and classifies the firms in five groups with clear similarities inside the clusters. The study proves the suitability of the SOM for predicting business bankruptcy situations using variables related to capital structure theory and financial crises.
      Citation: Axioms
      PubDate: 2020-04-27
      DOI: 10.3390/axioms9020046
      Issue No:Vol. 9, No. 2 (2020)
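The Self-Organizing Map used above can be sketched in a few lines of pure Python (a 1-D toy map on scalar data; the sample values and parameters are illustrative, not the paper's 152-firm, multi-variable setup):

```python
import math
import random

def train_som(data, n_nodes=5, epochs=300, lr0=0.5, sigma0=2.0, seed=1):
    """Minimal 1-D Self-Organizing Map: each step draws a sample, finds the
    best-matching node (BMU), and pulls it and its grid neighbours toward
    the sample with a learning rate and neighbourhood radius that decay."""
    rng = random.Random(seed)
    nodes = [rng.random() for _ in range(n_nodes)]
    for t in range(epochs):
        x = rng.choice(data)
        frac = 1.0 - t / epochs
        lr = lr0 * frac                  # decaying learning rate
        sigma = sigma0 * frac + 1e-9     # decaying neighbourhood radius
        bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
        for i in range(n_nodes):
            h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
            nodes[i] += lr * h * (x - nodes[i])
    return nodes

# Two separated clusters of scalar samples (hypothetical data).
samples = [0.10, 0.12, 0.08, 0.90, 0.88, 0.92]
nodes = train_som(samples)
```

After training, firms (here, samples) can be assigned to their nearest node, which is how SOM-based studies read off clusters such as "default" versus "active" groups.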

    • Authors:Davor Dragičević, Ciprian Preda
      First page: 47
      Abstract: For linear skew-product three-parameter semiflows with discrete time acting on an arbitrary Hilbert space, we obtain a complete characterization of exponential stability in terms of the existence of appropriate Lyapunov functions. As a nontrivial application of our work, we prove that the notion of exponential stability persists under sufficiently small linear perturbations.
      Citation: Axioms
      PubDate: 2020-04-27
      DOI: 10.3390/axioms9020047
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Elisabetta Barletta, Sorin Dragomir, Francesco Esposito
      First page: 48
      Abstract: We review several results in the theory of weighted Bergman kernels. Weighted Bergman kernels generalize ordinary Bergman kernels of domains Ω ⊂ C^n but also appear locally in the attempt to quantize classical states of mechanical systems whose classical phase space is a complex manifold, and they turn out to be an efficient computational tool for the calculation of transition probability amplitudes from one classical state (identified with a coherent state) to another. We review the weighted version (for weights of the form γ = φ^m on strictly pseudoconvex domains Ω = {φ < 0} ⊂ C^n) of Fefferman's asymptotic expansion of the Bergman kernel and discuss its possible extensions (to more general classes of weights) and implications, e.g., those related to the construction and use of Fefferman's metric (a Lorentzian metric on ∂Ω × S^1). Several open problems are indicated throughout the survey.
      Citation: Axioms
      PubDate: 2020-04-29
      DOI: 10.3390/axioms9020048
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Anton Romanov, Valeria Voronina, Gleb Guskov, Irina Moshkina, Nadezhda Yarushkina
      First page: 49
      Abstract: The development of the economy and the transition to Industry 4.0 create new challenges for artificial intelligence methods. Such challenges include the processing of large volumes of data, the analysis of various dynamic indicators, the discovery of complex dependencies in the accumulated data, and the forecasting of the state of processes. The main point of this study is the development of a set of analytical and prognostic methods. The methods described in this article are based on fuzzy logic, statistics, and time series data mining, because data extracted from dynamic systems are initially incomplete and have a high degree of uncertainty. The ultimate goal of the study is to improve the quality of data analysis in industrial and economic systems. The advantages of the proposed methods are flexibility and orientation toward the high interpretability of dynamic data. The high level of interpretability and interoperability of dynamic data is achieved through a combination of time series data mining and knowledge base engineering methods. Merging a set of rules extracted from the time series with knowledge base rules allows for making a forecast when the length and nature of the time series are insufficient. The proposed methods also summarize the results of process modeling for diagnosing technical systems, forecasting the economic condition of enterprises, and approaches to the technological preparation of production in a multi-product production program, with the application of type-2 fuzzy sets for time series modeling. Intelligent systems based on the proposed methods demonstrate an increase in the quality and stability of their functioning. This article contains a set of experiments to support this statement.
      Citation: Axioms
      PubDate: 2020-04-30
      DOI: 10.3390/axioms9020049
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Ahmed Alsaedi, Abrar Broom, Sotiris K. Ntouyas, Bashir Ahmad
      First page: 50
      Abstract: In this paper, we study the existence of solutions for nonlocal single- and multi-valued boundary value problems involving right-Caputo and left-Riemann–Liouville fractional derivatives of different orders and right-left Riemann–Liouville fractional integrals. The existence of solutions for the single-valued case relies on Sadovskii's fixed point theorem. The first existence results for the multi-valued case are proved by applying Bohnenblust–Karlin's fixed point theorem, while the second one is based on Martelli's fixed point theorem. We also demonstrate the applications of the obtained results.
      Citation: Axioms
      PubDate: 2020-05-01
      DOI: 10.3390/axioms9020050
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Godwin Amechi Okeke, Mujahid Abbas, Manuel de la Sen
      First page: 51
      Abstract: We propose two new iterative algorithms for solving K-pseudomonotone variational inequality problems in the framework of real Hilbert spaces. These newly proposed methods are obtained by combining the viscosity approximation algorithm, the Picard–Mann algorithm, and the inertial subgradient extragradient method. We establish some strong convergence theorems for our newly developed methods under certain restrictions. Our results extend and improve several recently announced results. Furthermore, we give several numerical experiments to show that our proposed algorithms perform better in comparison with several existing methods.
      Citation: Axioms
      PubDate: 2020-05-11
      DOI: 10.3390/axioms9020051
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Aroonkumar Beesham
      First page: 52
      Abstract: The cosmic censorship hypothesis is regarded as one of the most important unsolved problems in classical general relativity, viz., whether generic gravitational collapse of a star, after it has exhausted its nuclear fuel, will lead only to black holes under reasonable physical conditions. We discuss the collapse of a fluid with nonzero radial pressure within the context of the Vaidya spacetime, considering a decaying cosmological parameter as well as nonzero charge. Previously, a similar analysis was done, but without considering charge. A decaying cosmological parameter may also be associated with dark energy. We found that both black holes and naked singularities can form, depending upon the initial conditions. Hence, charge does not restore the validity of the hypothesis. This provides another example of the violation of the cosmic censorship hypothesis. We also discuss some radiating rotating solutions, arriving at the same conclusion.
      Citation: Axioms
      PubDate: 2020-05-13
      DOI: 10.3390/axioms9020052
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Ruba Almahasneh, Boldizsár Tüű-Szabó, László T. Kóczy, Péter Földesi
      First page: 53
      Abstract: This study proposes a new model and approach for solving a realistic extension of the Time-Dependent Traveling Salesman Problem, using the concept of distance between interval-valued intuitionistic fuzzy sets. For this purpose, we developed an interval-valued fuzzy degree repository based on the relations between rush hour periods and traffic regions in the “city center areas”, and then we utilized the interval-valued intuitionistic fuzzy weighted arithmetic average to aggregate fuzzy information so as to quantify the delay in any given trip between two nodes (cities). The proposed method is illustrated by a simple numerical example.
      Citation: Axioms
      PubDate: 2020-05-13
      DOI: 10.3390/axioms9020053
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Claudio Corianò, Matteo Maria Maglio
      First page: 54
      Abstract: We review the emergence of hypergeometric structures (of F4 Appell functions) from the conformal Ward identities (CWIs) in conformal field theories (CFTs) in dimensions d > 2. We illustrate the case of scalar 3- and 4-point functions. 3-point functions are associated with hypergeometric systems with four independent solutions. For symmetric correlators, they can be expressed in terms of a single 3K integral, a function of quadratic ratios of momenta, which is a parametric integral of three modified Bessel K functions. In the case of scalar 4-point functions, by requiring the correlator to be conformal invariant in coordinate space as well as in some dual variables (i.e., dual conformal invariant), its explicit expression is also given by a 3K integral, or as a linear combination of Appell functions whose arguments are now quartic ratios of momenta. Similar expressions have been obtained in the past in the computation of an infinite class of planar ladder (Feynman) diagrams in perturbation theory, which, however, do not share the same (dual conformal/conformal) symmetry of our solutions. We then discuss some hypergeometric functions of 3 variables, which define 8 particular solutions of the CWIs and correspond to Lauricella functions. They can also be combined in terms of a 4K integral and appear in an asymptotic description of the scalar 4-point function in special kinematical limits.
      Citation: Axioms
      PubDate: 2020-05-14
      DOI: 10.3390/axioms9020054
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Jan L. Cieśliński, Artur Kobus
      First page: 55
      Abstract: The scator set, introduced by Fernández-Guasti and Zaldívar, is endowed with a very peculiar non-distributive product. In this paper we consider the scator space of dimension 1 + 2 and the so-called fundamental embedding, which maps the subset of scators with non-zero scalar component into a 4-dimensional space endowed with a natural distributive product. The original definition of the scator product is induced in a straightforward way. Moreover, we propose an extension of the scator product to the whole scator space, including all scators with vanishing scalar component.
      Citation: Axioms
      PubDate: 2020-05-19
      DOI: 10.3390/axioms9020055
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Piotr Kulicki
      First page: 56
      Abstract: Aristotle's syllogistic is the first ever deductive system. After centuries, Aristotle's ideas are still interesting for logicians who develop Aristotle's work and draw inspiration from his results and even more from his methods. In the paper we discuss the essential elements of the Aristotelian system of syllogistic and Łukasiewicz's reconstruction of it based on the tools of modern formal logic. We pay special attention to the notion of completeness of a deductive system as discussed by both authors. We describe in detail how completeness can be defined and proved with the use of an axiomatic refutation system. Finally, we apply this methodology to different axiomatizations of syllogistic presented by Łukasiewicz, Lemmon and Shepherdson.
      Citation: Axioms
      PubDate: 2020-05-19
      DOI: 10.3390/axioms9020056
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Choukri Derbazi, Zidane Baitiche, Mouffak Benchohra, Alberto Cabada
      First page: 57
      Abstract: In this article, we discuss the existence and uniqueness of extremal solutions for nonlinear initial value problems of fractional differential equations involving the ψ-Caputo derivative. Moreover, some uniqueness results are obtained. Our results rely on the standard tools of functional analysis. More precisely, we apply the monotone iterative technique combined with the method of upper and lower solutions to establish sufficient conditions for existence as well as the uniqueness of extremal solutions to the initial value problem. An illustrative example is presented to point out the applicability of our main results.
      Citation: Axioms
      PubDate: 2020-05-21
      DOI: 10.3390/axioms9020057
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Gwang Hui Kim, Themistocles M. Rassias
      First page: 58
      Abstract: In this paper, we investigate the generalized Hyers–Ulam stability for the generalized psi functional equation f(x + p) = f(x) + φ(x) by the direct method in the sense of P. Gǎvruta and the Hyers–Ulam–Rassias stability.
      Citation: Axioms
      PubDate: 2020-05-23
      DOI: 10.3390/axioms9020058
      Issue No:Vol. 9, No. 2 (2020)
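A toy numerical sketch (our own illustration, not the paper's method) of why the functional equation f(x + p) = f(x) + φ(x) is exactly solvable on a lattice: the values of f on {np : n ≥ 0} are fixed, up to the choice of f(0), by telescoping partial sums of φ. The step p, the choice φ = sin, and the initial value below are all hypothetical.

```python
# Build a solution of f(x + p) = f(x) + phi(x) on the lattice {n*p}
# by telescoping, and verify the equation holds at every lattice point.
import math

p = 0.5            # step of the functional equation (hypothetical choice)
phi = math.sin     # hypothetical inhomogeneity phi(x)

def f(n, f0=1.0):
    """f at the lattice point x = n*p, defined so that
    f((n+1)*p) = f(n*p) + phi(n*p) holds exactly."""
    return f0 + sum(phi(k * p) for k in range(n))

# Check the functional equation at the first 100 lattice points.
for n in range(100):
    assert abs(f(n + 1) - (f(n) + phi(n * p))) < 1e-12
```

The stability question studied in the paper concerns functions that satisfy the equation only approximately; the sketch above only shows the exact lattice solution that perturbed solutions are compared against.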

    • Authors:Ahmed Salem, Mohammad Alnegga
      First page: 59
      Abstract: In this research article, we introduce a new class of hybrid Langevin equation involving two distinct fractional order derivatives in the Caputo sense and a Riemann–Liouville fractional integral. Supported by three-point boundary conditions, we discuss the existence of a solution to this boundary value problem. Because of the important role of the measure of noncompactness in fixed point theory, we use the technique of measure of noncompactness as an essential tool in order to get the existence result. The modern analysis technique is used by applying a generalized version of Darbo's fixed point theorem. A numerical example is presented to clarify our outcomes.
      Citation: Axioms
      PubDate: 2020-05-23
      DOI: 10.3390/axioms9020059
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Kristof Dekimpe, Joeri Van der Veken
      First page: 60
      Abstract: A marginally trapped surface in a spacetime is a Riemannian surface whose mean curvature vector is lightlike at every point. In this paper we give an up-to-date overview of the differential geometric study of these surfaces in Minkowski, de Sitter, anti-de Sitter and Robertson-Walker spacetimes. We give the general local descriptions proven by Anciaux and his coworkers as well as the known classifications of marginally trapped surfaces satisfying one of the following additional geometric conditions: having positive relative nullity, having parallel mean curvature vector field, having finite type Gauss map, being invariant under a one-parameter group of ambient isometries, being isotropic, being pseudo-umbilical. Finally, we provide examples of constant Gaussian curvature marginally trapped surfaces and state some open questions.
      Citation: Axioms
      PubDate: 2020-05-24
      DOI: 10.3390/axioms9020060
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Francesca Pitolli
      First page: 61
      Abstract: Boundary value problems with fractional derivatives in space arise in several fields, such as biology, mechanical engineering, and control theory, to cite just a few. In this paper we present a new numerical method for the solution of boundary value problems having Caputo derivative in space. We approximate the solution by the Schoenberg-Bernstein operator, which is a positive spline operator with shape-preserving properties. The unknown coefficients of the approximating operator are determined by a collocation method whose collocation matrices can be constructed efficiently by explicit formulas. The numerical experiments we conducted show that the proposed method is efficient and accurate.
      Citation: Axioms
      PubDate: 2020-05-25
      DOI: 10.3390/axioms9020061
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Ravi P. Agarwal, Petio S. Kelevedjiev, Todor Z. Todorov
      First page: 62
      Abstract: Under barrier strips type assumptions we study the existence of C³[0, 1]-solutions to various two-point boundary value problems for the equation x‴ = f(t, x, x′, x″). We also give some results guaranteeing positive or non-negative, monotone, convex or concave solutions.
      Citation: Axioms
      PubDate: 2020-05-31
      DOI: 10.3390/axioms9020062
      Issue No:Vol. 9, No. 2 (2020)
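A hedged numerical sketch (our own, not from the paper, which is purely analytic): a two-point boundary value problem for x‴ = f(t, x, x′, x″) can be solved by shooting on the unknown initial value x″(0). The right-hand side f(t, x, x′, x″) = t + x and the boundary conditions x(0) = 0, x′(0) = 0, x(1) = 1 below are illustrative choices only.

```python
# Shooting method for x''' = f(t, x, x', x'') on [0, 1]:
# integrate with classical RK4, bisect on s = x''(0) to hit x(1) = 1.
def f(t, x, x1, x2):
    return t + x          # hypothetical right-hand side

def shoot(s, n=200):
    """Integrate with x(0)=0, x'(0)=0, x''(0)=s; return x(1)."""
    h = 1.0 / n
    y = [0.0, 0.0, s]     # state vector (x, x', x'')
    def F(t, y):
        return [y[1], y[2], f(t, *y)]
    t = 0.0
    for _ in range(n):
        k1 = F(t, y)
        k2 = F(t + h/2, [y[i] + h/2 * k1[i] for i in range(3)])
        k3 = F(t + h/2, [y[i] + h/2 * k2[i] for i in range(3)])
        k4 = F(t + h,   [y[i] + h * k3[i] for i in range(3)])
        y = [y[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
             for i in range(3)]
        t += h
    return y[0]

# For this f, shoot(s) is increasing in s, so bisection applies.
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
s = (lo + hi) / 2         # initial curvature matching x(1) = 1
```

The paper's barrier-strip conditions play the role, roughly, of guaranteeing a priori bounds that make such an s exist; the sketch merely finds it numerically for one concrete f.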

    • Authors:Jiří Močkoř
      First page: 63
      Abstract: Various types of topological and closure operators are significantly used in fuzzy theory and applications. Although they are different operators, in some cases it is possible to transform an operator of one type into another. This in turn makes it possible to transform results relating to an operator of one type into results relating to another operator. In the paper relationships among 15 categories of modifications of topological L-valued operators, including Čech closure or interior L-valued operators, L-fuzzy pretopological and L-fuzzy co-pretopological operators, L-valued fuzzy relations, upper and lower F-transforms and spaces with fuzzy partitions are investigated. The common feature of these categories is that their morphisms are various L-fuzzy relations and not only maps. We prove the existence of 23 functors among these categories, which represent transformation processes of one operator into another operator, and we show how these transformation processes can be mutually combined.
      Citation: Axioms
      PubDate: 2020-06-02
      DOI: 10.3390/axioms9020063
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Giovanni Calvaruso
      First page: 64
      Abstract: We study and solve the Ricci soliton equation for an arbitrary locally conformally flat Siklos metric, proving that such spacetimes are always Ricci solitons.
      Citation: Axioms
      PubDate: 2020-06-08
      DOI: 10.3390/axioms9020064
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:R. Leelavathi, G. Suresh Kumar, Ravi P. Agarwal, Chao Wang, M.S.N. Murty
      First page: 65
      Abstract: This paper mainly deals with introducing and studying the properties of generalized nabla differentiability for fuzzy functions on time scales via Hukuhara difference. Further, we obtain embedding results on Eⁿ for generalized nabla differentiable fuzzy functions. Finally, we prove a fundamental theorem of a nabla integral calculus for fuzzy functions on time scales under generalized nabla differentiability. The obtained results are illustrated with suitable examples.
      Citation: Axioms
      PubDate: 2020-06-08
      DOI: 10.3390/axioms9020065
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Sergey V. Ludkowski
      First page: 66
      Abstract: In this article, the structure of topological metagroups was investigated. Relations between topological and algebraic properties of metagroups were scrutinized. A uniform continuity of functions on them was studied. Smashed products of topological metagroups were investigated.
      Citation: Axioms
      PubDate: 2020-06-14
      DOI: 10.3390/axioms9020066
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Dariusz Surowik
      First page: 67
      Abstract: The article discusses minimal temporal logic systems built on the basis of classical logic as well as intuitionistic logic. The constructions of these systems are discussed as well as their basic properties. The Kt system was discussed as the minimal temporal logic system built based on classical logic, while the IKt system and its modification were discussed as the minimal temporal logic systems built based on intuitionistic logic.
      Citation: Axioms
      PubDate: 2020-06-16
      DOI: 10.3390/axioms9020067
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Tursun K. Yuldashev, Bakhtiyor J. Kadirkulov
      First page: 68
      Abstract: In this paper, we consider a boundary value problem for a nonlinear partial differential equation of mixed type with Hilfer operator of fractional integro-differentiation in a positive rectangular domain and with spectral parameter in a negative rectangular domain. With respect to the first variable, this equation is a nonlinear fractional differential equation on the positive part of the segment under consideration and a second-order nonlinear differential equation with spectral parameter on the negative part of this segment. Using the Fourier series method, the solutions of nonlinear boundary value problems are constructed in the form of a Fourier series. Theorems on the existence and uniqueness of the classical solution of the problem are proved for regular values of the spectral parameter. For irregular values of the spectral parameter, an infinite number of solutions of the mixed equation in the form of a Fourier series are constructed.
      Citation: Axioms
      PubDate: 2020-06-17
      DOI: 10.3390/axioms9020068
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Paulo Guzman, Luciano Lugo, Juan Nápoles Valdés, Miguel Vivas-Cortez
      First page: 69
      Abstract: In this paper, we present a general definition of a generalized integral operator which contains as particular cases, many of the well-known, fractional and integer order integrals.
      Citation: Axioms
      PubDate: 2020-06-20
      DOI: 10.3390/axioms9020069
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Bashir Ahmad, Najla Alghamdi, Ahmed Alsaedi, Sotiris K. Ntouyas
      First page: 70
      Abstract: In this paper, we discuss the existence and uniqueness of solutions for a new class of multi-point and integral boundary value problems of multi-term fractional differential equations by using standard fixed point theorems. We also demonstrate the application of the obtained results with the aid of examples.
      Citation: Axioms
      PubDate: 2020-06-24
      DOI: 10.3390/axioms9020070
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Maxim Khlopov, Biplab Paik, Saibal Ray
      First page: 71
      Abstract: Primordial black holes (PBHs) are a sensitive probe of the physics and cosmology of the very early Universe. The observable effect of their existence depends on the PBH mass. Mini PBHs evaporate and do not survive to the present time, leaving only the background effect of the products of their evaporation, while PBHs evaporating now can be new exotic sources of energetic particles and gamma rays in the modern Universe. Here we revisit the history of evolution of mini PBHs. We follow the aspects associated with growth versus evaporation rate of “a mini PBH being trapped inside an intense local cosmological matter inhomogeneity”. We show that the existence of a baryon-accretion-forbidden black hole regime enables constraints on mini PBHs with mass M ≤ 5.5 × 10¹³ g. On the other hand, we propose a mechanism of delay of evaporation of the primordial population of PBHs in the primordial mass range 5.5 × 10¹³ g ≤ M ≤ 5.1 × 10¹⁴ g. It can make their evaporation the main contributor to the γ-ray flux distribution in the current Universe. At the final stage of evaporation these PBHs can be a source of ultrahigh energy cosmic rays and gamma radiation, providing a challenging probe of their existence for the LHAASO experiment.
      Citation: Axioms
      PubDate: 2020-06-25
      DOI: 10.3390/axioms9020071
      Issue No:Vol. 9, No. 2 (2020)

    • Authors:Mikhail Tkachenko
      First page: 23
      Abstract: We study the factorization properties of continuous homomorphisms defined on a (dense) submonoid S of a Tychonoff product D = ∏_{i ∈ I} D_i of topological or even topologized monoids. In a number of different situations, we establish that every continuous homomorphism f : S → K to a topological monoid (or group) K depends on at most finitely many coordinates. For example, this is the case if S is a subgroup of D and K is a first countable left topological group without small subgroups (i.e., K is an NSS group). A stronger conclusion is valid if S is a finitely retractable submonoid of D and K is a regular quasitopological NSS group of a countable pseudocharacter. In this case, every continuous homomorphism f of S to K has a finite type, which means that f admits a continuous factorization through a finite subproduct of D. A similar conclusion is obtained for continuous homomorphisms of submonoids (or subgroups) of products of topological monoids to Lie groups. Furthermore, we formulate a number of open problems intended to delimit the validity of our results.
      Citation: Axioms
      PubDate: 2020-02-18
      DOI: 10.3390/axioms9010023
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Leonid Shaikhet
      First page: 24
      Abstract: A well-known mathematical model of rumor spreading, described by a system of four nonlinear differential equations and very popular in research, is considered. It is supposed that the considered model is influenced by stochastic perturbations that are of white noise type and are proportional to the deviation of the system state from its equilibrium point. Sufficient conditions of stability in probability for each of the five equilibria of the considered model are obtained by virtue of the Routh–Hurwitz criterion and the method of linear matrix inequalities (LMIs). The obtained results are illustrated by numerical analysis of appropriate LMIs and numerical simulations of solutions of the considered system of stochastic differential equations. The research method can also be used in other applications for similar nonlinear models with order of nonlinearity higher than one.
      Citation: Axioms
      PubDate: 2020-02-18
      DOI: 10.3390/axioms9010024
      Issue No:Vol. 9, No. 1 (2020)
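A hedged illustration of the Routh–Hurwitz step mentioned in the abstract (the coefficients below are hypothetical, not the paper's): for a monic cubic characteristic polynomial obtained by linearising a model about an equilibrium, the stability conditions reduce to three sign inequalities on the coefficients.

```python
# Routh-Hurwitz criterion for a cubic characteristic polynomial
# s^3 + a1*s^2 + a2*s + a3, as arises when linearising an ODE model
# about an equilibrium point.
def routh_hurwitz_cubic(a1, a2, a3):
    """All roots have negative real part iff a1 > 0, a3 > 0 and
    a1*a2 > a3 (Routh-Hurwitz conditions for a monic cubic)."""
    return a1 > 0 and a3 > 0 and a1 * a2 > a3

# (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6: all roots negative, stable.
assert routh_hurwitz_cubic(6, 11, 6)
# s^3 + s^2 + s + 2 violates a1*a2 > a3: unstable equilibrium.
assert not routh_hurwitz_cubic(1, 1, 2)
```

In the paper this test is applied to each equilibrium of the four-equation rumor model (yielding higher-degree polynomials) and combined with LMIs to handle the stochastic perturbations; the sketch only shows the deterministic cubic case.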

    • Authors:Hans G. Feichtinger
      First page: 25
      Abstract: The Banach Gelfand Triple (S₀, L², S₀′)(ℝᵈ) consists of (S₀(ℝᵈ), ‖·‖_{S₀}), a very specific Segal algebra as the algebra of test functions, the Hilbert space (L²(ℝᵈ), ‖·‖₂), and the dual space S₀′(ℝᵈ), whose elements are also called “mild distributions”. Together they provide a universal tool for Fourier Analysis in its many manifestations. It is indispensable for a proper formulation of Gabor Analysis, but also useful for a distributional description of the classical (generalized) Fourier transform (with Plancherel's Theorem and the Fourier Inversion Theorem as core statements) or the foundations of Harmonic Analysis, as it is not difficult to formulate this theory in the context of locally compact Abelian (LCA) groups. A new approach presented recently allows one to introduce (S₀(ℝᵈ), ‖·‖_{S₀}), and hence (S₀′(ℝᵈ), ‖·‖_{S₀′}), the space of “mild distributions”, without the use of the Lebesgue integral or the theory of tempered distributions. The present notes describe an alternative, even more elementary approach to the same objects, based on the idea of completion (in an appropriate sense). By drawing the analogy to the real number system, viewed as infinite decimals, we hope that this approach is also more interesting for engineers. Of course it is very much inspired by the Lighthill approach to the theory of tempered distributions. The main topic of this article is thus an outline of the sequential approach in this concrete setting and the clarification of the fact that it is just another way of describing the Banach Gelfand Triple. The objects of the extended domain for the Short-Time Fourier Transform are (equivalence classes of) so-called mild Cauchy sequences (in short, ECmiCS). Representatives are sequences of bounded, continuous functions, which correspond in a natural way to mild distributions as introduced in earlier papers via duality theory. Our key result shows how standard functional analytic arguments combined with concrete properties of the Segal algebra (S₀(ℝᵈ), ‖·‖_{S₀}) can be used to establish this natural identification.
      Citation: Axioms
      PubDate: 2020-02-24
      DOI: 10.3390/axioms9010025
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Samuel Swire, Elizabeth Pasipanodya, Manuel A. Morales, Enrique Peacock-López
      First page: 26
      Abstract: This paper presents the first five-variable model of mutualism motivated by the interaction between ants and homopterans. In this mutualism, homopterans benefit both directly through increased feeding rates and indirectly through predator protection. Results of our analyses show oscillatory, complex, and chaotic dynamic behavior. In addition, we show that intraspecies interactions are crucial for closing trophic levels and stabilizing the dynamic system from potential “chaotic” behavior.
      Citation: Axioms
      PubDate: 2020-03-02
      DOI: 10.3390/axioms9010026
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Gilberto Rivera, Luis Cisneros, Patricia Sánchez-Solís, Nelson Rangel-Valdez, Jorge Rodas-Osollo
      First page: 27
      Abstract: In this paper, we develop and apply a genetic algorithm to solve surgery scheduling cases in a Mexican public hospital. Here, one of the most challenging issues is to process containers with heterogeneous capacity. Many scheduling problems do not share this restriction; for this reason, we developed and implemented a strategy for processing heterogeneous containers in the genetic algorithm. The final product was named “genetic algorithm for scheduling optimization” (GAfSO). The results of GAfSO were tested with real data from a local hospital. Said hospital assigns different operational times to the operating rooms throughout the week. Also, the computational complexity of GAfSO is analyzed. Results show that GAfSO can assign the corresponding capacity to the operating rooms while optimizing their use.
      Citation: Axioms
      PubDate: 2020-03-04
      DOI: 10.3390/axioms9010027
      Issue No:Vol. 9, No. 1 (2020)
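A minimal, hypothetical sketch of the core idea (the paper's GAfSO is considerably more elaborate, and the durations, capacities, and GA parameters below are illustrative inventions): encode an assignment of surgeries to rooms as a chromosome, let the fitness penalise any room whose heterogeneous capacity is exceeded, and evolve with tournament selection, one-point crossover, mutation, and elitism.

```python
# Toy GA for assigning surgeries to operating rooms whose weekly
# capacities (in minutes) differ from room to room.
import random

random.seed(42)

durations = [120, 90, 60, 150, 45, 200, 75, 30]  # surgery lengths (min)
capacity = [240, 300, 480]                       # heterogeneous room capacities

def fitness(chrom):
    """Reward used capacity; heavily penalise per-room overflow."""
    load = [0] * len(capacity)
    for surgery, room in enumerate(chrom):
        load[room] += durations[surgery]
    used = sum(min(l, c) for l, c in zip(load, capacity))
    overflow = sum(max(l - c, 0) for l, c in zip(load, capacity))
    return used - 10 * overflow

def evolve(pop_size=40, generations=200, p_mut=0.1):
    n = len(durations)
    pop = [[random.randrange(len(capacity)) for _ in range(n)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                          # elitism: keep the best
        while len(nxt) < pop_size:
            # tournament selection of two parents
            a = max(random.sample(pop, 3), key=fitness)
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:          # room-reassignment mutation
                child[random.randrange(n)] = random.randrange(len(capacity))
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best

best = evolve()
```

With elitism the best fitness is non-decreasing across generations, which is the property that makes such a sketch converge toward feasible, high-utilisation room assignments on small instances.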

    • Authors:Xin Sun, Feifei He, Quanlong Wang
      First page: 28
      Abstract: Bit commitment is a cryptographic task in which Alice commits a bit to Bob such that she cannot change the value of the bit after her commitment and Bob cannot learn the value of the bit before Alice opens her commitment. According to the Mayers–Lo–Chau (MLC) no-go theorem, ideal bit commitment is impossible within quantum theory. In the information-theoretic reconstruction of quantum theory, the impossibility of quantum bit commitment is one of the three information-theoretic constraints that characterize quantum theory. In this paper, we first provide a very simple proof of the MLC no-go theorem and its quantitative generalization. Then, we formalize bit commitment in the theory of dagger monoidal categories. We show that in the setting of dagger monoidal categories, the impossibility of bit commitment is equivalent to the unitary equivalence of purification.
      Citation: Axioms
      PubDate: 2020-03-09
      DOI: 10.3390/axioms9010028
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Nita H Shah, Nisha Sheoran, Yash Shah
      First page: 29
      Abstract: According to the World Health Organization (WHO), a population suffering from human immunodeficiency virus (HIV) infection may over time also contract TB infection, which increases the death rate. There is no cure for acquired immunodeficiency syndrome (AIDS) to date, but antiretrovirals (ARVs) can slow down the progression of the disease as well as prevent secondary infections or complications. This is considered as a medication in this paper. This scenario of HIV-TB co-infection is modeled using a system of non-linear differential equations. This model takes HIV-infected individuals as the initial stage. Four equilibrium points are found. The reproduction number R0 is calculated. If R0 > 1, the disease persists uniformly; with reference to the reproduction number, backward bifurcation is computed for the pre-AIDS (latent) stage. Global stability is established for the equilibrium points where there is no pre-AIDS TB class, the point without co-infection, and the endemic point. Numerical simulation is carried out to validate the data. Sensitivity analysis is carried out to determine the importance of model parameters in the disease dynamics.
      Citation: Axioms
      PubDate: 2020-03-11
      DOI: 10.3390/axioms9010029
      Issue No:Vol. 9, No. 1 (2020)
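A toy, hedged illustration of the threshold role of R0 (not the paper's HIV-TB model; all rates below are hypothetical): by the next-generation method, R0 is the spectral radius of K = FV⁻¹, where F collects new-infection rates and V collects transition/removal rates at the disease-free equilibrium. For two uncoupled infection compartments, K is diagonal and R0 = max(βᵢ/γᵢ).

```python
# Next-generation-matrix computation of R0 for a toy two-compartment
# co-infection caricature with uncoupled strains.
def spectral_radius_2x2(m):
    """Largest absolute eigenvalue of a real 2x2 matrix with real
    eigenvalues, via the trace/determinant formula."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = max(tr * tr - 4 * det, 0.0) ** 0.5
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

beta1, gamma1 = 0.30, 0.10   # hypothetical strain-1 transmission/removal
beta2, gamma2 = 0.05, 0.10   # hypothetical strain-2 transmission/removal

# K = F V^{-1} is diagonal here because the strains are uncoupled.
K = [[beta1 / gamma1, 0.0],
     [0.0, beta2 / gamma2]]
R0 = spectral_radius_2x2(K)  # max(3.0, 0.5) = 3.0 > 1: disease persists
```

In the paper's full HIV-TB model the compartments are coupled, so F and V are larger and non-diagonal, but the persistence criterion R0 > 1 is computed from the same spectral-radius construction.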

    • Authors:Nikolaos Kalogeropoulos
      First page: 30
      Abstract: We attempt to provide a mesoscopic treatment of the origin of black hole entropy in (3 + 1)-dimensional spacetimes. We ascribe this entropy to the non-trivial topology of the space-like sections Σ of the horizon. This is not forbidden by topological censorship, since all the known energy inequalities needed to prove the spherical topology of Σ are violated in quantum theory. We choose the systoles of Σ to encode its complexity, which gives rise to the black hole entropy. We present hand-waving reasons why the entropy of the black hole can be considered as a function of the volume entropy of Σ. We focus on the limiting case of Σ having a large genus.
      Citation: Axioms
      PubDate: 2020-03-16
      DOI: 10.3390/axioms9010030
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Ataollah Arabnia Firozjah, Hamidreza Rahimi, Manuel De la Sen, Ghasem Soleimani Rad
      First page: 31
      Abstract: In this work, we define the concept of a generalized c-distance in cone b-metric spaces over a Banach algebra and introduce some of its properties. Then, we prove the existence and uniqueness of fixed points for mappings satisfying weak contractive conditions such as Han–Xu-type contraction and Cho-type contraction with respect to this distance. Our assertions are useful, since we remove the continuity condition on the mapping and the normality condition for the cone. Several examples are given to support the main results.
      Citation: Axioms
      PubDate: 2020-03-21
      DOI: 10.3390/axioms9010031
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Abdeljabbar Talal Yousef, Zabidin Salleh
      First page: 32
      Abstract: In this paper, a subclass of complex-valued harmonic univalent functions defined by a generalized linear operator is introduced. Some interesting results such as coefficient bounds, compactness, and other properties of this class are obtained.
      Citation: Axioms
      PubDate: 2020-03-24
      DOI: 10.3390/axioms9010032
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Grigoris Panotopoulos
      First page: 33
      Abstract: We compute the quasinormal frequencies for scalar perturbations of charged black holes in five-dimensional Einstein-power-Maxwell theory. The impact on the spectrum of the electric charge of the black holes, of the angular degree, of the overtone number, and of the mass of the test scalar field is investigated in detail. The quasinormal spectra in the eikonal limit are computed as well for several different space-time dimensionalities.
      Citation: Axioms
      PubDate: 2020-03-24
      DOI: 10.3390/axioms9010033
      Issue No:Vol. 9, No. 1 (2020)

    • Authors:Juan Carlos Ferrando, Salvador López-Alfonso, Manuel López-Pellicer
      First page: 34
      Abstract: We call a subset M of an algebra of sets A a Grothendieck set for the Banach space ba(A) of bounded finitely additive scalar-valued measures on A, equipped with the variation norm, if each sequence (μₙ)ₙ₌₁^∞ in ba(A) which is pointwise convergent on M is weakly convergent in ba(A); i.e., if there is μ ∈ ba(A) such that μₙ(A) → μ(A) for every A ∈ M, then μₙ → μ weakly in ba(A). A subset M of an algebra of sets A is called a Nikodým set for ba(A) if each sequence (μₙ)ₙ₌₁^∞ in ba(A) which is pointwise bounded on M is bounded in ba(A). We prove that if Σ is a σ-algebra of subsets of a set Ω which is covered by an increasing sequence {Σₙ : n ∈ ℕ} of subsets of Σ, there exists p ∈ ℕ such that Σₚ is a Grothendieck set for ba(Σ). This statement is the exact counterpart for Grothendieck sets of a classic result of Valdivia asserting that if a σ-algebra Σ is covered by an increasing sequence {Σₙ : n ∈ ℕ} of subsets, there is p ∈ ℕ such that Σₚ is a Nikodým set for ba(Σ). This also refines the Grothendieck result stating that for each σ-algebra Σ the Banach space ℓ∞(Σ) is a Grothendieck space. Some applications to classic Banach space theory are given.
      Citation: Axioms
      PubDate: 2020-03-24
      DOI: 10.3390/axioms9010034
      Issue No:Vol. 9, No. 1 (2020)
