

SATURDAY, MAY 21, 2022


THE CLOSEST WE HAVE TO A THEORY OF EVERYTHING


[This is a transcript of the video embedded below. Some of the explanations may
not make sense without the animations in the video.]


In English they talk about a “Theory of Everything”. In German we talk about the
“Weltformel”, the world-equation. I’ve always disliked the German expression.
That's because an equation in and of itself doesn't tell you anything. Take for example the equation x = y. That may well be the world-equation; the question is just what x is and what y is. However, in physics we do have an equation that's pretty damn close to a "world-equation". It's remarkably simple, looks like this, and it's called the principle of least action. But what's S? And what's this squiggle? That's what we'll talk about today.

The principle of least action is an example of optimization where the solution
you are looking for is “optimal” in some quantifiable way. Optimization
principles are everywhere. For example, equilibrium economics optimizes the
distribution of resources, at least that’s the idea. Natural selection optimizes
the survival of offspring. If you shift around on your couch until you’re
comfortable you are optimizing your comfort. What these examples have in common
is that the optimization requires trial and error. The optimization we see in
physics is different. It seems that nature doesn’t need trial and error. What
happens is optimal right away, without trying out different options. And we can
quantify just in which way it’s optimal.

I’ll start with a super simple example. Suppose a lonely rock flies through
outer space, far away from any stars or planets, so there are no forces acting
on the rock, no air friction, no gravity, nothing. Let’s say you know the rock
goes through point A at a time we’ll call t_A and later through point B at time
t_B. What path did the rock take to get from A to B?

Well, if no force is acting on the rock it must travel in a straight line with constant velocity, and there is only one straight line connecting the two points, and only one constant velocity that fits the given travel time. It's easy to describe this particular path between the two points – it's the shortest possible path. So the path which the rock takes is optimal in that it's the shortest.

This is also the case for rays of light that bounce off a mirror. Suppose you know the ray goes from A to B and want to know which path it takes. You find the mirror image of point B behind the mirror, draw the shortest path from A to that image, and then reflect the part of the segment that lies behind the mirror back to the front, which doesn't change the length of the path. The result is that the angle of incidence equals the angle of reflection, which you probably remember from middle school.

This “principle of the shortest path” goes back to the Greek mathematician Hero
of Alexandria in the first century, so not exactly cutting edge science, and it
doesn’t work for refraction in a medium, like for example water, because the
angle at which a ray of light travels changes when it enters the medium. This
means using the length to quantify how “optimal” a path is can’t be quite right.

In 1657, Pierre de Fermat figured out that in both cases the path which the ray of light takes from A to B is the one that requires the least amount of time. If there's no change of medium, then the speed of light doesn't change, and taking the least time means the same as taking the shortest path. So reflection works as before.

But if you have a change of medium, then the speed of light changes too. Let us
use the previous example with a tank of water, and let us call speed of light in
air c_1, and the speed of light in water c_2.

We already know that in either medium the light ray has to take a straight line, because that's the fastest way to get from one point to another at constant speed. But you don't know what the best point is for the ray to enter the water so that the time to get from A to B is the shortest.

But that's pretty straightforward to calculate. We give names to these distances and calculate the length of each segment of the path as a function of the point where the ray enters the water. Then we divide the length of each segment by the speed of light in that medium and add the two up to get the total travel time.

Now we want to know which is the smallest possible time if we change the point
where the ray enters the medium. So we treat this time as a function of x and
calculate where it has a minimum, so where the first derivative with respect to
x vanishes.

The result you get is this. And then you remember that those ratios with square
roots here are the sines of the angles. Et voila, Fermat may have said, this is
the correct law of refraction. This is known as the principle of least time, or
as Fermat’s principle, and it works for both reflection and refraction.
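
If you want to see Fermat's principle at work numerically, here is a minimal sketch in Python. The geometry (heights h1 and h2, horizontal separation d) and the speeds are made-up illustrative values, and the minimization is a simple grid search over the entry point; the only point is that the minimum of the travel time reproduces Snell's law.

from math import sqrt, sin, atan

# Illustrative setup: A sits a height h1 above the water surface, B a depth h2
# below it, separated horizontally by d. The entry point x is the free parameter.
c1 = 3.00e8           # speed of light in air, roughly c (m/s)
c2 = 2.25e8           # speed of light in water, roughly c/1.33 (m/s)
h1, h2, d = 1.0, 1.0, 2.0

def travel_time(x):
    """Travel time from A to B if the ray enters the water at position x."""
    path_air = sqrt(h1**2 + x**2)
    path_water = sqrt(h2**2 + (d - x)**2)
    return path_air / c1 + path_water / c2

# Brute-force search for the entry point that minimizes the travel time.
xs = [i * d / 100000 for i in range(100001)]
x_best = min(xs, key=travel_time)

# Check Snell's law: sin(theta_1)/sin(theta_2) should equal c1/c2 (about 1.33).
theta_1 = atan(x_best / h1)
theta_2 = atan((d - x_best) / h2)
print(sin(theta_1) / sin(theta_2), c1 / c2)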

Let us pause here for a moment and appreciate how odd this is. The ray of light takes the path that requires the least amount of time. But how does the light know it will enter a medium before it gets there, so that it can pick the right place to change direction? It seems like the light needs to know something about the future. Crazy.

It gets crazier. Let us go back to the rock, but now we do something a little more interesting, namely throw the rock in a gravitational field. For simplicity let's say the gravitational potential energy is just proportional to the height, which it is to good precision near the surface of the earth. Again I tell you the particle goes from point A at time t_A to point B at time t_B. In this case the principle of least time doesn't give the right result.

But in the early 18th century, the French mathematician Maupertuis figured out
that the path which the rock takes is still optimal in some other sense. It’s
just that we have to calculate something a little more difficult. We have to
take the kinetic energy of the particle, subtract the potential energy and
integrate this over the path of the particle.

This expression, the time-integral over the kinetic minus potential energy is
the “action” of the particle. I have no idea why it’s called that way, and even
less do I know why it’s usually abbreviated S, but that’s how it is. This action
is the S in the equation that I showed at the very beginning.
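
In symbols, and just for reference, the standard textbook form of this action for a single particle of mass m moving in a potential V is

    S[x] = \int_{t_A}^{t_B} \Big( \tfrac{1}{2} m \dot{x}(t)^2 - V(x(t)) \Big) \, dt ,

and the principle of least action is the statement that the actual path makes S stationary, usually a minimum.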

The thing is now that the rock always takes the path for which the action has
the smallest possible value. You see, to keep this integral small you can either
try to make the kinetic energy small, which means keeping the velocity small, or
you make the potential energy large, because that enters with a minus.

But remember you have to get from A to B in a fixed time. If you make the
potential energy large, this means the particle has to go high up, but then it
has a longer path to cover so the velocity needs to be high and that means the
kinetic energy is high. If on the other hand the kinetic energy is low, then the
potential energy doesn’t subtract much. So if you want to minimize the action
you have to balance both against each other. Keep the kinetic energy small but
make the potential energy large.

The path that minimizes the action turns out to be a parabola, as you probably already knew, but again note how weird this is. It's not that the rock actually tries all possible paths. It just gets on its way and takes the best one on the first try, as if it knew what was coming before it got there.
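
Here is a minimal numerical sketch of that statement, under illustrative assumptions: the path is discretized into a finite number of heights at fixed times, the discretized action (kinetic minus potential, summed over time steps) is handed to a generic optimizer, and the resulting path is compared to the parabola. The endpoint values, the number of grid points, and the use of scipy are all just choices made for the illustration.

import numpy as np
from scipy.optimize import minimize

m, g = 1.0, 9.81            # mass (kg) and gravitational acceleration (m/s^2)
t_A, t_B = 0.0, 1.0         # fixed start and end times (s)
y_A, y_B = 0.0, 0.0         # fixed start and end heights (m)
N = 50                      # number of interior path points
t = np.linspace(t_A, t_B, N + 2)
dt = t[1] - t[0]

def action(y_interior):
    """Discretized S: sum over segments of (kinetic - potential) * dt."""
    y = np.concatenate(([y_A], y_interior, [y_B]))
    v = np.diff(y) / dt                          # velocity on each segment
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (y[:-1] + y[1:])   # midpoint height per segment
    return np.sum((kinetic - potential) * dt)

# Start from the straight line between the endpoints and minimize the action.
y0 = np.linspace(y_A, y_B, N + 2)[1:-1]
result = minimize(action, y0)

# The path of least action should match the parabola y(t) = (g/2) t (t_B - t).
y_parabola = 0.5 * g * t[1:-1] * (t_B - t[1:-1])
print(np.max(np.abs(result.x - y_parabola)))     # close to zero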

What's this squiggle in the principle of least action? Well, if we want to calculate which path is the optimal path, we do this similarly to how we calculate the optimum of a curve. At the optimum of a curve, the first derivative with respect to the variable of the function vanishes. To find the optimal path, we instead take the derivative of the action with respect to the path and again ask where it vanishes. And this is what the squiggle means. It's a sloppy way to say: take the derivative with respect to the path. And that has to vanish, which means the same as that the action is optimal, and it is usually a minimum, hence the principle of least action.

Okay, you may say but you don’t care all that much about paths of rocks.
Alright, but here’s the thing. If we leave aside quantum mechanics for a moment,
there’s an action for everything. For point particles and rocks and arrows and
that stuff, the action is the integral over the kinetic energy minus potential
energy.

But there is also an action that gives you electrodynamics. And there’s an
action that gives you general relativity. In each of these cases, if you ask
what the system must do to give you the least action, then that’s what actually
happens in nature. You can also get the principle of least time and of the
shortest path back out of the least action in special cases.

And yes, the principle of least action *really* uses an integral into the future. How do we explain that?

Well. It turns out that there is another way to express the principle of least
action. One can mathematically show that the path which minimizes the action is
that path which fulfils a set of differential equations which are called the
Euler-Lagrange Equations.

For example, the Euler-Lagrange Equations of the rock example just give you Newton's second law. The Euler-Lagrange Equations for electrodynamics are Maxwell's equations, the Euler-Lagrange Equations for General Relativity are Einstein's field equations. And in these equations, you don't need to know anything about the future. So you can make this future dependence go away.
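
For reference, the Euler-Lagrange Equations in their standard textbook form read

    \frac{d}{dt} \frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 ,

and for the rock's Lagrangian L = \tfrac{1}{2} m \dot{x}^2 - V(x) this reduces to m \ddot{x} = -\partial V / \partial x, which is just Newton's second law with the force given by the slope of the potential.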

What's with quantum mechanics? In quantum mechanics, the principle of least action works somewhat differently. In this case a particle doesn't just take one optimal path. It actually takes all paths. Each of these paths has its own action. It's not only that the particle takes all paths, it also goes to all possible endpoints. But if you eventually measure the particle, the wave-function "collapses", and the particle is only in one point. This means that these paths really only tell you the probability for the particle to go one way or another. You calculate the probability for the particle to go to one point by summing over all paths that go there.

This interpretation of quantum mechanics was introduced by Richard Feynman and
is therefore now called the Feynman path integral. What happens with the strange
dependence on the future in the Feynman path integral? Well, technically it’s
there in the mathematics. But to do the calculation you don’t need to know what
happens in the future, because the particle goes to all points anyway.

Except, hmm, it doesn’t. In reality it goes to only one point. So maybe the
reason we need the measurement postulate is that we don’t take this dependence
on the future which we have in the path integral seriously enough.

Posted by Sabine Hossenfelder at 8:00 AM No comments: Labels: Physics, Video





THE SUPERDETERMINED WORKSHOP FINALLY TOOK PLACE


In case you’re still following this blog, I think I owe you an apology for the
silence. I keep thinking I’ll get back to posting more than the video scripts
but there just isn’t enough time in the day. 


Still, I’m just back from Bonn, where our workshop on Superdeterminism and
Retrocausality finally took place. And since I told you how this story started
three years ago I thought I’d tell you today how it went.

Superdeterminism and Retrocausality are approaches to physics beyond quantum
mechanics, at least that’s how I think about it – and that already brings us to
the problem: we don’t have an agreed-upon definition for these terms. Everyone
is using them in a slightly different way and it’s causing a lot of confusion. 


So one of the purposes of the workshop was to see if we can bring clarity into
the nomenclature. The other reason was to bring in experimentalists, so that the
more math-minded among us could get a sense of what tests are technologically
feasible.

I did the same thing 15 years ago with the phenomenology of quantum gravity, on
which I organized a series of conferences (if you’ve followed this blog for a
really long time you’ll remember). This worked out beautifully – the field of
quantum gravity phenomenology is in much better condition today than it was 20
years ago.

It isn’t only that I think we’ll quite possibly see experimental confirmation
(or falsification!) of quantum gravity in the next decade or two, because I
thought that’d be possible all along. Much more important is that the
realization that it’s possible to test quantum gravity (without building a
Milky-Way sized particle collider) is slowly sinking into the minds of the
community, so something is actually happening.

But, characteristically, the moment things started moving I lost interest in the
whole quantum gravity thing and moved on to attack the measurement problem in
quantum mechanics. I have a lot of weaknesses, but lack of ambition isn’t one of
them.

The workshop was originally scheduled to take place in Cambridge in May 2020. We picked Cambridge because my one co-organizer, Huw Price, was located there, the other one, Tim Palmer, is in Oxford, and both places collect a decent number of quantum foundations people. We had the room reserved, had the catering sorted out, and had begun to book hotels. Then COVID happened and we had to cancel everything at the last minute. We tentatively postponed the meeting to late 2020, but that didn't happen either.

Huw went to Australia, and by the time the pandemic was tapering out, he’d moved
on to Bonn. We moved the workshop with him to Bonn, more specifically to a place
called the International Center for Philosophy. Then we started all over again.

We didn’t want to turn this workshop into an online event because that’d have
defeated the purpose. There are few people working on superdeterminism and
retrocausality and we wanted them to have a chance to get to personally know
each other. Luckily our sponsor, the Franklin Fetzer Fund, was extremely
supportive even though we had to postpone the workshop twice and put up with
some cancellation fees.

Of course the pandemic isn’t quite over and several people still have travel
troubles. In particular, it turned out there’s a nest of retrocausalists in
Australia and they were more or less stuck there. Traveling from China is also
difficult at the moment. And we had a participant affiliated with a Russian
university who had difficulties traveling for yet another reason. The world is
in many ways a different place now than it was 2 years ago.

One positive thing that's come out of the pandemic though is that it's become much easier to set up zoom links and live streams and people are more used to it. So while we didn't have remote talks, we did have people participating from overseas, from Australia, China, and Canada. It worked reasonably well, leaving aside the usual hiccups: people partly couldn't see or hear, the zoom event expired when it shouldn't have, and so on.

I have organized a lot of workshops and conferences and I have attended even
more of them. This meeting was special in a way I didn’t anticipate. Many of the
people who are working on superdeterminism and retrocausality have for decades
been met with a mix of incredulity, ridicule, and insults. In fact, you might
have seen this play out with your own eyes in the comment sections of this and
other blogs. For many of us, me included, this was the first time we had an
audience who took our work seriously.

All of this talk about superdeterminism and new physics beyond quantum mechanics
may turn out to be complete rubbish of course. But at least at present I think
it’s the most promising route to make progress in the foundations of physics.
The reason is quite simple: If it’s right, then new physics should appear in a
parameter range that we can experimentally access from two sides, by making
measuring devices smaller, and by bringing larger objects into quantum states.
And by extrapolating the current technological developments, we'll get there
soon enough anyway. The challenge is now to figure out what to look for when the
data come in.

The talks from the workshop were recorded. I will post a link when they appear
online. We’re hoping to produce a kind of white paper that lays out the
terminology that we can refer to in the future. And I am working on a new paper
in which I try to better explain why I think that either superdeterminism or
retrocausality is almost certainly correct. So this isn’t the end of the story,
it’s just the beginning. Stay tuned. 

Posted by Sabine Hossenfelder at 2:01 AM No comments: Labels: Quantum
foundations





FRIDAY, MAY 13, 2022


CAN WE MAKE A BLACK HOLE? AND IF WE COULD, WHAT COULD WE DO WITH IT?


[This is a transcript of the video embedded below. Some of the explanations may
not make sense without the animations in the video.]




Wouldn’t it be cool to have a little black hole in your office? You know, maybe
as a trash bin. Or to move around the furniture. Or just as a kind of nerdy
gimmick. Why can we not make black holes? Or can we? If we could, what could we
do with them? And what’s a black hole laser? That’s what we’ll talk about today.

Everything has a gravitational pull, the sun and earth but also you and I and
every single molecule. You might think that it’s the mass of the object that
determines how strong the gravitational pull is, but this isn’t quite correct.

If you remember Newton’s gravitational law, then, sure, a higher mass means a
higher gravitational pull. But a smaller radius also means a higher
gravitational pull. So, if you hold the mass fixed and compress an object into a
smaller and smaller radius, then the gravitational pull gets stronger.
Eventually, it becomes so strong that not even light can escape. You’ve made a
black hole.

This happens when the mass is compressed inside a radius known as the
Schwarzschild-radius. Every object has a Schwarzschild radius, and you can
calculate it from the mass. For the things around us the Schwarzschild-radius is
much much smaller than the actual radius.

For example, the actual radius of earth is about 6000 kilometers, but the
Schwarzschild-radius is only about 9 millimeters. Your actual radius is maybe
something like a meter, but your Schwarzschild radius is about 10 to the minus
24 meters, that’s about a billion times smaller than a proton.

And the Schwarzschild radius of an atom is about 10 to the minus 53 meters,
that’s even smaller than the Planck length which is widely regarded to be the
smallest possible length, though I personally think this is nonsense, but that’s
a different story.
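
Here is a quick sketch that checks these orders of magnitude from the formula for the Schwarzschild radius, r_s = 2GM/c^2; the masses below are rough round numbers picked for illustration.

# Schwarzschild radius r_s = 2 G M / c^2 for a few illustrative masses.
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8        # speed of light (m/s)

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(5.97e24))   # Earth: about 8.9e-3 m, i.e. ~9 mm
print(schwarzschild_radius(1.67e-27))  # hydrogen atom: about 2.5e-54 m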

So the reason we can't just make a black hole is that the Schwarzschild radius of stuff we can handle is tiny, and it would take a lot of energy to compress matter sufficiently. It happens out there in the universe because if you have really huge amounts of matter with little internal pressure, like burned out stars, then gravity compresses it for you. But we can't do this ourselves down here on earth. It's basically the same problem as making nuclear fusion work, just many orders of magnitude more difficult.

But wait. Einstein said that mass is really a type of energy, and energy also has a gravitational pull. Yes, that guy again. Doesn't this mean that, if we want to create a black hole, we can just speed up particles to really high velocity, so that they have a high energy, and then bang them into each other? For example, hmm, with a really big collider.

Indeed, we could do this. But even the biggest collider we have built so far,
which is currently the Large Hadron Collider at CERN, is nowhere near reaching
the required energy to make a black hole. Let’s just put in the numbers.

In the collisions at the LHC we can reach energies of about 10 TeV, which corresponds to a Schwarzschild radius of about 10 to the minus 50 meters. But the region in which the LHC compresses this energy is more like 10 to the minus 19 meters. We're far, far away from making a black hole.
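
Using the same formula as in the earlier sketch, converting 10 TeV to a mass via E = mc^2 gives the quoted order of magnitude; a quick standalone check:

G = 6.674e-11                         # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8                           # speed of light (m/s)
eV = 1.602e-19                        # joules per electron volt
m_equiv = 10e12 * eV / c**2           # mass equivalent of 10 TeV (kg)
print(2 * G * m_equiv / c**2)         # about 2.6e-50 m, i.e. of order 1e-50 m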

So why were people so worried 10 years ago that the LHC might create a black
hole? This is only possible if gravity doesn’t work the way Einstein said. If
gravity for whatever reason would be much stronger on short distances than
Einstein’s theory predicts, then it’d become much easier to make black holes.
And 10 years ago the idea that gravity could indeed get stronger on very short
distances was popular for a while. But there’s no reason to think this is
actually correct and, as you’ve noticed, the LHC didn’t produce any black holes.

Alright, so far it doesn't sound like you'll get your black hole trash can. But what if we built a much bigger collider? Yes, well, with current technology it'd have to have a diameter about the size of the Milky Way. It's not going to happen. Is there something else we can do?

We could try to focus a lot of lasers on a point. If we used the world’s
currently most powerful lasers and focused them on an area about 1 nanometer
wide, we’d need about 10 to the 37 of those lasers. It’s not strictly speaking
impossible, but clearly it’s not going to happen any time soon.  

Ok, good, but what if we could make a black hole? What could we do with it?
Well, surprise, there’s a couple of problems. Black holes have a reputation for
sucking stuff in, but actually if they’re small, the problem is the opposite.
They throw stuff out. That stuff is Hawking radiation. 

Stephen Hawking discovered in the early 1970s that all black holes emit
radiation due to quantum effects, so they lose mass and evaporate. The smaller
the black holes, the hotter, and the faster they evaporate. A black hole with a
mass of about 100 kilograms would entirely evaporate in less than a nanosecond.
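
A back-of-the-envelope sketch of those two statements, using the standard photon-only formulas T = ħc³/(8πGMk_B) for the Hawking temperature and t ≈ 5120πG²M³/(ħc⁴) for the evaporation time; with all particle species included the real evaporation is even faster, so "less than a nanosecond" only gets shorter.

from math import pi

hbar = 1.055e-34   # reduced Planck constant (J s)
c = 2.998e8        # speed of light (m/s)
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
k_B = 1.381e-23    # Boltzmann constant (J/K)

def hawking_temperature(M):
    """Hawking temperature in kelvin; note that it goes as 1/M."""
    return hbar * c**3 / (8 * pi * G * M * k_B)

def evaporation_time(M):
    """Photon-only evaporation time in seconds (an upper bound)."""
    return 5120 * pi * G**2 * M**3 / (hbar * c**4)

print(hawking_temperature(100.0))  # ~1e21 K for a 100 kg black hole
print(evaporation_time(100.0))     # ~8e-11 s, well under a nanosecond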

Now "evaporation" sounds rather innocent and might make you think of a puddle turning into water vapor. But for a black hole it's far from innocent. If the black hole's temperature is high, the radiation is composed of all elementary particles, photons, electrons, quarks, and so on. It's really unhealthy. And a small black hole converts energy into a lot of those particles very quickly. This means a small black hole is basically a bomb. So it wouldn't quite work out the way it looks in the Simpsons clip. Rather than eating up the city it'd blast it apart.

But if you managed to make a black hole with a mass of about a million tons, it would live a few years, so that'd make more sense. Hawking suggested surrounding such black holes with mirrors and using them to generate power. It'd be very climate friendly, too. Louis Crane suggested putting such a medium-sized black hole at the focus of a half mirror and using its radiation to propel a spaceship.

A slight problem with this is that you can't touch black holes, so there's nothing to hold them with. A black hole isn't really anything, it's just strongly curved space. They can be electrically charged, but since they radiate they'll shed their electric charge quickly, and then they are neutral again and electric fields won't hold them. So there are some engineering challenges that remain to be solved.

What if we don’t make a black hole but just use one that’s out there? Are those
good for anything? The astrophysical black holes which we know exist are very
heavy. This means their Hawking temperature is very small, so small indeed that
we can’t measure it, as I just explained in a recent video. But if we could
reach such a black hole it might be useful for something else.

Roger Penrose already pointed out in the early 1970s that it’s possible to
extract energy from a big, spinning black hole by throwing an object just past
it. This slows down the black hole by a tiny bit, but speeds up the object
you’ve thrown. So energy is conserved in total, but you get something out of it.
It’s a little like a swing-by that’s used in space-flight to speed up space
missions by using a path that goes by near a planet.

And that too can be used to build a bomb… This was pointed out in 1972 in a letter to Nature by Press and Teukolsky. They said, look, we'll take the black hole, surround it with mirrors, and then we send in a laser beam, just past the black hole. That gets bent around and comes back with a somewhat higher energy, like Penrose said. But then it bounces off the mirror, goes around the black hole again, gains a little more energy, and so on. This exponentially increases the energy in the laser light until the whole thing blasts apart.

Ok, so now that we’ve talked about blowing things up with bombs that we can’t
actually build, let us talk about something that we can actually build, which is
called an analogue black hole. The word “analogue” refers to “analogy” and not
to the opposite of digital. Analogue black holes are simulations of black holes
in fluids or solids where you can “trap” some kind of radiation.

In some cases, what you trap are sound waves in a fluid, rather than light. I
should add here that “sound waves” in physics don’t necessarily have something
to do with what you can hear. They are just periodic density changes, like the
sound you can hear, but not necessarily something your ears can detect.

You can trap sound waves in a similar way to how a black hole traps light. This can happen if a fluid flows faster than the speed of sound in that fluid. You see, in this case there's some region from within which the sound waves can't escape.

Those fluids aren't really black holes of course, they don't actually trap light. But they affect sound very much like real black holes affect light. If you want to observe Hawking radiation in such fluids, they need to have quantum properties, so in practice one uses superfluids. Another way to create a black hole analogue is with solids in which the speed of light changes from one place to another.

And those analogue black holes can be used to amplify radiation too. It works a little differently than the amplifications we already discussed because one needs two horizons, but the outcome is pretty much the same: you send in radiation with some energy, and get out radiation with more energy. Of course the total energy is conserved; you take it from the background field, which plays the role of the black hole. The radiation which you amplify isn't necessarily light, as I said it could be sound waves, but it's an "amplified stimulated emission", which is why this is called a black hole laser.

Black hole lasers aren’t just a theoretical speculation. It’s reasonably well
confirmed that analogue black holes actually act much like real black holes and
do indeed emit Hawking radiation. And there have been claims that black hole
lasing has been observed as well. It has remained somewhat controversial exactly
what the experiment measured, but either way it shows that black hole lasers are
within experimental reach. They’re basically a new method to amplify radiation.
This isn’t going to result in new technology in the near future, but it serves
to show that speculations about what we could do with black holes aren’t as far
removed from reality as you may have thought.


Posted by Sabine Hossenfelder at 8:00 PM No comments: Labels: Astrophysics,
Particle Physics, Physics, Video




SATURDAY, MAY 07, 2022


HOW BAD IS DIESEL?


[This is a transcript of the video embedded below. Some of the explanations may
not make sense without the animations in the video.]




I need a new car, and in my case “new” really means “used”. I can’t afford one
of the shiny new electric ones, so it’s either gasoline or diesel. But in recent
years we’ve seen a lot of bad headlines about diesel. Why do diesel engines have
such a bad reputation? How much does diesel exhaust affect our health really?
And what’s the car industry doing about it? That’s what we will talk about
today.

In September 2015, news broke about the Volkswagen emissions scandal, sometimes referred to as Dieselgate. It turned out Volkswagen had equipped cars with a special setting for emission tests, so that they would create less pollution during the test than on the road. Much of the world seems to have been shocked that the allegedly accurate and efficient Germans could possibly have done such a thing. I wasn't really surprised. Let me tell you why.

My first car was a little red Ford Fiesta near the end of its life. For the emissions test I used to take it to a cheap repair shop in the outskirts of a suburb of a suburb. There was no train station and really nothing else nearby, so I'd usually just wait there. One day I saw the guy from the shop fumbling around on the engine before the emissions test and asked him what he was doing. Oh, he said, he was just turning down the engine so it would pass the test. But with that setting the car wouldn't drive properly, so he'd turn it up again later.

Well, I thought, that probably wasn't the point of the emissions test. But I didn't have money for a better car. When I heard the news about the Volkswagen scandal, it made total sense to me. Of course the ever-efficient Germans would eventually automate this little extra setting for the emissions test.

But why is diesel in particular so controversial? Diesel and gasoline engines
are similar in that they’re both internal combustion engines. In these engines,
fuel is ignited which moves a piston, so it converts chemical energy into
mechanical energy.

The major difference between diesel and gasoline is the way these explosions
happen. In a gasoline engine, the fuel is mixed with air, compressed by pistons
and ignited by sparks from spark plugs. In a diesel engine, the air is
compressed first which heats it up. Then the fuel is injected into the hot air
and ignites. 

One advantage of diesel engines is that they don’t need a constant ignition
spark. You just have to get them going once and then they’ll keep on running.
Another advantage is that the energy efficiency is about thirty percent higher
than that of gasoline engines. They also have lower carbon dioxide emissions per
kilometer. For this reason, they were long considered environmentally
preferable.

The disadvantage of diesel engines is that the hotter and more compressed gas produces more nitrogen oxides and more particulates. And those are a health hazard.

Nitrogen oxides are combinations of one nitrogen and one or several oxygen atoms. The most prevalent ones in diesel exhaust are nitric oxide (NO) and nitrogen dioxide (NO2). When those molecules are hit by sunlight they can also split off an oxygen atom, which then creates ozone by joining an O2 in the air. Many studies have shown that breathing in ozone or nitrogen oxides irritates airways and worsens respiratory illness, especially asthma.

It's difficult to find exact numbers for comparing nitrogen oxide emissions of diesel with gasoline because they depend strongly on the car make and model, road conditions, how long the car has been driving, and so on.

A road test on 149 diesel and gasoline cars manufactured from 2013 to 2016 found
that Nitrogen oxide emissions from diesel cars are about a factor ten higher
than those of gasoline cars.

This is nicely summarized in this figure, where you can see why this discussion
is so heated. Diesel has on average lower carbon-dioxide emission but higher
emissions of nitrogen oxides, gasoline cars the other way round. However, you
also see that there are huge differences between the cars. You can totally find
diesel engines that are lower in both emissions than some gasoline cars. Also
note the two hybrid cars which are low on both emissions.

The other issue with diesel emissions is the particulates, basically tiny grains. Particulates are classified by their size, usually abbreviated with PM for 'particulate matter' and then a number which tells you their maximal size in micrometers. For example, PM2.5 stands for particulates of size 2.5 micrometers or smaller.

This classification is somewhat confusing because technically PM 10 includes
PM2.5. But it makes sense if you know that regulations put bounds on the total
amount of particulates in a certain class in terms of weight, and most of the
weight in some size classification comes from the largest particles.

So a PM10 limit will for all practical purposes just affect the largest of those
particles. To reduce the smaller ones, you then add another limit for, say
PM2.5.

Diesel particulates are made of soot and ash from incomplete burning of the fuel, but also of abrasion from the engine parts, which includes metals, sulfates, and silicates. Diesel engines generate particulates with a total mass of up to 100 times that of similar-sized petrol engines.

What these particulates do depends strongly on their size. PM10 particles tend
to settle to the ground by gravity in a matter of hours whereas PM0.1 can stay
in the atmosphere for weeks and are then mostly removed by precipitation. The
numbers strongly depend on weather conditions.

When one talks about the amount of particulate matter in diesel exhaust one has to be very careful about exactly how one quantifies it. Most of the *mass* of particulate matter in diesel exhaust is in the range of about a tenth of a micrometer. But most of the particles are about a factor of ten smaller in size. It's just that since they're so much smaller they don't carry much total mass.

This figure (p 157) shows the typical distribution of particulate matter in diesel exhaust. The brown dotted line is the distribution of mass. As you can see it peaks somewhat above a tenth of a micrometer, which is where PM 0.1 begins. For comparison, that's a hundred to a thousand times smaller than pollen. The blue line is the distribution of the number of particles.

As you can see it peaks at a much smaller size, about 10 nanometers. That’s
roughly the same size as viruses, so these particulates are really, really tiny,
you can’t see them by eye. The green curve shows yet something else, it’s the
surface of those particles. The surface is important because it determines how
much the particles can interact with living tissue.  

The distinction between mass, surface, and amounts of particulate matter may
seem like nitpicking but it’s really important because regulations are based on
them.
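
To make the distinction concrete, here is a small illustrative sketch (not data from the figure): it assumes the number of particles follows a lognormal distribution in diameter and shows that weighting the same particles by surface (proportional to d^2) or by mass (proportional to d^3) shifts the peak to much larger sizes. The median diameter and geometric standard deviation are made-up values chosen to roughly mimic the situation described above.

import numpy as np

d = np.logspace(-3, 1, 2000)        # particle diameters in micrometers
d_median, sigma_g = 0.01, 2.4       # assumed median (10 nm) and geometric std dev

# Unnormalized lognormal number distribution (per logarithmic size interval).
number = np.exp(-(np.log(d / d_median))**2 / (2 * np.log(sigma_g)**2))
surface = number * d**2             # surface-weighted distribution
mass = number * d**3                # mass-weighted distribution

for name, w in [("number", number), ("surface", surface), ("mass", mass)]:
    print(name, "distribution peaks near", round(d[np.argmax(w)], 3), "micrometers")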

What do we know about the health impacts of particulates? The WHO has classified
airborne particulates as a Group 1 carcinogen. That they’re in group 1 means
that the causal relation has been established. But the damage that those
particles can do depends strongly on their size. Roughly speaking, the smaller
they are, the more easily they can enter the body and the more damage they can
do.

PM10 can get into the lower part of the respiratory system, PM 2.5 and smaller can enter the blood through the lungs, and from there it can reach pretty much every organ.

The body does have some defense mechanisms. First there's the obvious, like coughing and sneezing, but once the stuff is in the lower lungs it can stay there for months, and if you breathe in new particulates all the time, the lungs don't have a chance to clear out. In other organs, the immune system tries to attack the particles, but the most prevalent element in these particulates is carbon, and that is biopersistent, which means the particles just sit there and accumulate in the tissue.

Here's a photo of such particulates that have accumulated in bronchial tissue (Fig 2). It isn't just that having dirt accumulate in your organs isn't good news, the particulates can also carry toxic compounds on their surfaces. According to the WHO, PM 2.5 exposure has been linked to an increased risk of heart attacks, strokes, respiratory disease, and premature death [Source (3)].

One key study was published in 2007 by researchers from several American
institutions. They followed over 65,000 postmenopausal American women who had no
history of diagnosed cardiovascular disease.

They found that a 10 microgram per cubic meter increase in PM 2.5 was associated with a 24 percent higher risk of experiencing a first cardiovascular event (at 95% CL), and a 76 percent higher risk of death from cardiovascular disease, also at 95% CL. These results were already adjusted to remove known risk factors, such as those stemming from age, household income, pre-existing conditions, and so on.

A 2013 study published in The Lancet followed over 300,000 people from nine European countries for more than a decade. They found that a 5 microgram per cubic meter increase in PM 2.5 was correlated with an 18% increased risk of developing lung cancer. Again those results are already adjusted to take into account otherwise known risk factors. The PM exposure adds on top of that.

There’ve been lots of other studies claiming correlations between exposure to
particulate matter and all kinds of diseases, though not all of them have great
statistics. One even claimed they found a correlation between exposure to
particulate pollution and decreasing intelligence, which explains it all,
really.

Okay, so far we have seen that diesel exhaust really isn’t healthy. Well, glad
we talked about it, but that doesn’t really help me to decide what to do about
my car. Let’s then look at what the regulations are and what the car industry
has been doing about it.

The World Health Organization has put out guideline values for PM10 and PM2.5, both an annual mean and a daily mean, but as you see in this table the actual regulations in the EU are considerably higher. In the US people are even more relaxed about air pollution. Australia has some of the strictest air pollution standards, but even those are above what the WHO recommends.

If you want to put these numbers in perspective, you can look up the air quality
at your own location on the website iqair.com that’ll tell you the current PM
2.5 concentration. If you live in a major city chances are you’ll find the level
frequently exceeds the WHO recommendation.

Of course the reason for this is not just diesel exhaust. In fact, if you look at this recently published map of global air pollution levels, you'll see that some of the most polluted places are tiny villages in southern Chile and Patagonia. The reason is not that they love diesel so much down there, but that almost everybody heats their house and cooks with firewood.

Indeed, more than half of PM2.5 pollution comes from fuel combustion in industry and households, while road transport accounts for merely about 11 percent. But more than half of the road traffic contribution to particulate matter comes from abrasion, not from exhaust. The additional contribution of diesel exhaust to the total pollution is therefore in the single-digit percent range. Though you have to keep in mind that these are average values; the percentages can be very different in specific locations. These numbers are for the European Union but they are probably similar in the United States and the UK.

And of the fraction coming from diesel, only some share comes from passenger cars, the rest is trucks, which are almost exclusively diesel. Just how the breakdown between trucks and diesel passenger cars looks depends strongly on location.

Nevertheless, diesel exhaust is a significant contribution to air pollution, especially in cities where traffic is dense and air flow is low. This is why many countries have passed regulations to force car manufacturers to make diesel cleaner.

Europeans have regularly updated their emission standards since 1992. The
standards are called Euro 1, Euro 2, and so on, with the current one being Euro
6. The Euro 7 standard is expected for 2025. The idea is that only cars with
certain standards are allowed into cities, though each city picks its own
standard.

For example, London currently uses Euro 6, Brussels 5, and in Paris the rules
change every couple of months and depend on the time of the day and just paying
the fee may be less painful than figuring out what you’re supposed to do.

Basically these European standards limit the emissions of carbon dioxide,
nitrogen oxides, and particulates, and some other things. (Table) The industry
is getting increasingly better at adapting to these restrictions. As a
consequence, new diesel cars pollute considerably less than those from one or
two decades ago.

One of the most popular ways to make diesel cleaner is filtering the exhaust before it is released into the air. A common type of filter is the cordierite wall-flow filter, which you see in this image. They are very efficient and relatively inexpensive. These filters remove particles of size 100 nanometers and up.

The ones approved by the Environmental Protection Agency in the USA filter out at least 85 percent of particulates, though some filter out percentages in the upper 90s. When the filter is "full" it gets burned clean by the engine itself. Remember that most of the particulates get created by incomplete combustion in the first place, so you can in principle burn them again.

However, a consequence of this is that some of the particulates simply become too small to be caught in the filter and eventually escape. Another downside is that some filters result in an increase of nitrogen oxide emissions when the filter is burned. Still, the filters do take out a significant fraction of the particulates.

Another measure to reduce pollution is exhaust gas recirculation. This isn’t
only used in diesel cars but also in gasoline cars and it works by recirculating
a portion of the exhaust gas back to the engine cylinders. This dilutes the
oxygen in the incoming air stream and brings down the peak temperature. Since
nitrogen oxides are mostly produced at higher temperature, this has the effect
of reducing their fraction in the exhaust. But this recirculation has the
downside that with the drop of combustion temperature the car drives less
efficiently.

These technologies have been around for decades, but since emission regulations have become more stringent, carmakers have pushed their development and integration forward. This worked so well that in 2017 an international team of researchers published a paper in Science magazine in which they claimed that modern gasoline cars produce more carbonaceous particulate matter than modern filter-equipped diesel cars.

What’s carbonaceous? That’s particles which contain carbon, and those make up
about 50 percent of the particulates in the emissions. So not all of it but a
decent fraction. In the paper they argue that whether gasoline or diesel cars
are more polluting depends on what pollutant you look at, the age of the engine
and whether it carries a filter or a catalytic converter.

I think what we learn from this is that being strict about environmental requirements and regulations seems to work out pretty well for diesel emissions, and the industry has proved capable of putting its engineers to work and finding solutions. Not all is good, but it's getting better.

And this has all been very interesting but hasn’t really helped me make up my
mind about what car to buy. So what should I do? Let me know what you think in
the comments.


Posted by Sabine Hossenfelder at 8:00 AM No comments: Labels: Environment,
Science Policy, Video




SATURDAY, APRIL 30, 2022


DID THE W-BOSON JUST "BREAK THE STANDARD MODEL"?


[This is a transcript of the video embedded below. Some of the explanations may
not make sense without the animations in the video.]


Hey there’s yet another anomaly in particle physics. You have probably seen the
headlines, something with the mass of one of those particles called a W-boson.
And supersymmetry is once again the alleged explanation. How seriously should
you take this? And why are particle physicists constantly talking about
supersymmetry, hasn’t that been ruled out? That’s what we’ll talk about today.

Last time I talked about an anomaly in particle physics was a few months ago, and wouldn't you know it, two weeks later it disappeared. Yes, it disappeared. If you remember, there was something weird going on with neutrino oscillations in an experiment called LSND, then a follow-up experiment called MiniBooNE confirmed this, and then they improved the accuracy of the follow-up experiment and the anomaly was gone. Poof, end of story. No more neutrino anomaly.

You'd think this would've taught me not to get excited about anomalies but, ha, you know me better. Now there's another experimental group that claims to have found an anomaly and of course we have to talk about this. This one actually isn't a new experiment, it's a new analysis of data from an experiment that was discontinued more than 10 years ago, a particle collider called the Tevatron at Fermilab in the United States. It reached collision energies of about a teraelectronvolt, TeV for short, hence the name.

The data were collected from 2002 to 2011 by the collaboration of the CDF
experiment. During that time they measured about 4 million events that contained
a particle called the W-boson.

The W-boson is one of the particles in the standard model; it's one of those that mediate the weak nuclear force. So it's similar to the photon, but it has a mass and it's extremely short-lived. It really only shows up in particle colliders. The value of the mass of the W-boson is related to other parameters in the standard model which have also been measured, so it isn't an independent parameter, it has to fit the others.

The mass of the W-boson has been measured a few times previously, you can see a
summary of those measurements in this figure. On the horizontal axis you have
the mass of the W-boson. The grey line is the expectation if the standard model
is correct. The red dots with the error bars are the results from different
experiments. The one at the bottom is the result from the new analysis.

One thing that jumps out right away is that the mean value of the new measurement isn't so different from earlier data analyses. The striking thing about this new analysis is the small error bar. That the error bar is so small is the reason why this result has such a high statistical significance. They quote a disagreement with the standard model at 6.9 sigma. That's well above the discovery threshold in particle physics, which is often somewhat arbitrarily put at 5 sigma.
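
As a sanity check on that number, here is the arithmetic behind a significance quoted in sigmas: the deviation divided by the combined uncertainty. The mass values and error bars below are the CDF measurement and the standard-model expectation as I recall them; treat them as illustrative rather than authoritative.

from math import sqrt

m_cdf, err_cdf = 80433.5, 9.4   # CDF W-boson mass measurement and error (MeV)
m_sm, err_sm = 80357.0, 6.0     # standard-model expectation and error (MeV)

deviation = m_cdf - m_sm
combined_error = sqrt(err_cdf**2 + err_sm**2)
print(deviation / combined_error)   # roughly 7 standard deviations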

What did they do to get the error bar so small? Well for one thing they have a
lot of data. But they also did a lot of calibration cross-checks with other
measurements, which basically means they know very precisely how to extract the
physical parameters from the raw data, or at least they think they do. Is this
reasonable? Yes. Is it correct? I don’t know. It could be. But in all honesty, I
am very skeptical that this result will hold up. More likely, they have
underestimated the error and their result is actually compatible with the other
measurements.

But if it does hold up, what does it mean? It would mean that the standard model is wrong, because there'd be a measurement that doesn't fit together with the predictions of the theory. Then what? Well, then we'd have to improve the standard model. Theoretical particle physicists have made many suggestions for how to do that; the most popular one has for a long time been supersymmetry. It's also one of the possible explanations for the new anomaly that the authors of the paper discuss.

What is supersymmetry? Supersymmetry isn't a theory, it's a property of a class of models. And that class of models is very large. What these models all have in common is that they introduce a new partner particle for each particle in the standard model. And then there are usually some more new particles. So, in a nutshell, it's a lot more particles.

What the predictions of a supersymmetric model are depends strongly on the
masses of those new particles and how they decay and interact. In practice this
means whatever anomaly you measure, you can probably find some supersymmetric
model that will “explain” it. I am scare quoting “explain” because if you can
explain everything you really explain nothing.

This is why supersymmetry is mentioned in one breath with every anomaly that you hear of: because you can use it to explain pretty much everything if you only try hard enough. For example, you may remember the 4.2 sigma deviation from the standard model in the magnetic moment of the muon. Could it be supersymmetry? Sure. Or what's with this B-meson anomaly that lingers around at 3 sigma and makes headlines once or twice a year. Could that be supersymmetry? Sure.

Do we in any of these cases actually *know* that it has to be supersymmetry? No. There are many other models you could fumble together that would also fit the bill. In fact, the new CDF paper about the mass of the W-boson also mentions a few other possible explanations: additional scalar fields, a second Higgs, dark photons, composite Higgs, and so on.

There’s literally thousands of those models, none of which has any evidence
going in its favor. And immediately after the new results appeared particle
physicists have begun cooking up new “explanations”. Here are just a few
examples of those. By the time this video appears there’ll probably be a few
dozen more.

But wait, you may wonder now, hasn’t the Large Hadron Collider ruled out
supersymmetry? Good point. Before the Large Hadron Collider turned on, particle
physicists claimed that it would either confirm or rule out supersymmetry.
Supersymmetry was allegedly an easy to find signal. If supersymmetric particles
existed, they should have shown up pretty much immediately in the first
collisions. That didn’t happen. What did particle physicists do? Oh suddenly
they claimed that of course this didn’t rule out supersymmetry. It’d just ruled
out certain supersymmetric models. So which version is correct? Did or didn’t
the LHC rule out supersymmetry?

The answer is that the LHC indeed did not rule out supersymmetry, it never
could. As I said, supersymmetry isn’t a theory. It’s a huge class of models that
can be made to fit anything. Those physicists who said otherwise were either
incompetent or lying or both, the rest knew it but kept their mouth shut, and
now they hope you’ll forget about this and give them money for a bigger
collider.

As you can probably tell, I am very not amused that the particle physics community never came clean on that. They never admitted to having made false statements, accidentally or deliberately, and they never gave us any reason to think it wouldn't happen again. I quite simply don't trust them.

Didn’t supersymmetry have something to do with string theory? Yes, indeed. So
what does this all mean for string theory? The brief answer is: nothing
whatsoever. String theory requires supersymmetry, but the opposite is not true,
supersymmetry doesn’t necessarily require string theory. So even in the unlikely
event that we would find evidence for supersymmetry, this wouldn’t tell us
whether string theory is correct. It would certainly boost confidence in string
theory but ultimately wouldn’t help much because string theorists never managed
to get the standard model out of their theory, despite the occasional claim to
the contrary.

I’m afraid all of this sounds rather negative. Well. There’s a reason I left
particle physics. Particle physics has degenerated into a paper production
enterprise that is of virtually no relevance for societal progress or for
progress in any other discipline of science. The only reason we still hear so
much about it is that a lot of funding goes into it and so a lot of people still
work on it, most of them don’t like me. But the disciplines where the
foundations of physics currently make progress are cosmology and astrophysics,
and everything quantum, quantum information, quantum computing, quantum
metrology, and so on, which is why that’s what I mostly talk about these days.

The LHC has just been upgraded and started operating again a few days ago. In
the coming years, they will collect a lot more data than they have so far and
this could lead to new discoveries. But when the headlines come in, keep in mind
that the more data you collect, the more anomalies you’ll see, so it’s almost
guaranteed they will see a lot of bumps at low significance “that could break
the standard model” but then go away. It’s possible of course that one of those
is the real thing, but to borrow a German idiom, don’t eat the headlines as hot
as they’re cooked.

Posted by Sabine Hossenfelder at 8:00 AM No comments: Labels: Particle Physics,
Physics, Video




SATURDAY, APRIL 23, 2022


I STOPPED WORKING ON BLACK HOLE INFORMATION LOSS. HERE’S WHY.


[This is a transcript of the video embedded below. Some of the explanations may
not make sense without the animations in the video.]




It occurred to me the other day that I've never told you what I did before I ended up in the basement in front of a green screen. So today I want to tell you why I, like many other physicists, was fascinated by the black hole paradox that Stephen Hawking discovered before I was even born. And why I, like many other physicists, tried to solve it. But also why I, in the end, unlike many other physicists, decided that it's a waste of time. What's the black hole information paradox? Has it been solved, and if not, will it ever be solved? What, if anything, is new about those recent headlines? That's what we'll talk about today.

First things first, what's the black hole information loss paradox? Imagine you have a book and you throw it into a black hole. The book disappears behind the horizon, the black hole emits some gravitational waves, and then you have a black hole with a somewhat higher mass. And that's it.

This is what Einstein's theory of general relativity says. Yes, that guy again. In Einstein's theory of general relativity black holes are extremely simple. They are completely described by only three properties: their mass, angular momentum, and electric charge. This is called the "no hair" theorem. Black holes are bald and featureless and you can mathematically prove it.

But that doesn’t fit together with quantum mechanics. In quantum mechanics,
everything that happens is reversible so long as you don’t make a measurement.
This doesn’t mean processes look the same forward and backward in time, this
would be called time-reversal “invariance”. It merely means that if you start
with some initial state and wait for it to develop into a final state, then you
can tell from the final state what the initial state was. In this sense,
information cannot get lost. And this time-reversibility is a mathematical
property of quantum mechanics which is experimentally extremely well confirmed.

However, in practice, reversing a process is possible only in really small
systems. Processes in large systems become for all practical purposes
irreversible extremely quickly. If you burn your book, for example, then for all
practical purposes the information in it was destroyed. However, in principle,
if we could only measure the properties of the smoke and ashes well enough, we
could calculate what the letters in the book once were.

But when you throw the book into a black hole that’s different. You throw it in,
the black hole settles into its hairless state, and the only difference between
the initial and final state is the total mass. The process seems irreversible.
There just isn’t enough information in the hairless black hole to tell what was
in the book. The black hole doesn’t fit together with quantum mechanics. And
note that making a measurement isn’t necessary to arrive at this conclusion.

You may remember that I said the black hole emits some gravitational waves. And
those indeed contain some information, but so long as general relativity is
correct, they don’t contain enough information to encode everything that’s in
the book.

Physicists have known about this puzzle since the 1960s or so, but initially they didn't take it seriously. At the time, they just said, well, it's only when we look at the black hole from the outside that we don't know how to reverse this process. Maybe the missing information is inside. And we don't really know what's inside a black hole because Einstein's theory breaks down there. So maybe it's not a problem after all.

But then along came Stephen Hawking. Hawking showed in the early 1970s that
actually black holes don’t just sit there forever. They emit radiation, which is
now called Hawking radiation. This radiation is thermal which means it’s random
except for its temperature, and the temperature is inversely proportional to the
mass of the black hole.

This means two things. First, there’s no new information which comes out in the
Hawking radiation. And second, as the black hole radiates, its mass shrinks
because E=mc^2 and energy is conserved, and that means the black hole
temperature increases as it evaporates. As a consequence, the evaporation of a
black hole speeds up. Eventually the black hole is gone. All you have left is
this thermal radiation which contains no information.
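
For reference, the standard results for a non-rotating, uncharged black hole of
mass M are

    T_H = \frac{\hbar c^3}{8 \pi G k_B M}, \qquad
    \tau_{\text{evap}} \sim \frac{5120 \pi G^2 M^3}{\hbar c^4},

so the temperature doubles every time the mass halves, and the total lifetime
grows with the cube of the initial mass. The lifetime estimate assumes the black
hole radiates only massless particles, but it gives the right order of
magnitude.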

And now you have a real problem. Because you can no longer say that maybe the
information is inside the black hole. If a black hole forms, for example, in the
collapse of a star, then after it’s evaporated, all the information about that
initial star, and everything that fell into the black hole later, is completely
gone. And that’s inconsistent with quantum mechanics.

This is the black hole information loss paradox. You take quantum mechanics and
general relativity, combine them, and the result doesn’t fit together with
quantum mechanics.

There are many different ways physicists have tried to solve this problem and
every couple of months you see yet another headline claiming that it’s been
solved. Here is the most recent iteration of this cycle, which is about a paper
by Steve Hsu and Xavier Calmet. The authors claim that the information does get
out. Not in gravitational waves, but in gravitons, which are the quanta of the
gravitational field. Those are not included in Hawking’s original calculation.
These gravitons add variety to black holes, so now they have hair. This hair can
store information and release it with the radiation.

This is a possibility that I thought about at some point myself, as I am sure
many others in the field have too. I eventually came to the conclusion that it
doesn’t work. So I am somewhat skeptical that their proposal actually solves the
problem. But maybe I was wrong and they are right. Gerard ’t Hooft, by the way,
also thinks the information comes out in gravitons, though in a different way
than Hsu and Calmet. So this is not an outlandish idea.

I went through different solutions to the black hole information paradox in an
earlier video and will not repeat them all here, but I want to instead give you
a general idea for what is happening. In brief, the issue is that there are many
possible solutions.

Schematically, the way that the black hole information loss paradox comes about
is that you take Einstein’s general relativity and combine it with quantum
mechanics. Each has its set of assumptions. If you combine them, you have to
make some further assumptions about how you do this. The black hole information
paradox then states that all those assumptions together are inconsistent. This
means you can take some of them, combine them, and obtain a statement which
contradicts another assumption. A simple example of what I mean by
“inconsistent”: the assumption x < 0 is inconsistent with the assumption x > 1.

If you want to resolve an inconsistency in a set of assumptions, you can remove
some of the assumptions. If you remove sufficiently many, the inconsistency will
eventually vanish. But then the predictions of your theory become ambiguous
because you miss details on how to do calculations. So you have to put in new
assumptions to replace the ones that you have thrown out. And then you show that
this new set of assumptions is no longer inconsistent. This is what physicists
mean when they say they “solved the problem”.

But. There are many different ways to resolve an inconsistency because there are
many different assumptions you can throw out. And this means there are many
possible solutions to the problem which are mathematically correct. But only one
of them will be correct in the sense of describing what indeed happens in
nature. Physics isn’t math. Mathematics is a great tool, but in the end you have
to make an actual measurement to see what happens in reality.

And that’s the problem with the black hole information loss paradox. The
temperature of the black holes that we can observe today is way too low to
measure the Hawking radiation. Remember that the larger the black hole, the
smaller its temperature. The temperature of astrophysical black holes is below
the temperature of the CMB. And even if that wasn’t the case, what do you want
to do? Sit around for 100 billion years to catch all the radiation and see if
you can figure out what fell into the black hole? It’s not going to happen.
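
To put a number on “way too low”: plugging one solar mass into the temperature
formula above gives roughly

    T_H(M_\odot) \approx 6 \times 10^{-8}\ \mathrm{K}
    \ll T_{\mathrm{CMB}} \approx 2.7\ \mathrm{K},

and astrophysical black holes have at least a few solar masses, so they are even
colder. Such a black hole currently absorbs more energy from the cosmic
microwave background than it radiates away.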

What’s going to happen with this new solution? Most likely, someone’s going to
find a problem with it, and everyone will continue working on their own
solution. Indeed, there’s a good chance that by the time this video appears this
has already happened. For me, the real paradox is why they keep doing it. I
guess they do it because they have been told so often this is a big problem that
they believe if they solve it they’ll be considered geniuses. But of course
their colleagues will never agree that they solved the problem to begin with. So
in all likelihood, half a year from now you’ll see another headline claiming
that the problem has been solved.

And that’s why I stopped working on the black hole information loss paradox. Not
because it’s unsolvable. But because you can’t solve this problem with
mathematics alone, and experiments are not possible, not now and probably not in
the next 10000 years.

Why am I telling you this? I am not talking about this because I want to change
the mind of my colleagues in physics. They have grown up thinking this is an
important research question and I don’t think they’ll change their mind. But I
want you to know that you can safely ignore headlines about black hole
information loss. You’re not missing anything if you don’t read those articles.
Because no one can tell which solution is correct in the sense that it actually
describes nature, and physicists will not agree on one anyway. Because if they
did, they’d have to stop writing papers about it.


Posted by Sabine Hossenfelder at 8:00 AM No comments: Labels: Astrophysics,
Philosophy, Physics, Video




SATURDAY, APRIL 16, 2022


HOW SERIOUS IS ANTIBIOTIC RESISTANCE?


[This is a transcript of the video embedded below. Some of the explanations may
not make sense without the animations in the video.]


Antibiotics save lives. But more and more bacteria are becoming resistant to
antibiotics. As a result, some infections can simply no longer be treated. Just
a few weeks ago, an international team of scientists led by researchers at the
University of Washington published a report in the Lancet, according to which
antibiotic resistance now kills more than a million people worldwide each year.
And the numbers are rising.

How serious is the situation? What are scientists doing to develop new
antibiotics? Did you know that bacteria are not the most abundant organism on
earth? And what do rotten eggplants have to do with all of that? That’s what we
will talk about today.

First things first, what are antibiotics? Literally the word means “against
life” which doesn’t sound particularly healthy. But “antibiotic” just refers to
any type of substance that kills bacteria (bactericidal) or inhibits their
growth (bacteriostatic). Antibiotics are roughly categorized either as “broad
spectrum”, which target many types of bacteria, or “narrow spectrum” which
target very specific bacteria.

The big challenge for antibiotics is that you want them to work in or on the
body of an infected person, without killing the patient along with the bacteria.
That’s what makes things difficult.

There are various ways antibiotics work, and most of them target some difference
between bacteria and our own cells, so that the antibiotic harms the bacteria
but not the cells.

For example, our cells have membranes, but they don’t have a cell wall, which is
a rigid protective layer that covers the membrane. But bacteria do have cell
walls. So one way that antibiotics work is to destabilize the cell wall.
Penicillin, for example, does that.

Another thing you can do is to prevent bacterial cells from producing certain
enzymes that the bacteria need for replication, or inhibit their synthesis of
folic acid which they need to grow.

As you can see, antibiotics work in a number of entirely different ways. And
each of them can fight some bacteria but not others. You also have to take into
account where the bacterial infection is, because not all antibiotics reach all
parts of the body equally well. This is why you need a prescription for
antibiotics – they have to fit the infection you’re dealing with, otherwise
they’re in the best case useless. In the worst case you may breed yourself a
tough strain that will resist further treatment.

This problem was pointed out already by the Scottish physician Alexander
Fleming, who discovered the first antibiotic, penicillin, in 1928. Penicillin is
still used today, for example to treat scarlet fever. According to some
estimates, it has saved about 200 million lives so far.

But already in 1945, Fleming warned the world of what would happen next, namely
that bacteria would adapt to the antibiotics and learn to survive them. They
become “resistant”. Fleming wrote

> “The greatest possibility of evil in self-medication is the use of too-small
> doses, so that, instead of clearing up infection, the microbes are educated to
> resist penicillin and a host of penicillin-fast organisms is bred out which
> can be passed on to other individuals.”

To some extent antibiotic resistance is unavoidable – it’s just how natural
selection works. But the problem becomes significantly worse if one doesn’t see
an antibiotic treatment through at the full dose, because then bacteria will
develop resistance much faster.

The world didn’t listen to Fleming’s warning. One big reason was that in the
1940s, scientists discovered that antibiotics were good for something else: They
made farm animals grow faster, regardless of whether those animals were ill.

On average, livestock that were fed antibiotic growth promoters grew 3-11%
faster. So farmers began feeding antibiotics to chickens, pigs, and cattle
because that way they would have more meat to sell.

Things were pretty crazy at the time. By the 1950s, the US industry was
“painting” steaks with antibiotics to extend their shelf life. They were washing
spinach with antibiotics. Sometimes they even mixed antibiotics into ground
meat. You could buy antibiotic soap. The stuff leaked everywhere. Studies at the
time found penicillin even in milk, and some people promptly developed an
allergy to it.

It wasn’t until 1971 that the UK banned the use of some antibiotics for animal
farming. But it’s only since 2006 that the use of antibiotics as growth
promoters in animals has been generally forbidden in the European Union. In the
USA, it took until 2017 for a similar ban to come into effect.

Using antibiotics for meat production isn’t the only problem. Another problem is
over-prescription. According to the US Centers for Disease Control and
Prevention, about 30 percent of prescriptions for antibiotics in the USA are
unnecessary or useless, in most cases because they are mistakenly prescribed
against respiratory infections that are caused by viruses, against which
antibiotics do nothing.

A 2018 paper found that the global consumption of antibiotics per person has
increased by 39% from 2000 to 2015 and it’s probably still increasing, though
the increase is largely driven by low and middle income countries which are
catching up. And with that, antibiotic resistance is on the rise.

Already in 2019, the World Health Organization (WHO) declared that antimicrobial
resistance (which includes antibiotic resistance) is currently one of the top 10
global public health threats. They say that “antibiotics are becoming
increasingly ineffective as drug-resistance spreads globally leading to more
difficult to treat infections and death”.

According to the recent study from the Lancet which I mentioned in the
introduction, the number of people who die from treatment-resistant bacterial
infections is currently about 1.27 million per year. That’s about twice as many
as die from malaria. They also estimate that antibiotic resistance indirectly
contributes to as many as 4.95 million deaths each year. The Lancet article also
found that young children below 5 years are at the highest risk.

So the situation is not looking good. What are scientists doing?

First, there are a couple of obvious ideas, like bringing back old antibiotics
that have gone out of use, because bacteria may have lost their resistance to
them, and keeping an eye out for new inspiration in nature. For example, in
2016, a group of researchers from Denmark reported that leaf-cutting ants use
natural antibiotics. The next one you probably guessed: Artificial Intelligence
to the rescue.

Two years ago, researchers from MIT published a paper in the journal Cell in
which they explain how they used deep learning to find new antibiotics. They
first trained their software on 2500 molecules whose antibiotic functions are
known and also taught it to recognize structures that are known to be toxic.

Then they rated 6000 other molecules with scores from 0 to 1 for how likely the
molecules were to make good antibiotics. Among the molecules with high scores
they focused on those whose structure was different from that of the known
antibiotics because they were hoping to find something really new.
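
Schematically, the workflow looks something like the following sketch. This is
not the MIT group’s actual model, just an illustration of the recipe described
above, with made-up fingerprint data standing in for real molecular structures:
train a classifier on labeled molecules, score a candidate library from 0 to 1,
and keep the high scorers that look structurally different from the known
antibiotics.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Stand-ins for molecular fingerprints (bit vectors). In a real pipeline
    # these would be computed from the molecules' structures; here they are
    # random.
    train_fp = rng.integers(0, 2, size=(2500, 128))   # molecules with known activity
    train_labels = rng.integers(0, 2, size=2500)      # 1 = antibiotic activity
    library_fp = rng.integers(0, 2, size=(6000, 128)) # candidate library to score

    # Train a classifier and score each candidate from 0 to 1.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(train_fp, train_labels)
    scores = model.predict_proba(library_fp)[:, 1]

    def tanimoto(a, b):
        """Similarity of two bit vectors: shared bits over total set bits."""
        union = np.sum(a | b)
        return np.sum(a & b) / union if union else 0.0

    # Keep high-scoring candidates that are structurally unlike known actives.
    known_actives = train_fp[train_labels == 1]
    hits = [i for i, s in enumerate(scores)
            if s > 0.9 and max(tanimoto(library_fp[i], k) for k in known_actives) < 0.4]
    print(f"{len(hits)} high-scoring, novel-looking candidates")

With random stand-in data hardly anything will pass both filters, but the
structure of the procedure is the point: the model can only recognize what
resembles its training data, which is exactly the limitation mentioned a few
paragraphs further down.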

They found one molecule that fit the bill: halicin. Halicin is not a new drug,
they just renamed what was previously known under the catchy name c-Jun
N-terminal kinase inhibitor SU3327. They called it halicin after HAL from the
Space Odyssey, I am guessing because their Artificial Intelligence is exploring
a big “chemical space” or otherwise I’m too dumb to get it.

They did an experiment and found that indeed halicin worked against some
multiresistant bacteria, both in a petri dish and in mice. Then they repeated
the process with a much bigger library of more than ten million molecules. They
identified some promising candidates for new antibiotics and are now doing
further tests.

It’s a long way from the petri dish to the market, but this seems really
promising, though it has the usual limitations of artificial intelligence:
software can only learn if there’s something to train on, so this is unlikely to
discover entirely new pathways of knocking out bacteria.

Another avenue that researchers are pursuing is the revival of phage therapy.
Phages are viruses that attack bacteria. They are about 100 times smaller than
bacteria and are the most abundant organism on the planet. There are an
estimated 10 million trillion trillion of them around us, that’s 10^31. And
phages are everywhere: on surfaces, in soil, on our skin, even inside our body.
They enter a bacterium and replicate inside of it until the bacterium bursts and
dies. You can see the potential: breed phages that infect the right bacteria and
you’ve solved the problem.

One great benefit of phages is that they target very specific bacteria so they
spare the beneficial bacteria in our body. The question is, where do you get the
right phage for an infection? The first successful phage treatment was done in
1919, however, the method was never widely adopted because breeding the right
phages is slow and cumbersome and when antibiotics were discovered they were
just vastly more convenient.

However, with antibiotic resistance on the rise, phage treatments are getting
new attention. Researchers now hope that genetic engineering will make it faster
and easier to breed the right phages. The first successful treatment with
genetically modified phages was reported in 2019 in Nature Medicine by a group
of researchers from the United States and the UK. They bred a cocktail of three
phages, one of which they found on a rotting eggplant from South Africa.

The group around Dr. Strathdee at the University of California San Diego hopes
that one day we will have an open source library for genetically engineered
phages that is accessible to everyone, and she’s currently raising funds for
that. Strathdee and her team don’t think that phage therapy will ever replace
antibiotics altogether, but that it will be an important contribution for
particularly hopeless cases.

Another new method to fight bacteria was proposed in 2019 by researchers from
Texas. They have found a way to kill bacteria while they are passive, so while
they are not replicating. This can’t be done with normal antibiotics that
usually target growth or replication. But the researchers have found substances
that open a particularly large channel in the membrane on the surface of the
bacterium. The bacterium then basically leaks out and dies. Another good thing
about this method is that even if it doesn’t kill a bacterium it can make it
easier for antibiotics to enter. They have tested this in a petri dish and seen
good results.

To name one final line of research that scientists are pursuing: Several groups
are looking for new ways to use antimicrobial peptides. Peptides are part of our
innate immune system. They are natural broad spectrum antibiotics and earlier
studies have shown that they’re effective even against bacteria that resist
antibiotics.

Problem is, peptides break down quickly when they come into contact with bodily
fluids, such as blood. But researchers from Italy and Spain have found a way to
make peptides more stable by attaching them to nanoparticles that fight off
certain enzymes which would otherwise break down the peptides. These peptide
nanoparticles can for example be inhaled to treat lung infections. They tested
it successfully in mice and rats and published their results in a 2020 paper.
And just last year, researchers from Sweden have developed a hydrogel that
contains these peptides and that can be put on top of skin wounds.

It is hard to overstate just how dramatically antibiotics have changed our
lives. Typhus, tuberculosis, the plague, cholera, leprosy. These are all
bacterial infections, and before we had antibiotics they regularly killed
people, especially children. During World War I, more people died from bacterial
infections than from the fighting itself.

As you have seen, bacterial resistance is a real problem and it’ll probably get
worse for some more time. But scientists are on the case, and some recent
research looks quite promising.

Posted by Sabine Hossenfelder at 8:00 AM No comments: Labels: Biochemistry,
Science, Science Policy, Video




SATURDAY, APRIL 09, 2022


IS NUCLEAR POWER GREEN?


[This is a transcript of the video embedded below. Some of the explanations may
not make sense without the animations in the video.]




A lot of people have asked me to do a video about nuclear power. But that turned
out to be really difficult. You won’t be surprised to hear that opinions about
nuclear power are extremely polarized and every source seems to have an agenda
to push. Will nuclear power help us save the environment and ourselves, or is it
too dangerous and too expensive? Do thorium reactors or the small modular ones
change the outlook? Is nuclear power green? That’s what we’ll talk about today.

I want to do this video a little differently so you know where I’m coming from.
I’ll first tell you what I thought about nuclear power before I began working on
this video. Then we’ll look at the numbers, and in the end, I’ll tell you if
I’ve changed my mind.

When the accident in Chernobyl happened I was 9 years old. I didn’t know
anything about nuclear power or radioactivity. But I was really scared because I
saw that the adults were scared. We were just told, you can’t see it but it’ll
kill you.

Later, when I understood that this had been an unnecessary scare, I was somewhat
pissed off at adults in general and my teachers in particular. Yes, radioactive
pollution is dangerous, but in contrast to pretty much any other type of
pollution it’s easy to measure. That doesn’t make it go away but at least we
know if it’s there. Today, I worry much more about pollution from the chemical
industry which you won’t find unless you know exactly what you’re looking for
and also have a complete chemistry lab in the basement. And I worry about
climate change.

So, I’ve been in favor of nuclear power as a replacement for fossil fuels since
I was in high school. In 2008, I over-optimistically predicted the return of
nuclear power. Then of course in 2011, the Fukushima accident happened, after
which the German government decided to phase out nuclear power, but continued
digging up coal, buying gas from Russia, and importing nuclear power from
France.

However, in all fairness I haven’t looked at the numbers for more than 20 years.
So that’s what we’ll do next, and then we’ll talk again later.

Fossil fuels presently make up almost two thirds of global electric power
production. Hydropower makes up about 16 percent, and all other renewables
together about 10 percent. Power from nuclear fission makes up the rest, also
about 10 percent.

Nuclear power is “green” in the sense that it doesn’t directly produce carbon
dioxide. I say “directly” because even though the clouds coming out of nuclear
power plants are only water vapor, power plants don’t grow on trees. They have
to be built from something by someone, and the materials, their transport, and
the construction itself have a carbon footprint. But then, so does pretty much
everything else. I mean, even breathing has a carbon footprint. So one really
has to look at those numbers in comparison.

A good comparison comes from the 2014 IPCC report. This table summarizes
several dozen studies with a minimum, maximum, and median value. All the
following numbers are in grams of carbon dioxide per kilowatt-hour, and they are
average values for the entire lifecycle of those technologies, so including the
construction.

For coal, the median that the IPCC quotes is 820, gas is lower at 490, and
solar is roughly a factor of 10 below gas, with about 40. Wind is even better
than solar, with a median of about 11. And the median for nuclear is 12 grams
per kilowatt-hour, so comparable to that of wind. But there is a huge gap to the
maximum value for nuclear, which according to some sources is as high as 110,
roughly two to three times as high as solar.
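
Since numbers like these are easier to compare side by side, here is the same
set of medians as a quick calculation; the values are just the ones quoted
above, in grams of carbon dioxide per kilowatt-hour.

    # IPCC 2014 median lifecycle emissions, as quoted above (g CO2 per kWh).
    medians = {"coal": 820, "gas": 490, "solar": 40, "nuclear": 12, "wind": 11}

    for source, value in sorted(medians.items(), key=lambda kv: -kv[1]):
        ratio = value / medians["nuclear"]
        print(f"{source:8s} {value:4d} g/kWh  (about {ratio:.0f}x the nuclear median)")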

An estimate that’s a little bit higher than even the highest value the IPCC
quotes comes from the World Information Service on Energy, WISE, which is based
in the Netherlands. They calculated that nuclear plants produce 117 grams of
carbon dioxide per kilowatt-hour.

It’s not entirely irrelevant to mention that the mission of WISE is to “fight
nuclear” according to their own website. That doesn’t make their number wrong,
but they clearly have an agenda and may not be the most reliable source. 

But these estimates differ not so much because someone is stupid or lying, at
least not always, but because there are uncertainties in these numbers that
affect the outcome. Those come from things like the quality of uranium
resources, how far they need to be transported, different methods of mining or
fuel production, their technological progress, and so on. In the scientific
literature, the value that is typically used is somewhat higher than the IPCC
median, about 60-70 grams of carbon dioxide per kilowatt-hour. And the numbers
for renewables should also be taken with a grain of salt, because they need to
come with energy storage, which has its own carbon footprint.

I think the message we can take away here is that either way you look at it,
the carbon footprint of nuclear power is dramatically lower than that of fossil
fuels and roughly comparable to some renewables, though exact numbers are hard
to come by.

So that’s one thing nuclear has going in its favor: it has a small carbon
footprint. Another advantage is that compared to wind and solar, it doesn’t
require much space. Nuclear power is therefore also “green” in the sense that it
doesn’t get in the way of forests or agriculture. And yet another advantage is
that it generates power on demand, and not just when the wind blows or the sun
shines.

Let us then talk about what is maybe the biggest disadvantage of nuclear power.
It’s not renewable. The vast majority of nuclear power plants which are
currently in operation work with Uranium 235.

At the moment, we use about 60 thousand tons per year. The world resources are
estimated to be about 8 million tons. This means if we were to increase nuclear
power production by a factor of ten, then within 15 to 20 years uranium mining
would become too expensive to make economic sense.
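
A rough back-of-the-envelope check with the numbers just quoted:

    \frac{8 \times 10^{6}\ \text{t}}{10 \times 6 \times 10^{4}\ \text{t/yr}}
    \approx 13\ \text{years},

which is in the same ballpark as the 15 to 20 years; the exact figure depends on
which deposits still count as economically recoverable as prices rise.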

This was pretty much the conclusion of a paper that was published a few months
ago by a group of researchers from Austria. They estimate that optimistically
nuclear power from uranium-235 would save about 2 percent of global carbon
dioxide emissions by 2040. That’s not nothing, but it isn’t going to fix climate
change – there just isn’t enough uranium on this planet.

The second big problem with nuclear power is that it’s expensive. A medium sized
nuclear power plant currently costs about 5-10 billion US dollars, though large
ones can cost up to 20 billion.

Have a look at this figure from the World Nuclear Energy Status Report 2021
(page 293). It shows what’s called the levelized cost of energy in US dollars
per megawatt-hour, which is basically how much it costs to produce power over
the entire lifetime of some technology, so not just the running costs but also
the construction. As you can see, nuclear is the most expensive. It’s even more
expensive than coal, and at the moment roughly 5 times more expensive than solar
or wind. If the current trend continues, the gap is going to get even wider.
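
For reference, the levelized cost of energy is defined as the lifetime costs
divided by the lifetime electricity output, both discounted to present value:

    \text{LCOE} = \frac{\sum_t (I_t + O_t + F_t)/(1+r)^t}{\sum_t E_t/(1+r)^t},

where I_t, O_t, and F_t are the investment, operating, and fuel costs in year t,
E_t is the electricity produced in that year, and r is the discount rate. One
reason nuclear fares badly on this metric is that most of its cost is capital
that has to be paid up front, years before the plant delivers its first
kilowatt-hour.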


On top of this, insurance for nuclear power plants is mandatory, the premiums
are high, and the plant owners pass those expenses on in the electricity price.
So at the moment nuclear power just doesn’t make a lot of economic sense. Of
course this might change with new technologies, but before we get to those we
have to talk about the biggest problem that nuclear power has: people are afraid
of it.

Accidents in nuclear power plants are a nightmare because radioactive
contamination can make regions uninhabitable for decades, and tragic accidents
like Chernobyl and Fukushima have arguably been bad publicity. However, the data
say that nuclear power has historically been much safer than fossil fuels, it’s
just that the death toll from fossil fuels is less visible.

In 2013, researchers from the NASA Goddard Institute for Space Studies and
Columbia University calculated the fatalities caused by coal, gas, and nuclear,
and summarized their findings in units of deaths per terawatt-hour. They found
that coal kills more than a hundred times more people than nuclear power, the
vast majority by air pollution. They also calculated that since the world began
using nuclear power instead of coal and gas, nuclear power has prevented more
than 1.8 million deaths.

Another study in 2016 found a death rate for nuclear that was even lower, about
a factor 5 less. The authors of this paper also compared the risk of nuclear to
hydro and wind and found that these renewables actually have a slightly higher
death rate, though in terms of economic damage, nuclear is far worse.

I am guessing now you all want to know just how exactly people die from
renewables. Well, since you ask. For wind it’s stuff like “a bus collided with a
truck transporting a turbine tower” or an aircraft crashed into a wind turbine,
or workers falling off the platform of an offshore windfarm. For solar, it’s
accidents in manufacturing sites, electric shocks from improper wiring, or falls
from roofs.

The number for hydropower is dominated by a single accident when a dam broke in
China in 1975. The water flooded several villages and killed more than 170
thousand people.

The Chernobyl accident, in comparison, killed fewer than 40 people directly.
The World Health Organization estimates long-term deaths from cancer as a
consequence of the Chernobyl accident to be 4000-9000. There is a group of
researchers which claims it’s at least a factor of 10 higher, but this claim has
remained highly controversial.

The number of direct fatalities from the Fukushima accident is zero. One worker
died 7 years later from lung cancer, almost certainly a consequence of radiation
exposure. About 500 died from the evacuation, mostly elderly and ill people
whose care was interrupted. And this number is unlikely to change much in the
long run.

According to the WHO, the radiation exposure of the Fukushima accident was low
except for the direct vicinity of the power plant which was evacuated. They do
not expect the cancer risk for the general population to significantly rise. The
tsunami which caused the accident to begin with killed considerably more people,
at least 15 thousand.

I don’t want to trivialize accidents in the nuclear industry, of course they
are tragic. But there’s no doubt that they pale in comparison to fossil fuels,
which cause pollution that, according to some estimates, kills as many as a
million people per year. Also, fun fact: coal contains traces of radioactive
minerals that are released when you burn it. Indeed, radioactivity levels are
typically *higher* near coal plants than near nuclear power plants.

Again, you see, there are some differences in the details but pretty much
everyone who has ever seriously looked at the numbers agrees that nuclear is one
of the safest power sources we know of.

Okay, so we have seen that the two biggest disadvantages of nuclear power are
that it’s not renewable and that it’s expensive. But this is for the
conventional nuclear power plants that use uranium-235, which makes up only 0.7
percent of all the uranium we find on Earth.

Another option is to use fast breeder reactors, which work with the other 99.3
percent of the uranium on Earth, the isotope uranium-238.

A fast breeder transmutes uranium-238 into plutonium-239, which can then be
used as reactor fuel like uranium-235. And this process continues running with
the neutrons that are produced in the reaction itself, so the reactor “breeds”
its own fuel, hence the name.
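
The breeding chain behind that sentence is

    {}^{238}\text{U} + n \;\to\; {}^{239}\text{U}
    \;\xrightarrow{\beta^-}\; {}^{239}\text{Np}
    \;\xrightarrow{\beta^-}\; {}^{239}\text{Pu},

and the plutonium-239 then fissions much like uranium-235, releasing the
neutrons that keep the process going.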


Fast breeders are not new; they have been used here and there since the 1940s.
But they turned out to be expensive, unreliable, and troublesome. The major
problems are that they are cooled with sodium, which is very reactive, and that
they can’t be shut down as quickly as conventional nuclear power plants. To make
a long story short, they didn’t catch on, and I don’t think they ever will.

But technology in the nuclear industry has advanced a lot in the past decades.
The most important innovations are molten salt reactors, thorium reactors, and
small modular reactors.

Molten salt reactors work by mixing the fuel into some type of molten salt. The
big benefit of doing this is that it’s much safer. That’s partly because molten
salt reactors operate at lower pressure, but mostly because the reaction has a
“negative temperature coefficient”. That’s a complicated way of saying that the
energy-production slows down when the reactor overheats, so you don’t get a
runaway effect.
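
In slightly more technical terms, and as a simplified sketch rather than a
statement about any particular design: the reactivity ρ, which determines
whether the chain reaction grows or dies down, depends on temperature roughly as

    \rho(T) \approx \rho_0 + \alpha\,(T - T_0) \quad \text{with} \quad \alpha < 0,

so when the temperature rises, the reactivity drops, the power output falls, and
the reactor settles back down instead of running away.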

Molten salt reactors have their own problems though. The biggest one is that the
molten salt fuel is highly corrosive and quickly degrades the material meant to
contain it.  How much of a problem this is in practice is currently unclear.

Molten salt reactors can be run with a number of different fuels, one of them
being thorium. Thorium is about 4 times more abundant than uranium. However,
fewer resources are known, so in practice this isn’t going to make a big
difference in the short run.

The real advantage is that these reactors can use essentially all of the
thorium, not just a small fraction of it, as is the case with normal uranium
reactors. This means thorium reactors produce more energy from the same amount
of fuel and, as a consequence, thorium could last for thousands of years.
Thorium is also a waste product of the rare-earth mining industry, so trying to
put it to use is a good idea.

However, the problem is still that the technology is expensive. There is
currently only one molten salt thorium reactor in operation, and that’s in
China. It started operating in September 2021.

It’s just a test facility that will generate only 2 megawatts, but if they are
happy with the test, the Chinese have plans for a bigger reactor with 373
megawatts for the next decade, though that is still fairly small for a power
plant. It’ll be very interesting to see what comes out of this.

And the biggest hope of the nuclear industry is currently small modular
reactors. The idea is that instead of building big and expensive power plants,
you build reactors that are small enough to be transported. Mass-producing them
in a factory could bring down the cost dramatically.

A conventional plant typically generates a few gigawatts of electric power. The
small modular reactors are comparable in size to a small house and have a power
output of some tens of megawatts. For comparison, that’s about ten times as much
as a wind turbine on a good day. That they are modular means they are designed
to work together, so one can build up power plants gradually to the desired
capacity.

Several projects for small modular reactors are at an advanced stage in the USA,
Russia, China, Canada, the UK, and South Korea. Most of the current projects use
uranium as fuel, partly in the molten salt design.

But the big question is, will the economics work out in the end? This isn’t at
all clear, because making the reactors smaller may make them cheaper to
manufacture, but they’ll also produce less energy during their lifetime.
Certainly at this early stage, small modular reactors aren’t any cheaper than
big ones.

A cautionary example comes from the American company NuScale. They sit in Utah
and have been in business since 2007. They were planning to build twelve small
reactors of 60 megawatts each by 2027. Except for being small, these are
basically conventional reactors that work with enriched uranium.

Each of those reactors is a big cylinder, about 3 meters in diameter and 20
feet (about 6 meters) tall. Their original cost estimate was about 4.2 billion
dollars. However, last year they announced that they had to revise their
estimate to $6.2 billion and said they’d need three years longer.

In terms of cost per energy, that’s even more expensive than conventional
nuclear power plants. The project is subsidized by the Department of Energy with
1.4 billion dollars, but several funders backed out after the announcement that
the cost had significantly increased.

Ok, so that concludes my rundown of the numbers. Let’s see what we’ve learned.

What speaks in favor of nuclear energy is that it’s climate friendly, has a
small land use, and creates power on demand. What speaks against it is that it’s
expensive and ultimately not renewable. The disadvantages could be alleviated
with new technologies, but it’s unclear whether that will work, and even if it
works, it almost certainly won’t have a significant impact on climate change in
the next 20 years.

It also speaks against nuclear power that people are afraid of it. Even if these
fears are not rational that doesn’t mean they don’t exist. If someone isn’t
comfortable near a nuclear power plant, that affects their quality of life, and
that can’t just be dismissed.

There are two points I didn’t discuss which you may have expected me to mention.
One is nuclear proliferation and the risk posed by nuclear power plants during
war times. This is certainly an important factor, but it’s more political than
scientific, and that would be an entirely different discussion.  

The other point I didn’t mention is nuclear waste. That’s because I think it’s
a red herring which some activist groups are using in an attempt to scare
people. As far as I am concerned, burying the stuff in a safe place solves the
problem just fine. It’s true that there aren’t any final disposal sites at the
moment, but Finland is expected to open one next year and several other
countries will follow. And no, provided adequate safety standards, I wouldn’t
have a problem with a nuclear waste deposit in my vicinity.

So, what did I learn from this? I learned that nuclear power has become
economically even more unappealing than it already was 20 years ago, and it’s
not clear this will ever change. Personally, I would say that this development
can be left to the market. I am not in favor of regulation that makes it even
more difficult for us to reduce carbon emissions; to me this just seems insane.
In all fairness, it looks like nuclear won’t help much, but then again, every
little bit helps.

Having said that, I think part of the reason the topic is so controversial is
that what you think is the best strategy depends on local conditions. There is
no globally “right” decision. If your country has abundant solar and wind power,
it might not make sense to invest in nuclear. Though you might want to keep in
mind that climate change can affect wind and precipitation patterns in the long
run.

If your country is at a high risk of earthquakes, then maybe nuclear power just
poses too high a risk. If on the other hand renewables are unreliable in your
region of the world, you don’t have a lot of space, and basically never see
earthquakes, nuclear power might make a lot of sense.

In the end I am afraid my answer to the question “Is nuclear power green?” is
“It’s complicated.”



Posted by Sabine Hossenfelder at 8:00 AM No comments: Labels: Science and
Society, Science Policy, Video