Tuesday, 27 November 2012

A quick post on Quantum Levitation

Today in physics we were looking at magnetic fields, and our teacher decided he'd show us a video of quantum levitation, but he wasn't really able to explain what was going on, so I thought I'd try and write a blog post on it. If you've not come across it before, it's this: Quantum Levitation. Though I'd seen it before, it just clicked for me that it was something very similar to the Meissner Effect, a phenomenon that I came across over the summer.

The Meissner effect occurs when you take a superconducting material cooled below its critical temperature and place it in a magnetic field. The field induces a current that flows through the material, which in turn creates its own magnetic field. Much like in more familiar examples of induction, the field created by the current acts in the opposite direction to the external field and cancels it out, stopping any magnetic field from passing through the material: the field lines instead pass around it. This can be seen in the diagram to the right, where a superconducting material has been surrounded by small bar magnets to illustrate the field lines. The current flowing around the superconductor varies with the magnetic flux through it, which makes it possible to measure tiny changes in magnetic fields. This property of superconducting materials has many applications, the most well-known of which is the superconducting quantum interference device (or SQUID, for short), one of the most sensitive magnetometers ever built, used for example in magnetoencephalography (MEG) brain scanners.
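A small aside for anyone who likes numbers: the expulsion isn't perfectly abrupt. The field actually dies away exponentially over a tiny distance inside the material, called the London penetration depth. Here's a minimal Python sketch of that falloff, assuming an illustrative penetration depth of 100 nm (the real value varies by material):

```python
import math

# Field inside a superconductor decays as B(x) = B0 * exp(-x / lam),
# where lam is the London penetration depth (assumed ~100 nm here).
B0 = 0.1          # applied field in tesla (illustrative value)
lam = 100e-9      # penetration depth in metres (illustrative value)

for depth_nm in [0, 50, 100, 200, 500]:
    B = B0 * math.exp(-(depth_nm * 1e-9) / lam)
    print(f"depth {depth_nm:>3} nm: B = {B:.4f} T")
```

Half a micrometre in, the field is essentially gone, which is why the field lines have no choice but to pass around the material.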

So, how is this effect used in quantum levitation? Well, because of the Meissner effect, if you place a superconducting material above a magnetic plate, or on a track as you saw in the video, it acts against the magnetic field with its own, and hovers. However, in typical demonstrations of the Meissner effect, the superconductor doesn't stay locked in place as strongly as the one in the video: it wobbles unsteadily around a point. The "locking" (known as flux pinning) is a consequence of both the thinness of the superconductor (in these demonstrations, usually a very thin layer of a ceramic superconductor grown on a sapphire wafer) and the impurities in it: where there are inconsistencies in the structure, the magnetic field can pass through the superconductor in thin tubes of flux, and these hold it steadily in place. When the disc is placed in the field, the person placing it does work to move it into the magnetic field, but as long as it then stays on an equipotential (a path along which the strength of the field doesn't change), very little energy is required to move it around the track.

Sadly, this idea is very hard to put into practice. The effect requires a track of magnets and a superconducting material, which typically has to be cooled to very low temperatures, making its use in something such as transport very expensive and impractical. However, just this year it was reported in the scientific journal Advanced Materials that scientists have observed superconducting characteristics in graphite at room temperature. Though this has yet to be fully confirmed, it's a step towards seeing superconductors at more normal temperatures. If these obstacles can be overcome, it's possible that quantum locking may one day be a practical solution to many problems.


If you're interested in how this was made, here's a more detailed video by (I think) the guy in the first video:
http://youtu.be/VyOtIsnG71U

The images were found on Wikipedia's page on the Meissner effect.

Wednesday, 14 November 2012

Gravitational Potential

Many apologies for the still dreadful quality, and the less than stellar editing. It turns out that Windows Movie Maker is incredibly difficult to use when you've got about 40 separate, unidentifiable clips in the 'resources' section.

Either way, I hope this is still coherent and of use. ~Georgie

P.S. This should have been my October post. I'm sorry.

Wednesday, 17 October 2012

Understanding Mass

For this post I thought I'd take a break from my main quantum posts to take a look at something a bit different: how the mass of a particle can be explained and calculated. There's been a lot of hype in the media recently about the Higgs field and mass, and so I thought I'd do a quick explanation of what I understand of it all.

The first place to start is with the most famous equation in all of physics, Einstein's E=mc². What this equation tells us is that energy and mass are two forms of the same thing, and are proportional to each other: the energy (E) of a particle is the product of its mass (m) and the speed of light squared (c²). So, how does this explain the basic mass of an atom?
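To get a feel for the sort of numbers involved, here's a quick back-of-the-envelope Python calculation (using rounded constants) of the rest energy locked up in a single proton:

```python
# Rest energy of a proton from E = mc^2 (rounded constants).
m_proton = 1.673e-27      # proton mass, kg
c = 2.998e8               # speed of light, m/s

E = m_proton * c**2       # energy in joules
E_MeV = E / 1.602e-13     # convert: 1 MeV = 1.602e-13 J
print(f"E = {E:.3e} J  =  {E_MeV:.0f} MeV")   # about 938 MeV
```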

As you'll know, all atoms are made up of electrons and nucleons (protons and neutrons). As electrons have a negligible mass in comparison to the nucleons, we can say that the mass of an atom is determined by its nucleus, which is probably something you'll have learnt in your GCSE science lessons. The link between E=mc² and the mass of an atom becomes more obvious when you consider what's going on inside the nucleus: you have a bunch of positively charged protons packed in very close together. According to electromagnetism, the similarly charged protons should repel, but instead they're held together by another of the four fundamental forces*, called the Strong Nuclear Force. The SNF does a certain amount of work to overcome the repulsive electromagnetic forces, thus keeping the nucleus together. It's somewhat like pushing two magnets together: you have to put energy in to make the repulsive ends touch. This energy associated with holding the nucleus together contributes to the "rest energy" of an atom, which exists even when the particle is at absolute rest, and given that energy and mass are the same thing, the atom therefore has a "rest mass": the rest energy divided by c².
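To see this energy-mass bookkeeping in a real nucleus, here's a rough sketch using approximate particle masses. (One subtlety worth flagging: the bound nucleus actually weighs slightly less than its separated parts, and that missing mass, times c², is the binding energy.)

```python
# Mass defect of a helium-4 nucleus (approximate masses, in atomic mass
# units u, where 1 u is equivalent to 931.494 MeV of energy).
m_proton  = 1.007276   # u
m_neutron = 1.008665   # u
m_he4     = 4.001506   # u (nuclear mass, i.e. electrons excluded)

delta_m = 2 * m_proton + 2 * m_neutron - m_he4
print(f"mass defect    = {delta_m:.6f} u")
print(f"binding energy = {delta_m * 931.494:.1f} MeV")   # ~28.3 MeV
```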

One place where it's useful to think of mass and energy like this is in a phenomenon called "pair production". If an electromagnetic wave has enough energy, it can spontaneously create a particle and its antimatter counterpart. Using energy-mass equivalence, we can predict which waves will produce which pairs of particles, as the energy of the wave has to be equal to or greater than the combined rest energy of the particle and antiparticle produced.
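For instance, to produce an electron and its antiparticle (a positron), a photon needs at least the pair's combined rest energy. A quick sketch with rounded constants:

```python
# Threshold photon energy and frequency for electron-positron pair
# production: the photon must supply the rest energy of both particles.
m_e = 9.109e-31    # electron mass, kg
c   = 2.998e8      # speed of light, m/s
h   = 6.626e-34    # Planck's constant, J s

E_min = 2 * m_e * c**2     # joules
f_min = E_min / h          # hertz
print(f"E_min = {E_min / 1.602e-13:.3f} MeV")   # ~1.022 MeV
print(f"f_min = {f_min:.2e} Hz (a gamma ray)")
```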

So, where does the Higgs Boson come into all of this? Well, all I've done so far is explain how the mass of more complicated nuclei is calculated, which is all well and good, unless you want to know why the more fundamental particles have any mass at all. The answer to this question was suggested almost fifty years ago by a number of scientists, including Peter Higgs. They suggested that fundamental particles gain energy (and therefore mass) through their interactions with the Higgs field, which permeates all of space. Different particles travelling through the Higgs field interact with it to different degrees, and thus gain different amounts of mass; the Higgs boson is the bosonic (force-carrying) particle of the Higgs field.

So, I'll leave it at that for this post. Sorry for the lack of posts recently, I've been busy with university applications and the like, and with the first term of my A2s. I was inspired to write this post after the events that occurred at a UCL lecture on relativity I went to. In the lecture, someone asked how the rest mass of a particle was calculated, and the lecturer didn't know, so after the lecture I went up to him and explained how I thought you calculated it, and he thought it sounded about right, and yeah.

Thanks for reading as ever,

GM ^^

(*the others being gravity, the electromagnetic force, and the weak nuclear force)

Wednesday, 12 September 2012

Planetary Rings and Volcanic Moons

(This makes most sense if you look at this post first -- http://makingphysicssense.blogspot.co.uk/2012/08/how-tides-work.html )



Apologies for the seemingly random cuts to other shots - my webcam is more than a little useless. Also the spaciness. I've been really spacey recently, this is about the best you'll get.

I think that covers it all? Good. ~Georgie

Tuesday, 11 September 2012

Quantum Tunnelling and the Scanning Tunnelling Microscope


In the last post, we learnt that the uncertainty principle means that for microscopic systems we cannot say for certain exactly what will happen when one thing affects another: all we can say is that there is a certain probability that an outcome will occur. Although this may seem like a step backward from classical physics, where we can say with absolute certainty what will happen, quantum physics is the only accurate model we have for predicting the outcomes of quantum systems. And the probabilities we calculate do translate into real life: technologies built on them can perform in ways that would be impossible under the laws of classical physics, giving them huge advantages over older technologies.

Consider throwing a ball straight up into the air: it rises, reaches a definite apex where its vertical velocity is zero, and then falls back down. Now, imagine that instead of a macroscopic ball we're dealing with a microscopic particle. As we've already discussed, it's impossible to tell the exact position of our particle at any time: so what does this mean for the "flight" of our particle? It means it's impossible to tell when the particle reaches any sort of "apex", be it the natural limit to its movement through space, some sort of barrier, or a gap between materials. More importantly, the uncertainty principle means there's even some probability that the particle will pass beyond that point! This probability decreases the further the particle moves past where it "should" have stopped, and quickly drops towards zero, which is why you're unlikely to see particles passing through metres of concrete, for example. This phenomenon is known as "quantum tunnelling", and it is one of the most useful practical applications of the Uncertainty Principle.
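To get a feel for how fast that probability dies off, here's a rough sketch using the textbook approximation for a rectangular barrier, T ≈ e^(−2κL) with κ = √(2m(V − E))/ħ. The barrier height and particle energy below are made-up illustrative values:

```python
import math

# Approximate tunnelling probability for an electron meeting a barrier.
hbar = 1.055e-34   # reduced Planck constant, J s
m_e  = 9.109e-31   # electron mass, kg
eV   = 1.602e-19   # joules per electronvolt

V = 5.0 * eV       # barrier height (illustrative)
E = 1.0 * eV       # particle energy (illustrative)
kappa = math.sqrt(2 * m_e * (V - E)) / hbar

for L_nm in [0.1, 0.5, 1.0, 2.0]:
    T = math.exp(-2 * kappa * L_nm * 1e-9)
    print(f"barrier {L_nm:>3} nm wide: T ~ {T:.2e}")
```

A tenth of a nanometre of barrier lets a decent fraction of particles through; a couple of nanometres and the probability is already down around 10⁻¹⁸, which is why tunnelling only matters at atomic scales.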

The scanning tunnelling microscope (STM) is the most direct application of quantum tunnelling in technology. It uses the ability of an electron current to tunnel to image the surfaces of materials right down to the individual atoms that make them up. It was invented in 1981 by Gerd Binnig and Heinrich Rohrer at IBM Zurich, and has since become absolutely essential to anyone studying the atomic structure of solid materials: physicists, biologists and nanotechnologists included.

The STM works by passing a very small current through a metal needle which is then moved over the surface of a sample. The needle tapers to a tip just one atom wide. The tip is held around 5 nanometres (0.000000005 m) from the surface, close enough that, mathematically, the wavefunctions of the electrons in the needle and the electrons in the surface overlap. Because the tip is held at a different voltage to the surface of the material, electrons begin to tunnel across the gap, at a rate that falls off exponentially with the distance between tip and surface. This flow of electrons creates a tiny current which can be amplified and then measured. The smaller the distance between the tip and the surface, the more electrons tunnel across the gap and the larger the current. The tunnelling current is so sensitive to tiny changes in distance that even the difference between one atom and the next can be detected. Creating an image with an STM is analogous to taking a rubbing of tree bark: just as ridges in the bark give you darker lines, places where the atoms are closer to the tip give you higher currents. Using a computer and a current amplifier, it's possible to interpret the measured currents and build up an image of the surface of the material.
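That exponential sensitivity is exactly what gives the STM its atomic resolution. A minimal sketch, assuming a typical decay constant of around 10¹⁰ per metre (its true value is set by the work function of the surface):

```python
import math

# Tunnelling current falls off exponentially with tip-sample distance:
# I ~ I0 * exp(-2 * kappa * d).
kappa = 1.0e10   # decay constant, 1/m (assumed typical value)
I0 = 1.0         # reference current, arbitrary units

for d_nm in [0.5, 0.6, 0.7]:
    I = I0 * math.exp(-2 * kappa * d_nm * 1e-9)
    print(f"gap {d_nm} nm: current = {I:.2e} (arb. units)")
```

Each extra 0.1 nm of gap, roughly the height of one atom, cuts the current by a factor of about e² ≈ 7, so atomic-scale bumps stand out clearly in the measured current.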

STMs have helped to revolutionise the way scientists in a variety of fields study microscopic structures. For example, before the invention of the STM, if chemists wanted to learn which chemicals work with which catalysts, they would have to go through trial-and-error experiments, simply trying to fit "keys in locks", as it were. By using an STM, chemists can now image and study the actual active sites of catalysts and other chemicals, and fit them together without going through the trial-and-error process. Furthermore, because an STM is not limited to a vacuum and can function in any atmosphere at a large range of temperatures, it is possible to watch these chemical reactions occur in real time at a molecular level, allowing us to build a much greater understanding of the processes involved. The STM has also been used to study strands of DNA, to help us understand the behaviour of genetic material in more detail, which could lead to the development of new medicines to treat genetic diseases.

However, the STM isn't perfect. It can only provide a view of the first layer or so of a material, as "the experimental 'image' is relatively insensitive to the positions of atoms beyond the first atomic layer". It also requires the material to be an electrical conductor, which limits the range of materials it can be used to scan. On top of that, it requires the sample to be extremely stable and clean, the metal tip to be very sharp, the sample to be isolated from any external vibrations, and sophisticated electronics. These requirements can make using an STM a challenging and expensive process.

Another month, another blog post. I should really do these more regularly. This one was going to be part of my EP (extended project), but I had to scrap it as my question wasn't really suitable for the qualification: there's not enough debate in a discussion of -how- an effect is used. I didn't want to waste all the research I'd done, so I'm going to slowly adapt my dissertation into a number of posts.
Thanks for reading as always, 
GM ^^

Sunday, 12 August 2012

How the tides work

This video ought to explain pretty much everything.

Sorry about how I completely forgot to mention that tidal acceleration affects the entire object, not just whatever water may happen to be on its surface.

Next time I'll talk about Roche limits, and thus why planets have rings and why moons like Io have active volcanoes. I think I quite like planetary science. ~Georgie

Friday, 10 August 2012

A Universe of Uncertainties

Sorry it's been so long since I've put up a post! I've been busy with my extended project, which goes by the title  "How are quantum-mechanical effects and phenomena used in modern technologies?", and so haven't had much time to put together a post. 

So, picking up where I left off: at the end of the last post I wrote (click here if you want to refresh your memory), I introduced one part of the uncertainty principle, "Any determination of the alternative taken by a process capable of following more than one alternative destroys the interference between alternatives", which was illustrated by the experiment explored in that post. However, the uncertainty principle is usually known in a different form, presented to us by Heisenberg back in the 1920s:

Δx · Δp ≥ ħ/2

where Δx ("delta x") is the uncertainty in position, Δp ("delta p") is the uncertainty in momentum, and ħ ("h bar") is Planck's constant (which we met back in my first post) divided by 2π. What this means is that the more certain you are about one of the two variables, the less certain you can be about the other: if you're fairly certain how fast a particle is moving, you can't be sure exactly where it is in space. On a macroscopic scale this inequality doesn't really make much difference to us (as Planck's constant is of the order of 10⁻³⁴), but it makes a huge difference at a microscopic scale, and it leads to a number of weird phenomena. So, why is this the case?
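To see just how lopsided the scales are, here's a quick comparison (rounded constants, illustrative masses and position uncertainties) of the minimum velocity uncertainty for an everyday object and for an electron:

```python
# Minimum velocity uncertainty from dx * dp >= hbar / 2, rearranged as
# dv >= hbar / (2 * m * dx).
hbar = 1.055e-34   # reduced Planck constant, J s

m_ball, dx_ball = 0.16, 1e-3     # a 160 g ball located to within 1 mm
m_e, dx_e = 9.109e-31, 1e-10     # an electron confined to ~1 atom's width

dv_ball = hbar / (2 * m_ball * dx_ball)
dv_e    = hbar / (2 * m_e * dx_e)
print(f"ball:     dv >= {dv_ball:.1e} m/s  (utterly negligible)")
print(f"electron: dv >= {dv_e:.1e} m/s  (hundreds of km/s!)")
```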

To understand why there has to be uncertainty, we have to consider what the dual nature of particles really means. The best place to start is with wavefunctions, the mathematical way of describing particles. A wavefunction has a value at every point in the universe, and its value squared gives you the probability of finding the particle at a specific position at a specific time. The wavelength of the wavefunction (the distance from one peak to the next) corresponds to the momentum of the particle: the shorter the wavelength, the higher the momentum (this is de Broglie's relation, λ = h/p).
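Just to put a number on that relation, here's a tiny sketch (the electron speed is an assumed value):

```python
# De Broglie relation: wavelength = h / p.
h = 6.626e-34                # Planck's constant, J s
m_e, v = 9.109e-31, 2.0e6    # electron mass (kg) and an assumed speed (m/s)

p = m_e * v
print(f"wavelength = {(h / p) * 1e9:.3f} nm")   # ~0.36 nm, about one atom wide
```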


Imagine we have an electron somewhere in the room with us. If we start by considering the electron as purely a wave, we get the following wavefunction and corresponding probability:

[image: a sine wave of constant wavelength, whose squared value is a flat line stretching across all of space]

where the electron has an equal probability of being absolutely anywhere in the universe, and is moving with a constant, known momentum. Of course, this isn't right, as the electron we're imagining is in the room with us somewhere. So, what do we do? Well, that wavefunction only represents one "state" of the electron, moving at one particular momentum, and really, as we don't know how fast the electron is moving, it could be in any one of an infinite number of "states". For example:


[image: a few more wavefunctions with different wavelengths, i.e. different momenta]

In order to truly represent the electron that's somewhere in our room, we need to add all these wavefunctions together, so that we're considering the infinite number of possibilities involved. To do this properly you have to use calculus, but what we get is a wavefunction that, when squared, looks like this:

[image: a wave packet, where the squared wavefunction is a single localised bump]

What's happened is that in the places where the waves are in phase they've built up, and in the places where they're out of phase they've cancelled each other out. This leaves you with a "wave packet", where the probability of the electron's position is no longer evenly spread over the entirety of the universe, but concentrated in a small region of space (i.e. your room). However, by pinning down where the electron is to a much greater degree of certainty, we've had to add together states with lots of different momenta: we can no longer be certain of its momentum. Our knowledge of one variable has increased at the expense of our knowledge of the other!
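If you'd like to see this happen without the calculus, here's a small numerical sketch that adds up a finite number of plane waves (with made-up wavenumbers, weighted around one central momentum) and shows the probability bunching up into a packet:

```python
import numpy as np

# Superpose many plane waves exp(i*k*x), weighted by a Gaussian around a
# central wavenumber k0. In phase near x = 0 they reinforce; elsewhere
# they cancel, leaving a localised "wave packet".
x = np.linspace(-50, 50, 2000)            # positions (arbitrary units)
k0, dk = 2.0, 0.2                         # central wavenumber and spread (assumed)
ks = np.linspace(k0 - 3 * dk, k0 + 3 * dk, 200)
weights = np.exp(-((ks - k0) / dk) ** 2)  # Gaussian weighting of momenta

psi = sum(w * np.exp(1j * k * x) for w, k in zip(weights, ks))
prob = np.abs(psi) ** 2                   # squared wavefunction

i0, i40 = np.argmin(np.abs(x - 0)), np.argmin(np.abs(x - 40))
print(f"|psi|^2 at x = 0:  {prob[i0]:.1f}")
print(f"|psi|^2 at x = 40: {prob[i40]:.2e}  (essentially zero)")
```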

And with that, we've arrived back at the original inequality, which is basically just setting a limit on how much we can know, by saying that the product of the uncertainties has to be greater than or equal to some constant (ħ/2), meaning neither uncertainty can ever actually reach zero. This limit is a fundamental feature of our universe of dual-natured particles: it's more than just saying we can't measure both at the same time, it's saying the variables don't even exist in an absolute sense!


What's more, momentum and position aren't the only two things this rule applies to. There are a number of other pairs, known as conjugate variables, including energy and time, and angular momentum and angular position.


As weird as this all seems, the uncertainty principle is now many decades old, and yet no one has managed to beat the limit on measurements that it implies. Furthermore, the rest of quantum mechanics relies on the principle, and the predictions of quantum mechanics have been confirmed over and over again to incredible degrees of accuracy.


The Uncertainty Principle leads on to a variety of phenomena, some of which we've touched on in this post (superpositions, for example) and others, such as particles popping in and out of existence at random and particles tunnelling through barriers they shouldn't be able to cross, that I shall leave to another post.


I hope you enjoyed this post; I should have another one coming fairly soon! I'm pretty excited to talk about the uses of quantum tunnelling, which has all sorts of practical applications (applications I'm hoping to write a large part of my extended project on), and I'm sure that'll be the subject of one of the next posts.


Thanks for reading, 

-GM