Tuesday 10 July 2012

England has particle accelerators too!

The Diamond Light Source is a particle accelerator. In England. Which is not something I knew two weeks ago. It's entirely possible you already knew that, but either way, you definitely know now. It's in Oxfordshire (I think, anyway), so it was about an hour and a half's drive for us. My school ran a minibus-type trip, so about 15 students and two teachers went, and we got to look at things and talk to people and learn and things like that. It was all very productive, or so I'm told.

This year, Diamond is celebrating 10 years since it got its funding, so there were lots of stands in the atrium with people we could talk to while we were waiting for our talk and tour. The people I actually talked to for a decent length of time were the crystallography people and the protein/drug people.

For everything to make sense, I think I'll explain the facilities first, even though I don't remember any of the relevant numbers. Because it was in a shutdown period, we got to look at everything, including the areas that would otherwise be flooded with radiation. Electrons are accelerated to ridiculously fast speeds and travel around in a tiny metal tube, which is cooled by water. There are magnets that bend the path of the electrons, and insertion devices that do the same sort of thing. When the electrons are bent like this, they give off some of their energy in the form of x-rays, which go into the beam lines.

The beam lines are the places where people do experiments. There are somewhere between 20 and 30 at the moment (I forget exactly), and they were adding another while we were there. They have three sections. The first is the room where the x-rays are filtered so that only one wavelength comes through, which makes experiments easier. The second room is where the experiment actually happens. Neither of these rooms is safe for people while experiments are in progress, so there are robotic arms and the like in the latter, which the scientists use to control what's going on. Because of the radiation, both rooms are lead lined, as is anything painted yellow. The third room is where the scientist(s) can sit and collate and analyse the data. There are lots of computers: some to control the experiment and some to work with the data collected. (Personally, I was pretty impressed by having two monitors for one computer.)

The stands in the atrium were run by different organisations that use the light source for their research. The crystallographers diffract x-rays through their crystals and can work out the structure of each material from the pattern that comes out. The structure of DNA was famously worked out using x-ray diffraction, long before Diamond or its predecessor existed. More importantly, I found out why chocolate tastes kind of weird after it melts and resolidifies. The nice form of chocolate that you can buy is Form V (beta five), and it's not actually the most stable arrangement of the molecules, so when it melts and then cools down again it eventually ends up as Form VI (beta six), which is more stable but doesn't taste so nice. Form VI also has a higher melting point, so it doesn't melt as easily in your mouth, which is why it tastes a little gritty and less smooth.
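(For anyone wondering how a diffraction pattern actually turns into a structure: the usual starting point is Bragg's law, n × wavelength = 2d sin θ, which relates the angles at which the x-rays come off strongly to the spacing d between planes of atoms in the crystal. Here's a tiny sketch of that calculation in Python; the wavelength and angle below are made-up illustrative numbers, not anything we were actually shown at Diamond.)

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta), rearranged to give d,
# the spacing between planes of atoms in the crystal.
wavelength = 1.0e-10    # assumed x-ray wavelength of 0.1 nm (made-up value)
theta_deg = 14.2        # assumed diffraction angle in degrees (made-up value)
n = 1                   # first-order reflection

d = n * wavelength / (2 * math.sin(math.radians(theta_deg)))
print(f"spacing between atomic planes: {d:.2e} m")   # roughly 2e-10 m with these numbers
```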

The protein/drug people generally use the Diamond Light Source to analyse proteins using diffraction, in a similar way to the crystallographers. As I recall (I'm finishing this post about three weeks after we visited), the better we understand how a protein is folded, the easier it is to find drug molecules that will bind to it correctly and therefore help treat the illness. The difficulty is that protein folding is incredibly complex, and it's hard to find a molecule that fits the protein exactly. The closer the fit, the less chance the drug will bind to any other protein, but when you're not sure exactly how the protein is folded, it's obviously quite hard to find the right molecule.


My contributions to this blog are clearly not quite as useful as GM's, but hey, this is a post that is vaguely interesting, right? ~Georgie

Monday 9 July 2012

Setting down some foundations


For tonight's blog post, I thought I'd begin to elaborate on the main idea that I introduced in my last post: the idea that something can exhibit both particle-like and wave-like properties, known more commonly as "wave-particle duality". Wave-particle duality affects all particles to a certain extent, and is most easily observed in particles of very small (or no) mass. Whereas in the first post we considered the wave-particle duality of light, this time we're going to talk about electrons.


Imagine we have the set-up shown in figure 1, where an electron is fired from a source, passes through one of two slits in a middle screen, and is detected at some position on the screen on the far right. Thinking of it as a particle, you'd expect that if the electron were to pass through slit one, it'd be more likely to land somewhere on the top part of the screen, and if it went through slit two, the opposite would happen. This is easily confirmed by experiment: if you stick a measuring device, say a light source, by each of the slits, you can tell which slit an electron has gone through by watching to see at which slit some of the light is scattered. When you do this, you get the graphs below:

As you can see, the probabilities match the predictions: in each case most of the electrons land in line with the slit they went through, with a smaller chance of an electron appearing further away from the slit.

However, things start to get more interesting if you take the measuring device away, which leaves you in the dark as to which slit the electron passed through. Since you no longer know which slit the electron goes through, it can take either path, so you'd expect the probabilities to simply combine: you add them up. In general this would be P(x) = P1 + P2, where P1 and P2 are the probabilities from the left and right graphs respectively.
Logically, then, you'd expect the shape of the combined graph to be some sort of blend of the two graphs above: a single broad hump, symmetrical about the point midway between the two slits.
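To see what that "just add the probabilities" picture looks like, here's a minimal sketch in Python. The slit positions, widths and Gaussian shapes are entirely made up for illustration, since we obviously don't have the real numbers behind the graphs:

```python
import numpy as np

# Toy model of "add the probabilities": x is the position along the
# detecting screen (arbitrary units), and each slit on its own is assumed
# to give a made-up Gaussian-shaped distribution of electron arrivals.
x = np.linspace(-10, 10, 1000)

def single_slit(x, centre, width=2.0):
    p = np.exp(-((x - centre) ** 2) / (2 * width ** 2))
    return p / np.trapz(p, x)        # normalise so each distribution has area 1

P1 = single_slit(x, centre=+1.5)     # electrons known to have used slit 1
P2 = single_slit(x, centre=-1.5)     # electrons known to have used slit 2

P_classical = 0.5 * (P1 + P2)        # the P(x) = P1 + P2 idea (halved to stay normalised)
# P_classical comes out as a single broad, featureless hump centred between the slits.
```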

As you've probably guessed, this is far too simple for quantum mechanics! What you actually get looks more like the graph on the right-hand side:
In this graph you've got a much more complex curve which, if you've done a bit of AS physics (if not, read more here), should actually look pretty familiar: it's exactly the same shape as the interference pattern you get when you pass light through the same set-up! The electrons are behaving as waves, constructively and destructively interfering with each other, producing a series of maxima and minima in the concentration of electrons arriving at the screen. Because of the more complicated maths involved in adding waves, the probabilities are no longer as simple as they first appeared. In this case, P(x) is the square of the magnitude of the "probability amplitude", and the probability amplitude is found by adding up the waves representing the electron spreading from each of the slits to that point on the screen: P(x) = |ψ1(x) + ψ2(x)|², rather than P1 + P2. This is why, when describing a particle from a quantum perspective, we consider its "wavefunction".
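To make the contrast with the classical sum explicit, here's a similarly hedged sketch of the wave version: give each slit a wave, add the amplitudes, and only then square. The wavelength, slit separation and screen distance are made-up illustrative values, not taken from any real experiment:

```python
import numpy as np

# Toy double-slit interference: add the amplitudes first, *then* square.
# Every number here is a made-up illustrative value.
x = np.linspace(-1e-3, 1e-3, 2000)   # positions along the detecting screen (m)
wavelength = 50e-12                  # assumed electron wavelength (m)
d = 1e-6                             # assumed slit separation (m)
L = 1.0                              # assumed slit-to-screen distance (m)
k = 2 * np.pi / wavelength           # wavenumber of the electron wave

# Path lengths from each slit to each point on the screen.
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)

# Probability amplitudes for the two paths (equal weight, phase set by path length).
psi1 = np.exp(1j * k * r1)
psi2 = np.exp(1j * k * r2)

P_quantum = np.abs(psi1 + psi2) ** 2                        # fringes across the screen
P_no_interference = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # squaring first: no fringes
```

Plotting P_quantum against x gives the fringed curve described above, while P_no_interference (squaring before adding) shows no fringes at all; this toy model ignores the single-slit envelope, which is why the no-interference version comes out flat rather than as the broad hump.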

But hold on! All we did was take away our measuring device, and suddenly we have a completely different set of results! How can this be? To really understand what's going on, we have to consider what we're actually doing when we measure which slit the electron goes through. For us to "see" the electron, it has to scatter photons from the measuring device, and it's this scattering that causes the interference pattern to break down and the plain no-interference pattern to re-emerge. You could argue that, since each photon carries a momentum of Planck's constant divided by its wavelength (p = h/λ), we could minimise the disruption a photon causes by increasing its wavelength. But then we meet another problem: if the wavelength is too long, it's impossible to tell which slit the electron went through, and the interference pattern returns!
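To put rough numbers on that trade-off, here's a back-of-the-envelope sketch using p = h/λ. The slit separation and photon wavelengths are made-up illustrative values:

```python
# Back-of-the-envelope numbers for "watching" the electron.
# All values are made-up illustrative choices, not real experimental data.
h = 6.626e-34            # Planck's constant (J s)
d = 1e-6                 # assumed slit separation (m)

lam_short = 0.5e-6       # wavelength short enough (less than d) to tell the slits apart (m)
lam_long = 10e-6         # wavelength too long to resolve the slits (m)

kick_short = h / lam_short   # momentum such a photon can hand to the electron (kg m/s)
kick_long = h / lam_long

print(f"kick from a photon that can resolve the slits:    {kick_short:.2e} kg m/s")
print(f"kick from a photon that cannot resolve the slits: {kick_long:.2e} kg m/s")

# A sideways kick of roughly h/d (about 6.6e-28 kg m/s here) is enough to shift
# the pattern by about one fringe, and any photon with a wavelength shorter than
# d carries at least that much momentum -- so seeing the slit means losing the fringes.
```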

This completely counter-intuitive effect, that observing which path is taken can destroy the interference between the two paths, was famously analysed by Heisenberg. Heisenberg went on to state a number of things, including the idea that there is a fundamental limit to how delicately an experiment can be performed, and these ideas became Heisenberg's uncertainty principle. The principle gives rise to all sorts of weird things, and forced people to reconsider the actual limits of what can be known about reality! However, this blog post is definitely long enough as it is, so I'll get on to those sorts of things next time xD

So, that's it for this post. Although the material may not seem like the most exciting stuff in the world, it leads on nicely to all sorts of things, such as quantum tunneling, which you can expect in the posts to come ^^ My contribution to this blog is mostly going to be quantum type things, as I'm planning on doing an extended project on the subject, and I'm using this as a place to have a good go at writing about it.

Thanks for reading,
GM