What happens to your markers? A look inside the Space Warps Analysis Pipeline

Once you hit the big “Next” button, you’re moving on to a new image of the deep night sky – but what happens to the marker you just placed? And you may have noticed us commenting in Talk that each image is seen by about ten people – so what happens to all of those markers? In this post we take you inside the Space Warps Analysis Pipeline, where your markers get interpreted and translated into image classifications and, eventually, lens discoveries.

The marker positions are automatically stored in a database, which is then copied and sent to the Science Team every morning for analysis. The first problem we have to face at Space Warps is the same one we run into in life every day – namely, that we are only human, and we make mistakes. Lots of them! If we were all perfect, then the Space Warps analysis would be easy, and the CFHTLS project would be done by now. Instead, though, we have to allow for mistakes – mistakes that we make when we’ve already done hundreds of images tonight and we’re tired, or mistakes we make because we didn’t realise what we were supposed to be looking for, or mistakes we make – well, you know how it goes. We all make mistakes! And it means that there’s a lot of uncertainty encoded in the Space Warps database.

What we can do to cope with this uncertainty is simply to allow for it. It’s OK to make mistakes at Space Warps! Other people will see the same images and make up for them. What we do is try and understand what each volunteer is good at: Spotting lenses? Or rejecting images that don’t contain lenses? We do this by using the information that we have about how often each volunteer gets images “right”, so that when a new image comes along, we can estimate the probability that they got it “right” that time. This information has to come from the few images where we do actually know “the right answer” – the training images. Each time you classify a training image, the database records whether you spotted the sim or caught the empty image, and the analysis software uses this information to estimate how likely you are to be right about a new, unseen survey image. But this estimation process also introduces uncertainty, which we also have to cope with!

We wrote the analysis software that we use ourselves, specially for Space Warps. It’s called “SWAP”, and is written in a language called Python (which hopefully makes it easy for you to read!). Here’s how it works. Every volunteer, when they do their first classification, is assigned a software “agent” whose job it is to interpret its volunteer’s marker placements, and estimate the probability of the image at hand containing a gravitational lens. These “agents” are very simple-minded: to make sense of the markers, we’ve programmed them with a basic assumption – that they can interpret their volunteer’s classification behavior using just two numbers, the probability of being right when a lens is present, and the probability of being right when a lens is not present, which they estimate using your results for the training images. The advantage of working with such simple agents is that SWAP runs quickly (easily in time for the next day’s database dump!), and can be easily checked: it’s robust. The whole collaboration of volunteers and their SWAP agents makes up a giant “supervised learning” system: you guys train on the sims, and the agents then try to learn how likely you are to have spotted, or missed, something. And thanks to some mathematical wizardry from Surhud, we also track how likely the agents are to be wrong about their volunteers.
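To make the agents concrete, here is a minimal sketch of one in Python, the language SWAP is written in. This is our own simplified illustration rather than the actual SWAP source: the class name, method names, and the pseudo-count starting values are all invented for this example.

```python
class Agent:
    """A toy version of a SWAP volunteer agent.

    It tracks just two numbers: the probability of being right when a
    lens is present, and the probability of being right when a lens is
    absent, both estimated from the volunteer's record on the training
    images (sims and duds).
    """

    def __init__(self):
        # Weak pseudo-counts, so a brand-new volunteer starts out
        # rated as a coin-flipper rather than as perfect or hopeless.
        self.sims_seen, self.sims_spotted = 2, 1
        self.duds_seen, self.duds_rejected = 2, 1

    def record_training(self, was_lens, said_lens):
        """Update the counts after one training-image classification."""
        if was_lens:
            self.sims_seen += 1
            self.sims_spotted += int(said_lens)
        else:
            self.duds_seen += 1
            self.duds_rejected += int(not said_lens)

    @property
    def p_right_if_lens(self):
        return self.sims_spotted / self.sims_seen

    @property
    def p_right_if_dud(self):
        return self.duds_rejected / self.duds_seen
```

A volunteer who spots eight sims in a row, for instance, would see their `p_right_if_lens` climb from 0.5 towards 0.9 as the evidence accumulates.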

[Figure: image probability “trajectories” for the CFHTLS images, 2013-06-17]

What we find is that the agents have a reasonably wide spread of probabilities: the Space Warps collaboration is fairly diverse! Even so, *everyone* is contributing. To see this we can plot, for each image, its probability of containing a lens, and follow how this probability changes over time as more and more people classify it. You can see one of these “trajectory” plots above: images start out at the top, assigned a “prior” probability of 1 in 5000 (about how often we expect lenses to occur). As they are classified more and more times they drift down the plot, either to the left (the low probability side) if no markers are placed on them, or to the right (the high probability side) if they get marked. You can see that we do pretty well at rejecting images for not containing lenses! And you can also see that at each step, no image falls straight down: every time you classify an image, its probability is changed in response.
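The drift in these trajectories is just Bayes’ theorem applied once per classification. Here is a hedged sketch of the update, using the 1-in-5000 prior mentioned above; the function name and the example skill values are our own choices, not taken from SWAP itself.

```python
PRIOR = 1.0 / 5000.0   # prior probability that an image contains a lens

def bayes_update(p_lens, said_lens, p_right_if_lens, p_right_if_dud):
    """One classification's update to an image's lens probability.

    p_lens:           current probability the image contains a lens
    said_lens:        True if this volunteer placed a marker
    p_right_if_lens:  the volunteer's P(marks it | lens present)
    p_right_if_dud:   the volunteer's P(leaves it blank | lens absent)
    """
    if said_lens:
        num = p_right_if_lens * p_lens
        den = num + (1.0 - p_right_if_dud) * (1.0 - p_lens)
    else:
        num = (1.0 - p_right_if_lens) * p_lens
        den = num + p_right_if_dud * (1.0 - p_lens)
    return num / den

# Five skilled volunteers leave an image unmarked: it drifts left,
# well below the prior, and heads towards retirement.
p = PRIOR
for _ in range(5):
    p = bayes_update(p, said_lens=False,
                     p_right_if_lens=0.8, p_right_if_dud=0.8)
```

Notice that each unmarked classification multiplies the odds down by the same factor, which is why the trajectories step steadily leftwards rather than falling straight down.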

Notice how nearly all of the red “dud” images (where we know there is no lens) end up on the left hand side, along with more than 99% of the survey images. All survey images that end up to the left of the red dashed line get “retired” – withdrawn from the interface and not shown any more. The sims, meanwhile, end up mostly on the right, as they are correctly classified as lenses: at the moment we are only missing about 8% of the sims, and when we look at those images, it turns out that they contain the sims that are the most difficult to spot. This gives us a lot of confidence that we will end up with a fairly complete sample of real lenses.

Indeed, what do we expect to find based on the classifications you have made so far? We have already been able to retire almost 200,000 images with SWAP, and have identified over 1500 images that you and your agents think contain lenses with over 95% probability. That means that we are almost halfway through, and that we can expect a final sample of a bit more than 3000 lens candidates. Many of these images will turn out not to contain lenses (a substantial fraction will be systems that look very much like lensed systems, but are not actually lenses) – but it’s looking as though we are doing pretty well at filtering out empty images while spotting almost all the visible lenses. Through your classifications, we are achieving both of these goals well ahead of our expectations. Please keep classifying and there will be some exciting times ahead!

It’s good to Talk! Why we need to hear from you to find Space Warps

Some of you may be wondering what happens to an image after you hit “Next” and why “Talk”ing about your lens candidates is important, so here’s a brief explanation!

WHAT HAPPENS TO THE IMAGES YOU DON’T MARK?
Each night, we retire images from the pool based on your collective classifications. If the community together says no (i.e. enough people do not place a marker on the image), we throw out the image so that we can focus your classifications on fresh data and images that might contain gravitational lenses. After only five weeks, you guys have made an astonishing 5.2 million classifications. This means we’ve already been able to reject about 60% of the total CFHT Legacy Survey as not containing gravitational lenses!

WHAT HAPPENS TO THE IMAGES YOU DO MARK?
When you mark an image two things happen. First, we record your mark in our database so we can compare it with what other people thought. Second, that image is automatically saved into your Talk profile under a collection called “My Candidates”. Talk allows you to discuss your interesting candidates with the rest of the Space Warps community. It’s great to see so many discussions happening there already, so please keep talking! Talking in Space Warps is an essential part of refining the list of plausible candidates, which is explained next.

HERE’S HOW YOU CAN HELP
As we work our way through the images, it looks as though we are going to end up with a sample of a few thousand lens candidates from your markings. That’s great – it means Space Warps is a very effective filter! But a few thousand is still several times more than the number of actual lenses we expect – so we’ll need to investigate the images of the candidates further before presenting them to the rest of the astronomy community. This is where you, and Talk, can really help us out!

BECOME CURATORS OF YOUR LENS COLLECTIONS
If you see a lens candidate, either when browsing Talk, or while you are marking, that you would like to see investigated further, make a “collection” called ‘Probable Lens Candidates’ and add this object to it! Remember, you can also add images you think are the most likely lens candidates from your automatically filled ‘My Candidates’ collection. Then, later on, you might do some further investigation of the images in your collection – or someone else in the collaboration might do it, after browsing your collections. Either way, collecting the candidates is the first step. You can start a discussion about any candidate or collection any time, and ask the Space Warps community to share their thoughts.

WHAT HAVE YOU FOUND SO FAR?
We’ve just started looking at the most commonly marked images, and there are some promising candidates already being discussed in Talk. Some of these are previously known lens candidates: as you may know, the CFHT Legacy Survey has been searched using automated computer algorithms. We’ve started to label the candidates from those searches in Talk; you’ll see the label “Known Lens Candidate” at the bottom right of the image on the individual object page in Talk. As well as the labels, Budgieye has done a phenomenal job of collecting known CFHT-LS lens candidates from the research literature in a dedicated discussion board. Much like the tricky simulations, some of these known candidates may be difficult to spot.

Most excitingly, some of you have started discussing a few lens candidates that we think have been missed by the algorithms – watch this space for a special post about these potential new lens candidates next week!!!

HOW TO GET STARTED IN TALK
If you want some top tips on using Talk, please visit the discussion board (thanks Budgieye!)

Thanks again for your phenomenal work – and let’s get Talking!!!

Sim City

Anupreeta, Surhud and Phil

Simulations! They’re everywhere in Space Warps, sneaking into the images and popping up messages all over the place. They’ve sparked a fair bit of discussion in Talk – lots of people like them, some people find they get in the way, and we hear the same few questions a lot. In this post we have a go at answering them!

Q: Why have we put all these simulated lenses in the survey images?
A: The sims serve two purposes. The first one is training: since many volunteers in the Space Warps community may not have seen gravitational lensing in action before, the sims are there to help you become familiar with what to look for. They give you some first-hand experience in identifying gravitational lenses, and show you the common (and some uncommon) configurations of multiple images that gravitational lenses can form.

The second purpose is driven by the science. We would like to find more new examples of gravitational lens systems that are known to exist in nature, but are difficult to detect. And because they are difficult to detect, we expect to miss some of them. That’s OK (we’re only human!), but we’d at least like to understand which lenses we missed, and why! Some gravitational lenses could be missed for mundane reasons, such as lying close to the border of the image. Other lenses which have formed arcs could be missed if the arcs have very low brightness, if the lensed features are hidden in the light of the lensing galaxy, or if the lensed features are too red. Our aim is not just to discover lenses, but to be thorough and quantify what sort of lenses we might miss. How well we find the sims will tell us how many lenses we likely missed, and which ones.

Sims are made using real massive galaxies - which are clustered! Here the sim-making robot has placed two simulated lensed arc systems in the same field as a real gravitational lens...


Q: How did we make the simulated lens systems?
A: Since our goal in making sims is to generate a reasonable training sample that is also fairly realistic, we use realistic models for both the background sources and the foreground galaxies. These models have a few key properties that are largely sufficient to describe the wide range of lens systems that we have seen so far in the Universe. The mass, distance and shape of the foreground galaxies, and the colors, sizes, brightnesses and distances of the background galaxies and quasars all play an important role. We use realistic values of these key properties (and their interdependence), as measured by other astronomers in surveys like the CFHTLS. For each sim, we select a massive object from the CFHTLS catalog, and ask: what would this object’s image look like if there were a source behind it being gravitationally lensed? It’s quite difficult to select massive objects (measuring mass is one of the reasons we want to find more lenses!), but we can make an approximation by selecting bright, red-colored objects (which, for certain ranges of brightness and color, are mostly massive elliptical galaxies). We then select a source, either from the CFHTLS catalog of faint galaxies (which includes estimates of brightness, colour, size and also distance), or from the known distribution of quasar brightnesses and colours.

To mimic the distorting and magnifying effects of gravitational lenses, we create lens models from our understanding of the theory of gravitational lensing combined with observations of known lenses. We know of several hundred gravitational lenses now, and it turns out that in almost all cases the details of the lensing effect can be described using quite simple models for the lens mass distributions. These lens models are then used to simulate the arcs, doubles and quads you see in the Space Warps images: in each pixel of the simulated image we compute the brightness of the lensed features predicted by the model.
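As a toy version of that pixel-by-pixel step, here is a ray-shooting sketch for the simplest lens model of all, a singular isothermal sphere (SIS). This is our own minimal illustration rather than the actual Space Warps sim-maker, and the Einstein radius, source position and source size are all invented numbers.

```python
import numpy as np

def sis_lensed_image(n=100, theta_e=1.0, beta=(0.3, 0.0),
                     src_sigma=0.1, fov=3.0):
    """Ray-shoot a circular Gaussian source through an SIS lens.

    For each image-plane pixel at angle theta, the SIS lens equation
    beta = theta - theta_e * theta/|theta| gives the corresponding
    source-plane position, where we evaluate the source brightness.
    Angles are in arcsec; fov is the half-width of the image.
    """
    grid = np.linspace(-fov, fov, n)
    tx, ty = np.meshgrid(grid, grid)
    r = np.hypot(tx, ty) + 1e-12      # avoid dividing by zero at centre
    # SIS deflection: constant magnitude theta_e, directed radially
    bx = tx - theta_e * tx / r
    by = ty - theta_e * ty / r
    d2 = (bx - beta[0]) ** 2 + (by - beta[1]) ** 2
    return np.exp(-0.5 * d2 / src_sigma ** 2)
```

With the source offset well inside the Einstein radius, two images appear on opposite sides of the lens centre: one outside `theta_e` and one inside, which is exactly the “doubles” configuration described above.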

The final step is to make sure that the simulated lensed features appear as they would in a real image. The Space Warps images were all taken with the Canada-France-Hawaii Telescope on Mauna Kea, and their resolution is limited mostly by the atmosphere – we can tell how blurry the images are by looking at stars in the images. It turns out that the CFHTLS images all have roughly the same resolution, so we blur the lensed features by the same amount. We then add noise, and overlay the simulated lensed features on top of the image that contains the massive object we selected, so that the image looks as realistic as possible.
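The blur-and-noise step can be sketched in a few lines. The PSF width and noise level below are illustrative stand-ins rather than the survey’s actual numbers, and the periodic-boundary FFT blur is a simplification of a proper PSF convolution.

```python
import numpy as np

def observe(ideal, psf_sigma=1.5, sky_noise=0.02, seed=0):
    """Blur an ideal image with a Gaussian PSF, then add sky noise.

    psf_sigma is the PSF width in pixels (a stand-in for the survey
    seeing); sky_noise is the rms of the added Gaussian noise. The
    blur multiplies by the PSF's transform in Fourier space, which
    assumes periodic boundaries -- fine for a toy example.
    """
    ny, nx = ideal.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Fourier transform of a unit-flux Gaussian of width psf_sigma
    psf_ft = np.exp(-2.0 * np.pi**2 * psf_sigma**2 * (fy**2 + fx**2))
    blurred = np.fft.ifft2(np.fft.fft2(ideal) * psf_ft).real
    rng = np.random.default_rng(seed)
    return blurred + rng.normal(0.0, sky_noise, size=ideal.shape)
```

Because the Gaussian’s transform equals 1 at zero frequency, the blur spreads the light around without changing the total flux, just as real seeing does.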

Q: Should I mark all the simulated images, even if I have marked/seen them before? Why is this useful?
A: Marking sims is very important – the analysis of the Space Warps classifications depends on it! We’ll blog about this process soon, but the central point is this: when we present a new lens candidate found at Space Warps to the rest of the scientific community, we need to estimate how likely it is to be a real gravitational lens system. This is tricky: the Space Warps classifications come from citizen scientists who have various degrees of experience and skill. We expect some people to be good at spotting faint arcs, others might be good at searching all the way to the edges of the images, while others might be better at efficiently rejecting objects that look like lenses but are not. Our analysis software uses the simulated lens sample to quantify the collaboration’s expertise, and then assesses the likelihood of a lens candidate based on the classifications it has received. The uninteresting and less likely lens candidates are then “retired” from the database every day, so that we don’t have to look at them any more than necessary. Without the sims (and also the dud images, that are known not to contain any lenses), it would be much harder to estimate the likelihood of an image containing a lens, given its classifications. So please keep marking them, even if you’ve seen them before!

Q. Can’t I turn the sims off?
A. We thought about this – but when we were testing the site, we found that if we went for a long period without being shown a simulation, we started missing lenses because we were going too fast! So we decided to keep the sims in, albeit at a low frequency, to keep us on our toes!

Q: Why do some sims look a bit odd?
A: We use simple models to represent the lens and source galaxies; these simple models work fairly well in most cases, but sometimes they fail to capture some of the more unusual objects in the Universe. Since the whole process of generating sims is automated (so that we can make a large enough sample to get good statistics from) and we can only perform visual checks on a small sample, we do expect to have a few systems that may not look quite right.

Here are some of the most common failure modes:

(a) Simulated arcs around nearby spiral galaxies. These are listed with an incorrect distance in the CFHTLS catalog: they have the right brightness for a massive galaxy, but only because they are nearby, not because they are actually massive! They are assigned larger distances because the colour of their bulges is very similar to that of massive elliptical galaxies further away. Such spiral galaxies would have a very small Einstein radius – but in the simulation, the arcs are predicted to be too far away from the bulge (i.e. the centre of the spiral) for it to be a plausible lens.

(b) Abnormally thick arcs. This happens when the source is too big and bright to be a plausible background source. Again, this can happen if the source galaxy we drew from the CFHTLS catalog was listed with an inaccurate size.

(c) Wide separation lenses around tiny galaxies. This can happen if the galaxy we are using as a lens is listed in the catalog as being brighter than it is (most likely due to inaccuracies in the inferred distance to this galaxy).

Q: I see two lens systems in a single image (a combination of simulated and/or real lenses), what should I mark?
A: Please mark at least one lensed image for each lens system: we need the simulated lens to be marked so that the analysis software knows you saw it, but then marking any real lenses will register that object as a potential real lens candidate as well. (Also, see the FAQ page on Space Warps).

Q: How can I tell if I am marking a simulated or real image?
A: You shouldn’t be able to, as the simulated images should give you a good indication of what a real lens should look like! You’ll know as soon as you hit “Finished Marking” though, because the Space Warps system always gives feedback straight away.

Q: I see a simulated lens on top of a known real lens. What do I do?
A: We have tried to exclude the real lenses from the simulated lens sample, but unfortunately not all real lenses were successfully excluded (apologies!). This is not a matter for concern though: we’ll re-inject the images that were used in making the sims into the Space Warps database, but without the simulated lenses. For the time being, please continue to mark any or all of the lensed images that you spot, irrespective of whether you think they are simulated or real.

The discussion of the sims in Talk has been really helpful – thanks for your questions, and for catching the problems mentioned above!

Space Warps Update

Wow, what a week! Space Warps has had a phenomenal response since it launched on May 8th. You have made a staggering 1.7 million classifications so far – that’s over 11,000 images swiped per hour! A big thank you to all of you, our new collaborators, for making this first week so successful.

We’ve just begun to analyse the first batch of data. Your classifications have already allowed us to retire about 80% of the images in the first dataset (or D1 as we call it) that we are pretty sure don’t contain lenses. New images are now up on the site awaiting your classifications; this is the second of ten datasets in total.

Don’t forget that you can discuss any interesting potential lenses you find using Talk. As you mark images, the system automatically collects both the potential lens candidates and the simulations that you have marked. You can find these in “My Collections”, in the “Profile” section of Talk. Many discussions are already happening on a variety of images and topics. Please do start discussing your favourites with our community, or join existing discussions. For example, you might like to visit the “Where do I go to see the good stuff?” discussion board to get started.

With your help, we can make dataset 2 (or D2) as successful as D1. You can read more about the progress of the project on our blog, and we’ll email you from time to time.

It’s great to see you enjoying Space Warps. Thank you very much for your contributions!

Aprajita, Phil, Anupreeta and the Space Warps Team

Engage!

Hooray! Space Warps is live, and the spotters are turning up in numbers. Check out the site at spacewarps.org – there are a few little bugs that Anu, Surhud and the dev team are ironing out, but basically it’s looking pretty good! Thanks very much to everyone who’s helped out in the last few months – your feedback has been very useful indeed in designing a really nice, easy-to-use website that hopefully will enable many new discoveries. And to all of you who are new to Space Warps – welcome!

If you’re feeling really keen, why don’t you come and hang out in the discussion forum at talk.spacewarps.org? We’re starting to tag images to help organise them, and the more interesting conversations we have there, the more useful it will be for the newer volunteers. And of course, you can vote on the candidates spotted by other people, by making your own collection. Come and take part in the Space Warps collaboration!

PS. Aprajita and I will be making a special guest appearance on the regular Galaxy Zoo Hangout tomorrow – tune in for more slightly distorted spacetime chat!


Space Warps CFHTLS

A simulated lens quasar, an example prediction of what we might find in Space Warps.


Our first project is to search the 400,000 images of the Canada-France-Hawaii Telescope Legacy Survey, or CFHTLS – we’re asking people to spot gravitational lenses in its images, in order to find some new examples, and also to learn how to design automated lens detection systems to use in the future. I caught up with Jean-Paul Kneib, from the Strong Lenses in the Legacy Survey (SL2S) project, and Space Warps co-PI Anupreeta More to ask them to explain a bit more about it.

Jean-Paul, just how big is this survey, and how is it different from the SDSS?
CFHTLS is a survey conducted with the CFHT 3.6m telescope using the Megacam camera. It targeted 4 patches of the sky, adding up to about 150 sq deg. That’s about 60 times smaller than the SDSS-DR8 imaging area, but it goes typically 2.5 magnitudes (about 10 times) deeper than SDSS, with higher resolution images. The average seeing was 0.6 arcsec, compared to an average of 1.4 arcsec for the SDSS data. So, in short, CFHTLS is a mini SDSS but focussed on the deeper Universe, which means it is a great survey in which to find strong lensing systems!
Sounds good! Was it designed specially for this purpose, or for something else? What was the original idea?
The original design was to measure what we call “cosmic shear” – that is, the tiny deformations that large scale structures produce on the appearance of faint galaxies. This cosmic shear measurement is used to put constraints on cosmological parameters. But similarly, the strong lensing systems could also tell us something about cosmology … but first we need to find them!
Sounds good! Anupreeta, you’ve been thinking about lenses for cosmology lately – how are you thinking of using a sample of lenses from the CFHTLS to say something about the universe as a whole?
Various cosmological models of the Universe predict different numbers of galaxy clusters at various times, and also differences in how concentrated the mass distribution is within these massive structures. Both these factors affect how efficiently the galaxy clusters will produce highly magnified and distorted arcs. That means that the abundance of arcs in surveys like CFHTLS can be used in turn to understand which cosmological model best describes our Universe.
Lens systems primarily allow us to understand the properties – like the mass – of the lensing galaxies. However, it is possible to derive extra constraints from certain types of lenses in order to learn more about the Universe – for example, its age. We see that quasars change their brightness over time; in a lensed quasar system, the different lensed images appear to vary at different times due to the different paths taken by the light rays to reach us. The time delay seen between these multiple images, combined with the speed of light, allows a measurement of distance to be made. By measuring these time delays accurately, we can measure distance, compare it with redshift, model the expansion of the Universe, and predict its age.
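To give a flavour of the arithmetic: in the standard treatment, a measured time delay and a lens model’s Fermat potential difference combine into a “time-delay distance”, D_dt = c Δt / Δφ. The sketch below is our own back-of-envelope illustration, not a Space Warps tool, and the input numbers are invented purely to show the scale.

```python
C_KM_S = 299792.458       # speed of light, km/s
KM_PER_MPC = 3.0857e19    # kilometres per megaparsec

def time_delay_distance(delta_t_days, delta_fermat):
    """Time-delay distance D_dt = c * Delta_t / Delta_phi, in Mpc.

    delta_fermat is the (dimensionless, radians-squared) difference
    in Fermat potential between two lensed images, supplied by a
    lens model of the system.
    """
    delta_t_s = delta_t_days * 86400.0
    return C_KM_S * delta_t_s / delta_fermat / KM_PER_MPC

# Invented example: a ~100-day delay with a Fermat potential
# difference typical of arcsecond-scale image separations gives a
# distance of a few thousand Mpc
d_mpc = time_delay_distance(100.0, 2.0e-11)
```

Comparing a distance like this with the lens and source redshifts is what constrains the expansion history, as described above.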
I know you’ve worked on the CFHTLS data in the past, with the “Arcfinder” code – what kinds of lenses did you find with it, and what do you think it missed?
I mainly looked for arcs in the g-filter since the arcs look brighter in this filter than any other. This helped optimize the arc detection. As the CFHTLS imaging goes very deep compared to SDSS, we found a fainter sample of arcs. In order to limit the number of false-positive detections, I had to apply limits on some of the arc properties, such as surface brightness, length-to-width ratio, curvature and area. These limits were essentially decided arbitrarily, after some testing on a smaller known lens sample from the CFHTLS. However, it was not known beforehand how this might affect the completeness of the lens sample, or which of the limits on the arc properties could be relaxed or made stricter. There are various factors to which a code is sensitive: for example, a certain arc may satisfy most thresholds, but will go undetected because it happened to be located in an image region with high noise levels, or was partially overlapping with a bright galaxy. People are less susceptible to these fluctuations when they look at images: they can cover a wider dynamic range of arc properties and, at the same time, assess the likelihood that an arc-like feature is a lensed image – given its colour, shape, curvature, and its proximity and alignment with respect to a nearby lensing galaxy – in a way that is not currently possible with the Arcfinder code.
Raphael Gavazzi wrote his “RingFinder” code to look for galaxy-scale lenses in the CFHTLS. That research programme has been quite successful: we found several dozen lenses and used them to study the distribution of dark matter within the lens galaxies. RingFinder only looks at simple, smooth, bright red elliptical galaxies, and then tries to dig into the lens light looking for blue arcs. We expect it to have missed some red arcs, and also lensed features that are not arc-shaped – like the lensed quasars. One thing I am interested in with Space Warps is making a sample of low-to-medium probability objects: these will be great for testing tools like RingFinder against. Can we make it more flexible, and able to cope with spirals, mergers and other galaxies that look like lenses but are not?
Anu: have you thought about how the results from Space Warps might be used to improve Arcfinder? That would be cool!
The results from Space Warps are going to be interesting and exciting in many ways. In terms of improving the Arcfinder, Space Warps will provide a more comprehensive library of lenses – I hope the spotters will find the lenses that Arcfinder missed! By measuring the properties of these new lenses, we will be able to put together a better set of thresholds to increase the completeness and purity of the Arcfinder lens sample. Some new lens properties that we haven’t thought of yet might also prove useful for achieving higher purity. It would be great to be able to improve the Arcfinder algorithm in this way.
With your help we’re going to find a lot of useful things in CFHTLS, I think!

A New Name, Debugging, and Some Mind Games

Spring is here, or at least coming along, and the Lens Zoo development tiger team is emerging from its incubator. Since just before Christmas we have been hard at work pulling together the many different pieces needed to make a Lens Zoo work well. This week, the Science Team is helping debug the identification interface that the Dev Team built, and then we’ll be ready to beta test it. It’s looking very cool. Following discussions here and elsewhere, we settled on the project name “Space Warps.” As Thomas J pointed out, with this name we won’t ever have to explain who “Len” is!

While all that is happening, we are also starting to think about how the other parts of the project might work. Once our spotters have identified an initial batch of lens candidates, we will have to figure out what to do with them all (and they will be numerous!). A good first check is to phone a friend: with the Talk system, we’ll be able to assemble collections of lens candidates for everyone to comment on. You can see this happening already, freestyle, in Galaxy Zoo Talk. We’ll be trying to come up with ways of making it easier to browse collections in Talk, and to be able to cast your vote on whether you think each object is a lens, or not.

But hang on: isn’t voting rather subjective? Well, yes and no. A key part of the lens-finding process is modeling, that is, figuring out whether the features we see in the image could actually be due to gravitational lensing. A minimum requirement for a successful lens candidate is that its images be explained by a plausible lens model! Fortunately, some initial lens modeling can be done mentally: this is why much of the site development effort so far has gone into the training material needed to help people understand what gravitational lenses look like, and how the arcs and multiple images are formed. Think about what you are doing when you make a judgement about a lens candidate: you are imagining how that image could have been formed, and to do that, you need a model of a gravitational lens in your head!

For the more difficult and ambiguous cases though, we’ll need to actually make some predicted images, using a computer model – so we’re thinking of other ways that we could enable this. Several members of the Science Team have written lens modeling software; we just need to make it possible for all of you to use it! More on this soon.

Recycling features from around the Zooniverse

Cecile Faure and Brian Carstensen

At the Zurich workshop we looked at some of the existing and in-development Zooniverse projects, to see what features we could borrow, copy or adapt for the Lens Zoo. There are some useful bits and pieces we can recycle – see what you think!

  • Visible progress on the starting page: We thought it might be nice to have a “skywalker” on the starting page, showing the total area of sky being investigated. In our first case, this would be the CFHT-LS survey. This feature will be in use in some of the new projects (e.g. Seafloor Explorer) – it’s a nice way for us to see how far we have got. We also would like a progress report on this page, such as in the Milky Way Project.

  • Pop-up tutorials: The majority of the people who met in Zurich thought that the initial classification process should be guided by little pop-up help boxes, like it is in Moon Zoo.
  • Online scientific discussion: To enable this, we think we’d like to use a Talk-like forum, as is done in Planet Hunters – although there are some aspects of it we’d like to change as well. In addition to the ability to discuss candidates, we would like you to be able to vote on them, to make the most of the expert volunteers’ experience. This would be a new feature – none of the Zoos have this yet!

  • Annotating images: In order to have a good discussion about a lens candidate, we need to know which features we are talking about! We could do this by enabling you to put markers – like thumbtacks – on the images during the classification stage. We think this would help a lot in the discussion, as each volunteer could indicate exactly what they are talking about. We could also then collect the coordinates of the objects in a database to make further analysis easier. A similar feature is already in use in Ancient Lives.

  • Links to social networks: This wasn’t discussed much, but it feels necessary: new users might well find it fun to show off their discoveries on Facebook and elsewhere, and this might then encourage more people to get involved. We’d like to find as many new LensHunters as we can!

  • Volunteer achievements: At the moment, Galaxy Zoo users are labelled as newbies, heroes and so on, according to their activity level on the forum. This is great – new users can see who has been there longest and is most active in helping others get started. Is there more feedback we could give users that would improve their experience? Would you like to see your achievements logged in terms of the number of images you have inspected? Or the number of good lens candidates successfully detected? Or something else?
  • Zoonibot: Wikipedia has various “robots” that wander around its system, automatically making small corrections and suggestions. The Zoonibot is a first attempt at one of these robots for the Zooniverse Talk system. There are many things the Zoonibot could help with – such as pointing new users towards some of the reference and tutorial material on the site, if they seem to be getting stuck. While we don’t want it to replace human interactions in the forum, it seems like the Zoonibot could be helpful in some situations. What do you think?


One more point of consensus came out of the workshop: it’s great if a zoo website *looks good.* There are some very talented designers working at the Zooniverse, who can help turn your ideas into reality. Keep them coming in the comments!

The first Lens Zoo project preview: beat the robots of the CFHT Legacy Survey!

Anupreeta More, Surhud More and Phil Marshall

Gravitational lensing is a spectacular phenomenon found in the Universe. Fritz Zwicky predicted in the 1930s that galaxies and clusters of galaxies could act as lenses; these systems are not just beautiful to look at, but they also have a plethora of applications, including revealing the whereabouts of the elusive Dark Matter. Gravitational lenses are rare objects, since we require the foreground and background galaxies to be aligned on the sky to within a few thousandths of a degree.
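
To get a feel for that “few thousandths of a degree” alignment scale, here is a rough back-of-the-envelope sketch – not part of any Space Warps code, and with purely illustrative round numbers for the mass and distances – that computes the Einstein radius of a point-mass lens:

```python
import math

# Einstein radius of a point-mass lens:
#   theta_E = sqrt( (4 G M / c^2) * D_ds / (D_d * D_s) )
# The masses and distances below are illustrative round numbers,
# not values taken from the CFHTLS survey.

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg
GPC = 3.086e25          # one gigaparsec in metres
ARCSEC_PER_RAD = 206265.0

def einstein_radius_arcsec(mass_msun, d_lens_gpc, d_source_gpc):
    """Einstein radius (arcsec) of a point mass, using the crude
    flat-space approximation D_ds ~ D_s - D_d for the
    lens-to-source distance."""
    m = mass_msun * M_SUN
    d_d = d_lens_gpc * GPC
    d_s = d_source_gpc * GPC
    d_ds = d_s - d_d
    theta = math.sqrt(4 * G * m / C**2 * d_ds / (d_d * d_s))
    return theta * ARCSEC_PER_RAD

# A 10^12 solar-mass galaxy halfway to a source 2 Gpc away:
theta_e = einstein_radius_arcsec(1e12, 1.0, 2.0)
print(f"{theta_e:.1f} arcsec")  # prints "2.0 arcsec"
```

An Einstein radius of a couple of arcseconds is indeed of order a thousandth of a degree, which is why chance alignments this close are so rare on the sky.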

Over the coming decade, larger and larger imaging surveys will map out ever wider and deeper regions of the Universe. This means we should be able to find more gravitational lenses, but it also means that we will have increasing amounts of data to inspect in order to find them. As a result, we would like to automate the process of finding gravitational lens systems in these vast treasure troves of data. However, as you know, discovering gravitational lens systems requires some skill, and the lens candidates need to satisfy a varied set of criteria before they can be tagged as promising lens systems. Our brains are better suited to carrying out such tasks than simple computer algorithms are, so it makes sense for humans to look at the candidates that the robots flag as interesting. However, so far, astronomers have had difficulties in building robots that are capable of finding all the different kinds of lens systems that are potentially interesting. This is partly because we have not yet discovered very many lenses, nor exhaustively cataloged all the things that look like lenses but are not lenses in reality. To understand how to make the robots work better, we need to jump into the data alongside them!

Our first project in the Lens Zoo is going to be a slightly unusual one, in that its focus will be on beating a lens-finding robot, rather than checking through its outputs. We are going to use the optical and infrared data from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) for this project. With the help of computer algorithms called “ArcFinder” and “RingFinder”, we have found a sample of lens candidates from the CFHTLS – but we know that these algorithms don’t do a very complete job. Opportunity knocks! We would like the citizen scientists of the Lens Zoo to help us search the images of the CFHTLS to discover the variety of lenses that were missed by our robots.

The CFHTLS spans an area of about 170 square degrees of sky. Its images are both higher resolution than those of the SDSS, with median seeing in the i’ band of around 0.7 arcsec, and deeper (i’ < 24.5 magnitudes) – which means that more gravitational lenses should be visible per square degree. The picture on the left shows a lens from CFHTLS that the ArcFinder did spot, a small galaxy group that is lensing a background star-forming blue galaxy. On the right is the SDSS image of this system, to show you the difference in image quality.

[Images: the CFHTLS (left) and SDSS (right) views of this lens system]

We have two goals for this project. First, we want to find all the gravitational lenses that the aforementioned algorithms missed – perhaps because the sources are quasars, or distant red galaxies, or because the lenses are complex, or confusing. Second, we want to catalog all the objects that look like lenses, but are not: these “false positives” will make an important training set for us to test our improved robots on.  This will be the first time that this survey’s images will have been exhaustively inspected – so there are bound to be some surprises!
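
These two goals map onto the standard way any automated finder is benchmarked: completeness (what fraction of the real lenses it recovers) and purity (what fraction of its candidates are real lenses). As a minimal sketch – the function and the numbers below are made up for illustration, not actual ArcFinder or RingFinder results:

```python
def completeness_and_purity(true_pos, false_pos, false_neg):
    """Completeness: fraction of real lenses the finder recovered.
    Purity: fraction of the finder's candidates that are real lenses."""
    completeness = true_pos / (true_pos + false_neg)
    purity = true_pos / (true_pos + false_pos)
    return completeness, purity

# Illustrative numbers only: a robot that recovers 40 of 60 real
# lenses, while also flagging 100 false positives.
comp, pur = completeness_and_purity(true_pos=40, false_pos=100, false_neg=20)
print(f"completeness = {comp:.2f}, purity = {pur:.2f}")
# prints "completeness = 0.67, purity = 0.29"
```

A catalog of the false positives from the visual search is exactly what lets us measure the purity of the next generation of robots, not just their completeness.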

A Postcard from Zurich

Zurich: home to Albert Einstein when he first started thinking about light passing through warped spacetime, and so what better place to have our first workshop! The Lens Zoo team and a few Galaxy Zoo forum moderators and Lens Hunters met up at the Institute of Theoretical Physics at the University of Zurich at the weekend, both in person, and remotely via a Google+ video Hangout. Even the team from Chicago who got up at 3am to be projected four feet high onto a screen managed to stay cheerful the whole time! We spent a couple of days thinking through the problems that we’ll face when trying to find thousands of gravitational lenses over the next few years.

So, what did we talk about all weekend? Among other things: how we should display images, and how we can best enable their investigation, how to teach new users about gravitational lensing, which features of the various Zooniverse projects we could make use of, and what tools we have to help advanced Lens Hunters to go the extra mile. For now, you can see the slides that the science team made for some of the sessions in the links below. We’ve got a bunch of problems to solve, but also some good ideas to get started with. The team will be writing their own postcards from Zurich on here soon, and we look forward to hearing your comments as we go. We need and value your input!

PDF files of session slides (watch out, these are quite large files!):

Zurich_2012-07-14_Session-1_Aprajita_GZlenses

Zurich_2012-07-14_Session-2_Phil_Targets+Surveys

Zurich_2012-07-14_Session-3_Anupreeta_CFHTLS

Zurich_2012-07-14_Session-4_Aprajita_Image-Display

Zurich_2012-07-14_Session-5_Cecile_Zooniverse-Tools