Astronomy

If I have amateur astrophotography image data that might be of scientific value, what should I do?


I have some image data that may be interesting to someone, somewhere, but I don't know who to contact or notify.

I would imagine it would depend on the type of data, but maybe there is a central place people can contact?


It depends, as you say, on the subject matter. For instance, images useful in variable star study can be provided to the AAVSO (http://www.aavso.org/), and images relating to asteroids can be provided to the IAU Minor Planet Center (http://www.minorplanetcenter.net/). Other subject matter will be accepted elsewhere.

However, for the images to be useful in scientific fields, you will need to comply with the submission instructions, which a casual snap probably will not satisfy. The MPC guide for beginners includes a list of 44 technical suggestions for submitting scientific data (http://www.minorplanetcenter.net/iau/info/Astrometry.html), and the AAVSO has a six-chapter tutorial on photometry using a DSLR camera, plus another for astronomical CCD cameras (http://www.citizensky.org/content/dslr-documentation-and-reduction).

If you just want to get it out there and have people comment on what is in it, post it to Google+.


The value of an astrophoto

Can an astrophoto represent the reality of what is out there? Can an aesthetically-driven astroimage have scientific interest? Can we talk about science versus art when comparing astroimages that have been minimally processed with images that have gone through more complex post-processing? Do minimally processed astroimages have more value than those with more involved post-processing?

These being recurring topics in the astroimaging community, I've decided to post my thoughts here - it will make it easier the next time someone raises these issues, once again, somewhere. :-)

(I use the terms "minimally processed" and similar throughout this article to refer to images whose post-processing includes only a small set of operations such as deconvolution, DDP, some non-linear histogram transform and little more. It is not meant to be a derogatory term in any way.)

Can an astrophoto represent reality of what is out there?

I believe that in astrophotography there's no such thing as a natural or realistic appearance. Reality in an image is simply impossible to depict, and even more so in astrophotography. The reasons why I strongly believe this might take some writing, and there are other points I'd like to cover without you falling asleep before you get to them, so I'll probably come back to this topic at a future date. For now, just think for a second: we're trying to represent objects and structures that are thousands or millions of light years away and that are often larger in size than what our mind can even conceive, and we're doing that right in front of our eyes, on a monitor that at most is just a few inches wide (not to mention the extremely poor dynamic range monitors can represent). How's that for real?

Can an aesthetically-driven astroimage have scientific interest?

Actually, I don't think scientific interest is something that needs to pass the "is it minimally post-processed?" test.

The way I see it, there will be aesthetics-driven images that might ignite some scientific interest, and likewise, there will be minimally processed images that may never attract the interest of scientists at all. It's quite simple. For example, when some astronomers saw this image I took of the Virgo galaxy cluster (overprocessed to some, of course), they contacted me to provide them with a non-linear stretch of the raw data - which I did, and it proved to be quite interesting (I cannot say more than that at this time, sorry). Had I not pushed the post-processing with techniques such as HDRWT, wavelets, morphological transformations, etc., the image likely wouldn't have ignited any "scientific interest" at all.

Yes, if your post-processing has introduced artifacts that haven't been seen before, you might ignite some scientific interest for the wrong reasons, and that's why you must be careful not to introduce such artifacts! But other than that, this debate is quite simple: there shouldn't be a debate.

Can we talk about science versus art when comparing astroimages that have been minimally processed with images that have gone through more complex post-processing?

This is another topic that I don't quite know why it's brought up so often. It's as if there's some sort of consensus that astrophotography needs to be separated into "science approved" images and "astro art" or something, when the way I see it, it's neither one nor the other.

I don't usually consider an astrophoto to be pure "science" once the image is no longer linear, so, unless the post-processing involved blatant invention, I don't make the "science vs art" distinction with images that are non-linear when presented to the viewer, regardless of the amount of post-processing.

So when these topics come up on mailing lists, web forums or even in conversations, I don't think it's correct to talk about "science vs art" as if those folks who do minimal processing to their images are producing science-approved images while everyone else is doing just "astro art".

Of course, some of these folks will tell you otherwise, but the way I see it, in most cases, both groups are producing something that has inherited qualities from combining both disciplines - art and science. And that is what astrophotography really is, as far as I'm concerned. In simple terms, if it's only science, you're doing astronomy, and if it's about aesthetics and nothing else, it's likely just art. To me, astrophotography is a bit of both - yet not a whole lot of either - and if one of them is missing, then it's something else.


Are we not respecting the data when we apply post-processing techniques such as the star reduction method described here? Are we being unethical?

I disagree that techniques such as the "star reduction" method (link above) and many others show no respect for the data (more on that later) or are unethical, but regardless of what you think, to me, bringing up that question is, once again, completely missing the point about where the value of astrophotography lies. In the next paragraph you'll probably understand - agreeing or not - what I mean by that.


Do minimally processed astroimages have more value than those with more involved post-processing?

What I believe is that an image can have documentary value whether the pixels around a star have been dimmed, have inherited values from the surrounding pixels, or have been left intact after some operator-chosen non-linear histogram stretch. And such documentary value can be just as valid regardless of which of those operations have been performed.

What matters is the intent of the operator, and there's a lot one can say about that. Of course, one can start "inventing features" to the point that the image loses its documentary value. This is not to say that doing such things is necessarily "wrong", because astrophotography can also have a worthwhile emotional and aesthetic value, even if some people who claim to be science-driven may ridicule the idea. When you go that far, it simply means that the image no longer has documentary value, and it should be treated, viewed and analyzed as such.

As I said earlier, to me, generally speaking, if you like to analyze data, you should stay in linear-land, and once you cross that line, your image enters the documentary zone. This is an area where your image is subject to a certain personal treatment and interpretation. Whether you are content with doing a couple of post-processing operations such as deconvolution, a non-linear stretch and a few more, or take advantage of many other post-processing techniques that can indeed enhance the documentary value of your data, that's a personal choice.

And within that personal choice, in some cases, and depending on the goals, minimalist processing may in fact be a very good choice for bringing out the documentary value in an astroimage (some people in fact favor the look of such images, but we're not talking about the look of astroimages here, so comments in that regard aren't needed in this discussion).

What I believe, however, is that by limiting yourself during post-processing to a restricted set of techniques "because I want to respect the data", laudable as that might be, you might also be missing an opportunity to increase the documentary value of your image while still respecting your data. And here's the thing. As long as you are increasing the value of your image, you are respecting the data, simply because you're utilizing the data to present an image that not only holds value beyond the purely aesthetic, but also maximizes what the data is really worth. It is only when you depart from that documentary value that your respect for the data decreases.

So, if maximizing the possibilities of your data with the purpose of increasing the documentary value of your image is - according to some people - a "disrespect" for your data, what exactly is it to NOT maximize it and produce an image with likely less documentary value?

Of course, some people may not buy this explanation about "documentary value" or may view it differently. If everyone did, I wouldn't have a reason to write this, now would I? :-)

In any case, if you want your image to miss out on that increased documentary value because of the way you value your data or because of your beliefs about what is ethical or not, that's fine. As I said, that can be a valid option. However, I have no desire to show complacency toward any statement that regards astrophotography as a discipline that should limit itself to a couple of simple techniques during post-processing in order to have value or to be considered ethical, because, as stated, IMHO, where those who think that way believe the value ends, some of us carry on adding the value they ignore, disregard or are simply too shortsighted to recognize.

From this point of view, I don't think minimally processed astroimages have more value than those with more involved post-processing - to me, it's often quite the opposite, in fact. And this is without getting into the topic of aesthetic value, because that's another matter, not to be disregarded.

An image is worth a thousand words

Recently I took an image of the Great Square of Pegasus, a 20-pane mosaic. After all the work of putting the mosaic together seamlessly, my first post-processing steps were basic non-linear histogram adjustments. Right before I started to utilize more advanced techniques, the image looked pretty much like this:


That is the equivalent of what some would describe as "minimalistic processing". And I could have stopped there. And that image has some undeniable value. But a strong (linear or not) inverted stretch revealed a lot of faint structures, and I wanted to visually document those structures. Not measure, not analyze - simply try to produce an image that would be able to show the shape, position and relative surface brightness of those structures, hopefully without destroying the appeal of this starry area of the sky. For that task I knew I had an arsenal of techniques - not tricks - that could aid me in reaching that goal (and for those interested, no, such techniques don't involve the use of the brush, lasso or similar tools). So there was my choice. Should I stop here and present an image of lots of stars, or go further in the post-processing? Well, to me it wasn't even a choice. I knew I wasn't going to stop there. A reduced version of the final image is here:

Now, when you end up with an image like the one above, you have to expect that some people are going to say - or think - that the image has been overprocessed, perhaps even say things like "those clouds of dust look like they're made out of plastic" and other nasty stuff. Um. How is it possible that supposedly smart people can in fact react with such ignorant comments? Let me tell you upfront that the dust clouds you see above not only exist and are up there, but their shape and position match exactly what you see in the image, at least to the point I was able to capture (not post-process) their signal. All that stuff was in my data, but the only way to make it surface was by using post-processing techniques that those who defend minimally processed images either don't know or, at best, don't want to use (more often than not, they really don't know - after all, why learn about something you're not interested in anyway?).

Is this all about beauty? If I cared about just beauty, why would I want "my" dust clouds to look like plastic (assuming that's how they really look)? Now I ask you: which photograph better documents what's going on up there? Why should I limit the processing on this image, due to whatever some ethics dictate, and show a patch of the sky with nothing but stars and a few tiny galaxies, when I could greatly increase its value and show all of what is really going on, even if that means pushing the data to its very limits? Maybe ethical in this case means we'd rather not see what's behind all those stars? Well, I do.

Everything you've read so far is not meant to justify aesthetics-driven or documentary-driven astrophotography or advanced astroimage processing techniques. To me, they're plentifully justified and need no exculpation. This article is simply an attempt to share my views on a much-discussed topic, on which I think some people, for whatever reason, tend to disregard or downplay astroimages that include more than a simple non-linear stretch, while, in my very humble opinion, as stated, advanced post-processing techniques can be used to increase the documentary value of your data.

Last, let me add: while it's true that some people resort to "easy Photoshop tricks" to post-process their astroimages, advanced image processing techniques aren't what I'd call "easy" tasks, and calling anything that goes beyond a non-linear stretch a trick often simply denotes either ignorance or arrogance - usually both. Advanced post-processing techniques require study, learning, experimentation, patience and sometimes frustration, unlike minimalist processing, which often doesn't require any of that. You can choose to use them or not, but be respectful of your peers when you state your opinions; otherwise, the only one who may look clueless will be you - although of course, you will never ever think that's the case (back to arrogance and ignorance).

Of course, learning and experimenting with new post-processing techniques and paradigms can also be challenging, rewarding and fun. And who is to tell others how they should have fun? Aren't those some of the most valuable reasons we embarked on this journey in the first place?


Homophobes Might Be Hidden Homosexuals

Homophobes should consider a little self-reflection, suggests a new study finding those individuals who are most hostile toward gays and hold strong anti-gay views may themselves have same-sex desires, albeit undercover ones.

The prejudice of homophobia may also stem from authoritarian parents, particularly those with homophobic views as well, the researchers added.

"This study shows that if you are feeling that kind of visceral reaction to an out-group, ask yourself, 'Why?'" co-author Richard Ryan, a professor of psychology at the University of Rochester, said in a statement. "Those intense emotions should serve as a call to self-reflection."

The research, published in the April 2012 issue of the Journal of Personality and Social Psychology, reveals the nuances of prejudices like homophobia, which can ultimately have dire consequences.

"Sometimes people are threatened by gays and lesbians because they are fearing their own impulses, in a sense they 'doth protest too much,'" Ryan told LiveScience. "In addition, it appears that sometimes those who would oppress others have been oppressed themselves, and we can have some compassion for them too, they may be unaccepting of others because they cannot be accepting of themselves."

Ryan cautioned, however, that this link is only one source of anti-gay sentiments.

Hidden homosexuality
In four studies, the researchers looked at the discrepancies between what people say about their sexual orientation and their implicit sexual orientation based on a reaction-time test. The studies involved college students from Germany and the United States.

For the implicit measure, students had to categorize words and pictures flashed onto a computer screen into "gay" or "straight" groups. Words included "gay," "straight," "homosexual" and "heterosexual," while the pictures showed straight and gay couples. Before each trial, participants were primed with the word "me" or "others" flashed momentarily onto a computer screen. The researchers said a quicker association of "me" with "gay," and a slower association of "me" with "straight," would indicate an implicit gay orientation.

In another experiment, the researchers measured implicit sexual orientation by having participants choose to browse same-sex or opposite-sex photos on a computer screen.

Questionnaires also teased out the parenting style the participants were exposed to, with students asked how much they agreed or disagreed with statements such as: "I felt controlled and pressured in certain ways" and "I felt free to be who I am." To gauge homophobia in a household, students responded to items such as, "It would be upsetting for my mom to find out she was alone with a lesbian" or "My dad avoids gay men whenever possible."

Participants also indicated their own level of homophobia, both overt and implicit. In word-completion tasks, students wrote down the first three words that came to mind when prompted with some of the words' letters. Students were primed at some point with the word "gay" to see how that impacted the number of aggressive words used.

Controlling parents
In all of the studies, participants who reported supportive and accepting parents were more in touch with their implicit sexual orientation, meaning it tended to jibe with their outward sexual orientation. Students who indicated they came from authoritarian homes showed the biggest discrepancy between the two measures of sexual orientation.

"In a predominately heterosexual society, 'know thyself' can be a challenge for many gay individuals," lead author Netta Weinstein, a lecturer at the University of Essex in the United Kingdom,said in a statement. "But in controlling and homophobic homes, embracing a minority sexual orientation can be terrifying." [5 Ways to Foster Self-Compassion in Your Child]

Those participants who reported their heterosexuality despite having hidden same-sex desires were also the most likely to show hostility toward gay individuals, including self-reported anti-gay attitudes, endorsement of anti-gay policies and discrimination such as supporting harsher punishments for homosexuals.

The research may help to explain the underpinnings of anti-gay bullying and hate crimes, the researchers note. People in denial about their own sexual orientation, perhaps a denial fostered by authoritarian and homophobic parents, may feel a threat from other gay and lesbian individuals. Lashing out may ultimately be an indicator of the person's own internal conflict with sexual orientation.

This inner conflict can be seen in some high-profile cases in which anti-gay public figures are caught engaging in same-sex acts, the researchers say. For instance, evangelical preacher and anti-gay-marriage advocate Ted Haggard was caught in a gay sex scandal in 2006. And in 2010, prominent anti-gay activist and Family Research Council co-founder George Rekers was reportedly spotted with a male escort rented from Rentboy.com. According to news reports, the escort confirmed Rekers is gay.

"We laugh at or make fun of such blatant hypocrisy, but in a real way, these people may often themselves be victims of repression and experience exaggerated feelings of threat," Ryan said. "Homophobia is not a laughing matter. It can sometimes have tragic consequences," as was the case in the 1998 murder of Matthew Shepard, a gay man.

Copyright 2012 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


2. Not focusing your lens carefully

This usually happens when you’re shooting exciting and spectacular transient phenomena like meteor showers, aurorae or eclipses. You set everything up and rush through all the steps because you’re afraid of missing the show. But always take your time to properly focus your lens: you will not be able to fix that in post-processing.

If you’re not familiar with focusing a camera lens at night, here’s the procedure:

Open the aperture as much as possible. Set a high value for the ISO (6400-12800). Turn the focus ring to infinity and turn on Live View on your camera. Exposure Simulation should be on.

Then, find a moderately bright star and centre it in the field of view. Magnify the Live View image as much as your camera allows and carefully adjust the focus until the stars look their sharpest. Sharp stars should not show any chromatic aberration haloes around them.

Then, use some gaffer tape and carefully tape down both your focusing ring and zoom ring (if you use a zoom lens). This way, you are sure that you won’t accidentally change the position of the rings during your photo session. And that’s it! Sharp stars.

Of course, don’t forget to change the ISO setting and to choose your desired aperture before starting to shoot. I will say it again: take as much time as you need to focus your lens. Bad focus cannot be corrected in post-processing.


Pulley for my GT2 Belt

Oops, yesterday I realized I had made a bit of a mistake when I ordered the PiKon package. I had not checked the type of the belt I had left over from building my 3D printer; I had just assumed it was of the same T2.5 type that the PiKon telescope uses for the camera focus rack system. It turned out that my belt is of the GT2 type, not the T2.5 type. Those numbers refer to the pitch of the teeth (GT2 is a 2 mm pitch, T2.5 a 2.5 mm pitch), so quite probably the T2.5 pulley included with the PiKon package won't work properly with my GT2 belt.

I thought about contacting the PiKon people again and ordering the T2.5 belt separately, but then I began wondering whether it would be possible to just 3D-print a GT2 pulley instead. If I manage to print one, I should then be able to use my GT2 belt without issues. I started searching ThingiVerse for ready-made GT2 pulleys, and found one that seemed to be just what I needed: 18 Tooth GT2 Pulley by OoiTY. I printed that, but noticed that the teeth were rather shallow, and it did not seem to have a very good grip against my GT2 belt. This may have been caused by some issues with my 3D printer, or the settings I used when slicing the object. However, I decided to modify the object slightly, making the teeth deeper, and then printed the object again. This seemed to work fine, and the pulley seemed to have a good grip against my GT2 belt.
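If you want to sanity-check a similar mismatch before printing, the arithmetic is simple: a toothed pulley's pitch circumference is the tooth count times the belt pitch. Here is a minimal Python sketch; the 18-tooth count comes from the pulley above, and the pitch values are the standard GT2 (2 mm) and T2.5 (2.5 mm) figures:

    import math

    def pitch_diameter_mm(teeth: int, pitch_mm: float) -> float:
        """Pitch diameter of a toothed pulley: circumference = teeth * pitch."""
        return teeth * pitch_mm / math.pi

    # An 18-tooth GT2 pulley vs. an 18-tooth T2.5 pulley:
    print(pitch_diameter_mm(18, 2.0))  # ~11.46 mm (GT2, 2 mm pitch)
    print(pitch_diameter_mm(18, 2.5))  # ~14.32 mm (T2.5, 2.5 mm pitch)

The roughly 3 mm difference in pitch diameter explains why the teeth never mesh: the same 18 teeth are spread over two quite different circumferences.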


Processing Hubble data presents a host of challenges, and the first of these has nothing to do with processing at all. Before you can think about how to crack the coconut, you must first learn to get the coconut out of its tree. This article is intended to break down some of the barriers you may encounter when using the Hubble Legacy Archive (HLA).

Step 1: Getting into the archive

The Hubble Legacy Archive is your direct connection to high-quality science data from our dearest orbital observatory, the Hubble Space Telescope. Despite appearing simple at first, it can be rather daunting for a novice to approach. I was dumbfounded during my first attempts, and without much guidance it took me months to feel comfortable with it and come up with some search strategies. Below I will describe two of those strategies, go over some of the cameras, and explain some of the most common acronyms and jargon.

Aspiring archive raiders can find the HLA at the following URL, which I suggest you keep open in a separate tab for reference: http://hla.stsci.edu/hlaview.html

Step 2a: Finding a target by reprocessing an old image (easier)

If you are used to observing with your own telescope, you may initially be tempted to search for something very easy to view, such as Andromeda. That would be a confusing mistake. Imagine wanting a photo of someone’s face, but instead you are served up the individual hairs, pores, mites, and cells from some minuscule patch of nostril. That is what you’ll see in the Hubble archive—Andromeda’s face mites. Try it. Those are interesting, but perhaps not at all what we were expecting.

So, how do we go about finding something with more visual appeal? One tactic I have used is to browse old press release images. Twenty years ago, processing software and techniques were not as refined as they are today. This is not to say they were bad at it, of course. Remember Photoshop 4.0 and 800×600 CRT displays? Good old 1996. It’s easy to take for granted all the new features and processing power we now have at our disposal. Anyway, try finding a press release image whose processing you don’t like. A fun task can be to find your own take on an old image, perhaps by adding newer observations to it or combining the filters in a way you find personally appealing.

Left: A press release image of NGC 7027 from 1998.
Right: Processing some of the same data with modern software.

Step 2b: Finding a target by searching for hidden treasures (harder)

Another option is an all-sky search. This is useful if you don’t have anything in particular in mind and want to browse. Type 0 0 r=180 into the search field and use the advanced search option to untick all but one instrument. This will yield all available data from that instrument. Fair warning: the HLA becomes quite slow to load because of this, requiring some patience. Of the browsers I’ve tried, Firefox and IE are fine. For reasons unknown to me, Chrome is noticeably slower.

I like to sort the results by date, filter them down to one year, switch to 100 results per page, and browse by thumbnail. I also write down what page I’m on so that I can come back later to where I left off. Filtering the results to a single year allows the page to load much more quickly. The fits2web image viewer, accessible via the Interactive Display link under each thumbnail, allows one to brighten and preview anything that looks interesting, saving oodles of time that might otherwise be wasted downloading what turn out to be huge duds.
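If clicking through the web form gets tedious, this kind of search can also be scripted. The following is a sketch only, assuming the astroquery Python package and the MAST archive (which serves HLA products alongside the web interface); the target name and radius are illustrative, not a recommendation:

    from astroquery.mast import Observations

    # Query image products around an example target; "HLA" selects
    # Hubble Legacy Archive products within MAST's collections.
    obs = Observations.query_criteria(
        objectname="NGC 7027",
        radius="3 arcmin",
        obs_collection="HLA",
        dataproduct_type="image",
    )

    # Narrow to one instrument, mirroring the advanced-search tip above.
    wfpc2 = obs[obs["instrument_name"] == "WFPC2"]
    print(wfpc2["obs_id", "filters", "t_exptime"])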

Step 3: Understanding the Hubble imaging instruments

One of the most confusing aspects of the HLA (indeed, of any science archive) is confronting a multitude of new acronyms. Hopefully I can explain away some of the confusion; knowing what each one means makes the archive feel a little more comfortable. I will focus on the four imaging cameras for now.

ACS (Advanced Camera for Surveys)

Some of the newer instruments have a number of imaging channels. To start with, let’s take the ACS (Advanced Camera for Surveys) for example. It has three channels, the WFC (Wide Field Channel), HRC (High-Resolution Channel), and SBC (Solar Blind Channel). These would be denoted in short as ACS/WFC, ACS/HRC, and ACS/SBC. Of these, most of the data you can find from the ACS comes from the WFC. The HRC and SBC have more specific uses, and the HRC was disabled a while back by an electrical fault.

WFC3 (Wide Field Camera 3)

Next up, the WFC3 (Wide Field Camera 3) has two straightforward channels to consider: UVIS (Ultraviolet and Visible) and IR (Infrared). You can find them using the same shortened forms WFC3/UVIS and WFC3/IR and by now you should easily spot the pattern. Instrument first, followed by a slash and the channel.

NICMOS (Near Infrared Camera and Multi-Object Spectrometer) and WFPC2 (Wide Field Planetary Camera 2)

The final two imaging cameras, NICMOS (Near Infrared Camera and Multi-Object Spectrometer) and WFPC2 (Wide Field Planetary Camera 2), are older and come with significant aesthetic challenges. NICMOS in particular has a lineup of anomalies and artifacts that are especially difficult to deal with, and WFPC2 (pronounced wiff-pick two, if you are wondering) has its special staircase or stealth-bomber shape, which can also be a major turnoff. The PC part of the WFPC2 can also be searched for individually via the search form.

Old is not useless

Whatever you find in the archive, just remember that nearly everything Hubble has imaged was and likely still is of high scientific value, so old doesn’t automatically mean useless—quite the contrary. If you have the time, it is worthwhile to check out each observation of a given object. You never know what you might find. I once found a supernova in some old WFPC2 data that managed to slip past astronomers. Due to the time frame, it wasn’t very useful scientifically, but it was a fun and unexpected find regardless.

An accidental picture of a supernova in NGC 3597.

Step 4: Learn about the filters used on the Hubble instruments

Understanding the abbreviated syntax used in the Spectral_Elt (spectral element) column is rather simple. Here are some common examples of what you may see while searching the archive:

  • F658N – 658 nm, narrow filter
  • F547M – 547 nm, medium filter
  • F555W – 555 nm, wide filter

At first it may not seem obvious what these mean, but the pattern is easy to read once you know that F means filter, the numbers in the middle are the wavelength in nanometers, and the letter at the end denotes the general width of the filter: in this case, N = narrow, M = medium, and W = wide. You will also frequently encounter the word detection in your results, which refers to a stack of all exposures for any given observation, combining all filters into a single FITS file.

More information on an individual filter can usually be found within the instrument handbooks:

  • WFC3/UVIS: http://www.stsci.edu/hst/wfc3/documents/handbooks/currentIHB/c06_uvis06.html
  • WFC3/IR: http://www.stsci.edu/hst/wfc3/documents/handbooks/currentIHB/c07_ir06.html
  • ACS: http://www.stsci.edu/hst/acs/documents/handbooks/current/c10_ImagingReference01.html
  • WFPC2: http://documents.stsci.edu/hst/wfpc2/documents/handbooks/cycle17/appendix_a_passbands2.html

Note that the WFC3 list in particular is very easy to read and contains an abundance of information all on one page. Oftentimes the information on that page is all you need for any given instrument.

One important caveat to remember regarding the filter abbreviations is that for certain infrared observations the wavelength exceeds 1000 nanometers, but only 3 digits are ever used. For example, F128N is actually a Paschen β filter (like Hα’s cousin, but you can see it through dust!) which peaks at 1280 nm, not some ultraviolet filter with a tiny wavelength.
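Putting the naming pattern and the infrared caveat together, here is a small parser sketch in Python. It assumes the plain F-number-letter form described above; real filter lists include other shapes (long-pass "LP" filters, quad filters, and so on) that this ignores, and the caller has to know whether the filter belongs to an IR channel:

    def parse_filter(name: str, ir_channel: bool = False):
        """Return (wavelength_nm, width) for names like 'F658N' or 'F128N'."""
        widths = {"N": "narrow", "M": "medium", "W": "wide"}
        number = int(name[1:-1])
        width = widths[name[-1]]
        # IR filters encode the wavelength in units of 10 nm, since only
        # three digits are ever used: F128N peaks near 1280 nm, not 128 nm.
        wavelength_nm = number * 10 if ir_channel else number
        return float(wavelength_nm), width

    print(parse_filter("F555W"))                   # (555.0, 'wide')
    print(parse_filter("F128N", ir_channel=True))  # (1280.0, 'narrow')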

Step 5: Download the data

When you’ve found your target and decided what spectral data from the instrument (camera) you want to get, click the FITS link in the Retrieve column to add the files to your “shopping cart.”

After you’ve added all the files and datasets you want to your shopping cart, click the file/datasets tab and choose either a single ZIP file with everything in it or the option to download each file sequentially, since some of them can be pretty big. Then click the Fetch HLA Data button to commence downloading.


Types of Conditional Sentences

Conditional sentences are constructed using two clauses: the if (or unless) clause and the main clause. There are five types of conditional sentences. It is important to understand each because each conveys a different meaning. Some conditional sentences refer to general truths and others to hypothetical situations.

  • Zero conditional sentences refer to the general truth about a situation. These sentences state that one condition always results in the same outcome. For example:

If I don’t turn on my air conditioner, my house is hot.

Note that both clauses are in the present tense.

  • First conditional sentences present a situation in which a future outcome is not ensured. For example:

If you eat your broccoli, you will feel great.

Note that the present tense is used in the if clause and the future tense in the main clause.

  • Second conditional sentences express if clauses and results that are extremely unlikely, such as those we “wish for.” For example:

If I had control over the food sources, I would end world hunger.

Note the use of the simple past tense in the if clause and a modal verb (i.e., would, could, should) plus the base verb in the main clause.

  • Third conditional sentences are a bit different. They suggest that the result would be different had the past been different. For example:

If you had told me you were hungry, I would have bought food for you.

Note that the conditions did not happen. The past perfect tense (had + past participle form of the verb) is used in the if clause, and would plus have plus the past participle of the verb is used in the main clause.

  • Mixed type conditional sentences refer to something in the past that continues into the present; however, the past condition and its results are not real. For example:

If I had learned to ride sooner, I would be a top rodeo star by now.

Note that the past perfect verb is used in the if clause and the present conditional verb is used in the main clause.

Punctuating these conditional sentences is simple. Use a comma to separate the if clause from the main clause when the if clause comes first.


Deep-sky image processing for science: an in-depth guide

Take your deep-sky astrophotography to the next level with our guide to capturing scientifically valuable images of targets in distant space.


Deep-sky photography can be a difficult area of astrophotography to associate with scientific observation. This is because it attracts a huge following of imagers who often produce results optimised for visual rather than scientific appreciation. There are several reasons for this.

Deep-sky objects are very distant and there’s a belief that they don’t change appearance significantly over time.

In addition, these often very beautiful objects nurture a desire to present them at their best.

Treating a deep-sky image as a scientific recording seems inappropriate when large professional telescopes can do it so much better.

But there is in fact much scientific work that can be done by astrophotographers.

Their images can lead to a better understanding of the nature of the objects being photographed, reveal surprises and bring an overall richer appreciation for the workings of the Universe.

Best of all, there is room for both aesthetic and scientific presentation using the same image data.

Tools required

  • Telescope
  • Equatorial mount
  • Autoguider
  • Camera (DSLR, cooled astronomical CCD, high frame rate)
  • Laptop
  • PixInsight
  • Photoshop
  • GIMP
  • AutoStakkert!
  • StarTools
  • DeepSkyStacker
  • APT
  • Sequence Generator Pro
  • Maxim DL

Strike a balance between aesthetics and scientific value

The term ‘deep sky’ generally relates to anything that lies outside the Solar System.

This encompasses stars in single, multiple or clustered collections along with a whole host of objects both internal and external to our own Milky Way Galaxy.

The form, colour, variety and beauty of these objects are vast and imaging them can become a compulsive pursuit, requiring a set of skills unique to this area of astrophotography.

The question of image manipulation is perhaps most relevant in deep-sky imaging, and the type of manipulation applied can strip an image of scientific merit or enhance it.

Unedited images may look visually unappealing, but they potentially contain the most scientific worth.

Ultimately it’s up to you as an imager how you want to present your results. A good strategy is to create an archive to hold the original images in case they are required for further scrutiny at a later date.

Most amateur astrophotos are manipulated to a degree. Stacking, noise reduction, brightness stretching and more, all produce results that are based on the original recorded data but have been adjusted to produce something new.

But how much manipulation is too much manipulation? The answer is open to subjective interpretation.

Adjusting an object’s colour may produce a correct-looking result, but this could easily be incorrect, simply reflecting a stereotyped colour gleaned from looking at professional images or the work of other amateurs.

Few deep-sky objects are bright enough to provide a definitive visual colour reference to work toward.

For greatest visual impact it’s common to want to pull out every last detail contained within the image.

Here too caution must be applied because producing a high-dynamic-range (HDR) end result may show all of the detail recorded, but there will be a degree of subjectivity involved that ultimately relies on the processing and compositional skills used to create the final image.

Image annotation and presentation for science

Deep-sky images are typically recorded with one-shot colour cameras or mono cameras and filters.

One-shot colour cameras remove a lot of the hard work, but mono cameras with filters provide more scope for scientific imaging.

A common method of processing and combining mono, colour-filtered images is to concentrate on producing a high-quality luminance image to which colour-filtered images can be added for a full-colour result.

In addition, speciality filters can be used to record specific wavelengths produced by certain elements. Common examples include H-alpha, H-beta, SII and OIII.

The results can be presented in a number of ways. A familiar example is the so-called Hubble palette.

This is typically achieved by substituting RGB components with those taken through SII, Ha and OIII.

The end result does not represent what you’d see visually but it does produce an image in which it’s possible to see the contributions of sulphur (SII), hydrogen (Ha) and oxygen (OIII).

In addition, some mono results may be recombined into the luminance component, or replace it completely.

A common example is to use an H-alpha filtered image of a nebula to further enhance the luminance part.

This makes the luminance sharper and more defined than that produced by a conventional multi-wavelength luminance filter.
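For readers who like to see the recipe spelled out, here is a minimal sketch of both combinations in Python with NumPy. It assumes sii, ha and oiii are calibrated, registered 2-D arrays scaled to the 0-1 range; the 50/50 blend weight is illustrative, not a recommendation:

    import numpy as np

    def hubble_palette(sii, ha, oiii):
        """Map SII -> red, Ha -> green, OIII -> blue (the 'SHO' palette)."""
        return np.dstack([sii, ha, oiii])

    def ha_enhanced_luminance(lum, ha, weight=0.5):
        """Blend an Ha frame into the luminance to tighten nebular detail."""
        return np.clip((1.0 - weight) * lum + weight * ha, 0.0, 1.0)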

An issue arises as to how this information is conveyed to the viewer. Date and time stamps are very important, as are details of the telescope, camera and any other optical elements used in the imaging train.

For multi-filtered images, filters should be identified along with the exposure times and number of sub-frames used for each filter.

Finally, orientation markers, scale lines and star identification can be added to make interpretation easier. Your name and location should also be recorded.
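One practical way to keep that information permanently attached to the data is to write it into the FITS header itself. Below is a sketch using astropy; the filename and all keyword values are placeholders rather than real observation details:

    from astropy.io import fits

    # Record acquisition metadata in the primary header of a stacked frame.
    with fits.open("stack_ha.fits", mode="update") as hdul:
        hdr = hdul[0].header
        hdr["DATE-OBS"] = ("2019-02-01T21:30:00", "Start of first sub (UTC)")
        hdr["TELESCOP"] = ("200mm f/5 Newtonian", "Optical tube")
        hdr["INSTRUME"] = ("Mono CCD + filter wheel", "Camera")
        hdr["FILTER"] = ("Ha", "Filter for this stack")
        hdr["EXPTIME"] = (3600.0, "Total integration time in seconds")
        hdr["OBSERVER"] = ("Your Name, Your Location", "Observer details")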

As long as the original calibrated but unprocessed components have been archived for easy retrieval, the scientific elements of the image can still be analysed should further investigation be required.

Image calibration

Image calibration removes elements from an image that shouldn’t be there. There are various calibration steps that can be applied, but care and understanding are required to use them correctly.

Basic calibration involves processes known as dark-frame subtraction, flat-field correction and bias-frame subtraction.

In addition, more advanced processes can be employed to apply some of these calibration steps to the calibration frames themselves before they are applied to the light frames.

In imaging parlance, image frames are often referred to as ‘light frames’. Images taken using the same settings but with the front of the camera covered are known as ‘dark frames’ or ‘darks’.

The data recorded by a dark frame is dependent on temperature, so it’s important to take dark frames at a similar time to when you’re collecting your light frames.

Some astrophotographers choose to create a library of darks made at specific temperatures for this purpose.

However, it’s important to note that camera characteristics can change over time, so a library should be periodically updated.

Every image has an element of random noise. This can be reduced by taking several light frames and averaging them together. A similar process can be applied to calibration frames.

Noise reduces by the square root of the number of images stacked: four images reduce the noise to half strength, nine reduce it to one-third strength.
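The square-root law is easy to verify numerically. A quick Python sketch, assuming purely random Gaussian noise of strength 10 around a fixed signal; the scatter of the stacked result should fall as one over the square root of the number of frames:

    import numpy as np

    rng = np.random.default_rng(0)
    for n in (1, 4, 9, 16):
        frames = 100.0 + rng.normal(0.0, 10.0, size=(n, 100_000))
        stacked = frames.mean(axis=0)     # average n frames pixel by pixel
        print(n, round(stacked.std(), 2)) # ~10.0, ~5.0, ~3.33, ~2.5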

Flat fields (flats) are taken through the same optical setup used to collect the light frames but with the instrument pointing at an evenly illuminated target.

This may be a specially constructed light panel or even a clear, evenly lit sky. Typically, an exposure level of one-third to half the camera’s full saturation is ideal.

Bias frames are the shortest exposures possible with the camera aperture covered. They represent the base state of the camera’s pixels, which typically have small, non-zero values even when no image is present.
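Pulling the three steps together, here is a minimal calibration sketch in Python with NumPy. It assumes the master dark, flat and bias frames have already been averaged from many subs, that the dark matches the light frame’s exposure and temperature (so it already contains the bias signal), and that all arrays share one shape:

    import numpy as np

    def calibrate(light, master_dark, master_flat, master_bias):
        """Apply dark subtraction and flat-field correction to a light frame."""
        light_sub = light - master_dark    # removes dark current and bias
        flat = master_flat - master_bias   # bias-correct the flat
        flat_norm = flat / flat.mean()     # normalise so division keeps flux
        return light_sub / flat_norm       # correct vignetting and dust shadows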

For more deep-sky image processing, read our guide to deep-sky photometry and spectroscopy.

Pete Lawrence is an experienced astrophotographer and a co-host of The Sky at Night. This article originally appeared in the February 2019 issue of BBC Sky at Night Magazine.


3 Answers

An 8" telescope was state of the art in 1686 (even though a modern amateur instrument is certainly much better than Huygens' lenses were) and a normal DSLR sensor has only slightly better properties than plates that were used in astronomy up to the 1980s, so there isn't much gain there, in terms of instrument performance, over fairly old equipment. it's just MUCH more convenient to use. You can, however,re-live old discoveries with it, if you like and you could also discover new things with relative modest instruments. That, however, may quickly become a full-time job, irrespective of what instrument you use.

If you want to understand what Copernicus and Kepler did, it's probably best to read their books, first. They may give you an idea of just how much intellectual and observational effort it took to amass the knowledge that we are teaching in high school, today.

The real problem in observational astronomy is that a lot of what astronomers do is not just linked to the performance of their instruments, but to the total amount of time that is needed to perform high-quality research. If you look at some of the finest amateur astronomy imagery, you will find that the "amateur" has spent months or years waiting for near-optimum conditions (alternatively you can move to Hawaii and camp out on the volcano, just like the professionals), took dozens if not hundreds of frames, and then spent days stacking and processing them with the same tools that the professionals are using.

How about comet hunting? Does it sound like fun to be out there every night that seeing permits, to get the necessary observation time for a one-in-a-hundred (or is it a thousand?) chance of a first discovery? To me it sounds like the "amateur" label is not a good one for many of these folks. Plenty of them are just as driven as professionals; they merely never got a job title called "astronomer", but I am sure they would do great work in a professional environment just as well.

Yes, you can do all of that with an 8" telescope, or with the 12" with a cooled astronomy CCD that will be your next purchase (who needs a new car, right?). But would you? Would you spend a couple of years measuring the positions of Mars and Venus at least once a week to prove that Kepler was right after all? We know that Kepler was right. We also know how long it took him to get the calculations done without a computer. Would you use your computer to calculate the orbital data, or would you do it by hand, to be historically "more accurate"?

As for the distance to planets: that, I am afraid, is not going to happen in your lifetime again. The next Venus transit will be in 2117; you just missed the opportunity of two lifetimes back in 2012.

And then there is the aspect that many professional astronomers actually never lay a hand on an instrument themselves. They are part of collaborations of professional engineers and scientists specialized in the art of instrument building, and/or they rely on the operators of the large telescopes and the satellites/probes they get access to once or twice a year (or once in a lifetime, like the folks who just flew by Pluto!) to make the measurements for them. Then they sit in their offices crunching the data for a year or two, eventually publishing their papers. The most ubiquitous substance you will find in any science office is paper. Some folks have stacks of scientific publications of colleagues all around them, and all they do all day long is read them. That, like it or not, is what many of the world's best scientists are doing: they collect clues in other people's work. A lot of that (and the raw data) is now on the internet. Nothing stops you from looking at it until you discover something that nobody else has seen so far. It's definitely out there. And when you do, all you have to do is write a science paper, submit it, get it peer reviewed, and maybe you will even be published. The rest is rinse and repeat.

Or you can do what I do. I grab my $12 binoculars, I go out on the porch and I look at the Pleiades, the Orion Nebula or Andromeda, the Moon, Venus and Jupiter. Occasionally I lug my $20 kiddy telescope out there to see Jupiter's moons or Saturn's rings (barely). That is fun, in my books. Driving fifty miles just to get out of the light pollution that surrounds me? That wouldn't be.


A decision on "Remote" data

After much work and deliberation, the Mod team has decided to amend the rule about data from paid telescope services such as Itelescope and Deep Sky West. We figure that while this is technically your data as you pay for the service, it is not quite the same as data collected by someone who owns their own equipment and collects it themselves in the field.

That is why we have created a new amendment to Rule II, which has to do with how you can only post images that you yourself have captured and processed. We are now going to classify images taken via paid services to be considered "Remote" data, which we have created a new flair for.

As part of this minor rule change, all posts that are of what we consider "Remote" data require that you mention these two things along with the normal equipment, acquisition, and processing in the details section of your post:

It is paid data taken remotely

Which service you used (Itelescope, DSW, etc)

Additionally, we ask that you mention it in the post's title as well.

We know this doesn't affect the majority of users here, but we believe it is an important distinction to make.

edit: You're all taking this way too seriously. We're just trying to make it so posts like this can show up better in search, and as a way of classifying these images as their own "type" of image. Calling it "Remote" is just an over-simplified way of saying "data collected in a way that doesn't conform to the idea of the average AP'er", aka someone who (to loosely quote /u/Windston) has to fix any problems that arise in the equipment they are using. Getting "data" like this is not the same as someone who has gone out and is imaging in the middle of nowhere, in their backyard, in their own private observatory, or the like. Another way to look at it is this: is the bulk of the equipment I am using my own (not including rented lenses or things that you personally are using)? If it isn't, then while the data is technically yours since you paid for it, the means of collecting it are not yours.

The simple distinction we are trying to make is this: did you pay a company to get this data, or did you build the system yourself? That's it.

The obvious loophole here is that people can simply buy those companies. (edit: /s)

Well, if I were to buy DSW, then all of the equipment would be mine, wouldn't it? In that case it would no longer be what /u/twoghouls called a "robotic scope", since it would be me running my own equipment.

You guys give way too many fucks about what other people are doing with their posts.

It's not meant to solve a problem. We just thought it would be cool to be able to sort these types of posts out, or to be able to see only these types of posts. Asking that people say "Deep Sky West", "DSW", "Itelescope" or similar is so they show up in search results in the sidebar. When you look up the above, you can only ever find a handful of posts due to reddit's garbage search ability.

Plus, it has the benefit of showing newcomers and regulars that there are ways of doing AP without having to go out and do it yourself.

We're also not trying to say that getting data this way isn't AP.

So few words have never summed up so well exactly the point I was trying to get across.

I think a lot of people find a problem with remote data because it takes a good portion of the skill out (barring processing), and this sub is really meant as a resource (see all our other strict posting rules for example.)

Remote data is pretty, but it holds little value, other than the processing details, to people trying to learn how to do this stuff themselves. It's important for someone who's new to this to be able to look at a post and know if it was done by a human or by a computer using fancy professional gear.

Wanted to throw my opinion out there. I missed this post when it was hot, but I still think more users should share their input.

It seems unnecessary. I wouldn't say it's good or bad.

I'm also a bit confused about the intention of the rule. To make "remote" images show up better in a search? That doesn't provide a worthwhile benefit to any user here. It literally only helps people find /u/idontlikecock's images more easily, since he is pretty much the only person commonly posting well-received remotely acquired/locally processed images.

As for more unforeseen or hidden results of this change: on one hand, I think it helps temper the expectations of new or curious astrophotographers who aren't aware of how DSW and other sites work, and who may otherwise assume that those datasets are 'typical'. It may educate them on that option and get them straight into processing, which would fit a lot of peoples' lifestyles better anyway.

On the other hand, it feels to me like a 'scarlet letter' for people that don't acquire their own data, particularly /u/idontlikecock. It seems a bit weird that this rule is going into place, given that most users that don't take their own data usually explicitly say so in the comments and acquisition details. Even if they don't, it would be pretty obvious and they could be called out.

I'd vote to remove the rule, personally.

Can we also get some transparency on items like this that the mod team are tackling?

I would support creating a separate sub for roboscopes/purchased data. Failing that, creating a sub where those of us who run our own rigs can post and discuss processing/acquisition.

r/spaceonly is exactly what you seem to be after :)

No remote imaging, and it's always super in depth in terms of processing and acquisition

This sub is meant for all types of AP. We're just trying to make a way to sort these types of posts into their own category.

Since I seem to be the one most impacted by this rule, as I am the most active poster from a remote imaging service, I thought I should chime in here with a few questions I had about the rule. For the record, though: I have no issue including DSW in my title. Even before this rule, I always mentioned when an image I posted was captured at Deep Sky West. It's always right in the "acquisition" section of my posts, where I talk about how the image was acquired. Never have I tried to hide it, since I see no issue with it. I felt that was important to make clear. I have no issue with tagging my posts as DSW.

My main question with this flair is when it will be implemented, and when it will not be.

My response to this video (it was geared more towards the question of at what point the images stop being yours; I edited it somewhat to try to spark a discussion here and make it more relevant):

Basically I think the first comment summed it up perfectly. What does ownership have to do with anything? Especially since this is exactly how it is done in the scientific community. The majority of this debate generally only becomes heated when someone with "rented" equipment gets an award for their image.

So, starting with the most basic: someone who makes images for Hubble, or uses something like Google's network of observatories, or even Adam Block. Are they not astrophotographers? I would say they are, and I would think most people would agree that anyone who uses Hubble or a huge observatory like one in Chile to take images is about as great as astrophotographers come. They are professionals at this. Should their images be tagged?

What about the next level down? Someone such as Mark Hanson, who is one of my favorite astrophotographers. He captures his data through an automated process; he doesn't have access to a huge observatory compared to someone like Adam, but he owns his equipment (scratch this, he may own some, but not all). Would his images not need to be tagged, solely due to his ownership of the equipment?

So let's move on to the next example: how about someone who has a private observatory in their backyard that has been completely automated? They own the equipment, and they sit in their home every night while it images away in the backyard. Are they not astrophotographers? I feel it would be silly not to call them one, and again, I feel as though most people would agree that this person is an astrophotographer, and that this is what a nice chunk of the best of the best astrophotographers out there do. Should their images be tagged?

Moving on, how about someone like Tolga? He uses DSW with a 14” CDK telescope. However, he owns the telescope and all of the equipment (I am fairly certain he does; even if he doesn't, imagine he does for this example). Is he no longer an astrophotographer? What changes between him and the person above? Only one thing: one person can walk into their backyard and watch their telescope take images, and the other cannot physically touch his immediately. If you think the person above is an astrophotographer but Tolga is not, I feel like the hoops you are jumping through to discredit someone solely due to the location of their telescope are ridiculous. Would his images need to be tagged as remote since his observatory is further away?

Lastly, there have been posts on this sub about collaborative imaging. A specific example would be when Ron posted an image that he edited from data captured by a colleague of his in the Southern Hemisphere. Would his image need to be tagged as remote? Or would he not be allowed to post this at all? This one I think is especially confusing, since it seems so common in the astrophotography community among the larger names.

Hopefully these examples help you make a more refined version of this tagging rule so you can use more examples other than just DSW and iT.

Warning: this may be a bit of a ramble; these are just my initial thoughts and not meant to offend anyone. I am very interested in the discussion, and I don’t think the astro-imaging channel video hit on everything that was interesting about these questions.

Especially since this is exactly how it is done in the scientific community.

I actually don't think the comparison to how things work in the scientific community is apt. The work of the average amateur astrophotographer and the professional astronomer is just too different. While we both can call what we collect “data”, the product in the case of the amateur is the pretty picture created from that data, while the product for the professional is publication in a peer-reviewed journal. The data (even if captured in the visible spectrum) is not typically stretched and presented in a way that a layperson would appreciate. The process in the case of the professional is not about the gear/data/acquisition/processing, but about the research, an intellectual pursuit to create new knowledge or discover something. There are very few amateur astrophotographers whose intended goal is creating new scientific knowledge or discovery. Some amateurs are lucky, some do try a bit more, and while I am sure we would all wish to be the next Nicolas Outters or Dave Jurasevich, if that were the “point” we wouldn’t be imaging M31 and M45 over and over.

What does ownership have to do with anything?

I have heard the argument that once you figure out the gear/acquisition part of the hobby, it becomes rote and there is nothing new to try, because there is a “right” way. I totally disagree with this sentiment. There are always novel approaches to gear/acquisition, because this hobby attracts people at all price points. Lots of people in my club are all about open-sourcing everything, building their own electronics, improving software, etc. But even if you are not as much of a DIY-er, there are always new cameras/mounts/optics coming out that can often be used in new “novel” ways (for example, we can now do lucky imaging on planetary nebulae!). I guess what it comes down to is that I really enjoy this part of the hobby, and I think there can be “creativity” on the gear/acquisition side that is worth discussing. I am not saying you have done this, but I have seen people online defending the DSW approach who claim that ALL the creative aspects of astrophotography lie on the processing side. I think that is wrong.

What about the next level down. Someone such as Mark Hanson who is one of my favorite astrophotographers. He captured his data through an automated process, even if he doesn't have access to a huge observatory compared to someone like Adam, however, he owns his equipment (scratch this, he may own some, but not all). Would his images not need to be tagged, solely due to ownership of the equipment?

Yes, I think so. If you look at the categories for the Insight Astronomy Photographer of the Year award, they recognize that remote imaging should be its own category. They call it “robotic scope”. Mark Hanson has won in that category despite owning much of his gear. I don’t see it as a diss, or as making him any less of an astrophotographer. It’s just a way of calling attention to the advantages of that type of setup.

Lastly, there have been posts on this sub about collaborative imaging. A specific example would be when Ron posted an image he that he edited from data captured by a colleague of his from the Southern Hemisphere. Would his image need to tagged as remote?

No, in this case, my understanding is one should use the “Processing” flair. That is a different case from DSW, since the data did not belong to Ron but was borrowed with a colleague's permission. Same thing when using Hubble, Deep Sky Survey, or other public data pools to show off your processing. Again, that doesn’t make one any less of an astrophotographer; it is just about transparency within this little community.

The majority of this debate generally only becomes heated when someone with "rented" equipment gets an award for their image.

I am not gonna touch that, but as an aside on “awards”: I don’t like the mod-chosen “sticky” thread, because I think it is a weird, arbitrary award system that has not met its goals as outlined 9 months ago. I thought the point was to reward high-effort posts at all levels. Lately, I only see pics taken with high-end gear stickied, and often from the same half-dozen folks, while high-effort, high-quality stuff from others continues to be ignored. Here are a few examples from the past month that I think should have been stickied: N America Mosaic by /u/RFTinkerer, Western Veil Nebula by /u/chickenmeister, Lagoon and Trifid by /u/brent1123.