App Tip #1: Dark Sky Meter

From time to time, I find apps that are useful for astrophotography. Some are well known, but others maybe not so much. In the “App Tip” series, I will highlight some of the apps I find useful, in the hopes of helping other people capture better images. I use an iPhone, so I will be highlighting iPhone apps, even when they are not available on Android.

Some apps (such as Clear Outside) will give you an “SQM” value for your sky conditions. The scale seems rather odd at first (roughly 16.00 to 22.00, with higher numbers meaning darker skies).
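The scale is logarithmic, like stellar magnitudes: readings are in magnitudes per square arcsecond, so every 1.0 increase means the sky is about 2.5 times darker. Here is a minimal Python sketch of the arithmetic (the 21.6 “dark site” value is just an assumed example):

    def brightness_ratio(sqm_a: float, sqm_b: float) -> float:
        """How many times brighter sky A is than sky B.

        SQM readings are magnitudes per square arcsecond, so each
        1.0 of difference is a factor of 10**0.4 (about 2.512).
        """
        return 10 ** (0.4 * (sqm_b - sqm_a))

    # e.g. my suburban 19.64 sky vs. an assumed 21.6 dark site:
    print(f"{brightness_ratio(19.64, 21.6):.1f}x brighter")  # ~6.1x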

I’ve always wondered how accurate these values are, since they come from map data. The best way to find out is to buy an SQM meter and measure for yourself.

But if you don’t have one, there is an alternative called “Dark Sky Meter”. It’s pretty easy to use. Press “1” to take a dark (after covering your camera lens), then hold the phone up to the night sky and press “2” to analyze. Select your sky conditions (from clear all the way up to foggy) and you will get a readout.

You can also choose to share with “scientists across the world” – which makes the $1.99 price a little frustrating since I am doing all the work here, but I did it anyway.

Here is the result for my skies:

It even has a nice red UI so as not to upset your astro-friends.

My sky had a few clouds and measured 19.64 (“Bright Suburban Sky”) – which it is.

“Clear Outside” reports 19.22 (Bortle 6) with 42% cloud cover, so the two seem reasonably close – though that is only a single data point.

This app is probably more useful if you image from a variety of locations. Honestly, I am not convinced it is worth the money, but it was interesting to check out.

It does have a map and a clouds feature (which lists seeing and transparency – something Clear Outside does not seem to report for me), as well as temperature and moon phase.

The help section says that the data is used to keep Dark Sky Meter up to date, so that seems like an important thing to contribute to!

Nikon D5300 – Lossy NEF Compression (AKA Nikon Concentric ring problem)

So, one of the huge benefits of shooting RAW on a DSLR is the increase in image quality over JPEG – you are recording exactly what the sensor sees, as opposed to a lossy, merely “visually lossless” format such as JPEG.

Of course, JPEG was mostly designed around regular daytime images of the world around us, not extremely dark images like our astrophotography sub-frames. And as we stretch and manipulate our images, the JPEG data can easily break down, exposing compression artifacts and flaws.
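Some rough arithmetic shows why the shadows suffer most. A 14-bit RAW file has 2^14 = 16,384 levels per channel versus 256 in an 8-bit JPEG, so the dim end of the histogram – where our signal lives – has far more room to stretch. This sketch ignores JPEG’s gamma encoding, which shifts but doesn’t erase the gap, so treat it as illustrative only:

    # Rough, illustrative arithmetic only (ignores JPEG gamma encoding)
    raw_levels = 2 ** 14        # 16384 levels in a 14-bit RAW
    jpeg_levels = 2 ** 8        # 256 levels in an 8-bit JPEG

    shadow_fraction = 0.05      # bottom 5% of the brightness range
    print(int(raw_levels * shadow_fraction))   # 819 raw levels to work with
    print(int(jpeg_levels * shadow_fraction))  # 12 JPEG levels to work with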

So enter RAW, the perfect solution, right? Well, maybe. There are a number of cameras that perform various manipulations on RAW files – far too many to go into all of them here (and I am far from an expert).

Some can be turned off (e.g. Long-Exposure Noise Reduction, where the camera essentially takes a dark for every frame and subtracts it before giving you the RAW data).

Some can only be turned off by modding the firmware (the Nikon D5100, for example, can be modified with custom firmware that enables a “pedestal” to ensure that darks and the dark areas of subs do not get clipped to zero – cameras like the Nikon D5300 apply a pedestal automatically).

However, there are some cameras (such as the Nikon D5300) whose firmware doesn’t (and probably never will) have a modded version. Some of these cameras do very odd things to RAW data under certain circumstances, and sometimes the best we can do is to try to avoid those circumstances or develop software to correct the damage.

The Nikon D5300 (and a number of other models – see below) actually performs “visually lossless” compression on both 12 and 14-bit RAW frames, and there is currently no way to turn it off. This is far, far better than JPEG compression, but it can still be an issue in certain cases (one theorized trigger is very dark exposures – exactly like our astrophotography sub-frames).

Various theories have been advanced:

  • Expose sub-frames more and flat frames less
  • The moon is part of the cause
  • It doesn’t happen as much at a dark site
  • It happens less with less vignetting (so using a refractor with a wider image circle may help)

None of these is foolproof, and so sometimes mitigation is the only remedy.

A very enterprising Cloudy Nights member has led the charge, doing extensive analysis and discovering that the compression results in histogram “stairstepping” in the red and blue channels, while the green channel is intact (losslessly compressed). This led him to implement software to correct the issue.
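If you want to see the stairstepping in your own data, you can histogram each CFA channel of a NEF and count empty bins inside the populated range. Here is a minimal Python sketch using the rawpy library (the filename is a placeholder, and the empty-bin count is only a crude indicator):

    import numpy as np
    import rawpy

    # "light_0001.nef" is a placeholder for one of your sub-frames
    with rawpy.imread("light_0001.nef") as raw:
        data = raw.raw_image
        cfa = raw.raw_colors  # per-pixel CFA index: 0=R, 1=G, 2=B, 3=G2

        for name, idx in (("Red", 0), ("Green", 1), ("Blue", 2)):
            channel = data[cfa == idx]
            bins = np.arange(channel.min(), channel.max() + 2)
            hist, _ = np.histogram(channel, bins=bins)
            # Empty bins between populated ones suggest quantized,
            # "stairstepped" data; a lossless channel has few gaps
            print(name, np.count_nonzero(hist == 0), "empty bins of", hist.size)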

The correction is not perfect, but it offers a vast improvement over the default compression. The software works on DNG files, so you must download and install the (free) Adobe DNG Converter.

The process is then very simple:

  1. Use Adobe converter to convert flats & light frames to DNG (I also did darks but I do not think this is necessary)
  2. Run the Windows program (see below) to process the DNGs and remove the rings
  3. Calibrate and integrate the processed DNG files

Here is the link to Cloudy Nights with extensive research and details

Here is the link on Cloudy Nights to the member (Mark) who did the bulk of the work to correct the images and develop the correction software

Adobe DNG Converter: https://helpx.adobe.com/photoshop/using/adobe-dng-converter.html

The correction software: (V3 is the current): https://drive.google.com/file/d/14IHauD0sMecsjrOCTZ8TBatLUO_Xs17a

List of cameras affected: https://www.cloudynights.com/topic/746131-nikon-coloured-concentric-rings/?p=10867894

Here is an example of a before and after on one of my images – quite an astounding improvement.

[IMAGE] – NGC1499 (California Nebula)

Primary Name: NGC1499
Alternate names: California Nebula
Object Type: Emission Nebula
Constellation: Perseus
Distance: 1000 Light years
Coordinates: RA: 04h 03m 18.00s , Dec: +36° 25′ 18.0″
Links: WikiSky, Wikipedia, NED, Telescopious, AstroBin 

Capture Details:
Dates: Nov. 7, 2020
Frames: 49x180"
Integration: 2.5 hours
Avg. Moon age: 21.09 days
Avg. Moon phase: 61.12%

Equipment:
Imaging telescopes or lenses: Nikon 300mm f4.5 AI
Imaging cameras: Nikon D5300
Mounts: iOptron Skyguider Pro
Guiding telescopes or lenses: William Optics UniGuide 32mm
Guiding cameras: ZWO ASI 120mm mini
Software: APT (Astro Photography Tool), PixInsight

Session Details:
This was the night I thought I would get my auto-guiding working – but sadly, I only managed it after I had imaged this target. This is a pretty dim object, especially with an unmodified camera, so I was not expecting much. It was quite a surprise when I plate-solved my way to the target and saw a dim shape in the test shot.

Individual subs, however, didn't reveal much, and I was not really expecting to get much of an image. This is definitely not my best image and there is a lot to improve, but for one of my first images (unguided, too) with an unmodified camera, I will take it.
 
AstroBin: https://www.astrobin.com/hn3vw4/B/?nc=user

Quick Tip #3: [Telescope Simulator]

This follows on from the previous Quick Tip on N.I.N.A’s manual rotator.

What if you want to make use of plate solving and the manual rotator, but you don’t have a GOTO mount? What’s to be done for us owners of SkyGuider Pros and Star Adventurers?

Well, obviously with no GOTO you will have to manually move the mount to the correct RA/Dec coordinates by taking a series of images, plate-solving, and adjusting.

However, once you are close enough, it would be great to make use of the manual rotator. N.I.N.A, though, will ONLY use the rotator if you have “Center Target” enabled in your sequence.

“Center Target” can (currently) only be enabled when “Slew to target” is enabled. Slew to target needs a connected scope, and the trackers don’t support ASCOM and the like. This is frustrating, as it makes multi-night projects even harder than they already are for us star tracker owners.

There is a solution though – in fact, a few takes on the same solution. There exist “telescope simulators”. These allow N.I.N.A to connect to a virtual telescope and issue slew commands. The virtual scope then reports back to N.I.N.A that it has slewed, enabling you to use “Center Target” and therefore the rotator.

This is pretty easy to set up. Simply go to the Equipment | Telescope section, pick a virtual telescope from the dropdown, then “Connect” it as you would a regular mount/scope.

There are three sources of free virtual scopes I am aware of:

  • Telescope Simulator for .NET. This comes as part of the ASCOM download
  • “Telescope Simulator” – this shows up in my N.I.N.A, I believe also as part of the ASCOM download (although it is a different simulator)
  • Green Swamp Server

I think all three of these will do the job. The Green Swamp server has (for me) the nicest dashboard and will show a 3D view of where your mount is pointed, but I don’t really use that.

You can see here that I am connected to the Green Swamp scope.

Once connected, you can go to the Framing wizard, load up a target and – if you have the dashboard for your simulator on-screen – hit “Slew” (in N.I.N.A) and watch the scope pretend to slew to the target.

Once this is working, you can enable “Center target” in your Sequence and use the manual rotator. Of course, you should MANUALLY slew to your target first, since the scope won’t actually move.

[IMAGE] – M45 (The Pleiades)

Primary Name: M45
Alternate names: Messier 45,CR42, Mel22, The Pleiades, Seven Sisters, Subaru, Mutsuraboshi (Japan)
Type: Reflection Nebula
Constellation: Taurus
Distance: 444 Light years
Coordinates: RA: 03h 47m 24s, Dec: +24° 07′ 00″
Links: WikiSky, Wikipedia, NED, Telescopious, AstroBin 

Capture Details:
Dates: Nov. 8, 2020
Frames: 22x180"
Integration: 1.1 hours
Avg. Moon age: 22.09 days
Avg. Moon phase: 50.67%

Equipment:
Imaging telescopes or lenses: Nikon 300mm f4.5 AI
Imaging cameras: Nikon D5300
Mounts: iOptron Skyguider Pro
Guiding telescopes or lenses: William Optics UniGuide 32mm
Guiding cameras: ZWO ASI 120mm mini

Session Details:
This was my first (somewhat successful) night of guiding. My initial target was the California Nebula, but I was unable to get guiding to work (this was because I did not know to connect the virtual mount in PHD2 for on-camera guiding).

So I ended up shooting the California Nebula unguided. Frustrated, I then played with PHD2 and finally got it to work. I just happened to look up in the sky and saw ... M45. I was amazed that I could even recognize it by eye in my light-polluted driveway (Bortle 6), but I didn't waste any time – I lined up and started capturing.

It was already around 1 a.m., so I was only able to get around an hour of data. Given that, I am pretty pleased with the result.
 
AstroBin: https://www.astrobin.com/v5c7t8/C/?nc=user

Stacking data from multiple nights

This is a question that I see all the time on forums, so I thought it would be useful to share some thoughts. I am not an expert on this, so some of this information is based on (lots of) reading of people who know far more than I do. I will update it as I continue to learn over time.

There are a few reasons (at least) that you might have multiple sets of data to integrate:

  • A multi-night project that you continue to add data to
  • You are shooting a target like M42 or M31 which has a high range of brightness (high dynamic range)
  • You are shooting with different filters

In the HDR case, you would probably have subs of different lengths (from as short as a second upwards toward minutes). In the longer subs, much of the bright data might be clipped, so you would use HDR techniques to blend everything into a single image containing the best of the data – an image that is not possible with a single set of subs.
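To make the idea concrete, here is a minimal Python sketch of the simplest possible HDR blend – replacing pixels that are clipped in the long stack with rescaled pixels from a short stack. Real tools (PixInsight's HDRComposition, for example) do this far more carefully; the function and threshold here are just illustrative assumptions:

    import numpy as np

    def hdr_blend(long_stack, short_stack, exposure_ratio, clip=0.95):
        """Swap clipped pixels in the long stack for rescaled short data.

        long_stack, short_stack: aligned linear float images in [0, 1].
        exposure_ratio: long exposure / short exposure (e.g. 180 / 1).
        """
        scaled_short = short_stack * exposure_ratio  # match the long scale
        clipped = long_stack >= clip                 # e.g. a blown-out core
        return np.where(clipped, scaled_short, long_stack)

    # e.g. blended = hdr_blend(stack_180s, stack_1s, exposure_ratio=180.0)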

In the case of different filters, you would integrate each filter into its own master light (i.e. all the Red, or all the Ha) and could use the techniques below to do this.

There are two main ways:

  1. Integrate the subs for each session into a per-session master, then integrate the masters
  2. Calibrate all subs (per filter), then integrate into a single master (per filter)

My understanding is that #2 usually gives the best signal, and it is the one I am going to describe.

The other benefits of #2:

  1. You can weight all your images together and discard bad subs globally. This is a benefit if you had some less-than-great data on one of your first nights and want to remove it and replace it with new subs.
  2. Since you will likely be integrating a larger number of subs, you might be able to use better pixel rejection algorithms and get a cleaner stack – in DSS, it will recommend the best algorithms to use and one of the things that influences this is the number of light frames you are integrating.

Method #2 does require you to either keep calibrated subs around for long periods, or keep the original subs and all calibration frames. Not a problem for me, as I am a hoarder, but that might not be the case for everyone.

First, I will cover the basic process, then give some tips for Deep Sky Stacker on how to do this.

The basic technique is as follows, for each session:

Acquisition:

  • I recommend creating a completely separate directory per session (nest them below a top-level directory if you want to keep them all together) – see the example layout after this list
  • Shoot your light frames as normal
  • Shoot your calibration frames as per your normal workflow (Darks, Flats, Flat Darks)
    • Some people like to re-use flats or darks over multiple sessions. I re-take them but both ways can work.
  • If you shoot calibration frames each session, add them to a “Calibration” directory in the same place as the lights. If you only shot one set, it might make sense to put them at the top level
  • If you don’t have your bias frames, shoot those and store them somewhere
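For example, a two-session project with per-session calibration and a shared set of bias frames might look like this (the names are just my own convention):

    M31/
      Bias/                  <- shared across sessions
      Session1/
        Lights/
        Calibration/         <- Darks, Flats, Flat Darks for night 1
      Session2/
        Lights/
        Calibration/         <- Darks, Flats, Flat Darks for night 2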

Processing:

In DSS, this is now quite simple, but the user interface is not immediately obvious at first.

Here is where your files would usually show up (in the Main Group) – Lights, Bias, Darks, Flats etc.

What many people (myself included) miss is the “Main Group” tab – what could that mean? Since most of us start off very simply, we put all our files in there and move on. This is a fine way to integrate one set of frames.

But now look what happens if I add an image. I am going to add a single bias frame to make this easy to see (and there is a reason I picked a bias).

Here is how the UI looks now:

Look inside the red box and we see “Group 1” – well, that looks interesting. We can also see our single bias frame in the Main Group.

Let's select Group 1 and see what happens.

With Group 1 selected, we see we now have an empty file list that can have its own lights, darks and bias frames. Easy, right?

Well, there is a gotcha (which I did not know about the first time I did this). It’s documented but the UI doesn’t make this clear.

When you have multiple tabs, you do indeed put each session's images (both lights and calibration images) in its own tab.

However, the Main Group is special! This is what I missed the first time: any frames in the Main Group will be SHARED across all the other tabs. What use is that, you might think? I think it is definitely confusing, and the UI could warn you or give you guidance.

The primary reason is so that you don't have to add shared calibration files to multiple tabs. Now, what kind of calibration files might be shared?

For me, it's bias frames only (if I am using them). But if you have a darks library (very common for people with cooled astro-cams), then it can be darks as well. And if you are comfortable not taking flats every night, it might be flats too. Just bear a few things in mind:

  • Bias can change over time, so if your data spans multiple sets of Bias frames, you should add them to every tab (EXCEPT Main Group!)
  • If your subs are different exposures, then your darks will also likely be different exposures. To be safe, you should add the darks into each group.
  • The same goes for flats if they are different exposures (common if you take sky flats for instance)

So the process is:

  • Add any shared calibration frames to Main Group. As you do, the Group 1 tab will appear
  • Add your session-specific calibration frames to Group 1, as well as your lights. The Group 2 tab will appear as soon as you have added your first file to Group 1
  • Repeat until all your subs are added
  • You will need to stack the data separately for each filter! As far as I know, DSS doesn't know about filters

Now you can integrate as usual. The one frustration here is that I don't know of a way to see details such as image score for all subs in one place, so you need to click through each tab.

Once you have added everything and integrated, you will get your master light. Do that once per filter (if appropriate) and then you can process as normal.

There is one caveat with HDR mode (only a small number of DSO targets usually need this). If you do HDR, there are two ways to do it:

  • Stack each exposure separately (i.e. make a 1s, 10s and 180s stack) – these could still have multiple nights of data
  • Use DSS’s HDR mode

The first way (multiple master lights) is best if you will use another tool such as Photoshop or PixInsight to HDR combine the images.

DSS can also do this for you (although you won't get any control over how it does it, and I don't know how good a job it does). To do this, click “Stacking parameters” when you go to register and stack your images.

Then select the “Lights” tab.

You can then select the “Entropy Weighted Average” option. I like to make multiple masters in this case and combine them elsewhere, but this is an option if you want a very simple process.

DSS can also output the calibrated (or calibrated and star-aligned) lights (i.e. one file for each sub). However, it's not clear whether DSS can stack already-calibrated images together with new, uncalibrated ones.

There is also an important caveat here (which DSS mostly hides, but other tools may not). If you stack pre-calibrated AND REGISTERED frames from multiple nights, you MUST make sure they were all aligned to the same reference image, or they won't align with each other. For DSS, I recommend just not using the intermediate files and keeping it simple. For other tools, there are ways to address this.
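For those other tools, the fix is to register everything against one reference. Here is a minimal Python sketch using the astroalign library (the paths and filename patterns are placeholders for your own files):

    import glob

    import astroalign as aa
    from astropy.io import fits

    # ONE reference frame for all nights, so every sub lands on the same grid
    reference = fits.getdata("session1/light_0001_cal.fits")  # placeholder path

    for path in sorted(glob.glob("session*/light_*_cal.fits")):
        source = fits.getdata(path)
        registered, footprint = aa.register(source, reference)
        fits.writeto(path.replace(".fits", "_reg.fits"), registered)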

I suspect that if you simply add those files to a Group tab with no calibration files in the Main Group or that group, it might save you re-calibrating older data, but I am not sure and I'd probably not take the risk. If you do want these files, just select the Intermediate Files tab in Stacking Parameters and enable the options.

Hopefully this will help some of you as you begin your journey into astrophotography. As always, I am happy to correct errors in my posts or add new content, so please let me know if you spot anything or have suggestions.

This is definitely only an introduction, and I am sure I will learn more as I get better. I have moved to PixInsight for all of my stacking and processing, so at some point I will probably write up how this can work there, although there are many people far more expert than me in PixInsight.

Quick Tip #2: [N.I.N.A] – Manual Rotator

There are many times when you need to put the scope to a very specific place and orientation in the sky.

Some common cases are:

  • Multi-night projects
  • If you need to slew off to refocus on a bright star
  • Changing filters
  • For mosaics

Getting the camera back to the same location in the sky can pretty easily be done with plate solving but that doesn’t always handle the rotation.

Hardware rotators exist, but they are expensive, one more thing to go wrong, and they add weight, so they are not always the best solution (although if you are going for 100% automation, they become mandatory).

N.I.N.A has a very ingenious tool that makes this super easy even without a real rotator, but it’s not super obvious unless you dig into the manual.

You can add a “Manual Rotator” in the Equipment section of N.I.N.A (it is available in the dropdown).

You still do have to connect the rotator as though you had a real one.

You can use this from the Sequence window (including saved sequences) and from the Framing Wizard.

In the Sequence window, you can simply set the Rotation parameter.

Here, it is set to 45.2 degrees. When you start your sequence (you need to have “Slew to target” and “Center target” enabled), then after plate solving, N.I.N.A will pop up a window and prompt you to rotate the camera. It then takes an image, compares, and lets you repeat this until the angle is where it needs to be.

Here is the link to the official N.I.N.A documentation: https://nighttime-imaging.eu/docs/master/site/tabs/equipment/rotator/

The rotation can also be set in the Framing Wizard (very useful for multi-night projects when you load in a frame from a previous night to plate solve, or for mosaics)

Just set the rotation (or use what was in the file, or what was generated by loading a frame) and then use one of the options to add/replace this info in a Sequence.

You might wonder if you can use this without a GOTO mount, since you need to enable “Slew to target” (a question for those of us with older mounts or trackers).

The answer is yes! I will cover this in another tip, but the basics are that ASCOM and .NET come with a telescope simulator (there is also the Green Swamp server). This allows N.I.N.A to “slew” the scope successfully and then move on to the rotation – so even though your scope won't physically move, N.I.N.A believes it did.

Hopefully this helps you be more successful in your imaging.

[IMAGE] – Andromeda Galaxy (M31)

Primary Name: Andromeda Galaxy (M31)
Alternate names: NGC 224, Andromeda Nebula
Type: Barred spiral galaxy
Coordinates: RA: 00h 42m 44.3s, Dec: +41° 16′ 9″
Links: WikiSky, Wikipedia, NED, Telescopious, AstroBin 

Capture Details:
Date(s): Aug. 25, 2020
Light Frames: 59x60"
Integration: 1.0 hours
Avg. Moon age: 7.11 days
Avg. Moon phase: 47.12%

Equipment:
Imaging telescopes or lenses: Nikon 70-300mm f4.5-f5.6 @ 200mm
Imaging cameras: Nikon D5300
Mounts: iOptron Skyguider Pro
Software: PixInsight, APT

Session Details:
This was the third time capturing M31 and my best so far. The above capture details may be a bit approximate, as some of this is lost in the mists of time. This is a delicate target, and my early attempts at processing were ... feeble. I finally got a good result by following the excellent M31 tutorial at Light Vortex. This was also my first time processing with PixInsight.

This is definitely a target I need to return to with the refractor and a much better setup. The plan will be to take some longer and shorter exposures and blend them together for a final result.

This one is not going to win any AAPODs, but I am still proud of it.

AstroBin: https://www.astrobin.com/94054v/B/?nc=user


Quick Tip #1: [PixInsight] – Unlinked stretch

There are so many things I’ve learned or picked up along my journey into astrophotography. Many of these things I’ve learned the hard way but also a lot have been through the help of others.

In order to pay some of this forward, I am starting a series on some tips & tricks I have picked up along the way that I thought might help others.

The first topic is this:

If you've used PixInsight, you know what I mean. You go through the entire process of stacking (either manually or in WBPP), then anxiously open the master light expecting to see … well, something. And you get something like the above.


What the heck is going on? This effect comes from the Bayer pattern common to our DSLRs and OSC (One Shot Color) cameras. Since the most common Bayer pattern has two green pixels for every one blue and one red, our images come out … well, green.

If you (like me) are used to stacking in something like Deep Sky Stacker, you may get a more natural image as an end result.

Why is that? By default, DSS will balance the histogram so that green, red and blue contribute approximately equally to the final result.
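For the curious, the difference between a linked and an unlinked stretch is just whether the stretch parameters are computed once for the whole image or separately per channel. Here is a rough Python sketch of the idea (this is NOT PixInsight's exact STF math, which uses a midtones transfer function; it is a simplified median/MAD-based stand-in):

    import numpy as np

    def auto_stretch(data, bg=0.25, sigma=2.8):
        """Crude auto-stretch: clip shadows at median - sigma*MAD,
        then scale so the background median lands near bg."""
        med = np.median(data)
        mad = np.median(np.abs(data - med))
        shadows = max(med - sigma * mad, 0.0)
        stretched = (data - shadows) / max(1.0 - shadows, 1e-6)
        stretched *= bg / max(np.median(stretched), 1e-6)
        return np.clip(stretched, 0.0, 1.0)

    def stretch_linked(rgb):    # one set of parameters for all channels
        return auto_stretch(rgb)

    def stretch_unlinked(rgb):  # per-channel parameters; rgb shape (H, W, 3)
        return np.dstack([auto_stretch(rgb[..., c]) for c in range(3)])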

PixInsight can do this too, but (surprise) it's not the default. If you simply click the STF auto-stretch icon in the toolbar…

…then it's likely you will see something like the above (maybe green, maybe red, but probably NOT what you were hoping for).

The solution is simple: go to “Processes | All Processes | Screen Transfer Function” and you will see something like this:

Pressing the STF icon is like pressing the “nuclear” icon here.

If we want a more natural look, all we need to do is select the image, then click the “Link/Unlink” icon.

Then press “Nuclear” and … presto!

A much more natural image to work with. Hopefully this helps you, and if you have any questions, comments or corrections, please leave them in the comments and I will try to address them as best I can.

Clear Skies

My Very (Very) First AstroImage – M31

So my goal for 2020 when I first ordered the SkyGuider Pro was to create one image that I would feel OK with showing to someone not related to or living with me.

Well, the goal was actually “an image of Jupiter,” since that's the first thing I looked at with the Dobsonian scope – but then I did two minutes of research and decided that wasn't the direction for me.

Well, this is NOT that image. But it’s OK, I’m going to show it anyway 🙂

I couldn’t wait for the SkyGuider to arrive so I searched the internet and discovered that it was possible to make astro images with just a DSLR camera/lens and a tripod. Well, that I had!

So I set up in the driveway and started with – drum roll – M31, of course (having no idea it was effectively the target that 87 bazillion other beginners all sweat blood and tears over for their first image).

I spent some time with the “Rule of 500”, trying to calculate what exposure times I might get away with using my plastic 70-300mm Nikon zoom lens set to 200mm, and in the end just decided to wing it. I guessed at the ISO (800) for my ancient Nikon D300.
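For anyone curious, the Rule of 500 just divides 500 by your effective (crop-factor adjusted) focal length to estimate the longest untrailed exposure on a fixed tripod. A quick sketch of the arithmetic for my setup (the D300 is a 1.5x crop body):

    def rule_of_500(focal_length_mm: float, crop_factor: float = 1.0) -> float:
        """Rough longest exposure (in seconds) before stars visibly
        trail on a fixed tripod, per the (very approximate) Rule of 500."""
        return 500.0 / (focal_length_mm * crop_factor)

    print(rule_of_500(200, crop_factor=1.5))  # ~1.7 s (I shot 5 s subs anyway)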

Looking back at the images, amazingly I actually had APT hooked up (even though it cannot control BULB mode on the D300 – my subs were short enough that APT was just fine). I even took some dark (51) and flat (50) frames. I honestly have no clue what I shot the flats against.

By some miracle, I managed to frame M31 using Sky Safari Pro and waving it madly at the sky.

I simply cannot begin to describe the feeling when I caught M31 in the first frame. Even with a few seconds exposure, it was huge compared to the stars and there was no doubt in my mind that I had found it.

I shot 321 five-second subs, integrated them with DSS, and edited in GIMP.

So … here goes nothing


Actually, I think I do have an even earlier (and worse) version of M31 somewhere – maybe I will dig it out sometime as the Very Very Very first astro-image!