From 428ac612dce288247344f8dc53f9cd1c86724177 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Mon, 18 Mar 2019 20:05:29 +0000 Subject: [PATCH 01/14] add readme for Firefly verification notebooks --- firefly_features/README.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) create mode 100644 firefly_features/README.md diff --git a/firefly_features/README.md b/firefly_features/README.md new file mode 100644 index 0000000..9428ab7 --- /dev/null +++ b/firefly_features/README.md @@ -0,0 +1,12 @@ +## Verification noteooks for `lsst.display.firefly` + +The notebooks in this directory are used to test release candidates of the LSST Science Platform Notebook Aspect. + +**Last verified to run:** 2019-03-29 + +**Verified release or release candidate:** 17.0.1 + +1. [afwDisplay_Firefly_docs.ipynb](afwDisplay_Firefly_docs.ipynb) : Nearly all the content from the [module documentation](https://pipelines.lsst.io/modules/lsst.display.firefly), excluding LSST Source Detection Footprints. +2. [Firefly.ipynb](Firefly.ipynb) : Demonstrates main Firefly capabilities +3. [intro-with-globular.ipynb](intro-with-globular.ipynb) : a source detection and crude deblending notebook used in workshops in Summer 2018. +3. [HSC-Footprints.ipynb](HSC-Footprints.ipynb) : Visualizing LSST Source Detection Footprints from Hyper-Suprime Cam data processed with LSST Science Pipelines. 
\ No newline at end of file From 986c55002c8ce0a48f840aefcc75f3cc26f4765a Mon Sep 17 00:00:00 2001 From: David Shupe Date: Mon, 18 Mar 2019 20:05:53 +0000 Subject: [PATCH 02/14] update intro-with-globular notebook --- firefly_features/intro-with-globular.ipynb | 1156 ++++++++++++++++++++ 1 file changed, 1156 insertions(+) create mode 100644 firefly_features/intro-with-globular.ipynb diff --git a/firefly_features/intro-with-globular.ipynb b/firefly_features/intro-with-globular.ipynb new file mode 100644 index 0000000..20dea9f --- /dev/null +++ b/firefly_features/intro-with-globular.ipynb @@ -0,0 +1,1156 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Processing a (fake) globular cluster with the DM Science Pipelines" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Last verified to run:** 2019-03-29\n", + "\n", + "**Verified science platform release or release candidate:** 17.0.1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook provides an introduction to using some of the most important pieces of the DM Science Pipelines codebase:\n", + "\n", + " - Geometry classes from `lsst.geom`, such as points and boxes.\n", + " - Higher-level astronomical primitives from `lsst.afw`, such as the `Image`, `Exposure`, and `Psf` classes.\n", + " - Our core algorithmic `Task` classes, including those for source detection, deblending, and measurement.\n", + " \n", + "We'll be working with coadded images made from Subaru Hyper Suprime-Cam (HSC) data in the COSMOS field, augmented with a simulated globular cluster. 
We've taken a recent LSST reprocessing of the HSC-SSP UltraDeep COSMOS field (see [this page](https://confluence.lsstcorp.org/display/DM/S18+HSC+PDR1+reprocessing) for information on that reprocessing, and [this page](https://hsc-release.mtk.nao.ac.jp/doc/) for the data), and added simulated stars from a scaled [SDSS catalog](http://www.sdss.org/dr14/data_access/value-added-catalogs/?vac_id=photometry-of-crowded-fields-in-sdss-for-galactic-globular-and-open-clusters). The result is a very deep image (deeper than the 10-year LSST Deep-Wide-Fast survey, though not as deep as LSST Deep Drilling fields will be) with both a large number of galaxies and region full of stars. As we'll see, that'll present a challenge for the vanilla DM pipelines (at least today), and hence a good excuse to do some custom processing." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Imports and Custom Installs" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll start with some standard imports of both LSST and third-party packages.\n", + "\n", + "These are automatically installed in the LSST Science Platform Notebook Aspect. The `tqdm` package is a nice collection of progress bar widgets for both notebooks and the command-line." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "from tqdm import tqdm_notebook\n", + "from lsst.daf.persistence import Butler\n", + "from lsst.geom import Box2I, Box2D, Point2I, Point2D, Extent2I, Extent2D\n", + "from lsst.afw.image import Exposure, Image, PARENT" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Reading Data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll be retrieving data using the `Butler` tool, which manages where various datasets are stored on the filesystem (and can in principle manage datasets that aren't even stored as files, though all of these are).\n", + "\n", + "We start by creating a `Butler` instance, pointing it at a *Data Repository* (which here is just a root directory). If you're interesting in looked at the original HSC data without the simulated cluster, change the path below to just `/datasets/hsc/cosmos`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "butler = Butler(\"/project/jbosch/tutorials/lsst2018/data\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Datasets managed by a butler are identified by a dictionary *Data ID* (specifying things like the visit number or sky patch) and a string *DatasetType* (such as a particular image or catalog). Different DatasetTypes have different keys, while different instances of the same Dataset Type have different values. 
All of the datasets we use in this tutorial will correspond to the same patch of sky, so they'll have at least the keys in the dictionary in the next cell (they will also have `filter`, but with different values):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dataId = {\"tract\": 9813, \"patch\": \"4,4\"}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now use those to load a set of *griz* coadds, which we'll put directly in a dictionary. The result of each `Butler.get` call is in this case an `lsst.afw.image.Exposure` object, an image that actually contains three \"planes\" (the main image, a bit mask, and a variance image) as well as many other objects that describe the image, such as its PSF and WCS. Note that we (confusingly) use `Exposures` to hold coadd images as well as true single-exposure images.\n", + "\n", + "The DatasetType here is `deepCoadd_calexp` (a coadd on which we've already done some additional processing, such as subtracting the background and setting some mask values), and the extra `filter` argument gets appended to the Data ID." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "coadds = {b: butler.get(\"deepCoadd_calexp\", dataId, filter=\"HSC-{}\".format(b.upper())) for b in \"griz\"}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Making and displaying color composite images" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll start by just looking at the images, as 3-color composites. We'll use astropy to build those as a nice way to demonstrate how to get NumPy arrays from the `Exposure` objects in the `coadds` dict. 
(LSST also has code to make 3-color composites using the same algorithm, and in fact the Astropy implementation is based on ours, but now that it's in Astropy we'll probably retire ours.)\n", + "\n", + "We'll just use matplotlib to display the images themselves. We'll use Firefly for other image display tasks later, but while Firefly itself supports color-composites, we haven't finished connecting that functionality to the Python client we'll demonstrate here." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from astropy.visualization import make_lupton_rgb\n", + "from matplotlib import pyplot\n", + "%matplotlib inline" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll use the following function a few times to display color images. It's worth reading through the implementation carefully to see what's going on." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def showRGB(exps, bgr=\"gri\"):\n", + " \"\"\"Display an RGB color composite image with matplotlib.\n", + " \n", + " Parameters\n", + " ----------\n", + " exps : `dict`\n", + " Dictionary of `lsst.afw.image.Exposure` objects, keyed by filter name.\n", + " bgr : sequence\n", + " A 3-element sequence of filter names (i.e. 
keys of the exps dict) indicating what band\n", + " to use for each channel.\n", + " \"\"\"\n", + " # Extract the primary image component of each Exposure with the .image property, and use .array to get a NumPy array view.\n", + " rgb = make_lupton_rgb(image_r=exps[bgr[2]].image.array, # numpy array for the r channel\n", + " image_g=exps[bgr[1]].image.array, # numpy array for the g channel\n", + " image_b=exps[bgr[0]].image.array, # numpy array for the b channel\n", + " stretch=1, Q=10) # parameters used to stretch and scale the pixel values\n", + " pyplot.figure(figsize=(20, 15))\n", + " # Exposure.getBBox() returns a Box2I, a box with integer pixel coordinates that correspond to the centers of pixels.\n", + " # Matplotlib's `extent` argument expects to receive the coordinates of the edges of pixels, which is what\n", + " # this Box2D (a box with floating-point coordinates) represents.\n", + " integerPixelBBox = exps[bgr[0]].getBBox()\n", + " bbox = Box2D(integerPixelBBox)\n", + " pyplot.imshow(rgb, interpolation='nearest', origin='lower', extent=(bbox.getMinX(), bbox.getMaxX(), bbox.getMinY(), bbox.getMaxY()))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "showRGB(coadds, bgr=\"gri\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "showRGB(coadds, bgr=\"riz\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Those images are a full \"patch\", which is our usual unit of processing for coadds - it's about the same size as a single LSST sensor (exactly the same in pixels, smaller in terms of area because these use HSC's smaller pixel scale). That's a bit unweildy (just because waiting for processing to happen isn't fun in a tutorial setting), so we'll reload our dict with sub-images centered on the cluster. 
Note that we can load the sub-images directly with the `butler`, by appending `_sub` to the DatasetType and passing a `bbox` argument." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "coadds = {b: butler.get(\"deepCoadd_calexp_sub\", dataId, filter=\"HSC-{}\".format(b.upper()),\n", + " bbox=Box2I(corner=Point2I(18325, 17725), dimensions=Extent2I(400, 350)))\n", + " for b in \"griz\"}" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "showRGB(coadds, \"gri\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Basic Processing" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we'll try the regular LSST processing tasks, with a simpler configuration than we usually use to process coadds, just to avoid being distracted by complexity. This includes\n", + "\n", + " - Detection (`SourceDetectionTask`): given an `Exposure`, find above-threshold regions and peaks within them (`Footprints`), and create a *parent* source for each `Footprint`.\n", + " - Deblending (`SourceDeblendTask`): given an `Exposure` and a catalog of parent sources, create a *child* source for each peak in every `Footprint` that contains more than one peak. Each child source is given a `HeavyFootprint`, which contains both the pixel region that source covers and the fractional pixel values associated with that source.\n", + " - Measurment (`SingleFrameMeasurementTask`): given an `Exposure` and a catalog of sources, run a set of \"measurement plugins\" on each source, using deblended pixel values if it is a child.\n", + "\n", + "We'll start by importing these, along with the `SourceCatalog` class we'll use to hold the outputs." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from lsst.meas.algorithms import SourceDetectionTask\n", + "from lsst.meas.deblender import SourceDeblendTask\n", + "from lsst.meas.base import SingleFrameMeasurementTask\n", + "from lsst.afw.table import SourceCatalog" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We'll now construct all of these `Tasks` before actually running any of them. That's because `SourceDeblendTask` and `SingleFrameMeasurementTask` are constructed with a `Schema` object that records what fields they'll produce, and they modify that schema when they're constructed by adding columns to it. When we run the tasks later, they'll need to be given a catalog that includes all of those columns, but we can't add columns to a catalog that already exists.\n", + "\n", + "To recap, the sequence looks like this:\n", + "\n", + " 1. Make a (mostly) empty schema.\n", + " 2. Construct all of the `Task`s (in the order you plan to run them), which adds columns to the schema.\n", + " 3. Make a `SourceCatalog` object from the *complete* schema.\n", + " 4. Pass the same `SourceCatalog` object to each `Task` when you run it." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "schema = SourceCatalog.Table.makeMinimalSchema()\n", + "\n", + "detectionTask = SourceDetectionTask(schema=schema)\n", + "\n", + "deblendTask = SourceDeblendTask(schema=schema)\n", + "\n", + "# We'll customize the configuration of measurement to just run a few plugins.\n", + "# The default list of plugins is much longer (and hence slower).\n", + "measureConfig = SingleFrameMeasurementTask.ConfigClass()\n", + "measureConfig.plugins.names = [\"base_SdssCentroid\", \"base_PsfFlux\", \"base_SkyCoord\"]\n", + "# \"Slots\" are aliases that provide easy access to certain plugins.\n", + "# Because we're not running the plugin these slots refer to by default,\n", + "# we need to disable them in the configuration.\n", + "measureConfig.slots.apFlux = None\n", + "measureConfig.slots.gaussianFlux = None\n", + "measureConfig.slots.shape = None\n", + "measureConfig.slots.modelFlux = None\n", + "measureConfig.slots.calibFlux = None\n", + "measureTask = SingleFrameMeasurementTask(config=measureConfig, schema=schema)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "list(measureConfig.slots.keys())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The first step we'll run is detection, which actually returns a new `SourceCatalog` object rather than working on an existing one.\n", + "\n", + "Instead, it takes a `Table` object, which is sort of like a factory for records. We won't use it directly after this, and it isn't actually necessary to make a new `Table` every time you run `SourceDetectionTask` (but you can only create one after you're done adding columns so the schema).\n", + "\n", + "`Task`s that return anything do so via a `lsst.pipe.base.Struct` object, which is just a simple collection of named attributes. The only return values we're interested is `sources`. 
That's our new `SourceCatalog`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "table = SourceCatalog.Table.make(schema)\n", + "detectionResult = detectionTask.run(table, coadds['r'])\n", + "catalog = detectionResult.sources" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's take a quick look at what's in that catalog. First off, we can look at its schema:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "catalog.schema" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that this includes a lot of columns that were actually added by the deblend or measurement steps; those will all still be blank (`0` for integers or flags, `NaN` for floating-point columns).\n", + "\n", + "In fact, the only columns filled by `SourceDetectionTask` are the IDs. But it also attaches `Footprint` objects, which don't appear in the schema. You can retrieve the `Footprint` by calling `getFootprint()` on a row:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "footprint = catalog[0].getFootprint()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`Footprints` have two components:\n", + " - a `SpanSet`, which represents an irregular region on an image via a list of (y, x0, x1) `Spans`;\n", + " - a `PeakCatalog`, a slightly different kind of catalog whose rows represent peaks within that `Footprint`." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(footprint.getSpans())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(footprint.getPeaks())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It's worth noting that while the peaks *can* have both an integer-valued position and a floating-point position, they're the same right now; `SourceDetectionTask` currently just finds the pixels that are local minima and doesn't try to find their sub-pixel locations. That's left to the centroider, which is part of the measurement stage.\n", + "\n", + "Before we can get to that point, we need to run the deblender:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "deblendTask.run(coadds['r'], catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`SourceDeblendTask` doesn't actually return anything - all of its outputs are just modifications to the catalog that's passed in. It both sets some columns (`parent` and all those whose names start with `deblend_`) and creates new rows (for the child sources). It does *not* remove the parent rows it created those child rows from, and this is intentional, because we want to measure both \"interpretations\" of the blend family: one in which there is only one object (the parent version) and one in which there are several (the children). Before doing any science with the outputs of an LSST catalog, it's important to remove one of those interpretations (typically the parent one). 
That can be done by looking at the `deblend_nChild` and `parent` fields:\n", + "\n", + " - `parent` is the ID of the source from which this was deblended, or `0` if the source is itself a parent.\n", + " - `deblend_nChild` is the number of child sources this source has (so it's `0` for sources that are themselves children or were never blended).\n", + " \n", + "Together, these define two particularly useful filters:\n", + "\n", + " - `deblend_nChild == 0`: never-blended object or de-blended child\n", + " - `deblend_nChild == 0 and parent == 0`: never-blended object\n", + " \n", + "The first is what you'll usually want to use; the second is what to use if you're willing to throw away some objects (possibly many) because you don't trust the deblender." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The last processing step for our purposes is running measurement:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "measureTask.run(catalog, coadds['r'])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Inspecting processing outputs with Firefly" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we'll look at the results using Firefly. Because these are the low-level outputs of a particular configuration of some processing `Tasks`, Firefly doesn't know nearly as much about them as it will about the (more \"curated\") data products that will appear in actual LSST Data Releases. 
The tooling to connect Firefly to Python is also pretty new, so you can expect a lot of improvement in the future in both what Firefly can do with the datasets we send it and how easy it is is to send them.\n", + "\n", + "First some imports and setup:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from lsst.afw.display import setDefaultBackend, Display\n", + "setDefaultBackend(\"firefly\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In LSST Science Platform environments, we have prepopulated environment variables `FIREFLY_URL` and `FIREFLY_HTML` with the Firefly server URL and the landing page or html file on that server. These are picked up automatically when creating the display." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display = Display(frame=1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the LSST Science Platform, defining the Display opens a browser tab. A nice way to arrange your Jupyterlab windows is to put the notebooks on the right-hand side, and the Firefly tabs on the left half." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can now display an `Exposure` object:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display.mtv(coadds[\"r\"])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After doing that, take some time to play around with the GUI. See [this help page](http://irsa.ipac.caltech.edu/onlinehelp/finderchart/visualization.html?bd=2018-07-12#imageoptions) for help with the Firefly toolbar. 
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are a lot of transparent mask overlays that can make it hard to see the image, but the overlays button ![overlays button](http://irsa.ipac.caltech.edu/onlinehelp/finderchart/img/layers.png) on the toolbar gives you very detailed control over them. Mask transparency can also be controlled using `afw.display`, for individual planes or for all." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display.setMaskTransparency(80)\n", + "display.setMaskTransparency(90, 'DETECTED')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These transparency settings are \"sticky\" for each Display, and settable by default with `lsst.afw.display.setDefaultMaskTransparency`. Alternatively, we can set the transparency after each image display." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The scale can also be changed programmatically, or by `afw.display` commands." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display.scale(\"asinh\", \"zscale\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The function below overplots the sources we detected on the image. Firefly itself has much more sophisticated ways of interacting them with catalogs; an example of overlaying a catalog will be shown at the end of the notebook." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def showCatalog(disp, cat, color='orange', sym='+'):\n", + " \"\"\"Display sources from a catalog.\n", + " \n", + " Parameters\n", + " ----------\n", + " disp : `lsst.afw.display.Display`\n", + " Display interface object to send to.\n", + " cat : `lsst.afw.table.SourceCatalog`\n", + " Catalog containing sources to display. 
Must have populated centroid columns.\n", + " color : `str`\n", + " Color to use for overlaid points (X11 color string).\n", + " sym : `str`\n", + " Symbol; one of \"+\", \"x\", or \"o\".\n", + " \"\"\"\n", + " with disp.Buffering():\n", + " for record in cat:\n", + " if record[\"deblend_nChild\"] != 0: # don't show \"one object\" interpretation of blends\n", + " continue\n", + " disp.dot(sym, record.getX(), record.getY(), size=5, ctype=color)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "showCatalog(display, catalog)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Subtracting Stars" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To see how well that measurement worked, we'll use the PSF model and the measured PSF fluxes to subtract all of the objects. Some of those objects are actually well-resolved galaxies, so we don't expect them to subtract well, but we'll just ignore that for now and focus our attention on the stars when we look at the results.\n", + "\n", + "Once again, it's worth taking some time to read carefully through the function below before just running it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def subtractStars(data, cat, fluxes=None):\n", + " \"\"\"Subtract point sources from an image.\n", + " \n", + " Parameters\n", + " ----------\n", + " data : `lsst.afw.image.Exposure`\n", + " Image from which point sources will be subtracted (not modified).\n", + " cat : `lsst.afw.table.SourceCatalog`\n", + " Catalog providing centroids and possibly fluxes.\n", + " fluxes : `numpy.ndarray`, optional\n", + " If not None, an array of the same length as ``cat`` containing fluxes\n", + " to use. 
If None, ``cat.getPsfFlux()` will be used instead.\n", + " \n", + " Returns\n", + " -------\n", + " model : `lsst.afw.image.Exposure`\n", + " Image containing just the point source models.\n", + " residuals : `lsst.afw.image.Exposure`\n", + " ``data`` with ``model`` subtracted.\n", + " \"\"\"\n", + " # Make sure no one calls this still-blended sources in the catalog.\n", + " assert (cat[\"deblend_nChild\"] == 0).all()\n", + " # Get the PSF model from the given Exposure object. The returned object\n", + " # can be evaluated anywhere in the image to obtain an image of the PSF at\n", + " # that point.\n", + " psf = data.getPsf()\n", + " # Make a new blank Exposure object with the same dimensions and pixel type.\n", + " model = Exposure(data.getBBox(), dtype=np.float32)\n", + " # Copy the WCS and PSF over from the original Exposure.\n", + " model.setWcs(data.getWcs())\n", + " model.setPsf(psf)\n", + " if fluxes is None:\n", + " fluxes = cat.getPsfInstFlux()\n", + " for flux, record in zip(fluxes, tqdm_notebook(cat)):\n", + " # Obtain a PSF model image at the position of this source\n", + " psfImage = psf.computeImage(record.getCentroid())\n", + " # Make sure the PSF model image fits within the larger image; if it doesn't, clip it so it does.\n", + " psfBBox = psfImage.getBBox()\n", + " if not data.getBBox().contains(psfBBox):\n", + " psfBBox.clip(data.getBBox()) # shrink the bounding box to the intersection\n", + " psfImage = psfImage[psfBBox, PARENT] # obtain a subimage\n", + " # Make a subimage view of `model`, and subtract the PSF image, scaled by the flux.\n", + " # PARENT here sets the coordinate system to be the one shared by all patches in\n", + " # the tract rather than the one in which this patches' origin is (0, 0).\n", + " # PARENT is the coordinate system used by the PSF, and it will soon be the default\n", + " # here too (but isn't yet, so we need to make that explicit).\n", + " model.image[psfBBox, PARENT].scaledPlus(flux, psfImage.convertF())\n", + " # 
Now that we've made a model image, make a copy of the data and subtract the model from it.\n", + " residuals = data.clone()\n", + " residuals.image -= model.image\n", + " return model, residuals" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before we can actually subtract the stars, we should remove the not-deblended sources; this function and the next one we write will assume they've been removed.\n", + "\n", + "We can do that with NumPy-style boolean indexing, with one catch:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "deblended = catalog[catalog[\"deblend_nChild\"] == 0].copy(deep=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Column arrays of `SourceCatalogs` can only be accessed when the catalog is stored in a single contiguous block of memory. But unlike Numpy arrays, using boolean indexing on a catalog doesn't automatically make a copy to ensure memory is contiguous. Instead it creates a view to the selected rows. That can be useful or more efficient in some cases, but it also prevents us from accessing columns. To fix that, we immediately make a deep copy of the catalog, which copies it into a new block of contiguous memory.\n", + "\n", + "We can now run our `subtractStars` function. Note the nice `tqdm` progress bar in action!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model, residuals = subtractStars(coadds['r'], deblended)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To look at the results, we'll make two more display frames to show the residuals (frame 2) and model (frame 3).\n", + "\n", + "Use the \"WCS Match\" and single-frame view GUI options to blink between them and the original image." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display2 = Display(frame=2)#, name=channel)\n", + "display2.setMaskTransparency(80)\n", + "display2.setMaskTransparency(90, 'DETECTED')\n", + "display2.mtv(residuals, title=\"residuals\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display3 = Display(frame=3)#, name=channel)\n", + "display3.mtv(model, title=\"model\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The isolated objects seem to be subtracted well, and *some* of the deblended ones are too. But others really didn't subtract well at all; the deblender is probably mangling those. The current deblender depends a lot on having at least some sides of an object not blended with a neighbor, and that's manifestly untrue in the center of the cluster. We're currently working on a new deblender ([Scarlet](https://github.com/fred3m/scarlet)) that we expect to handle this better.\n", + "\n", + "In any case, if we look at the overplotted positions, it looks like the centroids aren't too bad, even if the fluxes frequently are. It seems our centroider is pretty robust to whatever failure modes the deblender is exhibiting." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Fitting Stars" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For better photometry, let's try fitting the stars ourselves, simultaneously. We can use the same PSF object we used to subtract stars to instead construct a model for the entire image, and then we can use basic linear least squares from NumPy to fit it.\n", + "\n", + "Once again, this will go poorly for galaxies, and we'll ignore that. We could also imagine adding terms to fit small offsets in the positions, or using sparse matrices to make this scale better. 
But this works well enough for this tutorial.\n", + "\n", + "This is another function worth reading through carefully." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def fitStars(data, cat):\n", + " \"\"\"\n", + " Fit all sources in the given catalog simultaneously, assuming they are all point sources.\n", + " \n", + " Parameters\n", + " ----------\n", + " data : `lsst.afw.image.Exposure`\n", + " Image to fit and subtract model from.\n", + " cat : `lsst.afw.table.SourceCatalog`\n", + " Catalog providing centroids.\n", + " \n", + " Returns\n", + " -------\n", + " fluxes : `numpy.ndarray`\n", + " Array of best-fit flux values.\n", + " model : `lsst.afw.image.Exposure`\n", + " Image containing just the point source models.\n", + " residuals : `lsst.afw.image.Exposure`\n", + " ``data`` with ``model`` subtracted.\n", + " \"\"\"\n", + " assert (cat[\"deblend_nChild\"] == 0).all()\n", + " bbox = data.getBBox()\n", + " psf = data.getPsf()\n", + " # Dimensions of the problem:\n", + " N = len(cat)\n", + " W = bbox.getWidth()\n", + " H = bbox.getHeight()\n", + " M = W*H\n", + " # To solve a linear least-squares problem, we need to construct a matrix that maps\n", + " # parameters (fluxes) to data (pixel values). That'd have dimensions M x N.\n", + " # Instead, we'll construct an array with dimensions N x H x W, which we'll later\n", + " # transpose and reshape. It's important that we order the dimensions this way\n", + " # because it makes each nested H x W array have the same memory layout as the\n", + " # LSST Image class, which lets us make image views into those subarrays.\n", + " matrix = np.zeros((N, H, W), dtype=float)\n", + " for n, record in enumerate(tqdm_notebook(cat)):\n", + " # Make an Image view to a nested sub-array. 
Note that writing to this\n", + " # will modify the parent array.\n", + " matrixView = Image(matrix[n, :, :], xy0=bbox.getMin(), dtype=np.float64)\n", + " # Obtain a PSF image, and clip it to fit the larger image, just as we\n", + " # did in subtractStars().\n", + " psfImage = psf.computeImage(record.getCentroid())\n", + " psfBBox = psfImage.getBBox()\n", + " if not bbox.contains(psfBBox):\n", + " psfBBox.clip(bbox)\n", + " psfImage = psfImage[psfBBox, PARENT]\n", + " # Add the PSF image to the matrix sub-array.\n", + " matrixView[psfBBox, PARENT].scaledPlus(record.getPsfInstFlux(), psfImage)\n", + " # Reshape and transpose the matrix, as promised\n", + " A = matrix.reshape(N, M).transpose()\n", + " # Get an array view to the image we're fitting, and flatten it the same way.\n", + " b = data.image.array.reshape(M)\n", + " # Fit for the fluxes. This does an SVD under the hood.\n", + " fluxes, _, _, _ = np.linalg.lstsq(A, b, rcond=None)\n", + " # Make a model image and subtract it from the data. 
Because we've already\n", + " # evaluated the matrix, this is much more efficient than calling\n", + " # subtractStars with our flux array.\n", + " model = Exposure(bbox, dtype=np.float32)\n", + " model.setWcs(data.getWcs())\n", + " model.setPsf(psf)\n", + " model.image.array[:, :] = np.dot(A, fluxes).reshape(bbox.getHeight(), bbox.getWidth())\n", + " residuals = data.clone()\n", + " residuals.image -= model.image\n", + " return fluxes, model, residuals" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fluxes, model, residuals = fitStars(coadds['r'], deblended)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Display the new residuals (omitting the masks):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display2.mtv(residuals.image, title=\"residuals\")\n", + "display3.mtv(model, title=\"model\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The results are better than the last subtraction, but still not great.\n", + "\n", + "Note that you may need to set the stretch manually (color stretch icon ![stretch button](http://irsa.ipac.caltech.edu/onlinehelp/finderchart/img/stretch.png)) to give the residuals image the same stretch as the others. For this image, you can set the stretch using \"Data\" bounds of `(-0.05, 2)` with the original image selected, and then blinking to the other frames while the stretch window is open. The lock icon ![lock icon](http://irsa.ipac.caltech.edu/onlinehelp/finderchart/img/lockimages.png) on the Firefly toolbar can be used to lock the stretch for the displayed frames." 
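Stripped of the LSST image classes, `fitStars` is solving an ordinary linear least-squares problem: each source contributes one column (its flattened PSF image) to a design matrix, and `np.linalg.lstsq` finds all the fluxes at once. Here is a self-contained toy version with a 1-D "image" and Gaussian "PSFs"; the profile shapes and values are made up for illustration:

```python
import numpy as np

# Toy analogue of the fitStars linear algebra (1-D, Gaussians instead of
# real PSFs). Columns of A are unit-flux profiles; b is the flattened image.
x = np.arange(50, dtype=float)

def unit_psf(center, sigma=2.0):
    profile = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return profile / profile.sum()        # normalize to unit total flux

centers = [15.0, 20.0, 38.0]              # two blended sources, one isolated
true_fluxes = np.array([100.0, 60.0, 25.0])
A = np.stack([unit_psf(c) for c in centers], axis=1)   # M x N design matrix
b = A @ true_fluxes                       # noiseless simulated "image"

# Solve for all fluxes simultaneously, exactly as fitStars does.
fluxes, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(fluxes, true_fluxes)   # exact recovery with no noise
```

Because the fit is simultaneous, blended sources constrain each other, which is why this approach improves on subtracting per-source flux measurements one at a time.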
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The stretch is also settable programmatically:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display.scale('asinh', -0.1, 0.5, Q=8)\n", + "display2.scale('asinh', -0.1, 0.5, Q=8)\n", + "display3.scale('asinh', -0.1, 0.5, Q=8)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can erase the overlays on the first display:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display.erase()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Iterating" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To improve things further, let's try running the detect/deblend/measure `Tasks` on the *residual* image, to pick up those peaks that just weren't included in the first round.\n", + "\n", + "Here's a function that just runs those (relying on us having constructed them all earlier; we don't have to re-construct them to run them again):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def process(data):\n", + " \"\"\"Run detection, deblending, and measurement, returning a new SourceCatalog.\n", + " \"\"\"\n", + " result = detectionTask.run(table, data)\n", + " cat = result.sources\n", + " deblendTask.run(data, cat)\n", + " measureTask.run(cat, data)\n", + " return cat[cat[\"deblend_nChild\"] == 0].copy(deep=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "additional = process(residuals)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To make use of those additional sources, we'll have to merge them with the original ones. 
We'll do that extremely naively - we won't even worry about whether we have any duplicates (from sources that were partially - but not completely - subtracted by the model).\n", + "\n", + "Here's a function to do that merge. Note that we reserve space in the output catalog before actually concatenating the input catalogs into it. That makes sure the result is contiguous in memory and hence we can get column arrays without doing a deep copy at the end." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def concatenate(*cats):\n", + " \"\"\"Concatenate multiple SourceCatalogs (assumed to have the same schema).\"\"\"\n", + " result = SourceCatalog(table)\n", + " result.reserve(sum(len(cat) for cat in cats))\n", + " for cat in cats:\n", + " result.extend(cat)\n", + " return result" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "combined = concatenate(deblended, additional)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we'll fit all the stars in the combined catalog together, and look at the residuals again:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fluxes, model, residuals = fitStars(coadds['r'], combined)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display2.mtv(residuals.image, title=\"residuals\")\n", + "display3.mtv(model, title=\"model\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Even better! One more iteration *might* make things better, but we're probably getting to the limits of this very simple algorithm (especially since there really are a lot of galaxies in this image)." 
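The reserve-then-extend idea in `concatenate` is the same trick as preallocating one contiguous buffer in NumPy and filling it, instead of growing storage piecewise. A minimal sketch with plain arrays standing in for catalogs:

```python
import numpy as np

# Preallocate one contiguous buffer sized for all inputs ("reserve"),
# then copy each input into place ("extend").
cats = [np.arange(5, dtype=float), np.arange(3, dtype=float)]
out = np.empty(sum(len(c) for c in cats), dtype=float)
offset = 0
for c in cats:
    out[offset:offset + len(c)] = c
    offset += len(c)

assert out.flags['C_CONTIGUOUS']
assert np.array_equal(out, np.concatenate(cats))
```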
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Uploading the catalog directly to Firefly" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As noted earlier, Firefly has sophisticated capabilities for viewing tables and overlaying catalogs. The `firefly_client` Python package underlies the Firefly backend in `lsst.afw.display`. The `firefly_client.plot` module includes convenience functions that we'll demonstrate here." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import firefly_client.plot as ffplt" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `firefly_client.plot` module should default to the client that is used in the `afw.display` Displays we defined earlier, so long as a browser tab is connected to those displays. To be certain, the next line ensures we use the same setup as our Display instances." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ffplt.use_client(display.getClient())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Upload our combined catalog to Firefly. By default, the catalog will be shown in an interactive table viewer. Since the catalog contains coordinate columns recognized by Firefly, the catalog entries will be overlaid on the images." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "tbl_id = ffplt.upload_table(combined, title='combined catalog')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Firefly includes plotting capabilities using the Plotly.js library. We'll end this section with a histogram of flux values. `ffplt.scatter` can be used to make a scatter plot of two columns for a table. 
These functions use the table ID of the last uploaded table by default; the `tbl_id` we saved in the last cell can be passed as an optional argument." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ffplt.hist('log10(base_PsfFlux_instFlux)')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The table, plot, and image cells can be resized and moved around in the Firefly viewer." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Extra Credit" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### One More Time\n", + "\n", + "Try doing one more round of detection/deblend/measure, concatenate, and fit. Does it help?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Multi-Band Fitting\n", + "\n", + "While we started this tutorial by making color images, we've only processed the *r*-band data. Run `fitStars` on each band separately, using the positions from detect/deblend/measure processing in *r*. Make *gri* and *riz* color images of the model and residuals." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Deblender Output Inspection\n", + "\n", + "Instead of just guessing that the deblender is mangling things, we could actually look. If you call `getFootprint()` on a row corresponding to a child source, you'll get a `HeavyFootprint`. Use its `getBBox()` and `insert` methods to make images of some `HeavyFootprints`, and display them with matplotlib or Firefly.\n", + "\n", + "Note that if you try this on a source that isn't a deblended child, you'll get a regular `Footprint`, which doesn't have an `insert` method (because it doesn't contain any pixel values). 
What catalog filter can you apply to get only child objects?\n", + "\n", + "If you're ambitious, import the [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/user_guide.html) package, and use the `interact` function and an `IntSlider` to interactively control which source's `HeavyFootprint` is displayed." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "LSST", + "language": "python", + "name": "lsst" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.6" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From b7db17abb03b385cfb5923432a53f5d430ebfe38 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Fri, 29 Mar 2019 17:48:12 +0000 Subject: [PATCH 03/14] add slightly revised Firefly notebook --- firefly_features/Firefly.ipynb | 502 +++++++++++++++++++++++++++++++++ 1 file changed, 502 insertions(+) create mode 100644 firefly_features/Firefly.ipynb diff --git a/firefly_features/Firefly.ipynb b/firefly_features/Firefly.ipynb new file mode 100644 index 0000000..d0f8671 --- /dev/null +++ b/firefly_features/Firefly.ipynb @@ -0,0 +1,502 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Firefly Visualization Demo\n", + "\n", + "**Last verified to run:** 2019-03-29\n", + "\n", + "**Verified science platform release or release candidate:** 17.0.1\n", + "\n", + "This notebook is intended to demonstrate the [Firefly](https://mospace.umsystem.edu/xmlui/handle/10355/5346) interactive interface for viewing image data. 
It also builds on the pedagogical explanations provided in [Getting started tutorial part 3](https://pipelines.lsst.io/getting-started/display.html) of the LSST Stack v16.0 documentation.\n", + "\n", + "This tutorial seeks to teach you how to use the LSST Science Pipelines to inspect outputs from `processCcd.py` by displaying images and source catalogs in the Firefly image viewer. In doing so, you’ll be introduced to some of the LSST Science Pipelines’ Python APIs, including:\n", + "\n", + "* Accessing datasets with the `Butler`.\n", + "* Displaying images with `lsst.afw.display`.\n", + "* Passing source catalog data directly to the `FireflyClient`." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Set up\n", + "\n", + "This tutorial is meant to be run from the `jupyterhub` interface where the LSST stack is preinstalled. It assumes that the notebook is running a kernel with the `lsst_distrib` package set up.\n", + "\n", + "We start by importing packages from the LSST stack for data access and visualization." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# LSST stack imports\n", + "from lsst.daf.persistence import Butler\n", + "import lsst.afw.display as afwDisplay" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Creating a Butler client\n", + "\n", + "All data in the LSST Pipelines flow through the `Butler`. LSST does not recommend directly accessing processed image files. Instead, use the `Butler` client available from the `lsst.daf.persistence` module imported above." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "datadir = '/project/shared/data/Twinkles_subset/output_data_v2'\n", + "butler = Butler(datadir)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `Butler` client reads from the data repository specified with the inputs argument. 
In this specific case, the data were downloaded from [here](https://lsst-web.ncsa.illinois.edu/~krughoff/data/twinkles_subset.tar.gz). See the `README.txt` in `/project/shared/data/Twinkles_subset` for more info." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Listing available data IDs in the Butler\n", + "\n", + "To get data from the `Butler` you need to know two things: the dataset type and the data ID.\n", + "\n", + "Every dataset stored by the Butler has a well-defined type. Tasks read specific dataset types and output other specific dataset types. The `processCcd.py` command reads in raw datasets and outputs calexp, or calibrated exposure, datasets (among others). It’s calexp datasets that you’ll display in this tutorial.\n", + "\n", + "Data IDs let you reference specific instances of a dataset. On the command line you select data IDs with `--id` arguments, filtering by keys like `visit`, `raft`, `sensor`, and `filter`.\n", + "\n", + "Now, use the `Butler` client to find what data IDs are available for the `calexp` dataset type:\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "butler.queryMetadata('calexp', ['visit', 'raft', 'sensor'], dataId={'filter': 'r'})" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The printed output is a list of `(visit, raft, sensor)` key tuples for all data IDs where the filter key is the LSST r band. 
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Get an exposure through the Butler\n", + "\n", + "Knowing a specific data ID, let’s get the dataset with the `Butler` client’s get method:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dataId = {'filter': 'r', 'raft': '2,2', 'sensor': '1,1', 'visit': 235}\n", + "calexp = butler.get('calexp', **dataId)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `calexp` is an `ExposureF` Python object. Exposures are powerful representations of image data because they contain not only the image data, but also a variance image for uncertainty propagation, a bit mask image plane, and key-value metadata. They can also contain WCS and PSF model information. In the next steps you’ll learn how to display an Exposure’s image and mask." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create a Display\n", + "\n", + "To display the `calexp` you will use the LSST `afwDisplay` framework. It provides a uniform API for multiple display backends, including DS9, matplotlib, and LSST’s Firefly viewer. The default backend is `ds9`, but since we are working remotely on `jupyterhub` we would prefer to use the web-based Firefly display. A [user guide](https://pipelines.lsst.io/v/daily/modules/lsst.display.firefly/index.html) for `lsst.display.firefly` is available on the [pipelines.lsst.io site](https://pipelines.lsst.io/v/daily)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we create a Firefly display." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afwDisplay.setDefaultBackend('firefly')\n", + "afw_display = afwDisplay.Display(frame=1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the LSST Science Platform Notebook aspect, a Firefly viewer tab appears. 
You may wish to drag it to the right side of the Jupyterlab area." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Display the calexp (calibrated exposure)\n", + "\n", + "We can now build the display and use the `mtv` method to view the `calexp` with Firefly. First we display an image with mask planes and then overplot some sources." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afw_display.mtv(calexp)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As soon as you execute the command, a single simulated, calibrated LSST exposure (the `{'filter': 'r', 'raft': '2,2', 'sensor': '1,1', 'visit': 235}` data ID) should appear in the Firefly browser window.\n", + "\n", + "Notice that the image is overlaid with colorful regions. These are mask regions. Each color reflects a different mask bit that corresponds to detections and different types of detector artifacts. You’ll learn how to interpret these colors later, but first you’ll likely want to adjust the image display." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Improving the image display\n", + "\n", + "The display framework gives you control over the image display to help bring out image details. For example, to make masked regions semi-transparent, so that underlying image features are visible, try:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afw_display.setMaskTransparency(80)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `setMaskTransparency` method’s argument can range from 0 (fully opaque) to 100 (fully transparent).\n", + "\n", + "You can also control the colorbar scaling algorithm with the display’s scale method. 
Try an asinh stretch with explicit minimum (black) and maximum (white) values:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afw_display.scale(\"asinh\", -1, 30)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can also use an automatic algorithm like `zscale` (or `minmax`) to select the white and black thresholds:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afw_display.scale(\"asinh\", \"zscale\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Interpreting displayed mask colors\n", + "\n", + "The display framework renders each plane of the mask in a different color (plane being a different bit in the mask). To interpret these colors you can get a dictionary of mask planes from the `calexp` and query the display for the colors it rendered each mask plane with. For example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "mask = calexp.getMask()\n", + "for maskName, maskBit in mask.getMaskPlaneDict().items():\n", + " print('{}: {}'.format(maskName, afw_display.getMaskPlaneColor(maskName)))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the Firefly viewer tab, the overlays button ![overlays button](http://irsa.ipac.caltech.edu/onlinehelp/finderchart/img/layers.png) on the toolbar gives you very detailed control over the mask planes, such as turning individual planes on and off, changing the color and adjusting the transparency. Mask transparency and colors can also be set using `afw.display` commands, for individual planes or for all." 
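For intuition about what the asinh stretch shown earlier does, here is a rough NumPy sketch (illustrative only; not the actual Firefly implementation): values between the black and white points map through arcsinh, which is nearly linear for faint pixels and logarithmic for bright ones.

```python
import numpy as np

# Rough sketch of an asinh stretch: map pixel values in [low, high] to
# [0, 1] through arcsinh, compressing the bright end so faint structure
# remains visible. Q controls where the linear-to-log transition happens.
def asinh_stretch(pixels, low, high, Q=8.0):
    scaled = (np.asarray(pixels, dtype=float) - low) / (high - low)
    return np.arcsinh(Q * np.clip(scaled, 0.0, 1.0)) / np.arcsinh(Q)

out = asinh_stretch([-1.0, 0.0, 5.0, 30.0], low=-1, high=30)
assert out[0] == 0.0                  # black point maps to 0
assert abs(out[-1] - 1.0) < 1e-12     # white point maps to 1
assert np.all(np.diff(out) > 0)       # monotonic in between
```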
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Plotting sources on the display\n", + "\n", + "The LSST processing pipeline also creates a table of the sources it used for PSF estimation as well as astrometric and photometric calibration. The dataset type of this table is `src`, which you can get from the Butler:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "src = butler.get('src', **dataId)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The returned object, `src`, is a `lsst.afw.table.SourceTable` object. `SourceTables` are explored more elsewhere, but you can do some simple investigations using common python functions. For example, to check the length of the object:\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "len(src)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can view an HTML rendering of the `src` table by getting an `astropy.table.Table` version of it:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "src.asAstropy()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we’ll overplot sources from the `src` table onto the image display using the Display’s `dot` method for plotting markers. `Display.dot` plots markers individually, so you’ll need to iterate over rows in the `SourceTable`. Next we display the first 100 sources. We limit the number of sources since plotting the whole catalog is a serial process and takes some time. 
Because of this, it is more efficient to send a batch of updates to the display, so we enclose the loop in a `display.Buffering` context, like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with afw_display.Buffering():\n", + " for record in src:\n", + " afw_display.dot('o', record.getX(), record.getY(), size=20, ctype='orange')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that the first 100 sources are preferentially located at the bottom of the image. This spatial ordering is likely imprinted by the source detection algorithm; however, it could change due to parallelization. The units of the `size` parameter are believed to be pixels." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Clearing markers\n", + "\n", + "`Display.dot` always adds new markers to the display. To clear the display of all markers, use the erase method:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afw_display.erase()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Using FireflyClient directly" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can also use the Firefly client directly to make plots and add catalogs to the visualization. First retrieve the `FireflyClient` object." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fc = afw_display.getClient()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For uploading a table it is convenient to use the `firefly_client.plot` module. Import it and ensure it is using the same `FireflyClient` instance." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import firefly_client.plot as ffplt\n", + "ffplt.use_client(fc)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's select just the sources that were used to fit the PSF. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "psf_src = src[src['calib_psfUsed']]\n", + "print(src['calib_psfUsed'])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Upload a SourceCatalog to Firefly. By default, the catalog is shown in an interactive table viewer." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "tbl_id = ffplt.upload_table(psf_src, title='Source Catalog')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Make a scatter plot." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ffplt.scatter(x_col='base_CircularApertureFlux_12_0_flux/base_GaussianFlux_flux',\n", + " y_col='log10(base_CircularApertureFlux_12_0_flux)',\n", + " size=4,\n", + " color='blue',\n", + " title='test ap flux/model mag vs. log(ap flux)',\n", + " xlabel='Model',\n", + " ylabel='Ap/Model')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Plots are rendered by `plotly`, so follow the same syntax for construction. For info on `plotly` see the [primer](https://plot.ly/python/getting-started/) and [examples](https://plot.ly/python/line-and-scatter/)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Wrap up\n", + "In this tutorial we have used the `Butler` to access LSST simulation data and used the LSST Science Pipelines Python API to display images and tables. 
Here are some key takeaways:\n", + "\n", + "* Use the `lsst.daf.persistence.Butler` class to read and write data from repositories.\n", + "* The `lsst.afw.display` module provides a flexible framework for sending data from LSST Science Pipelines code to image displays. We used the Firefly backend for web-based visualization (`ds9`, `matplotlib`, and `ginga` are other available backends).\n", + "* `Exposure` objects have image data, mask data, and metadata. When you display an Exposure, the display framework automatically overlays mask planes.\n", + "* We have accessed and visualized the Table of sources extracted from an image. The `Table.asAstropy` method can be used to view the table as an `astropy.table.Table`.\n", + "* We have sent a subset of objects directly to the Firefly API for additional plotting and investigation.\n", + "\n", + "\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "LSST", + "language": "python", + "name": "lsst" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.6" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 12189e32c1fc721f9c984e5077609f96aa1fffa0 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Fri, 29 Mar 2019 18:05:45 +0000 Subject: [PATCH 04/14] add HSC Footprints notebook --- firefly_features/HSC-Footprints.ipynb | 273 ++++++++++++++++++++++++++ 1 file changed, 273 insertions(+) create mode 100644 firefly_features/HSC-Footprints.ipynb diff --git a/firefly_features/HSC-Footprints.ipynb b/firefly_features/HSC-Footprints.ipynb new file mode 100644 index 0000000..109ab41 --- /dev/null +++ b/firefly_features/HSC-Footprints.ipynb @@ -0,0 +1,273 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Browsing HSC Footprints" + ] + }, + { + "cell_type": "markdown", + "metadata": 
{}, + "source": [ + "**Last verified to run:** 2019-03-29\n", + "\n", + "**Verified science platform release or release candidate:** 17.0.1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook tests new functionality in `lsst.display.firefly` for browsing catalogs, footprints and images." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from lsst.daf.persistence import Butler\n", + "import lsst.afw.geom as afwGeom" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import lsst.afw.display as afwDisplay" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This example uses HSC-reprocessed data from `/datasets/hsc/repo/rerun/RC/w_2018_38/DM-15690/deepCoadd-results`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "butler = Butler('/datasets/hsc/repo/rerun/RC/w_2018_38/DM-15690/')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Define an ID for retrieving data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dataId = dict(filter='HSC-R', tract=9813, patch='4,4')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Define bounding boxes for a region of interest: one for the catalog and a larger one for the image." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "footprintsBbox = afwGeom.Box2I(corner=afwGeom.Point2I(16900, 18700),\n", + " dimensions=afwGeom.Extent2I(600,600))\n", + "imageBbox = afwGeom.Box2I(corner=afwGeom.Point2I(16800, 18600),\n", + " dimensions=afwGeom.Extent2I(800,800))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Retrieve a cutout for the corresponding exposure / coadd" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "calexp = butler.get('deepCoadd_calexp_sub', dataId=dataId, bbox=imageBbox)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Retrieve a catalog for the entire region" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "measCat = butler.get('deepCoadd_meas', dataId=dataId)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Select only those records with pixel locations inside the footprints bounding box" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "catSelect = np.array([footprintsBbox.contains(afwGeom.Point2I(r.getX(), r.getY()))\n", + " for r in measCat])\n", + "catalogSubset = measCat.subset(catSelect)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Set up the Firefly display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1 = afwDisplay.Display(frame=1, backend='firefly')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.setMaskTransparency(80)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.scale('asinh', 10, 80, unit='percent', Q=6)" + ] + }, + { 
+ "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `resetLayout` method sets up the image for upper left and table below, with space for plots at upper right" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.resetLayout()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Display the image" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.mtv(calexp)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Overlay the footprints and accompanying table.\n", + "\n", + "Colors can be specified as a name like `'cyan'` or `afwDisplay.RED`; as an rgb value such as `'rgb(80,100,220)'`; or as rgb plus alpha (opacity) such as `'rgba(74,144,226,0.60)'`.\n", + "\n", + "The `layerString` and `titleString` are concatenated with the frame, to make the footprint drawing layer name and the table title, respectively. If multiple footprint layers are desired, be sure to use different values of `layerString`."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.overlayFootprints(catalogSubset, color='rgba(74,144,226,0.50)',\n", + " highlightColor='yellow', selectColor='orange',\n", + " style='outline', layerString='detection footprints ',\n", + " titleString='catalog footprints ')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "LSST", + "language": "python", + "name": "lsst" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.6" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 529c4ab675103749234d7e85f8cccaf6901bc10e Mon Sep 17 00:00:00 2001 From: David Shupe Date: Fri, 29 Mar 2019 18:13:21 +0000 Subject: [PATCH 05/14] add module docs notebook --- .../afwDisplay_Firefly_docs.ipynb | 361 ++++++++++++++++++ 1 file changed, 361 insertions(+) create mode 100644 firefly_features/afwDisplay_Firefly_docs.ipynb diff --git a/firefly_features/afwDisplay_Firefly_docs.ipynb b/firefly_features/afwDisplay_Firefly_docs.ipynb new file mode 100644 index 0000000..f0b11f0 --- /dev/null +++ b/firefly_features/afwDisplay_Firefly_docs.ipynb @@ -0,0 +1,361 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Test most of the module docs " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Last verified to run:** 2019-03-29\n", + "\n", + "**Verified Science Platform release or release candidate:** 17.0.1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook contains most of the content of the [module docs](https://pipelines.lsst.io/modules/lsst.display.firefly) for the Firefly backend of the 
afwDisplay visualization interface." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Initializing a Display instance" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import lsst.afw.display as afwDisplay" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afwDisplay.setDefaultBackend('firefly')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1 = afwDisplay.Display(frame=1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Retrieve and display an image" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from lsst.daf.persistence import Butler\n", + "butler = Butler('/project/shared/data/Twinkles_subset/output_data_v2')\n", + "dataId = {'filter': 'r', 'raft': '2,2', 'sensor': '1,1', 'visit': 235}\n", + "calexp = butler.get('calexp', **dataId)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.scale('asinh', 'zscale')\n", + "display1.setMaskTransparency(90)\n", + "display1.mtv(calexp)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Overlay symbols from a catalog" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "src = butler.get('src', **dataId)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with display1.Buffering():\n", + " for record in src:\n", + " display1.dot('o', record.getX(), record.getY(), size=20, ctype='orange')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.erase()" + ] + }, + { + "cell_type": "markdown", + "metadata": 
{}, + "source": [ + "## Making a display tab reopen after closing it" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Reinitializing a display tab" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.clearViewer()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Displaying a clickable link" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.getClient().display_url()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mask display and manipulation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.mtv(calexp)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.setMaskPlaneColor('DETECTED', afwDisplay.GREEN)\n", + "display1.setMaskTransparency(30)\n", + "display1.mtv(calexp)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Rescale or restretch the image pixels display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.scale('log', -1, 10, 'sigma')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.scale('asinh', 'zscale')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Zooming and panning" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.pan(1064, 890)\n", + "display1.zoom(4)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ 
+ "display1.zoom(2, 500, 800)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Overlay regions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with display1.Buffering():\n", + " for record in src:\n", + " display1.dot('o', record.getX(), record.getY(), size=20, ctype='orange')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.line([[100,100], [100,200], [200,200], [200,100], [100,100]], ctype='blue')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display1.erase()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Upload interactive table and chart" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fc = display1.getClient()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import firefly_client.plot as ffplt\n", + "ffplt.use_client(fc)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "tbl_id = ffplt.upload_table(src, title='Source Catalog')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ffplt.hist('log10(base_PsfFlux_flux)')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "LSST", + "language": "python", + "name": "lsst" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.6" + } + }, + "nbformat": 4, + "nbformat_minor": 
2 +} From 5089d81188e1f21cf014ac0d5853931380a7863f Mon Sep 17 00:00:00 2001 From: David Shupe Date: Wed, 3 Jul 2019 15:42:05 -0700 Subject: [PATCH 06/14] delete duplicate Firefly notebook --- firefly_features/Firefly.ipynb | 502 --------------------------------- 1 file changed, 502 deletions(-) delete mode 100644 firefly_features/Firefly.ipynb diff --git a/firefly_features/Firefly.ipynb b/firefly_features/Firefly.ipynb deleted file mode 100644 index d0f8671..0000000 --- a/firefly_features/Firefly.ipynb +++ /dev/null @@ -1,502 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Firefly Visualization Demo\n", - "\n", - "**Last verified to run:** 2019-03-29\n", - "\n", - "**Verified science platform release or release candidate:** 17.0.1\n", - "\n", - "This notebook is intended to demonstrate the [Firefly](https://mospace.umsystem.edu/xmlui/handle/10355/5346) interactive interface for viewing image data. It also builds on the pedagogical explanations provided in [Getting started tutorial part 3](https://pipelines.lsst.io/getting-started/display.html) of the LSST Stack v16.0 documentation.\n", - "\n", - "This tutorial seeks to teach you about how to use the LSST Science Pipelines to inspect outputs from `processCcd.py` by displaying images and source catalogs in the Firefly image viewer. In doing so, you’ll be introduced to some of the LSST Science Pipelines’ Python APIs, including:\n", - "\n", - "* Accessing datasets with the `Butler`.\n", - "* Displaying images with `lsst.afw.display`\n", - "* Pass source catalog data directly to the `FireflyClient`" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Set up\n", - "\n", - "This tutorial is meant to be run from the `jupyterhub` interface where the LSST stack is preinstalled. 
It assumes that the notebook is running a kernel with the `lsst_distrib` package set up.\n", - "\n", - "We start by importing packages from the LSST stack for data access and visualization." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# LSST stack imports\n", - "from lsst.daf.persistence import Butler\n", - "import lsst.afw.display as afwDisplay" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Creating a Butler client\n", - "\n", - "All data in the LSST Pipelines flow through the `Butler`. LSST does not recommend directly accessing processed image files. Instead, use the `Butler` client available from the `lsst.daf.persistence` module imported above." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "datadir = '/project/shared/data/Twinkles_subset/output_data_v2'\n", - "butler = Butler(datadir)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `Butler` client reads from the data repository specified with the inputs argument. In this specific case, the data were downloaded from: [here](https://lsst-web.ncsa.illinois.edu/~krughoff/data/twinkles_subset.tar.gz). See the `README.txt` in `/project/shared/data/Twinkles_subset` for more info." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Listing available data IDs in the Butler\n", - "\n", - "To get data from the `Butler` you need to know two things: the dataset type and the data ID.\n", - "\n", - "Every dataset stored by the Butler has a well-defined type. Tasks read specific dataset types and output other specific dataset types. The `processCcd.py` command reads in raw datasets and outputs calexp, or calibrated exposure, datasets (among others). It’s calexp datasets that you’ll display in this tutorial.\n", - "\n", - "Data IDs let you reference specific instances of a dataset. 
On the command line you select data IDs with `--id` arguments, filtering by keys like `visit`, `raft`, `ccd`, and `filter`.\n", - "\n", - "Now, use the `Butler` client to find what data IDs are available for the `calexp` dataset type:\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "butler.queryMetadata('calexp', ['visit', 'raft', 'sensor'], dataId={'filter': 'r'})" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The printed output is a list of `(visit, raft, ccd)` key tuples for all data IDs where the filter key is the LSST r band. " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Get an exposure through the Butler\n", - "\n", - "Knowing a specific data ID, let’s get the dataset with the `Butler` client’s get method:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "dataId = {'filter': 'r', 'raft': '2,2', 'sensor': '1,1', 'visit': 235}\n", - "calexp = butler.get('calexp', **dataId)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `calexp` is an `ExposureF` Python object. Exposures are powerful representations of image data because they contain not only the image data, but also a variance image for uncertainty propagation, a bit mask image plane, and key-value metadata. They can also contain WCS and PSF model information. In the next steps you’ll learn how to display an Exposure’s image and mask." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Create a Display\n", - "\n", - "To display the `calexp` you will use the LSST `afwDisplay` framework. It provides a uniform API for multiple display backends, including DS9, matplotlib, and LSST’s Firefly viewer. The default backend is `ds9`, but since we are working remotely on `jupyterhub` we would prefer to use the web-based Firefly display. 
A [user guide](https://pipelines.lsst.io/v/daily/modules/lsst.display.firefly/index.html) for `lsst.display.firefly` is available on the [pipelines.lsst.io site](https://pipelines.lsst.io/v/daily)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now, we create a Firefly display." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "afwDisplay.setDefaultBackend('firefly')\n", - "afw_display = afwDisplay.Display(frame=1)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In the LSST Science Platform Notebook aspect, a Firefly viewer tab appears. You may wish to drag it to the right side of the Jupyterlab area." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Display the calexp (calibrated exposure)\n", - "\n", - "We can now build the display and use the `mtv` method to view the `calexp` with Firefly. First we display an image with mask planes and then overplot some sources." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "afw_display.mtv(calexp)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As soon as you execute the command a single simulated, calibrated LSST exposure, the `{'filter': 'r', 'raft': '2,2', 'sensor': '1,1', 'visit': 235}` data ID, should appear in the Firefly browser window.\n", - "\n", - "Notice that the image is overlaid with colorful regions. These are mask regions. Each color reflects a different mask bit that correspond to detections and different types of detector artifacts. You’ll learn how to interpret these colors later, but first you’ll likely want to adjust the image display." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Improving the image display\n", - "\n", - "The display framework gives you control over the image display to help bring out image details. 
For example, to make masked regions semi-transparent, so that underlying image features are visible, try:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "afw_display.setMaskTransparency(80)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The setMaskTransparency method’s argument can range from 0 (fully opaque) to 100 (fully transparent).\n", - "\n", - "You can also control the colorbar scaling algorithm with the display’s scale method. Try an asinh stretch with explicit minimum (black) and maximum (white) values:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "afw_display.scale(\"asinh\", -1, 30)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can also use an automatic algorithm like `zscale` (or `minmax`) to select the white and black thresholds:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "afw_display.scale(\"asinh\", \"zscale\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Interpreting displayed mask colors\n", - "\n", - "The display framework renders each plane of the mask in a different color (plane being a different bit in the mask). To interpret these colors you can get a dictionary of mask planes from the `calexp` and query the display for the colors it rendered each mask plane with. 
For example:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "mask = calexp.getMask()\n", - "for maskName, maskBit in mask.getMaskPlaneDict().items():\n", - " print('{}: {}'.format(maskName, afw_display.getMaskPlaneColor(maskName)))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In the Firefly viewer tab, the overlays button ![overlays button](http://irsa.ipac.caltech.edu/onlinehelp/finderchart/img/layers.png) on the toolbar gives you very detailed control over the mask planes, such as turning individual planes on and off, changing the color and adjusting the transparency. Mask transparency and colors can also be set using `afw.display` commands, for individual planes or for all." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Plotting sources on the display\n", - "\n", - "The LSST processing pipeline also creates a table of the sources it used for PSF estimation as well as astrometric and photometric calibration. The dataset type of this table is `src`, which you can get from the Butler:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "src = butler.get('src', **dataId)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The returned object, `src`, is a `lsst.afw.table.SourceTable` object. `SourceTables` are explored more elsewhere, but you can do some simple investigations using common python functions. 
For example, to check the length of the object:\n", - "\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "len(src)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can view an HTML rendering of the `src` table by getting an `astropy.table.Table` version of it:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "src.asAstropy()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now we’ll overplot sources from the `src` table onto the image display using the Display’s `dot` method for plotting markers. `Display.dot` plots markers individually, so you’ll need to iterate over rows in the `SourceTable`. Next we display the first 100 sources. We limit the number of sources since plotting the whole catalog is a serial process and takes some time. Because of this, it is more efficient to send a batch of updates to the display, so we enclose the loop in a `display.Buffering` context, like this:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "with afw_display.Buffering():\n", - " for record in src:\n", - " afw_display.dot('o', record.getX(), record.getY(), size=20, ctype='orange')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Note that the first 100 sources are preferentially located at the bottom of the image. This spatial ordering is likely imprinted by the source detection algorithm; however, it could change due to parallelization. The units o the `size` parameter are believed to be pixels." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Clearing markers\n", - "\n", - "`Display.dot` always adds new markers to the display. 
To clear the display of all markers, use the erase method:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "afw_display.erase()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Using FireflyClient directly." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can also use the Firefly client directly to make plots and add catalogs to the visualization. First retrieve the `FireflyClient` object." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "fc = afw_display.getClient()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For uploading a table it is convenient to use the firefly_client.plot module. Import it and ensure it is using the same FireflyClient instance." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import firefly_client.plot as ffplt\n", - "ffplt.use_client(fc)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's select just the sources that were used to fit the PSF. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "psf_src = src[src['calib_psfUsed']]\n", - "print(src['calib_psfUsed'])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Upload a SourceCatalog to Firefly. By default, the catalog is shown in an interactive table viewer." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "tbl_id = ffplt.upload_table(psf_src, title='Source Catalog')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Make a scatter plot." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ffplt.scatter(x_col='base_CircularApertureFlux_12_0_flux/base_GaussianFlux_flux',\n", - " y_col='log10(base_CircularApertureFlux_12_0_flux)',\n", - " size=4,\n", - " color='blue',\n", - " title='test ap flux/model mag vs. log(ap flux)',\n", - " xlabel='Model',\n", - " ylabel='Ap/Model')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Plots are rendered by `plotly`, so follow the same syntax for construction. For info on `plotly` see the [primer](https://plot.ly/python/getting-started/) and [examples](https://plot.ly/python/line-and-scatter/)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Wrap up\n", - "In this tutorial we have used the `Butler` to access LSST simulation data and used the LSST Science Pipelines Python API to display images and tables. Here are some key takeaways:\n", - "\n", - "* Use the `lsst.daf.persistence.Butler` class to read and write data from repositories.\n", - "* The `lsst.afw.display` module provides a flexible framework for sending data from LSST Science Pipelines code to image displays. We used the Firefly backend for web-based visualization (`ds9`, `matplotlib`, and `ginga` are other avialable backends).\n", - "* `Exposure` objects have image data, mask data, and metadata. When you display an Exposure, the display framework automatically overlays mask planes.\n", - "* We have accessed and visualized the Table of sources extracted from an image. 
The `Table.asAstropy` method can be used to view the table as an `astropy.table.Table`.\n", - "* We have sent a subset of objects to directly to the Firefly API for additional plotting and investigation.\n", - "\n", - "\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "LSST", - "language": "python", - "name": "lsst" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} From fbd9259ea132af61888fdde5918a4eface8ef3a2 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Wed, 3 Jul 2019 15:51:07 -0700 Subject: [PATCH 07/14] refer to top-level Firefly notebook --- firefly_features/README.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/firefly_features/README.md b/firefly_features/README.md index 9428ab7..6355008 100644 --- a/firefly_features/README.md +++ b/firefly_features/README.md @@ -7,6 +7,9 @@ The notebooks in this directory are used to test release candidates of the LSST **Verified release or release candidate:** 17.0.1 1. [afwDisplay_Firefly_docs.ipynb](afwDisplay_Firefly_docs.ipynb) : Nearly all the content from the [module documentation](https://pipelines.lsst.io/modules/lsst.display.firefly), excluding LSST Source Detection Footprints. -2. [Firefly.ipynb](Firefly.ipynb) : Demonstrates main Firefly capabilities -3. [intro-with-globular.ipynb](intro-with-globular.ipynb) : a source detection and crude deblending notebook used in workshops in Summer 2018. -3. [HSC-Footprints.ipynb](HSC-Footprints.ipynb) : Visualizing LSST Source Detection Footprints from Hyper-Suprime Cam data processed with LSST Science Pipelines. \ No newline at end of file +2. 
[intro-with-globular.ipynb](intro-with-globular.ipynb) : a source detection and crude deblending notebook used in workshops in Summer 2018. +A medium or larger-size container is recommended for this notebook. +3. [HSC-Footprints.ipynb](HSC-Footprints.ipynb) : Visualizing LSST Source Detection Footprints from Hyper-Suprime Cam data processed with LSST Science Pipelines. + +Additionally, the [Firefly notebook](../Firefly.ipynb) demonstrating Firefly capabilities should also be tested +for every release. From a80d348276c40835824b08b7af01b8d9f30e47c5 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Wed, 3 Jul 2019 15:51:32 -0700 Subject: [PATCH 08/14] remove exercises from deblending tutorial --- firefly_features/intro-with-globular.ipynb | 38 ---------------------- 1 file changed, 38 deletions(-) diff --git a/firefly_features/intro-with-globular.ipynb b/firefly_features/intro-with-globular.ipynb index 20dea9f..c4f0eec 100644 --- a/firefly_features/intro-with-globular.ipynb +++ b/firefly_features/intro-with-globular.ipynb @@ -1092,44 +1092,6 @@ "source": [ "The table, plot, and image cells can be resized and moved around in the Firefly viewer." ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Extra Credit" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### One More Time\n", - "\n", - "Try doing one more round of detection/deblend/measure, concatenate, and fit. Does it help?" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Multi-Band Fitting\n", - "\n", - "While we started this tutorial by making color images, we've only processed the *r*-band data. Run `fitStars` on each band separately, using the positions from detect/deblend/measure processing in *r*. Make *gri* and *riz* color images of the model and residuals." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Deblender Output Inspection\n", - "\n", - "Instead of just guessing that the deblender is mangling things, we could actually look. If you call `getFootprint()` on a row corresponding to a child source, you'll get a `HeavyFootprint`. Use its `getBBox()` and `insert` methods to make images of some `HeavyFootprints`, and display them with matplotlib or Firefly.\n", - "\n", - "Note that if you try this on a source that isn't a deblended child, you'll get a regular `Footprint`, which doesn't have an `insert` method (because it doesn't contain any pixel values). What catalog filter can you apply to get only child objects?\n", - "\n", - "If you're ambitious, import the [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/user_guide.html) package, and use the `interact` function and an `IntSlider` to interactively control which source's `HeavyFootprint` is displayed." - ] } ], "metadata": { From 34a1f2e67dccee67c271f21827d3450079325c0d Mon Sep 17 00:00:00 2001 From: David Shupe Date: Wed, 3 Jul 2019 15:54:05 -0700 Subject: [PATCH 09/14] update path to HSC footprints data --- firefly_features/HSC-Footprints.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/firefly_features/HSC-Footprints.ipynb b/firefly_features/HSC-Footprints.ipynb index 109ab41..31c7bd7 100644 --- a/firefly_features/HSC-Footprints.ipynb +++ b/firefly_features/HSC-Footprints.ipynb @@ -55,7 +55,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This example uses HSC-reprocessed data from `/datasets/hsc/repo/rerun/RC/w_2018_38/DM-15690/deepCoadd-results`" + "This example uses HSC-reprocessed data from `/datasets/hsc/repo/rerun/RC/w_2019_22/DM-19244/deepCoadd-results`" ] }, { @@ -64,7 +64,7 @@ "metadata": {}, "outputs": [], "source": [ - "butler = Butler('/datasets/hsc/repo/rerun/RC/w_2018_38/DM-15690/')" + "butler = Butler('/datasets/hsc/repo/rerun/RC/w_2019_22/DM-19244/')" ] }, { From 
85626e3912f8eb73a565e7a98e9e31b5cb8de607 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Wed, 9 Oct 2019 13:44:51 +0700 Subject: [PATCH 10/14] remove HSC Footprints notebook --- firefly_features/HSC-Footprints.ipynb | 273 -------------------------- 1 file changed, 273 deletions(-) delete mode 100644 firefly_features/HSC-Footprints.ipynb diff --git a/firefly_features/HSC-Footprints.ipynb b/firefly_features/HSC-Footprints.ipynb deleted file mode 100644 index 31c7bd7..0000000 --- a/firefly_features/HSC-Footprints.ipynb +++ /dev/null @@ -1,273 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Browsing HSC Footprints" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Last verified to run:** 2019-03-29\n", - "\n", - "**Verified science platform release or release candidate:** 17.0.1" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook tests new functionality in `lsst.display.firefly` for browsing catalogs, footprints and images." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import numpy as np" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from lsst.daf.persistence import Butler\n", - "import lsst.afw.geom as afwGeom" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import lsst.afw.display as afwDisplay" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This example uses HSC-reprocessed data from `/datasets/hsc/repo/rerun/RC/w_2019_22/DM-19244/deepCoadd-results`" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "butler = Butler('/datasets/hsc/repo/rerun/RC/w_2019_22/DM-19244/')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Define an ID for retrieving data" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "dataId = dict(filter='HSC-R', tract=9813, patch='4,4')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Define a bounding boxes for a region of interest, one for the catalog and a larger one for the image." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "footprintsBbox = afwGeom.Box2I(corner=afwGeom.Point2I(16900, 18700),\n", - " dimensions=afwGeom.Extent2I(600,600))\n", - "imageBbox = afwGeom.Box2I(corner=afwGeom.Point2I(16800, 18600),\n", - " dimensions=afwGeom.Extent2I(800,800))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Retrieve a cutout for the corresponding exposure / coadd" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "calexp = butler.get('deepCoadd_calexp_sub', dataId=dataId, bbox=imageBbox)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Retrieve a catalog for the entire region" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "measCat = butler.get('deepCoadd_meas', dataId=dataId)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Select only those records with pixel locations inside the footprints bounding box" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "catSelect = np.array([footprintsBbox.contains(afwGeom.Point2I(r.getX(), r.getY()))\n", - " for r in measCat])\n", - "catalogSubset = measCat.subset(catSelect)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Set up the Firefly display" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "display1 = afwDisplay.Display(frame=1, backend='firefly')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "display1.setMaskTransparency(80)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "display1.scale('asinh', 10, 80, unit='percent', Q=6)" - ] - }, - { 
- "cell_type": "markdown", - "metadata": {}, - "source": [ - "The `resetLayout` method sets up the image for upper left and table below, with space for plots at upper right" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "display1.resetLayout()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Display the image" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "display1.mtv(calexp)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Overlay the footprints and accompanying table.\n", - "\n", - "Colors can be specified as a name like `'cyan'` or `afwDisplay.RED`; as an rgb value such as `'rgb(80,100,220)'`; or as rgb plus alpha (opacity) such as `'rgba('74,144,226,0.60)'`.\n", - "\n", - "The `layerString` and `titleString` are concatenated with the frame, to make the footprint drawing layer name and the table title, respectively. If multiple footprint layers are desired, be sure to use different values of `layerString`." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "display1.overlayFootprints(catalogSubset, color='rgba(74,144,226,0.50)',\n", - " highlightColor='yellow', selectColor='orange',\n", - " style='outline', layerString='detection footprints ',\n", - " titleString='catalog footprints ')" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "LSST", - "language": "python", - "name": "lsst" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.6" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} From 12cee87880ee673312c7edab4a91c66412e78be0 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Wed, 9 Oct 2019 13:48:45 +0700 Subject: [PATCH 11/14] remove troublesome tqdm progress bar --- firefly_features/intro-with-globular.ipynb | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/firefly_features/intro-with-globular.ipynb b/firefly_features/intro-with-globular.ipynb index c4f0eec..8ee7305 100644 --- a/firefly_features/intro-with-globular.ipynb +++ b/firefly_features/intro-with-globular.ipynb @@ -11,9 +11,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Last verified to run:** 2019-03-29\n", + "**Last verified to run:** 2019-10-07\n", "\n", - "**Verified science platform release or release candidate:** 17.0.1" + "**Verified science platform release or release candidate:** 18.1.0" ] }, { @@ -42,7 +42,7 @@ "source": [ "We'll start with some standard imports of both LSST and third-party packages.\n", "\n", - "These are automatically installed in the LSST Science Platform Notebook Aspect.
The `tqdm` package is a nice collection of progress bar widgets for both notebooks and the command-line." + "These are automatically installed in the LSST Science Platform Notebook Aspect." ] }, { @@ -52,7 +52,6 @@ "outputs": [], "source": [ "import numpy as np\n", - "from tqdm import tqdm_notebook\n", "from lsst.daf.persistence import Butler\n", "from lsst.geom import Box2I, Box2D, Point2I, Point2D, Extent2I, Extent2D\n", "from lsst.afw.image import Exposure, Image, PARENT" @@ -653,7 +652,7 @@ " model.setPsf(psf)\n", " if fluxes is None:\n", " fluxes = cat.getPsfInstFlux()\n", - " for flux, record in zip(fluxes, tqdm_notebook(cat)):\n", + " for flux, record in zip(fluxes, cat):\n", " # Obtain a PSF model image at the position of this source\n", " psfImage = psf.computeImage(record.getCentroid())\n", " # Make sure the PSF model image fits within the larger image; if it doesn't, clip it so it does.\n", @@ -697,7 +696,7 @@ "source": [ "Column arrays of `SourceCatalogs` can only be accessed when the catalog is stored in a single contiguous block of memory. But unlike Numpy arrays, using boolean indexing on a catalog doesn't automatically make a copy to ensure memory is contiguous. Instead it creates a view to the selected rows. That can be useful or more efficient in some cases, but it also prevents us from accessing columns. To fix that, we immediately make a deep copy of the catalog, which copies it into a new block of contiguous memory.\n", "\n", - "We can now run our `subtractStars` function. Note the nice `tqdm` progress bar in action!" + "We can now run our `subtractStars` function." ] }, { @@ -808,7 +807,7 @@ " # because it makes each nested H x W array have the same memory layout as the\n", " # LSST Image class, which lets us make image views into those subarrays.\n", " matrix = np.zeros((N, H, W), dtype=float)\n", - " for n, record in enumerate(tqdm_notebook(cat)):\n", + " for n, record in enumerate(cat):\n", " # Make an Image view to a nested sub-array. 
Note that writing to this\n", " # will modify the parent array.\n", " matrixView = Image(matrix[n, :, :], xy0=bbox.getMin(), dtype=np.float64)\n", @@ -1096,9 +1095,9 @@ ], "metadata": { "kernelspec": { - "display_name": "LSST", + "display_name": "Python 3", "language": "python", - "name": "lsst" + "name": "python3" }, "language_info": { "codemirror_mode": { @@ -1110,7 +1109,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.6" + "version": "3.7.2" } }, "nbformat": 4, From 72f626dc45ec7f0ade11de21140d6fd6bf5c1ad5 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Wed, 9 Oct 2019 13:51:33 +0700 Subject: [PATCH 12/14] update readme for 18.1.0 --- firefly_features/README.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/firefly_features/README.md b/firefly_features/README.md index 6355008..e781896 100644 --- a/firefly_features/README.md +++ b/firefly_features/README.md @@ -2,14 +2,13 @@ The notebooks in this directory are used to test release candidates of the LSST Science Platform Notebook Aspect. -**Last verified to run:** 2019-03-29 +**Last verified to run:** 2019-10-07 -**Verified release or release candidate:** 17.0.1 +**Verified release or release candidate:** 18.1.0 1. [afwDisplay_Firefly_docs.ipynb](afwDisplay_Firefly_docs.ipynb) : Nearly all the content from the [module documentation](https://pipelines.lsst.io/modules/lsst.display.firefly), excluding LSST Source Detection Footprints. 2. [intro-with-globular.ipynb](intro-with-globular.ipynb) : a source detection and crude deblending notebook used in workshops in Summer 2018. A medium or larger-size container is recommended for this notebook. -3. [HSC-Footprints.ipynb](HSC-Footprints.ipynb) : Visualizing LSST Source Detection Footprints from Hyper-Suprime Cam data processed with LSST Science Pipelines. Additionally, the [Firefly notebook](../Firefly.ipynb) demonstrating Firefly capabilities should also be tested for every release. 
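The view-versus-copy behavior that the intro-with-globular notebook relies on — basic slices of a NumPy array are writable views into the parent, while boolean or fancy indexing produces a copy — can be checked in a few lines of plain NumPy. This is only an illustrative sketch of the NumPy semantics; as the notebook text notes, `lsst.afw` `SourceCatalog` indexing behaves differently (boolean indexing there yields a view, hence the explicit deep copy in the notebook).

```python
import numpy as np

# One contiguous (N, H, W) block, like the per-source postage-stamp matrix
# built in the notebook (N sources, each an H x W image).
N, H, W = 3, 4, 5
matrix = np.zeros((N, H, W), dtype=float)

# Basic slicing returns a *view*: writing through it modifies the parent
# array, which is what lets an afw Image wrap matrix[n] directly.
stamp = matrix[1, :, :]
stamp += 7.0
assert matrix[1].sum() == 7.0 * H * W

# Boolean indexing returns a *copy*: writes do not propagate back.
selected = matrix[np.array([True, False, False])]
selected += 1.0
assert matrix[0].sum() == 0.0

# A strided view is non-contiguous; an explicit copy restores contiguity,
# analogous to deep-copying a SourceCatalog subset before column access.
every_other = matrix[:, ::2, :]
assert not every_other.flags["C_CONTIGUOUS"]
assert np.ascontiguousarray(every_other).flags["C_CONTIGUOUS"]
```

The assertions pass as written; the same pattern explains why the notebook copies its catalog subset before reading whole columns.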
From df8b735f9e7720cdb509a2caea0f7072a6240344 Mon Sep 17 00:00:00 2001 From: David Shupe Date: Tue, 15 Oct 2019 13:00:47 +0000 Subject: [PATCH 13/14] add footprint overlay demo --- firefly_features/intro-with-globular.ipynb | 42 +++++++++++++++++++--- 1 file changed, 38 insertions(+), 4 deletions(-) diff --git a/firefly_features/intro-with-globular.ipynb b/firefly_features/intro-with-globular.ipynb index 8ee7305..2909176 100644 --- a/firefly_features/intro-with-globular.ipynb +++ b/firefly_features/intro-with-globular.ipynb @@ -11,7 +11,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Last verified to run:** 2019-10-07\n", + "**Last verified to run:** 2019-10-15\n", "\n", "**Verified science platform release or release candidate:** 18.1.0" ] }, @@ -597,6 +597,40 @@ "showCatalog(display, catalog)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Specific to the `lsst.display.firefly` backend is a facility for overlaying LSST source detection footprints. An example is [shown in the module documentation](https://pipelines.lsst.io/modules/lsst.display.firefly/viewing-footprints.html)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "display.overlayFootprints(catalog,\n", + " color='rgba(74,144,226,0.50)',\n", + " highlightColor='yellow', selectColor='orange',\n", + " style='outline', layerString='detection footprints ',\n", + " titleString='catalog footprints ')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A table viewer for the catalog appears in the Firefly tab. It is possible to filter the `category` column to show only 'deblended child' footprints." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After experimenting with the footprint viewer, you can close the 'catalog footprints 1' tab to carry on with the rest of this tutorial."
+ ] + }, { "cell_type": "markdown", "metadata": {}, @@ -1095,9 +1129,9 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3", + "display_name": "LSST", "language": "python", - "name": "python3" + "name": "lsst" }, "language_info": { "codemirror_mode": { @@ -1113,5 +1147,5 @@ } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } From 9b5d2ba329de13aa55cf40282da067de8828589f Mon Sep 17 00:00:00 2001 From: David Shupe Date: Tue, 15 Oct 2019 13:07:46 +0000 Subject: [PATCH 14/14] update afwDisplay nb to 18.1.0 --- firefly_features/afwDisplay_Firefly_docs.ipynb | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/firefly_features/afwDisplay_Firefly_docs.ipynb b/firefly_features/afwDisplay_Firefly_docs.ipynb index f0b11f0..7550c03 100644 --- a/firefly_features/afwDisplay_Firefly_docs.ipynb +++ b/firefly_features/afwDisplay_Firefly_docs.ipynb @@ -11,9 +11,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Last verified to run:** 2019-03-29\n", + "**Last verified to run:** 2019-10-15\n", "\n", - "**Verified Science Platform release or release candidate:** 17.0.1" + "**Verified Science Platform release or release candidate:** 18.1.0" ] }, { @@ -353,9 +353,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.6" + "version": "3.7.2" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 }
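For reference, the record-selection idiom used throughout these notebooks — keep only catalog records whose pixel centroid falls inside a bounding box — can be sketched without the LSST stack. `MiniBox` below is a hypothetical stand-in for `lsst.geom.Box2I` (the real class is constructed from a corner `Point2I` and an `Extent2I`, its `contains` takes a `Point2I`, and it is inclusive on both edges), and the centroid tuples stand in for `(record.getX(), record.getY())` from a `deepCoadd_meas` catalog.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MiniBox:
    """Hypothetical stand-in for lsst.geom.Box2I: corner plus dimensions."""
    x0: int
    y0: int
    width: int
    height: int

    def contains(self, x: float, y: float) -> bool:
        # Half-open on the far edge for simplicity; the real Box2I is
        # inclusive of integer positions on both edges.
        return (self.x0 <= x < self.x0 + self.width
                and self.y0 <= y < self.y0 + self.height)

# Same region used in HSC-Footprints.ipynb for the footprint overlay.
footprints_bbox = MiniBox(16900, 18700, 600, 600)

# Fake source centroids standing in for catalog records.
centroids = [(17000.5, 18800.2), (16000.0, 18000.0), (17400.9, 19250.1)]
subset = [c for c in centroids if footprints_bbox.contains(*c)]
# subset keeps the first and third centroids; the second lies outside.
```

In the notebooks the same mask is built as a boolean NumPy array and passed to `measCat.subset(...)`, followed by a deep copy so that catalog columns remain accessible.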