v0.73 (alpha) release version
jmetz committed Nov 7, 2017
0 parents commit fd2bde6
Showing 60 changed files with 4,753 additions and 0 deletions.
5 changes: 5 additions & 0 deletions CONTRIBUTORS.txt
@@ -0,0 +1,5 @@
- Jeremy Metz
Project coordinator

- Ashley Smith
Core developer
56 changes: 56 additions & 0 deletions README.md
@@ -0,0 +1,56 @@
# Introduction

This project aims to provide an automated framework to detect, track, and analyse bacteria in a "Mother Machine" microfluidic setup.

# Prerequisites
To run momanalysis you must have Python installed on your computer.
For Windows we recommend either Anaconda or WinPython.
For Mac, Unix or Linux we recommend Anaconda.

WinPython: https://winpython.github.io
Anaconda: https://conda.io/docs/user-guide/install/index.html

# Installing momanalysis

Installing momanalysis is simple.

Just open a terminal (or the Anaconda prompt or WinPython command prompt) and type the following:

> pip install -U "momanalysis link here!!!"

# Starting momanalysis

Once the package is installed, you can start the user interface by typing:

> momanalysis

![momanalysis user interface](/docs/momanalysis_gui.png "User Interface")

# Running momanalysis

Running momanalysis is easy. You can manually add the file(s)/folder path, select it using the button provided, or simply drag and drop.
The various input options and the respective parameters required on the user interface are listed below.
N.B. If running in batch mode on a folder of images (which can include fluorescence), see the next section.

No fluorescence
* Single brightfield or phase contrast image - add the file and hit "start analysis"
* Stack of brightfield or phase contrast images - add the file and hit "start analysis"

Fluorescence
* Alternating stack of brightfield/phase contrast images and their matching fluorescent images (e.g. BF/Fluo/BF/Fluo) - add the file, select "combined fluorescence" and hit "start analysis"
* Corresponding fluorescent image(s) are in a separate file (stacks if multiple frames) - add the files, select "separate fluorescence" and hit "start analysis"
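
These modes can also be run without the user interface via the module interface included in this commit (momanalysis/__main__.py); a minimal sketch, with illustrative file names:

> python -m momanalysis bf_fluo_stack.tif -f

> python -m momanalysis bf_stack.tif --fluo fluo_stack.tif

Here -f corresponds to "combined fluorescence" and --fluo to "separate fluorescence"; run "python -m momanalysis --help" for the full list of options.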

# Folder (Batch mode)
momanalysis also has the capacity to run in batch mode on a data set containing multiple areas. To do this, the files within the folder must be named in the following format:
"a_b_c.tif" where:
* a = an identifier for the area in which the image was taken, e.g. "01" for all images in the first area of the mother machine, "02" for the second, etc.
* b = a timestamp; this allows momanalysis to stack the images in chronological order
* c = any further identifiers of your choice
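
For example (illustrative names only), a suitable folder might contain:
* 01_000001_BF.tif, 01_000002_BF.tif, ... (first area, successive timepoints)
* 02_000001_BF.tif, 02_000002_BF.tif, ... (second area)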

Once you have a suitable folder, simply add it to the interface as described above and select "batch run".
If it is just brightfield/phase contrast then you can hit "start analysis"; if your folder also includes fluorescent images (see the note below), select "combined fluorescence" before starting the analysis.

Note: when including fluorescent images in a folder, the timestamps should alternate between the BF/phase image and the matching fluorescent image (as in the alternating stack described above).

# Additional options
You also have the option of specifying where you want the output of the analysis to be saved. If this is left undefined, it will be saved in the same folder as the input.
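
From the module interface the equivalent is the --output option; a minimal sketch, with an illustrative output name:

> python -m momanalysis mydata.tif --output myoutput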
Binary file added docs/momanalysis_gui.png
Empty file added momanalysis/__init__.py
90 changes: 90 additions & 0 deletions momanalysis/__main__.py
@@ -0,0 +1,90 @@
#
# FILE : __main__.py
# CREATED : 22/09/16 12:48:59
# AUTHOR : J. Metz <[email protected]>
# DESCRIPTION : CLI module interface - invoked using python -m momanalysis
#

import argparse
from momanalysis.main import run_analysis_pipeline
from momanalysis.main import batch

from momanalysis.utility import logger

logger.setLevel("DEBUG")

# Parse command line arguments

parser = argparse.ArgumentParser(
    description="MOther Machine ANALYSIS module")
parser.add_argument("filename", nargs="+", help="Data file(s) to load and analyse")
parser.add_argument("-t", default=None, type=int, help="Frame to run to for multi-frame stacks")
parser.add_argument("--invert", action="store_true", help="Invert brightfield image")
parser.add_argument("--brightchannel", action="store_true", help="Detect channel as bright line instead of dark line")
parser.add_argument("--show", action="store_true", help="Show the plots")
parser.add_argument("--limitmem", action="store_true", help="Limit memory to 1/3 of system RAM")
parser.add_argument("--debug", action="store_true", help="Debug")
parser.add_argument("--loader", default="default", choices=['default', 'tifffile'], help="Switch IO method")
parser.add_argument("--channel", type=int, default=None, help="Select channel for multi-channel images")
parser.add_argument("--tdim", type=int, default=0, help="Identify time channel if not 0")
parser.add_argument("--output", default=None, help="Name of output file; if not specified it will be the same as the input")
parser.add_argument("--fluo", default=None, help="Stack of matching fluorescent images")
parser.add_argument("-f", action="store_true", help="Stack of images contains alternating matching fluorescent images")
parser.add_argument("-ba", action="store_true", help="Run on a folder containing multiple image areas (batch mode)")
args = parser.parse_args()


if args.limitmem:
    #------------------------------
    # Memory management
    #------------------------------
    import resource
    import os
    gb = 1024.**3
    mem_bytes = os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')
    mem_gib = mem_bytes/gb  # e.g. 3.74
    logger.debug("MANAGING MEMORY USAGE...")
    logger.debug("SYSTEM MEMORY: %0.2f GB" % mem_gib)
    logger.debug("LIMITING PROGRAM TO %0.2f GB" % (mem_gib/3))
    lim = mem_bytes//3
    rsrc = resource.RLIMIT_AS
    soft, hard = resource.getrlimit(rsrc)
    resource.setrlimit(rsrc, (lim, hard))  # limit the address-space soft limit

# Run appropriate analysis

if args.ba:
    batch(
        args.filename,
        output=args.output,
        tmax=args.t,
        invert=args.invert,
        show=args.show,
        debug=args.debug,
        brightchannel=args.brightchannel,
        loader=args.loader,
        channel=args.channel,
        tdim=args.tdim,
        fluo=args.fluo,
        fluoresc=args.f,
        batch=args.ba
    )
else:
    run_analysis_pipeline(
        args.filename,
        output=args.output,
        tmax=args.t,
        invert=args.invert,
        show=args.show,
        debug=args.debug,
        brightchannel=args.brightchannel,
        loader=args.loader,
        channel=args.channel,
        tdim=args.tdim,
        fluo=args.fluo,
        fluoresc=args.f,
        batch=args.ba
    )
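
# Illustrative invocations of this module interface (the file and folder
# names below are hypothetical, not shipped with the package):
#   python -m momanalysis stack.tif -t 50 --show
#   python -m momanalysis bf_stack.tif --fluo fluo_stack.tif --output results
#   python -m momanalysis /path/to/area_folder -ba -f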



Binary file added momanalysis/__pycache__/__init__.cpython-36.pyc
Binary file added momanalysis/__pycache__/__main__.cpython-36.pyc
Binary file added momanalysis/__pycache__/detection.cpython-36.pyc
Binary file added momanalysis/__pycache__/gui.cpython-36.pyc
Binary file added momanalysis/__pycache__/io.cpython-36.pyc
Binary file added momanalysis/__pycache__/main.cpython-36.pyc
Binary file added momanalysis/__pycache__/output.cpython-36.pyc
Binary file added momanalysis/__pycache__/plot.cpython-36.pyc
Binary file added momanalysis/__pycache__/tracking.cpython-36.pyc
Binary file added momanalysis/__pycache__/utility.cpython-36.pyc
72 changes: 72 additions & 0 deletions momanalysis/_filters.py
@@ -0,0 +1,72 @@
import numpy as np
import scipy.ndimage as ndi

def gaussian_kernel(sigma, window_factor=4):
    # Method suggested on scipy cookbook
    """ Returns a normalized 2D gauss kernel array for convolutions """
    linrange = slice(-int(window_factor*sigma), int(window_factor*sigma)+1)
    x, y = np.mgrid[linrange, linrange]
    g = np.exp(-(0.5*x**2/sigma**2 + 0.5*y**2/sigma**2))
    return g / g.sum()

"""
# Method I used in the 1d gaussian filter
sd = float(sigma)
# make the length of the filter equal to 4 times the standard
# deviations:
lw = int(window_factor * sd + 0.5)
weights = [0.0] * (2 * lw + 1)
weights[lw] = 1.0
sum = 1.0
sd = sd * sd
# calculate the kernel:
for ii in range(1, lw + 1):
tmp = np.exp(-0.5 * float(ii * ii) / sd)
weights[lw + ii] = tmp
weights[lw - ii] = tmp
wsum += 2.0 * tmp
for ii in range(2 * lw + 1):
weights[ii] /= wsum
"""

def _eig2image(Lxx, Lxy, Lyy):
    """
    TODO: Acknowledge here
    """
    tmp = np.sqrt((Lxx - Lyy)**2 + 4*Lxy**2)
    v2x = 2*Lxy
    v2y = Lyy - Lxx + tmp

    # Normalize
    mag = np.sqrt(v2x**2 + v2y**2)
    i = (mag != 0)
    v2x[i] = v2x[i]/mag[i]
    v2y[i] = v2y[i]/mag[i]

    # The eigenvectors are orthogonal
    v1x = -v2y
    v1y = v2x

    # Compute the eigenvalues
    mu1 = 0.5*(Lxx + Lyy + tmp)
    mu2 = 0.5*(Lxx + Lyy - tmp)

    # Sort eigenvalues by absolute value: abs(Lambda1) < abs(Lambda2)
    check = np.abs(mu1) > np.abs(mu2)

    Lambda1 = mu1
    Lambda1[check] = mu2[check]
    Lambda2 = mu2
    Lambda2[check] = mu1[check]

    Ix = v1x
    Ix[check] = v2x[check]
    Iy = v1y
    Iy[check] = v2y[check]

    return Lambda1, Lambda2, Ix, Iy
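
# Usage sketch (an editorial assumption, not part of the committed file):
# one way _eig2image might be fed is with smoothed second derivatives
# (the Hessian) computed via scipy.ndimage Gaussian-derivative filters.
# The helper name hessian_eigenvalues and the default sigma are illustrative;
# np and ndi are the imports already at the top of this module.
def hessian_eigenvalues(image, sigma=2.0):
    """Per-pixel Hessian eigenvalues, sorted so abs(Lambda1) <= abs(Lambda2)."""
    img = np.asarray(image, dtype=float)
    Lxx = ndi.gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2 (x = axis 1)
    Lxy = ndi.gaussian_filter(img, sigma, order=(1, 1))  # d2/dxdy
    Lyy = ndi.gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2 (y = axis 0)
    Lambda1, Lambda2, Ix, Iy = _eig2image(Lxx, Lxy, Lyy)
    return Lambda1, Lambda2
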
106 changes: 106 additions & 0 deletions momanalysis/bacteria_tracking.py
@@ -0,0 +1,106 @@
from skimage.measure import regionprops
import itertools
from functools import reduce
import numpy as np
import matplotlib.pyplot as plt

prob_div = 0.01  # we need a way of determining these but will hard code for now
prob_death = 0.5  # we need a way of determining these but will hard code for now
prob_no_change = 0.95  # may need this to be independent of prob_death/prob_div
av_bac_length = 18  # average bacteria length - may want to code this in - actual bacteria nearer 18



def find_changes(in_list, option_list, well, new_well):
    measurements_in = {}
    measurements_out = {}
    change_dict = {}
    for i, region in enumerate(regionprops(well)):
        measurements_in[i] = [region.centroid[0], region.area]  # find y centroid coord and area of each old bac
    for j, region2 in enumerate(regionprops(new_well)):
        measurements_out[j] = [region2.centroid[0], region2.area]  # find y centroid coord and area of each new bac
    for option in option_list:  # each option is a potential combination of bacteria lineage/death
        in_options_dict = {}
        for in_num, in_options in enumerate(in_list):
            out_bac_area = []
            out_bac_centr = []
            num_divs = (option.count(in_options)) - 1  # determine the number of divisions/deaths
            for l, op in enumerate(option):
                if op == in_options:  # if the values match append the new centr/areas
                    out_bac_area.append(measurements_out[l][1])
                    out_bac_centr.append(measurements_out[l][0])
            if sum(out_bac_area) < (measurements_in[in_num][1]):  # need to divide by biggest number (so prob < 1)
                area_chan = sum(out_bac_area)/(measurements_in[in_num][1])  # find relative change in area compared to original
            else:
                area_chan = (measurements_in[in_num][1])/sum(out_bac_area)  # find relative change in area compared to original
            if len(out_bac_centr) != 0:
                centr_chan = abs(((sum(out_bac_centr))/(len(out_bac_centr))) - (measurements_in[in_num][0]))  # find the average new centroid
            else:
                centr_chan = 0
            # assign the values to the correct 'in' label
            in_options_dict[in_options] = [num_divs, area_chan, centr_chan]
        change_dict[option] = in_options_dict  # assign the changes to the respective option
    return change_dict
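
# Sketch (an editorial assumption, not from the committed file): one way the
# option_list consumed by find_changes could be generated. Each option assigns
# every bacterium detected in the new well to a label from the old well
# (in_list), so the candidate lineages form a Cartesian product. The helper
# name enumerate_options is illustrative.
def enumerate_options(in_list, n_new):
    """Return every assignment of n_new new bacteria to the old labels."""
    return list(itertools.product(in_list, repeat=n_new))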

def find_probs(change_dict):
    prob_dict = {}
    temp_dict = {}
    if len(change_dict) == 0:
        most_likely = None
        return most_likely
    for option, probs in change_dict.items():
        probslist = []
        for p in probs:
            divs_deaths = probs[p][0]  # find the potential number of deaths/divisions for each bac
            relative_area = probs[p][1]  # find the relative area change
            change_centr = probs[p][2]  # find the number of pixels the centroid has moved by
            if divs_deaths < 0:  # if the bacteria has died:
                prob_divis = prob_death  # probability simply equals that of death
                prob_centr = 1  # the change in centroid is irrelevant so set probability as 1
                prob_area = 1  # the change in area is irrelevant so set probability as 1
            if divs_deaths == 0:  # if the bacteria hasn't died or divided
                prob_divis = prob_no_change  # probability of division simply equals probability of no change
                prob_area = relative_area  # the area will be equal to the relative area change - may need adjusting
                if change_centr == 0:  # if there is no change then set prob to 1 (0 will cause div error)
                    prob_centr = 1
                else:
                    prob_centr = 1/(abs(change_centr))  # the greater the change the less likely
            if divs_deaths > 0:  # if bacteria have divided:
                if relative_area < divs_deaths:  # need to make sure we divide by biggest number to keep prob < 1
                    prob_area = relative_area/divs_deaths  # normalise relative area to the number of divisions
                else:
                    prob_area = divs_deaths/relative_area  # normalise relative area to the number of divisions
                prob_divis = prob_div**(divs_deaths*divs_deaths)  # each additional division is penalised more heavily - need to think about it
                # for each division the bacteria centroid is expected to move half the bac length
                prob_centr = 1/abs(((divs_deaths*(av_bac_length/2)) - (change_centr)))
            probslist.append(prob_area*prob_divis*prob_centr)  # combine the probabilities for division, area and centroid
            temp_dict[p] = prob_area*prob_divis*prob_centr  # same as probslist but makes output more readable during development
        prob_dict[option] = reduce(lambda x, y: x*y, probslist)  # multiply the probabilities across all bacteria
    most_likely = max(prob_dict, key=prob_dict.get)
    return most_likely

def label_most_likely(most_likely, new_well, label_dict_string):
    out_well = np.zeros(new_well.shape, dtype=new_well.dtype)
    if most_likely is None:
        return out_well, label_dict_string  # if there is no likely option return an empty well
    new_label_string = 0
    smax = max(label_dict_string, key=int)
    for i, region in enumerate(regionprops(new_well)):
        if most_likely.count(most_likely[i]) == 1:
            out_well[new_well == region.label] = most_likely[i]
        else:
            smax += 1
            out_well[new_well == region.label] = smax
            if i > 0:
                last_label_start = label_dict_string[most_likely[i-1]]
            else:
                last_label_start = label_dict_string[most_likely[i]]
            new_label_start = label_dict_string[most_likely[i]]
            if new_label_start != last_label_start:
                new_label_string = 0
            new_label_string += 1
            add_string = "_%s" % (new_label_string)
            label_dict_string[smax] = new_label_start + add_string
    return out_well, label_dict_string
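
# Sketch (an editorial assumption, not from the committed file) of how the
# three functions above might be chained for one well across two frames;
# in_list, option_list and label_lookup are illustrative names for the
# caller-supplied inputs.
def track_well(well_t0, well_t1, in_list, option_list, label_lookup):
    changes = find_changes(in_list, option_list, well_t0, well_t1)
    best_option = find_probs(changes)
    return label_most_likely(best_option, well_t1, label_lookup)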
