Merge pull request #37 from ilastik/main
sync up with fork
k-dominik authored Feb 14, 2024
2 parents 6b79e73 + fd7342d commit 100546e
Showing 94 changed files with 5,460 additions and 3,974 deletions.
56 changes: 0 additions & 56 deletions .circleci/config.yml

This file was deleted.

35 changes: 35 additions & 0 deletions .github/workflows/deploy.yml
@@ -0,0 +1,35 @@
name: deploy

on:
push:
tags:
- '*'


jobs:
deploy-to-ilastik-forge:
# noarch build - build on linux, only
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: actions/checkout@v3
with:
repository: ilastik/ilastik-conda-recipes
path: ilastik-conda-recipes
- uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true
auto-activate-base: true
activate-environment: ""
miniforge-variant: Mambaforge
use-mamba: true
- name: linux conda build and upload
shell: bash -l {0}
env:
ANACONDA_API_TOKEN: ${{ secrets.ANACONDA_TOKEN }}
run: |
mamba install -n base -c conda-forge boa setuptools_scm anaconda-client -y
mamba config --set anaconda_upload yes
conda mambabuild -c ilastik-forge -c conda-forge -m ilastik-conda-recipes/ilastik-pins.yaml --user ilastik-forge conda-recipe
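For local debugging, the build step above could be reproduced without publishing. A minimal Python sketch, assuming conda/boa are installed and ilastik-conda-recipes is cloned next to the working directory; the --no-anaconda-upload flag skips the upload that CI performs via ANACONDA_API_TOKEN (this sketch is not part of the commit):

# Hypothetical local reproduction of the CI build step.
import subprocess

cmd = [
    "conda", "mambabuild",
    "-c", "ilastik-forge", "-c", "conda-forge",
    "-m", "ilastik-conda-recipes/ilastik-pins.yaml",
    "--no-anaconda-upload",  # build only; CI uploads with anaconda-client instead
    "conda-recipe",
]
subprocess.run(cmd, check=True)  # raises CalledProcessError if the build fails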
40 changes: 40 additions & 0 deletions .github/workflows/test.yml
@@ -0,0 +1,40 @@
name: test

on:
push:
branches: [ main ]
pull_request:
branches: [ main ]

jobs:
test-w-conda-recipe:
strategy:
fail-fast: false
matrix:
os: [macos-latest, windows-latest, ubuntu-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ""
auto-activate-base: true
auto-update-conda: true
miniforge-variant: Mambaforge
use-mamba: true
- name: install build deps
run: mamba install -n base -c conda-forge boa setuptools_scm -y
- name: linux conda build test
if: matrix.os == 'ubuntu-latest'
shell: bash -l {0}
run: conda mambabuild -c ilastik-forge -c conda-forge conda-recipe
- name: osx test
if: matrix.os == 'macos-latest'
shell: bash -l {0}
run: conda mambabuild -c ilastik-forge -c conda-forge conda-recipe
- name: windows conda-build
if: matrix.os == 'windows-latest'
shell: cmd /C CALL {0}
run: conda mambabuild -c ilastik-forge -c conda-forge conda-recipe
1 change: 1 addition & 0 deletions .gitignore
@@ -6,6 +6,7 @@
CMakeCache*
*.pyc
*_test
__pycache__
tags*
Testing
CMakeFiles
4 changes: 3 additions & 1 deletion Readme.md
@@ -2,7 +2,9 @@

By Carsten Haubold, Steffen Wolf, Letitia Parcalabescu, Bernhard Kausler, Martin Schiegg, Jaime I. Cervantes, Janez Ales and more.

* Build status: [ ![Circle CI](https://circleci.com/gh/chaubold/hytra.png?style=shield&circle-token=27b4fff289dfdb41575cecfab8e865c7cac6a099) ](https://circleci.com/gh/chaubold/hytra)

* build status: [![test](https://github.com/ilastik/hytra/actions/workflows/test.yml/badge.svg)](https://github.com/ilastik/hytra/actions/workflows/test.yml)
* conda: ![last updated](https://anaconda.org/ilastik-forge/hytra/badges/latest_release_date.svg) ![latest-version](https://anaconda.org/ilastik-forge/hytra/badges/version.svg)
* Usage documentation can be found in this [Google document](https://docs.google.com/document/d/1jxkYGlTEUCPqH03pip03eDBBX2pVYEhPGHHvbegHiWw/edit?usp=sharing)
* [API Docs](http://chaubold.github.io/hytra/hytra/index.html)
* Run tests using `nosetests tests` from the root folder
8 changes: 1 addition & 7 deletions conda-recipe/conda_build_config.yaml
@@ -1,8 +1,2 @@
networkx:
- 1.11
python:
- 3.6


pin_run_as_build:
python: x.x
- 2
17 changes: 8 additions & 9 deletions conda-recipe/meta.yaml
@@ -21,18 +21,17 @@ build:

requirements:
build:
- python {{ python }}
- python >=3.6
- pip

run:
- python >=2.7
- dpct
- networkx <={{ networkx }}
- yapsy
- vigra
- scikit-learn
- scikit-image
- h5py
- networkx >={{ networkx }}
- python >=3.6
- scikit-image
- scikit-learn
- vigra
- yapsy

test:
source_files:
@@ -50,6 +49,6 @@ test:
- nosetests tests

about:
home: https://github.com/chaubold/hytra
home: https://github.com/ilastik/hytra
license: MIT
summary: 'Python tracking framework developed at the IAL lab @ University of Heidelberg'
19 changes: 19 additions & 0 deletions dev/environment-dev.yaml
@@ -0,0 +1,19 @@
name: hytra-dev
channels:
- ilastik-forge
- conda-forge
- defaults
dependencies:
- black
- configargparse
- dpct
- h5py
- jinja2
- networkx >=2.2
- nose
- pre_commit
- python >=3.7
- scikit-image
- scikit-learn
- vigra
- yapsy
8 changes: 4 additions & 4 deletions empryonic/filter.py
@@ -24,7 +24,7 @@ def filterFeaturesByIntensity(h5In, h5Out, threshold = 1500):
h5Out: output path; file will be overwritten if already existing
'''
def intensityFilter(labelGroup):
intMaximum = labelGroup[intminmax].value[1]
intMaximum = labelGroup[intminmax][()][1]
return (intMaximum >= threshold)
filterFeaturesByPredicate(h5In, h5Out, intensityFilter)

@@ -81,17 +81,17 @@ def filterFeaturesByPredicate(h5In, h5Out, predicate):

# supervoxels
print "labelcount = ", labelcount
outFeaturesGroup.create_dataset(labelcount, data=featuresGroup[labelcount].value)
outFeaturesGroup.create_dataset(labelcount, data=featuresGroup[labelcount][()])

# featureconfig
outFeaturesGroup.create_dataset(featureconfig, data=featuresGroup[featureconfig].value)
outFeaturesGroup.create_dataset(featureconfig, data=featuresGroup[featureconfig][()])

# labels
for labelGroup in validLabelGroups:
outFile.copy(labelGroup, outFeaturesGroup)

# labelcontent
inLabelcontent = featuresGroup[labelcontent].value
inLabelcontent = featuresGroup[labelcontent][()]
outLabelcontent = np.zeros(inLabelcontent.shape, dtype=inLabelcontent.dtype)

validLabels = filter(lambda item: item.isdigit(), outFeaturesGroup.keys())
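The recurring edit in this file (and in empryonic/io.py below) swaps the h5py Dataset .value accessor, which was removed in h5py 3.0, for [()] indexing. A minimal sketch of the two read styles, using placeholder file and dataset names rather than anything from this repository:

# Hypothetical example of the .value -> [()] migration shown in the diff.
import h5py
import numpy as np

with h5py.File("example.h5", "w") as f:
    f.create_dataset("labelcontent", data=np.arange(5))

with h5py.File("example.h5", "r") as f:
    # old API (removed in h5py 3.0): data = f["labelcontent"].value
    data = f["labelcontent"][()]  # reads the whole dataset into a NumPy array

print(data)  # [0 1 2 3 4]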
32 changes: 16 additions & 16 deletions empryonic/io.py
@@ -15,7 +15,7 @@ def __loadDataset( filename, h5path):
h5path: location of the data inside the hdf5 file
'''
f = h5py.File( filename, mode='r' )
data = f[h5path].value
data = f[h5path][()]
f.close()
return data

@@ -214,25 +214,25 @@ def update_moves( self, mov_pairs ):

def get_moves( self ):
if self.has_tracking() and _path.basename(self.mov_ds) in self[self.track_gn].keys():
return self[self.mov_ds].value
return self[self.mov_ds][()]
else:
return np.empty(0)

def get_mergers( self ):
if self.has_tracking() and _path.basename(self.merg_ds) in self[self.track_gn].keys():
return self[self.merg_ds].value
return self[self.merg_ds][()]
else:
return np.empty(0)

def get_multiFrameMoves( self ):
if self.has_tracking() and _path.basename(self.multi_ds) in self[self.track_gn].keys():
return self[self.multi_ds].value
return self[self.multi_ds][()]
else:
return np.empty(0)

def get_move_energies( self ):
if _path.basename(self.mov_ener_ds) in self[self.track_gn].keys():
e = self[self.mov_ener_ds].value
e = self[self.mov_ener_ds][()]
if isinstance(e, np.ndarray):
return e
else:
@@ -243,7 +243,7 @@ def get_move_energies( self ):

def get_divisions( self ):
if self.has_tracking() and _path.basename(self.div_ds) in self[self.track_gn].keys():
return self[self.div_ds].value
return self[self.div_ds][()]
else:
return np.empty(0)

@@ -255,7 +255,7 @@ def update_divisions( self, div_triples ):

def get_division_energies( self ):
if _path.basename(self.div_ener_ds) in self[self.track_gn].keys():
e = self[self.div_ener_ds].value
e = self[self.div_ener_ds][()]
if isinstance(e, np.ndarray):
return e
else:
@@ -265,7 +265,7 @@ def get_division_energies( self ):

def get_disappearances( self ):
if self.has_tracking() and _path.basename(self.dis_ds) in self[self.track_gn].keys():
dis = self[self.dis_ds].value
dis = self[self.dis_ds][()]
if isinstance(dis, np.ndarray):
return dis
else:
@@ -281,7 +281,7 @@ def update_disappearances( self, dis_singlets ):

def get_disappearance_energies( self ):
if _path.basename(self.dis_ener_ds) in self[self.track_gn].keys():
e = self[self.dis_ener_ds].value
e = self[self.dis_ener_ds][()]
if isinstance(e, np.ndarray):
return e
else:
@@ -292,7 +292,7 @@ def get_disappearance_energies( self ):

def get_appearances( self ):
if self.has_tracking() and _path.basename(self.app_ds) in self[self.track_gn].keys():
app = self[self.app_ds].value
app = self[self.app_ds][()]
if isinstance(app, np.ndarray):
return app
else:
@@ -308,7 +308,7 @@ def update_appearances( self, app_singlets ):

def get_appearance_energies( self ):
if _path.basename(self.app_ener_ds) in self[self.track_gn].keys():
e = self[self.app_ener_ds].value
e = self[self.app_ener_ds][()]
if isinstance(e, np.ndarray):
return e
else:
@@ -336,7 +336,7 @@ def rm_disappearance( self, id ):

def get_ids( self ):
features_group = self[self.feat_gn]
labelcontent = features_group["labelcontent"].value
labelcontent = features_group["labelcontent"][()]
valid_labels = (np.arange(len(labelcontent))+1)[labelcontent==1]
return valid_labels

@@ -365,16 +365,16 @@ def cTraxels( self, as_python_list=False, prediction_threshold=None ):
def _cTraxels_from_objects_group( self , as_python_list = False, prediction_threshold=None):
objects_g = self["objects"]
features_g = self["objects/features"]
ids = objects_g["meta/id"].value
valid = objects_g["meta/valid"].value
ids = objects_g["meta/id"][()]
valid = objects_g["meta/valid"][()]
prediction = None
if "prediction" in objects_g["meta"]:
prediction = objects_g["meta/prediction"]
elif prediction_threshold:
raise Exception("prediction_threshold set, but no prediction dataset found")
features = {}
for name in features_g.keys():
features[name] = features_g[name].value
features[name] = features_g[name][()]

if as_python_list:
ts = list()
@@ -404,7 +404,7 @@ def _cTraxels_from_objects_group( self , as_python_list = False, prediction_threshold=None):

def _cTraxels_from_features_group( self ):
features_group = self[self.feat_gn]
labelcontent = features_group["labelcontent"].value
labelcontent = features_group["labelcontent"][()]
invalid_labels = (np.arange(len(labelcontent))+1)[labelcontent==0]

# note, that we used the ctracklet_from_labelgroup() here before, but had
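The isinstance(e, np.ndarray) checks preserved above remain relevant after the migration: like the old .value, [()] returns a NumPy scalar for 0-d datasets and an ndarray otherwise. A small sketch with made-up file and dataset names illustrating the distinction:

# Hypothetical illustration; "energies.h5" and the dataset names are placeholders.
import h5py
import numpy as np

with h5py.File("energies.h5", "w") as f:
    f.create_dataset("scalar_energy", data=3.0)                    # 0-d dataset
    f.create_dataset("move_energies", data=np.array([0.1, 0.2]))   # 1-d dataset

with h5py.File("energies.h5", "r") as f:
    s = f["scalar_energy"][()]   # numpy.float64 scalar, not an ndarray
    a = f["move_energies"][()]   # 1-d ndarray

print(isinstance(s, np.ndarray), isinstance(a, np.ndarray))  # False True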
4 changes: 2 additions & 2 deletions empryonic/learning/optimal_matching.py
@@ -135,8 +135,8 @@ def _formulate_associations( graph, solved_ilp_variables ):
for id, var in solved_ilp_variables.items():
if var.value() == 1:
match = graph.edges[id]
lhs = graph.lhs[match.id_lhs].value
rhs = graph.rhs[match.id_rhs].value
lhs = graph.lhs[match.id_lhs][()]
rhs = graph.rhs[match.id_rhs][()]
if lhs != None:
assoc['lhs'][lhs] = rhs
if rhs != None: