Add Pipeline class #2140
Conversation
Force-pushed from 8041e5d to c0ce9ae
Force-pushed from 750ad55 to 55fcd6c
Codecov Report

Additional details and impacted files

@@            Coverage Diff             @@
##           develop    #2140      +/-   ##
===========================================
+ Coverage    36.24%   36.29%    +0.04%
===========================================
  Files          477      478        +1
  Lines        42570    42617       +47
===========================================
+ Hits         15428    15466       +38
- Misses       27142    27151        +9
@@ -116,8 +116,7 @@ def native_quantize_impl(
         advanced_parameters=advanced_parameters,
     )

-    graph = GraphConverter.create_nncf_graph(model)
-    quantized_model = quantization_algorithm.apply(model, graph, dataset=calibration_dataset)
+    quantized_model = quantization_algorithm.run(model, calibration_dataset)
Should it be quantization_pipeline.run, for consistency?
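For readers following along, here is a minimal, hypothetical sketch of what the apply-to-run change in the diff above amounts to; the class and the stub GraphConverter below are illustrative stand-ins, not NNCF's actual implementation:

```python
class GraphConverterStub:
    """Stand-in for the GraphConverter used in the old call site above."""

    @staticmethod
    def create_nncf_graph(model):
        return object()  # placeholder for an NNCFGraph


class QuantizationAlgorithmSketch:
    """Illustrates the apply -> run change from the diff above."""

    def apply(self, model, graph, dataset=None):
        # Old entry point: the caller had to build the graph and pass it in.
        return model

    def run(self, model, dataset):
        # New entry point: graph construction moves inside the algorithm, so
        # the call site shrinks to quantization_algorithm.run(model, dataset).
        graph = GraphConverterStub.create_nncf_graph(model)
        return self.apply(model, graph, dataset=dataset)
```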
    calibration_dataset: Dataset,
    validation_fn: Callable[[Any, Iterable[Any]], Tuple[float, Union[None, List[float], List[List[TTensor]]]]],
    subset_size: int,
    initial_metric_results: MetricResults,
    quantized_metric_results: MetricResults,
):
    """
    :param algorithm_cls: Class of algorithm.
    :param pipeline_cls: Class of pipeline.
Should it be Pipeline or StepwisePipeline after all?
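A rough sketch of the distinction under discussion, assuming the names from this PR (the bodies are placeholders, not the real classes):

```python
from typing import Any, List


class AlgorithmStub:
    """Placeholder for an NNCF post-training algorithm."""

    def apply(self, model: Any, graph: Any, statistic_points: Any) -> Any:
        return model


# As in the PR: a pipeline step is an ordered list of algorithms.
PipelineStep = List[AlgorithmStub]


class PipelineSketch:
    """Flat pipeline: algorithms are applied one after another."""

    def __init__(self, algorithms: List[AlgorithmStub]) -> None:
        self._algorithms = algorithms


class StepwisePipelineSketch:
    """Stepwise variant: algorithms are grouped into steps, statistics are
    collected once per step, and the graph is rebuilt between steps."""

    def __init__(self, pipeline_steps: List[PipelineStep]) -> None:
        self.pipeline_steps = pipeline_steps
```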
algorithm = self._algorithms[best_combination_key]
result_model = algorithm.apply(model, initial_graph, self._statistic_points)
# TODO(andrey-churkin): Show final best settings
How are the final best_settings applied to the model?
if bias_correction_params.threshold is not None:
    threshold = bias_correction_params.threshold

pipeline_steps[-1].append(
Why -1?
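To make the indexing concrete: under the PipelineStep = List[Algorithm] layout shown below, -1 selects the last step, so the append schedules the algorithm at the end of the final step rather than as a new step. A tiny sketch with illustrative strings in place of algorithm objects:

```python
# Each inner list is one pipeline step; the names are placeholders only.
pipeline_steps = [
    ["smooth_quant"],        # step 0
    ["min_max", "fast_bc"],  # step 1 -- the last step
]

# pipeline_steps[-1] is the last step, so the new algorithm runs after the
# algorithms already scheduled there, reusing that step's statistics pass.
pipeline_steps[-1].append("bias_correction")
assert pipeline_steps == [["smooth_quant"], ["min_max", "fast_bc", "bias_correction"]]
```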
PipelineStep = List[Algorithm]


def get_statistic_points(pipeline_step: PipelineStep, model: TModel, graph: NNCFGraph) -> StatisticPointsContainer:
Why not keep these methods in the StepwisePipeline class?
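For context, a sketch of the module-level shape being questioned, with a plain dict standing in for StatisticPointsContainer; the merge logic below is an assumption, not taken from the PR:

```python
from typing import Any, Dict, List

PipelineStep = List[Any]  # an ordered list of algorithm objects


def get_statistic_points(pipeline_step: PipelineStep, model: Any, graph: Any) -> Dict[str, Any]:
    """Gathers the statistic points requested by every algorithm in one step,
    so the whole step needs only a single statistics-collection pass."""
    container: Dict[str, Any] = {}
    for algorithm in pipeline_step:
        # Each algorithm declares the tensor statistics it needs; merging
        # them into one container is a simplification of the real logic.
        container.update(algorithm.get_statistic_points(model, graph))
    return container
```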
# Run current pipeline step
step_model = run_pipeline_step(pipeline_step, step_statistics, step_model, step_graph)

step_graph = None  # We should rebuild the graph for the next pipeline step
Why can't we drop this and rebuild NNCFGraph in each step instead? Is there an optimization here?
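One plausible reading of the step_graph = None line, sketched below: the graph is invalidated after a step mutates the model and rebuilt lazily, so a graph handed in for the first step is not rebuilt needlessly. The loop structure is an assumption based on the snippets in this PR:

```python
def run_pipeline_sketch(pipeline_steps, model, graph_factory):
    """Sketch of the step loop: rebuild the graph only when it is missing."""
    step_model = model
    step_graph = None
    for pipeline_step in pipeline_steps:
        if step_graph is None:
            step_graph = graph_factory(step_model)  # rebuild only when needed
        # ... collect step statistics and run the step's algorithms here ...
        step_graph = None  # the model changed; the cached graph is now stale
    return step_model
```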
for algorithm in pipeline_step[:-1]:
    current_model = algorithm.apply(current_model, current_graph, pipeline_step_statistics)
    current_graph = NNCFGraphFactory.create(current_model)
current_model = pipeline_step[-1].apply(current_model, current_graph, pipeline_step_statistics)
I'm not sure that identifying major algorithms such as MinMax quantization by their position is a good idea. We would always have to keep the position of every algorithm in mind, even outside of PostTrainingQuantization.
-algo = PostTrainingQuantization(target_device=target_device)
-min_max_algo = algo.algorithms[0]
+pipelines = PostTrainingQuantization(target_device=target_device)
+min_max_algo = pipelines.pipeline_steps[-1][0]
This hardcoded index doesn't make it clear how the MinMax algorithm should be retrieved. It's a good example of the increased pipeline complexity and of the need for MinMax as an independent unit.
cc @alexsu52
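A hypothetical alternative in the spirit of these comments: look the algorithm up by type rather than by position, so call sites would not depend on where MinMax happens to sit. The helper name is illustrative, not part of this PR:

```python
from typing import Any, List, Optional, Type


def find_algorithm(pipeline_steps: List[List[Any]], algorithm_cls: Type) -> Optional[Any]:
    """Returns the first algorithm of the given type, wherever it sits,
    instead of hardcoding an index such as pipeline_steps[-1][0]."""
    for step in pipeline_steps:
        for algorithm in step:
            if isinstance(algorithm, algorithm_cls):
                return algorithm
    return None
```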
run openvino pre-commit tests
Force-pushed from 55fcd6c to 75d81f1
@@ -175,7 +180,7 @@ def find_best_combination(
     return best_combination_key


-class HyperparameterTuner:
+class HyperparameterTuner(Pipeline):
I couldn't find which interface I should implement to make my algorithm work with the HyperparameterTuner.
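For what it's worth, here is the minimal contract an algorithm appears to need based on the calls in this PR (apply plus per-step statistics collection); this is inferred from the snippets above, not a documented interface:

```python
from abc import ABC, abstractmethod
from typing import Any


class Algorithm(ABC):
    """Assumed minimal interface used by the pipeline code in this PR."""

    @abstractmethod
    def get_statistic_points(self, model: Any, graph: Any) -> Any:
        """Declares which tensor statistics the algorithm needs collected."""

    @abstractmethod
    def apply(self, model: Any, graph: Any, statistic_points: Any) -> Any:
        """Transforms the model using the pre-collected statistics."""
```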
Force-pushed from 75d81f1 to 69732b9
Force-pushed from 71b7f84 to 96e8d76
Force-pushed from f56b9f7 to d3d90a1
Force-pushed from d3d90a1 to 37c8666
Changes
The primary goals of this pull request are to
Reason for changes
Related tickets
Ref: 117471
Tests