Trouble Visualizing Evaluation Metrics with TensorFlow Model Analysis and Fairness Indicators


I'm currently working on evaluating machine learning models using TensorFlow Model Analysis (TFMA) and Fairness Indicators. I have written code to visualize the evaluation results, including slicing metrics and fairness indicators. However, despite following the documentation and examples closely, I'm encountering an issue where the visualization is not showing up as expected.

Here's a summary of what I've done:

Visualization of metric results

I've ensured that the output_path variable points to the correct location where the evaluation results are saved. Additionally, I've confirmed that the evaluation results are indeed present in the specified output path. I've also checked for any errors or exceptions, but none are being raised.
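To double-check that the evaluation artifacts really exist, one option is a small stdlib helper that lists every file under the output directory (the path is whatever `evaluator.outputs['evaluation'].get()[0].uri` returns; a TFMA run normally leaves files such as `eval_config.json` and `metrics-*` there):

```python
import os

def list_eval_artifacts(output_path):
    """Recursively list files under the evaluation output directory,
    relative to output_path. An empty list means no artifacts were written."""
    artifacts = []
    for root, _dirs, files in os.walk(output_path):
        for name in files:
            artifacts.append(
                os.path.relpath(os.path.join(root, name), output_path))
    return sorted(artifacts)
```

If this returns an empty list, the problem is upstream in the Evaluator run rather than in the widget rendering.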

Could anyone provide insights into why the visualization might not be showing up? Are there any common pitfalls or additional configurations that I might be missing? Any help or suggestions would be greatly appreciated. Thank you!
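One pitfall I have already ruled in or out: in a classic Jupyter Notebook, the TFMA widget extensions have to be installed and enabled, or the render calls silently produce nothing. These are the setup commands from the TFMA install instructions (JupyterLab needs its own ipywidgets support instead):

```shell
# Enable the generic widgets extension, then the TFMA extension.
jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter nbextension install --py --symlink tensorflow_model_analysis --sys-prefix
jupyter nbextension enable --py tensorflow_model_analysis --sys-prefix
# Confirm both show up as enabled.
jupyter nbextension list
```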

Here is my complete code for model analysis and validation:

# Imports assumed from a standard TFX 1.x setup
import tensorflow_model_analysis as tfma
from tfx.components import Evaluator
from tfx.dsl.components.common.resolver import Resolver
from tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy import (
    LatestBlessedModelStrategy)
from tfx.types import Channel
from tfx.types.standard_artifacts import Model, ModelBlessing

model_resolver = Resolver(
    strategy_class=LatestBlessedModelStrategy,
    model=Channel(type=Model),
    model_blessing=Channel(type=ModelBlessing)
).with_id('Latest_blessed_model_resolver')
interactive_context.run(model_resolver)

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label_xf')],
    slicing_specs=[tfma.SlicingSpec()],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(class_name='BinaryAccuracy',
                threshold=tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(
                        lower_bound={'value':0.5}),
                    change_threshold=tfma.GenericChangeThreshold(
                        direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                        absolute={'value':0.0001})
                    )
            )
        ])
    ]
)
evaluator = Evaluator(
    examples=transform.outputs['transformed_examples'],
    model=trainer.outputs['model'],
    baseline_model=model_resolver.outputs['model'],
    eval_config=eval_config
)
interactive_context.run(evaluator)

# Visualize the evaluation results.
# Note: each render_* call returns a widget, and a notebook only
# auto-displays the last expression in a cell, so either display the
# first widget explicitly (IPython.display.display) or put the two
# render calls in separate cells.
output_path = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(output_path)
tfma.view.render_slicing_metrics(tfma_result)
tfma.addons.fairness.view.widget_view.render_fairness_indicator(
    tfma_result
)
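As a sanity check independent of the widgets, the loaded result can be dumped as plain text. This fragment assumes the `tfma_result` loaded above and my understanding that `EvalResult.slicing_metrics` is a list of (slice key, nested metrics dict) pairs; if it prints metrics, the data is fine and the problem is purely in the notebook rendering:

```python
# Text-only fallback: print the metrics per slice instead of rendering a widget.
for slice_key, metric_values in tfma_result.slicing_metrics:
    print('slice:', slice_key)
    for output_name, per_output in metric_values.items():
        print('  ', output_name, per_output)
```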