Performance evaluator¶
API reference¶
- class doatools.performance.evaluator.PerformanceResult(snr, n_snapshots, n_monte_carlo)[source]¶
Bases: object
Encapsulates performance evaluation results.
- crb_values¶
Dictionary containing the computed CRB values, where keys are CRB types and values are the corresponding CRB values.
- Type: dict
- estimator_results¶
Results for each estimator, where keys are estimator names and values are dictionaries of metric results.
- Type: dict
- add_estimator_result(estimator_name, metric_results, sample_estimates=None, computation_time=0.0)[source]¶
Adds the results for a single estimator (a usage sketch follows the parameter list below).
- Parameters:
estimator_name (str) – Name of the estimator.
metric_results (dict) – Metric results, where keys are metric names and values are metric values.
sample_estimates (array, optional) – Array of sample estimates with shape (n_runs, n_sources).
computation_time (float, optional) – Computation time for this estimator in seconds.
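A minimal sketch of recording results on a PerformanceResult by hand; the metric values, shapes, and the 'music' key below are purely illustrative:

```python
import numpy as np
from doatools.performance.evaluator import PerformanceResult

# Illustrative values only: 500 Monte Carlo runs, 2 sources.
result = PerformanceResult(snr=0.0, n_snapshots=100, n_monte_carlo=500)
result.add_estimator_result(
    'music',                              # estimator name used as the key
    {'mse': 2.3e-4, 'rmse': 1.5e-2},      # metric name -> metric value
    sample_estimates=np.zeros((500, 2)),  # shape (n_runs, n_sources)
    computation_time=0.42,                # seconds
)
print(result.estimator_results['music'])
```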
- class doatools.performance.evaluator.DOAPerformanceEvaluator(array, sources, snr, n_snapshots, n_monte_carlo, estimators, crb_types=None, metrics=None, save_sample_estimates=False, n_jobs=1)[source]¶
Bases: object
DOA performance evaluator for assessing DOA estimation algorithms under different conditions.
Given user-specified simulation parameters, the evaluator runs Monte Carlo simulations, computes the requested evaluation metrics (e.g., MSE, RMSE) and theoretical performance bounds (CRBs), and returns the results along with per-estimator computation times. A usage sketch follows the evaluate() entry below.
- evaluate(custom_metrics=None, verbose=0)[source]¶
Performs performance evaluation.
- Parameters:
custom_metrics (dict, optional) – Dictionary of custom evaluation metrics, where keys are metric names and values are functions that accept two parameters (estimated locations and true locations) and return a metric value. For example: {'mae': lambda est, true: np.mean(np.abs(est - true))}
verbose (int, optional) – Verbosity level. 0 for silent, 1 for progress bar.
- Returns:
Evaluation result object.
- Return type: PerformanceResult
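A minimal usage sketch, assuming the standard doatools model and estimation classes (UniformLinearArray, FarField1DSourcePlacement, FarField1DSearchGrid, MUSIC); the array geometry, source angles, and the 'MUSIC' key are illustrative choices, not requirements of this API:

```python
import numpy as np
import doatools.model as model
import doatools.estimation as estimation
from doatools.performance.evaluator import DOAPerformanceEvaluator

# Illustrative scenario: 10-element half-wavelength ULA, two far-field sources.
wavelength = 1.0
array = model.UniformLinearArray(10, wavelength / 2)
sources = model.FarField1DSourcePlacement(np.deg2rad([-20.0, 30.0]))
grid = estimation.FarField1DSearchGrid()
estimators = {'MUSIC': estimation.MUSIC(array, wavelength, grid)}

evaluator = DOAPerformanceEvaluator(
    array, sources,
    snr=0.0, n_snapshots=100, n_monte_carlo=200,
    estimators=estimators,
    crb_types=['sto'], metrics=['mse', 'rmse'],
    n_jobs=-1,  # use all available CPUs
)
result = evaluator.evaluate(
    custom_metrics={'mae': lambda est, true: np.mean(np.abs(est - true))},
    verbose=1,
)
print(result.crb_values)         # CRB values keyed by CRB type
print(result.estimator_results)  # per-estimator metric values
```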
- doatools.performance.evaluator.evaluate_performance(array, sources, snr, n_snapshots, n_monte_carlo, estimators, crb_types=None, metrics=None, custom_metrics=None, save_sample_estimates=False, verbose=0, n_jobs=1)[source]¶
Convenience function for quickly assessing DOA algorithm performance.
This function is a simplified interface to the DOAPerformanceEvaluator class, intended for direct use; a usage sketch follows the parameter list below.
- Parameters:
array (ArrayDesign) – Array design.
sources (FarField1DSourcePlacement) – Source locations.
snr (float) – Signal-to-noise ratio in dB.
n_snapshots (int) – Number of snapshots.
n_monte_carlo (int) – Number of Monte Carlo simulations.
estimators (dict or list or object) – A DOA estimator instance, a list of instances, or a dictionary mapping estimator names to instances.
crb_types (list or str, optional) – List of CRLB types or a single type. Possible values: 'sto' (stochastic CRB), 'det' (deterministic CRB), 'stouc' (stochastic uncorrelated CRB). Default value is ['sto'].
metrics (list or str, optional) – List of evaluation metrics or a single metric. Possible values: 'bias' (bias), 'mae' (mean absolute error), 'mse' (mean squared error), 'rmse' (root mean squared error). Default value is ['mse'].
custom_metrics (dict, optional) – Dictionary of custom evaluation metrics.
save_sample_estimates (bool, optional) – Whether to save sample estimates. Default value is False.
verbose (int, optional) – Verbosity level. 0 for silent, 1 for progress bar.
n_jobs (int, optional) – Number of parallel jobs to run. Default is 1 (no parallelism). If -1, use all available CPUs.
- Returns:
Evaluation result object.
- Return type: PerformanceResult
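A minimal sketch of the convenience interface, reusing the array, sources, and estimators from the DOAPerformanceEvaluator example above:

```python
from doatools.performance.evaluator import evaluate_performance

# `array`, `sources`, and `estimators` as built in the previous sketch.
result = evaluate_performance(
    array, sources,
    snr=10.0, n_snapshots=200, n_monte_carlo=500,
    estimators=estimators,
    crb_types=['sto', 'det'],
    metrics=['rmse'],
    save_sample_estimates=True,
    verbose=1,
    n_jobs=-1,
)
for name, metric_values in result.estimator_results.items():
    print(name, metric_values)
```

Since this function is a simplified interface to DOAPerformanceEvaluator, sweeping over SNR or snapshot counts can be done by calling it in a loop and collecting the returned PerformanceResult objects.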