ABRA
Auditory Brainstem Response Analyzer
ABRA is a machine-learning-driven analysis pipeline designed to automate and standardize the interpretation of Auditory Brainstem Responses (ABRs), a key electrophysiological signal used in hearing and neuroscience research.
What ABRA Does
Traditionally, ABR analysis requires manual marking of waveform features and subjective judgment to determine hearing thresholds. ABRA replaces this with a reproducible, data-driven workflow that:
- preprocesses raw ABR recordings from multiple file formats,
- uses two convolutional neural networks (CNNs) to locate the Wave I peak in each waveform and to estimate the auditory detection threshold from stacked waveform responses, and
- delivers structured outputs, including peak latencies, amplitudes, and threshold estimates, for review, export, and further analysis.
How It Works
- Input & Preprocessing - Raw data (e.g., .arf, .tsv, .asc, or standardized .csv) are normalized, aligned, and cleaned into the consistent format that the machine-learning models expect. This includes harmonizing stimulus-level and frequency scales and applying optional filtering.
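As a rough illustration of the preprocessing step, the sketch below resamples each waveform to a fixed length and rescales amplitudes to zero mean and unit peak. These are assumed, generic steps for getting recordings into a consistent model-input shape, not ABRA's exact preprocessing code.

```python
def resample(wave, n_points):
    """Linearly interpolate `wave` onto `n_points` evenly spaced samples."""
    if len(wave) == n_points:
        return list(wave)
    step = (len(wave) - 1) / (n_points - 1)
    out = []
    for i in range(n_points):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(wave) - 1)
        frac = pos - lo
        out.append(wave[lo] * (1 - frac) + wave[hi] * frac)
    return out

def normalize(wave):
    """Center to zero mean, then scale so the largest deflection is 1."""
    mean = sum(wave) / len(wave)
    centered = [v - mean for v in wave]
    peak = max(abs(v) for v in centered) or 1.0
    return [v / peak for v in centered]

# Toy waveform standing in for one raw ABR recording.
raw = [0.0, 1.0, 4.0, 1.0, 0.0, -2.0, 0.0]
clean = normalize(resample(raw, 13))
print(len(clean), max(clean))
```

In practice the same transform would be applied to every recording in a batch so that all waveforms share one length and amplitude scale before being stacked for the models.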
- Machine Learning Models - Peak-finding CNN: Trained to identify the key ABR waveform feature (Wave I) across noisy and variable data. Thresholding CNN: Learns to predict the lowest stimulus level at which an auditory response is present.
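To make the two model tasks concrete without a deep-learning framework, here are hand-rolled, rule-based stand-ins: a 1-D cross-correlation with a peak-shaped kernel followed by an argmax (the core operation a peak-finding CNN learns), and a simple lowest-level-above-criterion rule for thresholding. These are illustrative substitutes for the learned CNNs, not ABRA's actual architectures.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation of `signal` with `kernel`."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def find_peak(signal, kernel):
    """Index of the strongest kernel response, shifted to the kernel center."""
    response = conv1d(signal, kernel)
    best = max(range(len(response)), key=response.__getitem__)
    return best + len(kernel) // 2

def estimate_threshold(levels, responses, criterion):
    """Lowest stimulus level whose response strength meets `criterion`."""
    present = [lvl for lvl, r in zip(levels, responses) if r >= criterion]
    return min(present) if present else None

# A triangular kernel responds strongly to peak-shaped deflections.
kernel = [0.25, 0.5, 1.0, 0.5, 0.25]
wave = [0.0, 0.1, 0.0, 0.3, 1.2, 0.3, 0.1, 0.0, -0.2, 0.0]
print(find_peak(wave, kernel))

# Response strength per stimulus level (dB), strongest at high levels.
print(estimate_threshold([20, 40, 60, 80], [0.1, 0.3, 0.9, 1.2], 0.5))
```

The advantage of the learned CNNs over rules like these is robustness: they can be trained on noisy, variable recordings where a fixed kernel or fixed criterion would fail.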
- Interactive Analysis & Export - A Streamlit web app lets users batch-upload data, view model outputs, manually override predictions, and export results for downstream use.
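The exported results might look something like the CSV sketch below, built with Python's standard csv module. The column names are illustrative assumptions about the structured outputs (latencies, amplitudes, thresholds) described above, not ABRA's actual export schema.

```python
import csv
import io

# Hypothetical per-waveform results; field names are assumptions.
rows = [
    {"freq_hz": 8000, "level_db": 80, "wave1_latency_ms": 1.6,
     "wave1_amplitude_uv": 2.3, "threshold_db": 35},
    {"freq_hz": 16000, "level_db": 80, "wave1_latency_ms": 1.8,
     "wave1_amplitude_uv": 1.9, "threshold_db": 45},
]

# Write to an in-memory buffer; a real export would target a file
# or a web-app download.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

A flat table like this is easy to load back into pandas, R, or a spreadsheet for downstream statistics and plotting.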
Benefits
- Automates a traditionally manual, subjective task, reducing time and variability.
- Improves reproducibility across datasets and laboratories.
- Integrates human-in-the-loop control, allowing corrections where needed.
- Supports multiple common ABR data formats out of the box.
Tradeoffs & Future Directions
Using learned models offers consistency and speed, but their individual predictions are less interpretable than those of rule-based systems. To balance this tradeoff:
- ABRA keeps preprocessing explicit and format-aware.
- The web interface supports manual edits and quality control.
- Future work may include confidence estimates, expanded format support, and deeper model architectures.