
Toso Pankovski
Screening For Dichotic Acoustic Context And Headphones In Online Crowdsourced Hearing Studies
https://jcaa.caa-aca.ca/index.php/jcaa/article/view/3403
Canadian Acoustics - Canadian Acoustical Association / Association Canadienne d'Acoustique
July 2021
Demo video presentation of the method
Summary: Experimental evidence suggests that crowdsourced online experiments, where suitable, may produce better data than in-lab studies. The absence of a reliable screening method for headphones and dichotic auditory context (perfect separation of the stereo channels) is one of the main reasons why online crowdsourcing is rarely possible for auditory studies. As the evidence demonstrates, responses to the questions “Are you wearing headphones?” and “Are the headphones’ stereo channels separated?” provide unreliable and even conflicting results. Here we show that the interference beating phenomenon can be used as a screening method for dichotic context with satisfactory accuracy. We collected data through an in-lab experiment to test the method’s performance against the reference (the ground truth), avoiding the uncontrolled biases of online experiments. The method achieved Cohen’s kappa of 0.79 (95% CI [0.52, 1.06], p<0.001), “Substantial agreement”, when calculated over the whole sample, and Cohen’s kappa of 1 (95% CI [1, 1], p=0.001), “Almost perfect agreement”, when calculated only over the true dichotic cases. We also collected data using the only other method found in the literature that attempts to screen for headphone usage, in order to compare both methods over the same participants and auditory contexts. Finally, the new method was tested in a crowdsourced setting involving over 2000 online participants. The in-lab and online results suggest that the method introduced in this study is suitable and can therefore enable online crowdsourced auditory studies.
Achieved Cohen's Kappa
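The screening probe exploits the contrast between acoustic beating (the two stereo channels mixing in the air over loudspeakers) and the dichotic case (each ear receiving one steady tone over headphones). The Python sketch below generates such a two-tone stereo probe; the frequencies, level and duration are illustrative placeholders, not the stimulus parameters used in the paper.

```python
# Minimal sketch of a dichotic screening stimulus: two pure tones a few hertz
# apart, one per stereo channel.  Over loudspeakers the channels mix in the air
# and produce audible acoustic beating; over headphones each ear receives a
# single steady tone, so no acoustic beating occurs.  All parameter values are
# illustrative, not the ones used in the paper.
import numpy as np
import wave

FS = 44100                        # sampling rate, Hz
DUR = 3.0                         # seconds
F_LEFT, F_RIGHT = 440.0, 444.0    # 4 Hz difference -> ~4 beats/s when mixed

t = np.arange(int(FS * DUR)) / FS
left = 0.5 * np.sin(2 * np.pi * F_LEFT * t)
right = 0.5 * np.sin(2 * np.pi * F_RIGHT * t)

stereo = np.stack([left, right], axis=1)     # shape (n_samples, 2), L/R
pcm = (stereo * 32767).astype(np.int16)      # 16-bit PCM

with wave.open("dichotic_probe.wav", "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)        # bytes per sample
    w.setframerate(FS)
    w.writeframes(pcm.tobytes())
```

A listener who hears pronounced amplitude fluctuations (beats) in such a probe is most likely not in a dichotic context, whereas a headphone listener hears two steady tones (at most a faint binaural beat).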



Toso Pankovski & Eva Pankovska
Emergence of the consonance pattern within synaptic weights of a neural network featuring Hebbian neuroplasticity
doi:10.1016/j.bica.2017.09.001
Elsevier - Biologically Inspired Cognitive Architectures
October 2017
Cited by:
     - doi:10.3389/fpsyg.2018.00381
     - doi:10.1007/978-3-030-01198-7_11
     - doi:10.1038/s41598-018-35873-8
     - http://hdl.handle.net/1946/33991
Summary: Consonance is a perceptual phenomenon that evokes pleasant feelings when listening to complex sounds. Since Pythagoras, people have attempted to explain consonance and dissonance using diverse methodological means, with limited success and without identifying convincing underlying causes. We demonstrate that a specific auditory spectral distribution caused by non-linearities, as a first phenomenon, and Hebbian neuroplasticity, as a second, form a sufficient set of phenomena a system must possess in order to generate the consonance pattern: the two-tone interval list ordered by consonance. The emergence of this pattern is explained in a step-by-step manner, using an artificial neural network model. In a reverse-engineering manner, our simulations test all possible spectral distributions of auditory stimuli (within particular precision scales and applying certain abstractions) and reveal those that produce a pattern perfectly matching the consonance-ordered two-tone interval list widely accepted in Western musical culture. The results of this study suggest that the consonance pattern should be an expected outcome in any system containing the asserted set of phenomena. The intent of this study is not to realistically model the human auditory system, but to demonstrate a set of features an abstract and generic system should possess in order to produce the consonance pattern.
Consonance Pattern
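As a rough illustration of the two ingredients named in the summary (a harmonically rich spectral distribution and Hebbian coincidence learning), the toy Python sketch below accumulates Hebbian-style weight between two complex tones whenever their partials land on the same frequency channel. It is only a sketch of the principle, not the paper's neural network model; the number of partials, the coincidence tolerance and the interval list are arbitrary choices.

```python
# Toy illustration: intervals whose harmonic partials coincide more often
# accumulate more Hebbian co-activation weight between frequency-tuned units.
# This is NOT the paper's network, only a sketch of the principle.
import numpy as np

N_HARMONICS = 10
FREQ_RES = 0.01          # relative tolerance for "same channel" coincidence

def partials(f0):
    """Harmonic partials of a complex tone with fundamental f0 (arbitrary units)."""
    return f0 * np.arange(1, N_HARMONICS + 1)

def hebbian_weight(ratio):
    """Accumulated Hebbian co-activation between two tones at the given ratio."""
    a, b = partials(1.0), partials(ratio)
    w = 0.0
    for pa in a:
        for pb in b:
            if abs(pa - pb) / pa < FREQ_RES:   # partials fall on the same channel
                w += 1.0                        # Hebbian increment for co-activation
    return w

intervals = {"unison 1:1": 1/1, "octave 2:1": 2/1, "fifth 3:2": 3/2,
             "fourth 4:3": 4/3, "major third 5:4": 5/4,
             "major second 9:8": 9/8, "tritone 45:32": 45/32}

weights = {name: hebbian_weight(r) for name, r in intervals.items()}
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} coincidence weight = {w:.0f}")
```

Intervals with simpler frequency ratios accumulate more coincident-partial weight, roughly reproducing the familiar consonance ordering (unison, octave, fifth, ...).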



Toso Pankovski
Conditioned Standard BCI CODEC – A theoretical proposal
doi:10.1016/j.bica.2018.04.004
Elsevier - Biologically Inspired Cognitive Architectures
April 2018
Explanatory video for non-experts
Explanatory video for experts
Summary: Despite recent technological advances in Brain-Computer Interfaces, we continue to struggle with acquiring high-resolution (spatial and temporal) deep-brain neural activity patterns. Relating the measured data to higher-level cognitive processes is an even bigger challenge. The author of this study proposes a standardized invasive in vivo method for (a) extracting and (b) decoding deep-brain neural dynamics using multielectrode array implants. Based on neuroplasticity, the method would enforce neural conditioning, which would decode neural correlates' activities and convey them onto the multielectrode array pins. Applied inversely, it would allow encoding and eliciting desired sensations in the subject's brain. A simple unsupervised recurrent neural network model is built to demonstrate the method. This study is theoretical, and an experimental animal trial is proposed to test the demonstrated method.
Conditioned Standard BCI CODEC
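The conditioning idea can be illustrated with a generic Hebbian associative memory: an internal activity pattern is associated with an electrode-pin pattern so that each can later re-evoke the other. The Python sketch below shows only this generic association step, not the paper's unsupervised recurrent network or conditioning protocol; the pattern sizes and patterns themselves are arbitrary.

```python
# Toy sketch of the conditioning idea: a Hebbian association is trained between
# an "internal" activity pattern and an "electrode-pin" pattern, so the internal
# pattern later re-evokes the pin pattern (read-out / decoding) and driving the
# pins re-evokes the internal pattern (stimulation / encoding).  Generic
# associative-memory illustration only, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
N_INTERNAL, N_PINS = 64, 16

# Two example binary (+1/-1) patterns: a deep-brain activity pattern and the
# pin pattern it is to be conditioned onto.
internal = rng.choice([-1, 1], size=N_INTERNAL)
pins = rng.choice([-1, 1], size=N_PINS)

# Hebbian outer-product learning of the internal <-> pin association.
W = np.outer(pins, internal) / N_INTERNAL        # pins x internal weight matrix

# Decoding: internal activity drives the pins through the learned weights.
decoded_pins = np.sign(W @ internal)

# Encoding: stimulating the pins drives the internal units through W^T.
evoked_internal = np.sign(W.T @ pins)

print("pins recovered:    ", np.array_equal(decoded_pins, pins))
print("internal recovered:", np.array_equal(evoked_internal, internal))
```

Reading out the pins from internal activity corresponds to decoding, while driving the internal units from the pins corresponds to encoding (eliciting a sensation), mirroring the two directions described in the summary.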



Toso Pankovski
Fast calculation algorithm for discrete resonance-based band-pass filter
doi:10.1016/j.aej.2016.06.017
Elsevier - Alexandria Engineering Journal
July 2016
Summary: Inner-ear (cochlear) simulation research motivated this fast-calculation algorithm for a novel discrete resonance-based time-to-frequency transformation method. The presented stand-alone algorithm produces its output with a delay of just one sampling period. Its calculation cost is only 3 multiplications and 3 additions per sample, and it does not require long memory buffers. The presented transformation does not surpass the precision of the Discrete Fourier or Discrete Wavelet Transforms; however, it may prove essential when the noise artefacts of a near-real-world simulation are needed to produce specific auditory-perception phenomena.
Audio Spectral Analysis with DRBF
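For orientation, the sketch below runs a textbook two-pole digital resonator, the general class of recursion such a resonance-based band-pass filter relies on. It is not the paper's DRBF recursion (whose per-sample cost is 3 multiplications and 3 additions and whose output delay is one sampling period); the recursion form, coefficients and pole radius here are generic.

```python
# Generic two-pole resonator, shown only to illustrate the class of recursion
# a discrete resonance-based band-pass filter uses; the paper's DRBF has its
# own derivation and per-sample cost.
import math

def resonator(samples, fs, f_center, r=0.995):
    """Two-pole resonator tuned to f_center; pole radius r < 1 sets the bandwidth.

    Per sample: y[n] = x[n] + a1*y[n-1] - a2*y[n-2]
    (2 multiplications and 2 additions in this generic form.)
    """
    a1 = 2.0 * r * math.cos(2.0 * math.pi * f_center / fs)
    a2 = r * r
    y1 = y2 = 0.0            # y[n-1], y[n-2]
    out = []
    for x in samples:
        y = x + a1 * y1 - a2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out

# Example: a short 1 kHz test tone through a resonator centred at 1 kHz.
fs = 44100
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(1024)]
band = resonator(tone, fs, f_center=1000.0)
```

A bank of such resonators, one per centre frequency, gives a running time-to-frequency decomposition without long memory buffers, which is the use case the summary describes.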