
Ariel Tankus, Ph.D.

Department of Neurology and Neurosurgery
School of Medicine
and
Sagol School of Neuroscience
Tel Aviv University

AND

Functional Neurosurgery Unit
Tel Aviv Sourasky Medical Center ("Ichilov")


About Me

I study the representation of speech in the human brain at the single-neuron level, in health and disease, and develop machine-learning and deep-learning algorithms that decode this neuronal activity to infer speech content.  These algorithms are aimed at Brain-Machine Interfaces (BMIs) for restoring speech faculties in completely paralyzed ("locked-in") persons, allowing them to communicate again using artificial speech.  In my research, I conduct intraoperative experiments with neurosurgical patients with movement disorders, such as Parkinson's disease or essential tremor, who are implanted with deep brain stimulators (DBS) or undergo radiofrequency (RF) lesioning.  With neurosurgical epilepsy patients, I develop Speech Brain-Machine Interfaces.
All neurosurgeries are conducted by neurosurgeon Dr. Ido Strauss at Tel Aviv Sourasky Medical Center ("Ichilov").

See Bio

Latest Publications

A Speech Neuroprosthesis in the Frontal Lobe and Hippocampus: Decoding High-Frequency Activity into Phonemes.
Neurosurgery, 2024.
(co-authors: Einat Stern, Guy Klein, Nufar Kaptzon, Lilac Nash, Tal Marziano, Omer Shamia, Guy Gurevitch, Lottem Bergman, Lilach Goldstein, Firas Fahoum, and Ido Strauss)

Loss of speech due to injury or disease is devastating. Here, we report a novel speech neuroprosthesis that artificially articulates building blocks of speech based on high-frequency activity in brain areas never harnessed for a neuroprosthesis before: anterior cingulate and orbitofrontal cortices, and hippocampus.

A 37-year-old male neurosurgical epilepsy patient with intact speech, implanted with depth electrodes for clinical reasons only, silently controlled the neuroprosthesis almost immediately and in a natural way to voluntarily produce two vowel sounds.

During the first set of trials, the participant made the neuroprosthesis produce the different vowel sounds artificially with 85% accuracy. In the following trials, performance improved consistently, which may be attributed to neuroplasticity. We show that a neuroprosthesis trained on overt speech data may be controlled silently.

This may open the way for a novel strategy of neuroprosthesis implantation at earlier disease stages (e.g., amyotrophic lateral sclerosis), while speech is intact, for improved training that still allows silent control at later stages. The results demonstrate the clinical feasibility of directly decoding high-frequency activity, which includes spiking activity, in the aforementioned areas for silent production of phonemes that may serve as part of a neuroprosthesis for replacing lost speech control pathways.
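
A minimal sketch of the general idea in Python: classify two vowel phonemes from the high-frequency band power of multichannel recordings. The frequency band, classifier choice, and all data below are illustrative assumptions for exposition, not the paper's actual pipeline.

    # Sketch: vowel classification from high-frequency band power.
    # Band limits, classifier, and data are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    fs = 1000                        # sampling rate (Hz), assumed
    rng = np.random.default_rng(0)

    def high_freq_power(trial):
        """Band-pass 80-300 Hz and return mean power per channel."""
        b, a = butter(4, [80, 300], btype="band", fs=fs)
        hf = filtfilt(b, a, trial, axis=-1)
        return (hf ** 2).mean(axis=-1)

    # Synthetic stand-in data: 60 trials x 8 channels x 1 s of signal,
    # with one vowel's trials given slightly stronger activity.
    labels = np.repeat([0, 1], 30)               # 0 = /a/, 1 = /e/
    trials = rng.standard_normal((60, 8, fs))
    trials[labels == 0] *= 1.3                   # fabricated effect

    X = np.array([high_freq_power(t) for t in trials])
    clf = LogisticRegression(max_iter=1000)
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())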

Machine Learning Decoding of Single Neurons in the Thalamus for Speech Brain-Machine Interfaces.
Journal of Neural Engineering, 21:036009, 2024.
(co-authors: Noam Rosenberg, Oz Ben-Hamo, Einat Stern and Ido Strauss)

In this paper, we decode firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we also characterize the number of thalamic neurons necessary for high-accuracy decoding. To this end, we intraoperatively recorded single-neuron activity in the left Vim of eight neurosurgical patients undergoing implantation of a deep brain stimulator or RF lesioning during production, perception, and imagery of the five monophthongal vowel sounds. Our Spade decoder outperformed all of the algorithms it was compared with for speech production, perception, and imagery, obtaining accuracies of 100%, 96%, and 92%, respectively (chance level: 20%). The accuracy was logarithmic in the number of neurons for all three aspects of speech.
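
The logarithmic relation can be written as accuracy(n) ~ a*log(n) + b, where n is the number of neurons. Below is a small illustrative fit in Python; the accuracy values are made-up placeholders, not results from the paper.

    # Sketch: fitting accuracy ~ a*log(n) + b to accuracy-vs-neuron-count
    # data. The numbers are synthetic placeholders, not published results.
    import numpy as np

    n_neurons = np.array([1, 2, 4, 8, 16, 32])
    accuracy = np.array([0.35, 0.48, 0.61, 0.74, 0.86, 0.97])  # made up

    # Least-squares fit of accuracy against log(neuron count)
    a, b = np.polyfit(np.log(n_neurons), accuracy, deg=1)
    print(f"accuracy(n) ~= {a:.3f} * log(n) + {b:.3f}")
    print("extrapolated accuracy for 20 neurons:", a * np.log(20) + b)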

Neuronal Encoding of Speech Features in the Human Thalamus in Parkinson’s Disease and Essential Tremor Patients.
Neurosurgery, 94(2):307-316, 2024.
(co-authors: Yael Lustig, Guy Gurevitch, Achinoam Faust-Socher, and Ido Strauss)

In this study, we intraoperatively recorded single-neuron activity in the left Vim of eight neurosurgical patients with Parkinson's disease (PD) or essential tremor (ET) undergoing implantation of DBS or RF lesioning while they articulated the five monophthongal vowel sounds.

We report that single neurons in the left Vim encode individual vowel phonemes, mainly during speech production, but also during perception and imagery. They mainly employ one of two encoding schemes: broad or sharp tuning, with a similar percentage of units in each.  Sinusoidal tuning was demonstrated in almost half of the broadly tuned units.  PD patients had a lower percentage of speech-related units than ET patients in each aspect of speech (production, perception, and imagery), a significantly lower percentage of broadly tuned units, and significantly lower median firing rates during speech production and perception, but significantly higher rates during imagery.
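
A sinusoidal tuning curve of this kind is commonly modeled as rate(theta) = baseline + amplitude * cos(theta - preferred_angle). Here is a hedged Python sketch of fitting such a model, where the vowel-to-angle mapping and the firing rates are illustrative assumptions only:

    # Sketch: fitting a sinusoidal tuning curve to per-vowel firing rates.
    # The vowel-to-angle assignment and the rates are illustrative only.
    import numpy as np
    from scipy.optimize import curve_fit

    vowels = ["a", "e", "i", "o", "u"]
    theta = np.linspace(0, 2 * np.pi, num=5, endpoint=False)   # assumed mapping
    rates = np.array([12.0, 9.5, 6.0, 7.5, 10.5])              # spikes/s, made up

    def tuning(th, base, amp, pref):
        return base + amp * np.cos(th - pref)

    (base, amp, pref), _ = curve_fit(tuning, theta, rates, p0=[8.0, 3.0, 0.0])
    print(f"baseline = {base:.1f} Hz, modulation depth = {amp:.1f} Hz, "
          f"preferred angle = {np.degrees(pref):.0f} deg")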

Machine Learning Algorithm for Decoding Multiple Subthalamic Spike Trains for Speech Brain-Machine Interfaces.
Journal of Neural Engineering, 18:066021, 2021.
(co-authors: Lior Solomon, Yotam Aharony, Achinoam Faust-Socher and Ido Strauss)

In this study, we decode the electrical activity of single neurons in the human subthalamic nucleus (STN) to infer the speech features that a person articulated, heard, or imagined. Our decoder reaches 100% accuracy for speech production, 96% for perception, and 88% for imagery.
We also evaluate the number of subthalamic neurons required for high-accuracy decoding suitable for real-life speech brain-machine interfaces (BMIs), which is of utmost importance for a neurosurgeon planning the implantation of a speech BMI.
