New research suggests that AI companionship can offer social interaction and help people practice social skills, breaking the cycle of loneliness.
Repetition in the brain gives rise to two peculiar phenomena: déjà vu and its lesser-known counterpart, jamais vu.
Researchers developed a novel, wireless, skin-interfaced olfactory feedback system capable of releasing various odours.
On a cellular level, the marmoset's hippocampal regions show selectivity for 3D view and head direction, suggesting that gaze, not place, is key to their spatial navigation.
EPFL researchers have developed a novel machine learning algorithm called CEBRA, which can predict what mice see by decoding their neural activity. The algorithm maps brain activity to specific movie frames and, after an initial training period, can predict unseen frames directly from brain signals alone. CEBRA can also predict arm movements in primates and reconstruct the positions of rats as they move around an arena, suggesting potential clinical applications.
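As a rough illustration of the decoding step, the sketch below pairs learned neural embeddings with a k-nearest-neighbour frame decoder, a common strategy for reading out CEBRA-style embeddings; all data, shapes, and variable names here are hypothetical placeholders, not the authors' code.

```python
# Illustrative sketch: decoding movie frames from neural embeddings with a
# k-nearest-neighbour decoder. The embeddings stand in for the output of a
# CEBRA-style contrastive model; everything here is synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one D-dimensional embedding per time bin,
# plus the ID of the movie frame shown at that bin.
T, D, n_frames = 1000, 8, 50
train_embeddings = rng.normal(size=(T, D))
train_frame_ids = rng.integers(0, n_frames, size=T)

# Decode by nearest neighbours: a new embedding is assigned the frame of
# the most similar training embeddings, mapping brain signals back to video.
decoder = KNeighborsClassifier(n_neighbors=5)
decoder.fit(train_embeddings, train_frame_ids)

test_embeddings = rng.normal(size=(10, D))
predicted_frames = decoder.predict(test_embeddings)
print(predicted_frames)
```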
Artificial intelligence (AI) systems can process signals in much the way the brain interprets speech, a finding that may help explain how such systems operate. Scientists placed electrodes on participants' heads to measure brain waves while they listened to a single syllable, then compared that activity to the signals of an AI system trained to learn English. The two sets of waveforms were remarkably similar, which could aid the development of increasingly powerful systems.
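A minimal sketch of that kind of waveform comparison, assuming Pearson correlation as the similarity measure (the summary does not name the metric used); both signals below are synthetic stand-ins.

```python
# Illustrative sketch: quantifying how similar a measured brain response and
# an AI model's internal signal are, via Pearson correlation. Both waveforms
# are toy sine waves, not real EEG or model activations.
import numpy as np

t = np.linspace(0, 0.2, 200)                         # 200 ms after syllable onset
eeg_response = np.sin(2 * np.pi * 40 * t)            # stand-in for brain waves
model_activation = np.sin(2 * np.pi * 40 * t + 0.1)  # stand-in for AI signal

# Correlation of +1 would mean the two response shapes match exactly.
r = np.corrcoef(eeg_response, model_activation)[0, 1]
print(f"waveform similarity r = {r:.3f}")
```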
Listening to music could reduce symptoms of cybersickness, which can cause dizziness, nausea, and headaches when using virtual reality devices. Joyful music significantly decreased the overall intensity of cybersickness, while both joyful and calming music substantially decreased nausea-related symptoms. The study also found that cybersickness caused a temporary reduction in verbal working memory test scores, a decrease in pupil size, and slowed reaction times and reading speed.
Researchers have developed a wearable interface called EchoSpeech, which recognizes silent speech by tracking lip and mouth movements through acoustic sensing and AI. The device requires minimal user training and recognizes up to 31 unvocalized commands. The system could give voice to people who are unable to vocalize sound, or let users communicate silently with others.
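The sketch below shows the general pattern of mapping acoustic features to a fixed command vocabulary; the features, labels, and classifier are hypothetical stand-ins, not the EchoSpeech pipeline.

```python
# Illustrative sketch: classifying acoustic echo profiles into a fixed set of
# silent-speech commands. All data here are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Hypothetical data: each sample is a flattened echo profile (reflections of
# inaudible sound off the moving lips), labelled with one of 31 commands.
n_samples, n_features, n_commands = 620, 64, 31
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_commands, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network stands in for the recognizer; real accuracy would
# come from informative features, which these random data lack.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```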
Combining novel virtual reality imaging with machine learning, researchers were able to accurately distinguish mouse models of ASD from wild-type mice based on cortical functional network dynamics while the animals were in motion.
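As an illustration of this kind of group classification, the sketch below cross-validates a linear classifier on hypothetical functional-connectivity features; the data, feature choice, and model are placeholder assumptions, not the study's method.

```python
# Illustrative sketch: classifying ASD-model vs. wild-type mice from cortical
# functional-connectivity features. All values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical features: each mouse summarized by the pairwise correlations
# between cortical regions recorded while the animal is in motion.
n_mice, n_regions = 40, 12
n_edges = n_regions * (n_regions - 1) // 2
X = rng.normal(size=(n_mice, n_edges))
y = np.repeat([0, 1], n_mice // 2)   # 0 = wild-type, 1 = ASD model

# Cross-validated accuracy of a simple linear classifier; with random
# features this hovers near chance, unlike the study's real data.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```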