
Lenovo in collaboration with Innovations in Dementia launches first photorealistic AI Avatar

15th October, 2024

For people living with Alzheimer’s and dementia

Image credit: Shutterstock


Lenovo, in collaboration with Innovations in Dementia, launched Alzheimer’s Intelligence, a photorealistic 3D avatar with custom AI based on the lived experiences of people with dementia and Alzheimer’s. This proof-of-concept project gives people and families navigating a diagnosis of dementia 24-hour access to a conversational avatar that offers curated advice that prioritizes accuracy, privacy, and compassion.


In this first-of-its-kind application, Lenovo pioneered the use of AI made possible by its comprehensive portfolio of technology solutions, from individual to enterprise. The combined technology aggregates firsthand experience and advice from hundreds of real people living with dementia and Alzheimer’s into a responsive, photorealistic 3D avatar capable of having an unscripted, natural conversation. With this proof of concept, a person diagnosed with dementia is just a click away from a real-time conversational resource and aid. The avatar was created from a composite of images of 10 people living with dementia and Alzheimer’s. From the initial pose of each image, generative AI extrapolated faces for several thousand other expressions and angles inspired by the subject. These were then aggregated into one dataset, which blends with the face of a filmed performer to create the image of "Liv," the AI avatar.
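The dataset-building step described above, ten source images each seeding several thousand generated expression and angle variants that are pooled into one dataset, can be sketched as follows. This is a minimal illustration: the generator is a stub, and all names and record fields are assumptions, not Lenovo's production pipeline.

```python
NUM_SUBJECTS = 10            # composite built from 10 people's images
VARIANTS_PER_SUBJECT = 3000  # "several thousand" expressions/angles per subject


def generate_variants(subject_id: int, count: int) -> list[dict]:
    """Stub for the generative-AI step: emit descriptors for synthesized faces."""
    return [{"subject": subject_id, "variant": i} for i in range(count)]


def build_dataset() -> list[dict]:
    """Aggregate every subject's generated variants into a single dataset."""
    dataset = []
    for subject_id in range(NUM_SUBJECTS):
        dataset.extend(generate_variants(subject_id, VARIANTS_PER_SUBJECT))
    return dataset
```

In the real system this pooled dataset is then blended with footage of a filmed performer to produce Liv's face.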


A large language model (LLM) dataset of the advice Liv can impart was created from data that came directly from the experiences of people living with dementia, including entries from Innovations in Dementia’s Dementia Diaries project and in-depth panel interviews. Each time a user asks a question (via speech-to-text), the LLM queries the dataset to generate a text answer, phrased in the voice of the "persona" created for Liv and grounded in her knowledge base. The reply is then voiced by a vocal synthesizer created for the project. Sentiment analysis determines the underlying feeling of the reply, which is reflected in the avatar's facial expression as it speaks. Finally, real-time 4K visual AI makes the synthesized speech appear to be spoken live by the avatar.
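The pipeline above (question in, knowledge-base lookup, persona-styled reply, sentiment analysis driving the facial expression) can be sketched as a toy program. Everything here is an illustrative assumption: the entries, word lists, and function names are invented, and the real system uses an LLM and a vocal synthesizer rather than keyword matching.

```python
# Toy stand-in for the LLM dataset built from Dementia Diaries entries
# and panel interviews; entries and wording are invented for illustration.
KNOWLEDGE_BASE = [
    {"topic": "memory",
     "advice": "keeping a daily diary helps me hold on to the day"},
    {"topic": "driving",
     "advice": "talk to your doctor before deciding whether to keep driving"},
    {"topic": "family",
     "advice": "sharing the diagnosis early gave me support I did not expect"},
]

POSITIVE_WORDS = {"helps", "support", "gave"}
NEGATIVE_WORDS = {"worried", "afraid", "lost"}

# Map the reply's underlying feeling to an avatar facial expression,
# mirroring the sentiment-analysis step described in the article.
SENTIMENT_TO_EXPRESSION = {
    "positive": "warm smile",
    "negative": "concerned",
    "neutral": "attentive",
}


def retrieve(question: str) -> dict:
    """Pick the entry sharing the most words with the question (toy retrieval)."""
    words = set(question.lower().split())

    def overlap(entry: dict) -> int:
        entry_words = set(entry["topic"].split()) | set(entry["advice"].lower().split())
        return len(words & entry_words)

    return max(KNOWLEDGE_BASE, key=overlap)


def analyze_sentiment(text: str) -> str:
    """Classify the reply's underlying feeling with simple word lists."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    positive = len(words & POSITIVE_WORDS)
    negative = len(words & NEGATIVE_WORDS)
    if positive > negative:
        return "positive"
    if negative > positive:
        return "negative"
    return "neutral"


def answer(question: str) -> dict:
    """Speech-to-text output in; persona-styled reply and expression out."""
    entry = retrieve(question)
    reply = f"In my experience, {entry['advice']}."  # Liv's persona voice
    expression = SENTIMENT_TO_EXPRESSION[analyze_sentiment(reply)]
    return {"reply": reply, "expression": expression}
```

In the production system the reply would be spoken by the project's vocal synthesizer while the real-time 4K visual AI animates Liv's face with the chosen expression.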



 

© 2023 MM Activ Sci-Tech Communications. All rights reserved.