The History & Science Behind Social Buddy

The growing need for senior care in the Netherlands, and the concerns that remain unaddressed around it, prompted Jack Jagt to create Social Buddy. Recognizing how few resources elders have to stay healthy, active, and social, we built an artificial intelligence that bridges this gap: a kind and helpful companion that assists elders with a wide range of tasks and activities. Our aim is a capable and empathic AI that aids and supports seniors every day, using the tools described below.

The primary features of Social Buddy currently under review:

Personalized interaction
The Buddy can recognize the person it interacts with and adapt its information and speech accordingly, enabling a more focused and efficient engagement.

Open-source hardware and software
All of our hardware and software is open source, so the Buddy is easy to assemble and program without expert knowledge, and easy to install in any home.

Voice and Speech Recognition
Thanks to speech recognition and pose detection, people with or without physical limitations can communicate with Social Buddy using voice commands and hand gestures.

Emotion Detection
Social Buddy has emotion recognition technologies to understand and react to human emotions. This makes it easier and more personal for a user to engage with our AI, resulting in a more intuitive and natural conversation flow.

Emotional Expression
Social Buddy is also programmed to display emotions and behaviors of its own, allowing for a more realistic and spontaneous connection with users.

Face Recognition
Social Buddy can recognize and track faces, allowing our AI to differentiate between the people in the room. This feature is especially helpful in elder care, where more than one person is often present.

Object Recognition
Due to its object recognition capabilities, Social Buddy is a helpful aid in managing prescription drugs. By recognizing prescription containers and monitoring usage, Social Buddy can assist users in taking the right medication at the right time.
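This page does not describe the exact matching logic, so the sketch below is only an illustration of how a recognized container label could be checked against a simple medication schedule; the function names, schedule format, and medication entries are all assumptions.

from datetime import datetime, time

# Hypothetical schedule: text expected on the container -> daily dose times.
MEDICATION_SCHEDULE = {
    "metoprolol 50 mg": [time(8, 0), time(20, 0)],
    "vitamine d3": [time(12, 0)],
}

def reminder_for(recognized_label: str, now: datetime, window_minutes: int = 30):
    """Return a reminder if the recognized container is due around 'now'."""
    label = recognized_label.lower().strip()
    for name, dose_times in MEDICATION_SCHEDULE.items():
        if name in label:
            for dose in dose_times:
                minutes_off = abs(
                    (now.hour * 60 + now.minute) - (dose.hour * 60 + dose.minute)
                )
                if minutes_off <= window_minutes:
                    return f"It is time to take your {name}."
            return f"This is your {name}, but the next dose is later today."
    return None

print(reminder_for("Metoprolol 50 mg tabletten", datetime(2024, 1, 1, 8, 10)))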

Connectivity
Social Buddy is designed to connect to other Buddies in the cloud, allowing them to share data and collaborate. Using this capability, Social Buddy can pick up knowledge from other Buddies, enhancing its abilities and expanding its range of activities over time, making it a long-term partner.
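The page does not name the cloud mechanism the Buddies use to exchange data. Purely as an illustration, and consistent with the Google-based stack described below, a Buddy could publish what it has learned to a Cloud Pub/Sub topic that other Buddies subscribe to; the project ID, topic name, and message format here are assumptions.

import json
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

# Hypothetical project and topic names.
PROJECT_ID = "social-buddy-demo"
TOPIC_ID = "buddy-shared-knowledge"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

def share_knowledge(buddy_id: str, skill: str, payload: dict) -> str:
    """Publish a piece of learned knowledge so other Buddies can pick it up."""
    message = {"buddy_id": buddy_id, "skill": skill, "payload": payload}
    future = publisher.publish(topic_path, json.dumps(message).encode("utf-8"))
    return future.result()  # message ID once the publish is confirmed

share_knowledge("buddy-042", "new_exercise_routine", {"duration_min": 10})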

Social Buddy’s tech stack
We use a variety of cutting-edge technologies to build, train, and run our models.

Google TensorFlow
A powerful open-source machine learning platform that lets us improve our AI continuously.
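As a minimal sketch of the kind of model TensorFlow makes possible; the layer sizes and the three-class task are illustrative only, not our production models.

import tensorflow as tf

# A small illustrative classifier; the real Social Buddy models are not published.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),              # 32 example input features
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 example intents
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()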

Google Dialogflow (with Google Assistant integration)
Used for natural language processing and voice recognition.
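A minimal sketch of sending one user utterance to a Dialogflow agent from Python; the project ID and the Dutch example phrase are assumptions.

import uuid
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

def ask_buddy(project_id: str, text: str, language_code: str = "nl-NL") -> str:
    """Send one user utterance to a Dialogflow agent and return its reply."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, str(uuid.uuid4()))
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

print(ask_buddy("social-buddy-demo", "Hoe laat is mijn afspraak vandaag?"))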

Google’s Machine Learning Engine and Natural Language API
These help our AI perceive human emotions and respond accordingly.
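For illustration, sentiment analysis with the Natural Language API looks roughly like this; the emotion handling built on top of the score is our own logic and is not shown here.

from google.cloud import language_v1  # pip install google-cloud-language

def sentiment_of(text: str):
    """Return the sentiment score (-1.0 .. 1.0) and magnitude for an utterance."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    return sentiment.score, sentiment.magnitude

score, magnitude = sentiment_of("Ik voel me vandaag een beetje eenzaam.")
print(score, magnitude)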

Google Vision API and Google Vision Kit
Used for visual recognition, to identify and recognize items in the user’s environment.
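A rough sketch of how a frame from the Buddy’s camera could be labelled with the Vision API; the file name is illustrative.

from google.cloud import vision  # pip install google-cloud-vision

def describe_scene(image_path: str):
    """Return the most likely labels for what the camera currently sees."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, label.score) for label in response.label_annotations]

for description, score in describe_scene("camera_frame.jpg"):
    print(f"{description}: {score:.2f}")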

OpenMV
An embedded camera platform that runs facial recognition on the device itself, protecting our users’ privacy and security by keeping images and personal data out of the cloud.
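On the OpenMV camera, detection runs in MicroPython on the board itself. The standard on-board face detection loop looks roughly like this; recognizing who the face belongs to is built on top of it, and only derived results, never the images, would leave the device.

# MicroPython script running on the OpenMV camera itself.
import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # Haar cascades work on grayscale frames
sensor.set_framesize(sensor.HQVGA)
sensor.skip_frames(time=2000)

face_cascade = image.HaarCascade("frontalface", stages=25)

while True:
    img = sensor.snapshot()
    # Detection happens entirely on the camera; frames never leave the board.
    for face in img.find_features(face_cascade, threshold=0.75, scale_factor=1.25):
        img.draw_rectangle(face)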

Google’s Speech API
To provide natural and smooth communication so all users can speak to Social Buddy using their natural voice.
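A short sketch of transcribing a recorded utterance with the Cloud Speech-to-Text API; the sample rate, file name, and Dutch language code are example values.

from google.cloud import speech  # pip install google-cloud-speech

def transcribe(audio_path: str, language_code: str = "nl-NL") -> str:
    """Transcribe a short WAV recording of the user's voice."""
    client = speech.SpeechClient()
    with open(audio_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code=language_code,
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

print(transcribe("utterance.wav"))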

Pose detection
AI technology that lets people with disabilities bypass the touchscreen entirely. This mainly benefits elderly users who find touch screens difficult to operate or cannot physically reach the screen.
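This page does not name the pose model we use; purely as an illustration, a camera frame can be run through MediaPipe Pose to get body landmarks that simple gestures can be read from.

import cv2                      # pip install opencv-python
import mediapipe as mp          # pip install mediapipe

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)       # the Buddy's camera
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    ok, frame = cap.read()
    if ok:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # e.g. a raised right wrist could be mapped to "yes" without a screen
            wrist = results.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
            print("right wrist at", wrist.x, wrist.y)
cap.release()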


A list of studies supporting the use of our technology

“Social Robots for Elderly Depression: Exploring Acceptance and Usefulness” by Corina Sas, Nadia Berthouze, and Arosha K. Bandara. The results show that participants accepted the robots and found them helpful in providing emotional support.

“Exploring User Experience and Technology Acceptance for a Smart Home Monitoring System in a Rehabilitation Setting” by Eunjae Lee and Hee-Sung Lim. This study investigated user experience and technology acceptance of a smart home monitoring system in rehabilitation. The results showed that the system effectively improved patient outcomes and was well-received by patients and healthcare professionals.

“Speech-Based Emotion Recognition Using a Multi-Layer Fusion Approach” by Xuan Zhang, Jun Wan, and Dongmei Jiang. This study explored the use of speech-based emotion recognition to improve communication between humans and robots. The results showed that the approach effectively detected emotions with high accuracy.

“The Effectiveness of Robots in the Care of Older People: A Systematic Review” by Joanna L. Robinson, Ray B. Jones, and Wendy Moncur. This study reviewed the effectiveness of robots in the care of older people. The results showed that robots effectively improved the quality of life and reduced loneliness among older people.

“New research reveals how touchscreens leave 5.6 million elderly behind in the UK.”

“Social Robots are perceived as pets.”

Reach out to us

Would you like to know more, or collaborate in our research and development?
Contact us below.
