Facial Emotion Classification and Collaborative Robotics on KR260


Collaborative robotics and facial emotion classification are two related fields that can benefit from each other. Collaborative robotics aims to create robots that can work safely and efficiently with humans in a shared environment. Facial emotion classification aims to analyze human facial expressions and recognize the underlying emotions. By combining these two fields, we can create robots that can not only perform tasks with humans but also understand and respond to their emotions. This can enhance the naturalness and empathy of the human-robot interaction, and improve the performance and satisfaction of both parties.

In this article, we will present a demo of a facial emotion classification system that runs on the Kria KR260 Robotics Starter Kit, a scalable and out-of-the-box development platform for robotics. We will describe the main features and benefits of the Kria KR260, the hardware and software components of our system, and the results of our experiments. Our goal for this article is to demonstrate how the Kria KR260 can be used to develop and deploy adaptive and intelligent robotic applications using PYNQ and MuseBox.

Why Collaborative Robotics?

Collaborative robotics is a growing field across key markets such as industrial, healthcare, and consumer. Alongside this market growth, the demand for new functionality brings the need for more powerful hardware. MakarenaLabs is an AMD partner, and one of its specializations is designing robotic applications for the new Kria KR260.

Furthermore, MakarenaLabs provides MuseBox, a framework that offers comprehensive APIs and an ML stack to shorten time to market for AI applications. In particular, collaborative robotics benefits from recent advances in segmentation and scene understanding, which are fundamental tasks for decision-making and human-robot collaboration.


The facial emotion classification system consists of two main components:

  1. The MuseBox platform runs the different AI inference tasks on frames captured from a USB camera.
  2. The PYNQ platform interfaces with the FPGA (the Kria KR260 SOM).
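As a rough sketch of how these two components fit together, the demo's core is a capture-and-classify loop. The function and class names below are illustrative stand-ins, not the actual MuseBox or PYNQ API:

```python
# Illustrative sketch of the demo's processing loop.
# `grab_frame` and `EmotionClassifier` are hypothetical stand-ins for the
# USB-camera capture and the MuseBox inference call; they are NOT the real API.

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised", "fearful", "disgusted"]

def grab_frame():
    """Stand-in for reading one frame from the USB camera."""
    return [[0] * 640 for _ in range(480)]  # dummy 480x640 grayscale frame

class EmotionClassifier:
    """Stand-in for a MuseBox-style model running on the KR260 fabric."""

    def predict(self, frame):
        # A real model would return per-emotion scores computed on the FPGA;
        # here we return a fixed distribution for illustration.
        scores = [0.10, 0.60, 0.05, 0.05, 0.10, 0.05, 0.05]
        return dict(zip(EMOTIONS, scores))

def classify_once(model):
    frame = grab_frame()
    scores = model.predict(frame)
    # Pick the most likely emotion label.
    return max(scores, key=scores.get)

print(classify_once(EmotionClassifier()))  # -> happy (with the dummy scores)
```

The same two-part split appears in the code: `grab_frame` belongs to the camera side, while `EmotionClassifier.predict` is where the FPGA-accelerated inference would run.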


Hardware Configuration


To use this system, you will need:

  • A Kria KR260 Robotics Starter Kit by AMD
  • A PYNQ image based on Petalinux for the KR260 (provided here)
  • The MuseBox library provided here (already installed on the image provided above)


Facial Emotion Classification Demo

Once everything is set up, you can open Jupyter Notebooks at <board_ip>:9090. We provide a Jupyter Notebook that is ready to interact with via the Start and Stop buttons, as shown below:
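Conceptually, the Start and Stop buttons toggle a background capture-and-infer loop. A minimal, framework-agnostic model of that control flow (not the notebook's actual widget code) looks like this:

```python
import threading
import time

class DemoRunner:
    """Toy start/stop controller mirroring the notebook's two buttons."""

    def __init__(self):
        self._stop = threading.Event()
        self._thread = None
        self.frames_processed = 0

    def _loop(self):
        while not self._stop.is_set():
            # In the real demo this step grabs a camera frame, runs MuseBox
            # inference on the FPGA, and displays the predicted emotion.
            self.frames_processed += 1
            time.sleep(0.01)

    def start(self):
        """Equivalent of pressing the Start button."""
        self._stop.clear()
        self._thread = threading.Thread(target=self._loop)
        self._thread.start()

    def stop(self):
        """Equivalent of pressing the Stop button."""
        self._stop.set()
        self._thread.join()

runner = DemoRunner()
runner.start()
time.sleep(0.1)   # let the loop run briefly
runner.stop()
print(runner.frames_processed > 0)  # True: the loop ran until Stop
```

Using a `threading.Event` keeps the UI responsive: the button callbacks return immediately while the loop runs in its own thread.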






Facial emotion classification is important for many reasons. It can help us improve our own emotional awareness and regulation, which can enhance our well-being and mental health. By analyzing human facial expressions and recognizing the underlying emotions, robots can adapt aspects of their behavior and feedback, such as speed, force, distance, difficulty, challenge, reward, or intervention, according to the human's emotional state.
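As a hedged illustration of that idea (the emotion labels and parameter values here are invented for the example, not taken from the demo), a robot controller might map a recognized emotion to adjusted interaction parameters:

```python
# Hypothetical mapping from recognized emotion to interaction parameters.
# All values are illustrative only.

POLICY = {
    "happy":   {"speed": 1.0, "distance_m": 0.5},
    "neutral": {"speed": 0.8, "distance_m": 0.7},
    "angry":   {"speed": 0.4, "distance_m": 1.2},  # slow down, keep distance
    "fearful": {"speed": 0.3, "distance_m": 1.5},  # be even more cautious
}

def adapt_behavior(emotion):
    """Return motion parameters for the detected emotion."""
    # Fall back to a cautious default for unrecognized emotions.
    return POLICY.get(emotion, {"speed": 0.5, "distance_m": 1.0})

print(adapt_behavior("angry"))    # {'speed': 0.4, 'distance_m': 1.2}
print(adapt_behavior("unknown"))  # cautious default
```

The point is not the specific numbers but the pattern: the classifier's output becomes an input to the robot's motion planner, closing the loop between perception and behavior.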