Cyber-Physical Smart Manufacturing

Our recent work develops assembly activity recognition systems for smart manufacturing environments. Drawing on multiple sensing modalities, including red-green-blue (RGB) video, hand skeleton frames, and Inertial Measurement Unit (IMU) data captured by wearable devices, we study methods for recognizing and predicting fine-grained assembly activities in real time.

We have investigated multi-modal approaches that combine visual cameras with smart armbands to capture workers' activities in detail, supporting human-centered intelligent manufacturing and aiming to improve production efficiency. Convolutional neural networks (CNNs) underpin much of this work; our individualized classification systems have reached recognition accuracies of up to 94%.
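
To make the multi-modal idea concrete, below is a minimal sketch of a late-fusion classifier, assuming one 1D CNN branch per wearable-sensor stream (e.g., IMU and sEMG) whose features are concatenated before a shared classification head. The class names, channel counts, and layer sizes are illustrative assumptions, not the published architecture.

```python
# Hypothetical late-fusion multi-modal activity classifier (illustrative only).
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """1D CNN over a windowed sensor stream, e.g. IMU or sEMG channels."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.proj(self.net(x).squeeze(-1))

class MultiModalActivityNet(nn.Module):
    """Concatenate per-modality features and classify the assembly activity."""
    def __init__(self, imu_channels: int, semg_channels: int, num_classes: int):
        super().__init__()
        self.imu_branch = ModalityBranch(imu_channels)
        self.semg_branch = ModalityBranch(semg_channels)
        self.head = nn.Linear(64 * 2, num_classes)

    def forward(self, imu, semg):
        feats = torch.cat([self.imu_branch(imu), self.semg_branch(semg)], dim=-1)
        return self.head(feats)

# Example: 6-axis IMU + 8-channel sEMG windows of 200 samples, 10 activity classes.
model = MultiModalActivityNet(imu_channels=6, semg_channels=8, num_classes=10)
logits = model(torch.randn(4, 6, 200), torch.randn(4, 8, 200))
print(logits.shape)  # torch.Size([4, 10])
```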

Beyond single-model classifiers, we have developed two-stage networks that fuse scene-level and temporal-level activity features to improve recognition accuracy. We have also built real-time assistance tools based on augmented reality; in our studies, these instructional systems reduced both error rates and completion times on complex assembly tasks by more than 30%.
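
As a simple illustration of combining the two stages, the sketch below fuses class probabilities from a scene-level (per-frame) classifier and a temporal-level (sequence) classifier by weighted averaging. The fusion weight and array shapes are assumptions for illustration, not the exact mechanism of the published two-stage network.

```python
# Illustrative score-level fusion of scene-level and temporal-level predictions.
import numpy as np

def fuse_predictions(scene_probs: np.ndarray,
                     temporal_probs: np.ndarray,
                     scene_weight: float = 0.5) -> np.ndarray:
    """Weighted average of class-probability vectors from the two stages."""
    fused = scene_weight * scene_probs + (1.0 - scene_weight) * temporal_probs
    return fused / fused.sum(axis=-1, keepdims=True)   # renormalize

# Example: 3 assembly activity classes.
scene = np.array([0.6, 0.3, 0.1])       # frame-level appearance cue
temporal = np.array([0.2, 0.7, 0.1])    # motion/sequence cue
print(fuse_predictions(scene, temporal).argmax())  # predicted class index
```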

Together, these projects combine deep learning, transfer learning, and fog computing frameworks into solutions that remain attentive to human behavior and ergonomic needs on the factory floor. We invite you to explore the publications below, which chart our progress toward efficient, human-centric, intelligent manufacturing.
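
For readers curious about the transfer-learning component, the following sketch fine-tunes an ImageNet-pretrained MobileNetV2 for assembly operation recognition, freezing the backbone so that only a small classification head is retrained, which keeps the workload light enough for a fog/edge node. The class count, backbone choice, and optimizer settings are illustrative assumptions rather than the published configuration.

```python
# Hypothetical transfer-learning setup for assembly operation recognition.
import torch
import torch.nn as nn
from torchvision import models

NUM_OPERATIONS = 8                                   # assumed number of assembly operations

backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in backbone.features.parameters():             # freeze the pretrained feature extractor
    p.requires_grad = False
backbone.classifier[1] = nn.Linear(backbone.last_channel, NUM_OPERATIONS)

optimizer = torch.optim.Adam(backbone.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB frames.
frames = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_OPERATIONS, (4,))
loss = criterion(backbone(frames), labels)
loss.backward()
optimizer.step()
```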

Recent Publications

2023

Chen, Haodong, Niloofar Zendehdel, Ming C. Leu, and Zhaozheng Yin. “Fine-grained activity classification in assembly based on multi-visual modalities.” Journal of Intelligent Manufacturing (2023): 1-19.

2022

Al-Amin, Md, Ruwen Qin, Wenjin Tao, David Doell, Ravon Lingard, Zhaozheng Yin, and Ming C. Leu. “Fusing and refining convolutional neural network models for assembly action recognition in smart manufacturing.” Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 236, no. 4 (2022): 2046-2059.

2021

Al-Amin, Md, Ruwen Qin, Md Moniruzzaman, Zhaozheng Yin, Wenjin Tao, and Ming C. Leu. “An individualized system of skeletal data-based CNN classifiers for action recognition in manufacturing assembly.” Journal of Intelligent Manufacturing (2021): 1-17.

2020

Tao, Wenjin, Ming C. Leu, and Zhaozheng Yin. “Multi-modal recognition of worker activity for human-centered intelligent manufacturing.” Engineering Applications of Artificial Intelligence 95 (2020): 103868.

Tao, Wenjin, Md Al-Amin, Haodong Chen, Ming C. Leu, Zhaozheng Yin, and Ruwen Qin. “Real-time assembly operation recognition with fog computing and transfer learning for human-centered intelligent manufacturing.” Procedia Manufacturing 48 (2020): 926-931.

Lai, Ze-Hao, Wenjin Tao, Ming C. Leu, and Zhaozheng Yin. “Smart augmented reality instructional system for mechanical assembly towards worker-centered intelligent manufacturing.” Journal of Manufacturing Systems 55 (2020): 69-81.

2019

Tao, Wenjin, Ze-Hao Lai, Ming C. Leu, Zhaozheng Yin, and Ruwen Qin. “A self-aware and active-guiding training & assistant system for worker-centered intelligent manufacturing.” Manufacturing Letters 21 (2019): 45-49.

Al-Amin, Md, Wenjin Tao, David Doell, Ravon Lingard, Zhaozheng Yin, Ming C. Leu, and Ruwen Qin. “Action recognition in manufacturing assembly using multimodal sensor fusion.” Procedia Manufacturing 39 (2019): 158-167.

Tao, Wenjin, Ze-Hao Lai, Ming C. Leu, and Zhaozheng Yin. “Manufacturing assembly simulations in virtual and augmented reality.” Augmented, Virtual, and Mixed Reality Applications in Advanced Manufacturing (2019).

2018

Tao, Wenjin, Ming C. Leu, and Zhaozheng Yin. “American Sign Language alphabet recognition using Convolutional Neural Networks with multiview augmentation and inference fusion.” Engineering Applications of Artificial Intelligence 76 (2018): 202-213.

Al-Amin, Md, Ruwen Qin, Wenjin Tao, and Ming C. Leu. “Sensor data based models for workforce management in smart manufacturing.” In Proceedings of the 2018 Institute of Industrial and Systems Engineers Annual Conference. 2018.

Hu, Liwen, Ngoc-Tu Nguyen, Wenjin Tao, Ming C. Leu, Xiaoqing Frank Liu, Md Rakib Shahriar, and SM Nahian Al Sunny. “Modeling of cloud-based digital twins for smart manufacturing with MTConnect.” Procedia Manufacturing 26 (2018): 1193-1203.

Tao, Wenjin, Ze-Hao Lai, Ming C. Leu, and Zhaozheng Yin. “American Sign Language alphabet recognition using leap motion controller.” In Proceedings of the 2018 Institute of Industrial and Systems Engineers Annual Conference. 2018.

Tao, Wenjin, Ze-Hao Lai, Ming C. Leu, and Zhaozheng Yin. “Worker activity recognition in smart manufacturing using IMU and sEMG signals with convolutional neural networks.” Procedia Manufacturing 26 (2018): 1159-1166.