This study analyzed acoustic emission (AE) signals generated during ultrasonic machining of SiC cathodes and evaluated the classification performance of several machine learning models. AE data were collected in both waveform and hit formats, enabling signal characterization through statistical analysis and frequency-domain examination. Machine learning models including XGBoost, KNN, logistic regression, SVM, and MLP were applied to classify machining states. Results showed that XGBoost achieved the highest classification accuracy across all sensor positions, reaching 98.35% at the upper part of the worktable. Additional experiments confirmed the consistency of these findings and highlighted the influence of sensor placement on classification performance. This study demonstrates the feasibility of AE-based machining-state monitoring using machine learning and underscores the importance of sensor placement and signal analysis in improving classification accuracy. Future research should incorporate defect data and deep learning approaches to further enhance classification performance and process-monitoring capabilities.
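The statistical and frequency-domain characterization described above can be illustrated with a minimal sketch. The feature set (RMS, peak, crest factor, kurtosis, dominant frequency) and the sampling rate are illustrative assumptions, not the paper's actual feature list:

```python
import numpy as np

def ae_features(waveform, fs):
    """Return a simple feature vector for one AE waveform segment.

    Features (assumed for illustration): RMS, peak amplitude, crest
    factor, excess kurtosis, and dominant frequency from the FFT.
    """
    w = np.asarray(waveform, dtype=float)
    rms = np.sqrt(np.mean(w ** 2))          # time-domain energy measure
    peak = np.max(np.abs(w))
    crest = peak / rms                      # peakiness of the burst
    # Excess kurtosis (Fisher definition) of the amplitude distribution.
    kurt = np.mean((w - w.mean()) ** 4) / np.var(w) ** 2 - 3.0
    # Dominant frequency from the magnitude spectrum (DC bin skipped).
    spec = np.abs(np.fft.rfft(w))
    freqs = np.fft.rfftfreq(w.size, d=1.0 / fs)
    dom_freq = freqs[np.argmax(spec[1:]) + 1]
    return np.array([rms, peak, crest, kurt, dom_freq])
```

Feature vectors of this kind would then feed the classifiers named in the abstract (XGBoost, KNN, logistic regression, SVM, MLP).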
Collaboration between robots and humans sharing a workspace can increase productivity and reduce production costs. However, removing the physical safeguards around a robot and allowing humans to enter its workspace can increase the risk of occupational accidents and injuries. To prevent such accidents, studies have proposed recognizing humans with various sensors installed around the robot and responding to their presence. A LiDAR (Light Detection and Ranging) sensor can measure a wide area simultaneously and has the advantage of being largely insensitive to ambient light levels. This paper proposes a simple and fast method to recognize humans and estimate their paths using a single stationary 360° LiDAR sensor. Moving objects are extracted from the background using the occupied grid map method applied to the sensor data. From the extracted data, a human recognition model is built using a CNN (convolutional neural network), and the model's hyperparameters are tuned with a grid search to increase accuracy. The path of the recognized human is then estimated and tracked by an extended Kalman filter.
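The tracking step can be sketched as an extended Kalman filter for a person seen by a stationary LiDAR: the motion model is linear (constant velocity), while the range-and-bearing measurement is nonlinear, so the update linearizes it with a Jacobian. The state layout, time step, and noise covariances below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

# State x = [px, py, vx, vy]; constant-velocity motion model.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.01 * np.eye(4)                 # process noise (assumed)
R = np.diag([0.05 ** 2, 0.01 ** 2])  # range/bearing noise (assumed)

def h(x):
    """Predicted measurement: range and bearing from the sensor at the origin."""
    px, py = x[0], x[1]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])

def H_jac(x):
    """Jacobian of h evaluated at state x."""
    px, py = x[0], x[1]
    r2 = px ** 2 + py ** 2
    r = np.sqrt(r2)
    return np.array([[px / r,    py / r,   0, 0],
                     [-py / r2,  px / r2,  0, 0]])

def ekf_step(x, P, z):
    """One predict/update cycle given measurement z = [range, bearing]."""
    # Predict with the linear motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the linearized measurement model.
    H = H_jac(x_pred)
    y = z - h(x_pred)
    y[1] = np.arctan2(np.sin(y[1]), np.cos(y[1]))  # wrap bearing residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

In the proposed pipeline, the CNN would supply the detected person's position, and `ekf_step` would smooth and predict the walking path over successive scans.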