eISSN: 2574-8092

International Robotics & Automation Journal

Mini Review Volume 8 Issue 1

Industrial robot integration based on object location and component inspection in Industry 4.0 using vision guided robot (VGR)

Adas Pereira Vitalli Rogério, Arnaldo Ortiz Clemente, Rosário João Mauricio

Department of Robotics Engineering, Campinas State University (UNICAMP), Brazil

Received: November 19, 2021 | Published: January 13, 2022

Citation: Vitalli AP, Clemente R, Ortiz A, et al. Industrial robots integration based in object location and component inspection in industry 4.0 using vision guided robot (VGR). Int Rob Auto J. 2022;8(1):1-7. DOI: 10.15406/iratj.2022.08.00236

Abstract

Industrial robots equipped with artificial vision systems are mainly intended for positioning tasks and the inspection of parts or objects. In order to meet the demands of productivity and competitiveness, intelligent factories that adopt the Industry 4.0 concept increasingly need sophisticated techniques for image processing and inspection. In this work we discuss the implementation of two artificial vision systems that perform two types of mechatronic VGR (Vision Guided Robot) applications. The first application concerns a guidance system that moves a robotic device based on a location extracted from the image. The second consists of part manipulation by a robot so that a vision system can perform an inspection task, identifying possible product failures in different positions. We present results obtained through practical tests using Cognex artificial vision equipment and robotic manipulators from ABB and KUKA. The results for the two robots are analyzed and compared with the objective of being used in applications directed to the automotive, manufacturing, food, beverage and plastic industries, among others.

Keywords: vision-guided robot (VGR), image inspection, fault location, industry 4.0

Introduction

Currently, many studies related to robotic system sensors are being carried out and interconnected with digital manufacturing and Industry 4.0. This research focuses on maximizing process automation, both in operation and in quality control.1,2

Within this trend is the implementation of projects aimed at the use of vision-guided industrial robots involved in the manipulation and assembly of components. These applications present a growing level of complexity and difficulty, but also great opportunities to increase productivity and improve the repeatability and quality of the process. In this same direction of development, there are applications of collaborative robots, which are found in non-confined environments and operate in collaboration with operators.3,4

A relevant challenge in robotic applications lies in process variations, especially regarding the pick position of parts, as well as the variability of the parts used in the process. In this direction, several recent studies have addressed robot guidance through industrial vision systems, a field in which approximately USD 8 billion was invested during the 2016-2018 period.5 The state of the art in this field includes the guidance of biped robots through cameras and sensors for orientation control, which illustrates the range of possibilities for guiding robots with computer vision systems.6,7

Given the growing demand and interest in the field of VGR (Vision Guided Robot), our research project proposes a study involving a KUKA Agilus KR 6 R900 six robot, an ABB IRB-120 robot, and a Cognex camera, model IS7200-11, for each type of robotic cell. The final goal of the proposal is a specific integration solution between them, taking into account safety aspects as well as in-process quality inspection, object location and robot guidance.

In this article, we present the main steps necessary for the implementation of artificial vision in robotics, subdivided into the following topics:

  1. Robot kinematics model and positioning system;
  2. Description of the vision system used and image processing;
  3. Integration in a robotic cell;
  4. Validation of the work through case studies:
     4.1 Vision guided robot - positioning system for moving a robotic device from image location;
     4.2 Part handling and inspection by a robot through a vision system;
  5. Results and discussion;
  6. Final conclusions.

Robotics kinematics model and positioning system

The methodology that has become standard for calculating the parameters of the mathematical model is the Denavit-Hartenberg notation.8 This model makes it possible to obtain the position and orientation of the tool and to completely define the kinematics of the manipulator. The most complex configuration of the degrees of freedom is the one with 3 degrees to position the end effector and 3 to orient it; with this it is possible to establish the forward kinematics of the robot and thus determine the movements of the robotic manipulator in three-dimensional space.9 Inverse kinematics, in turn, provides the joint angles. The number of degrees of freedom of the robot is equal to the sum of the degrees of freedom of its joints; for example, when relative movement occurs about a single axis, the joint has one degree of freedom.8–10
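As a minimal sketch of how the Denavit-Hartenberg notation yields the tool pose, the code below chains the per-joint homogeneous transforms. The DH table values are placeholders for illustration only, not the manufacturer's parameters for the IRB-120 or the Agilus KR 6 R900 six.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links from the four classic
    Denavit-Hartenberg parameters (angles in radians, lengths in mm)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the six link transforms to obtain the tool pose in the base frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder DH table (d, a, alpha) for a generic 6R arm -- NOT the real
# IRB-120 or KR 6 R900 parameters, which come from the manufacturer's data.
dh_table = [(290, 0, -np.pi/2), (0, 270, 0), (0, 70, -np.pi/2),
            (302, 0,  np.pi/2), (0, 0, -np.pi/2), (72, 0, 0)]

pose = forward_kinematics(np.zeros(6), dh_table)
print(pose[:3, 3])   # x, y, z of the tool flange in the base frame
```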

From a mechanical standpoint,10 manipulators are a combination of rigid structural elements (bodies and links) connected to each other through joints. Each joint forms the kinematic pair between two adjacent links; in our case we analyzed rotational (revolute) joints, often labeled "A" in the literature, as shown in Figures 1 & 2, respectively.

Figure 1 Industrial robot model ABB IRB-120.

Figure 2 Industrial robot model KUKA Agilus KR 6 R900 six.

Description of the vision system used and image processing

In the analysis of kinematic data, we considered the robot motion resulting from the trajectory information received from the camera, disregarding the dynamics and clearances involved. Vision systems, in turn, perform image processing with characteristics analogous to human vision. Such systems are composed of the following subsystems:

  • Lighting modules: assist image capture, providing an expected, consistent pattern of received images.
  • Vision sensors: responsible for capturing the image; cameras and other acquisition devices play the main role in the success of image processing.
  • Image processing hardware: component responsible for primary processing, which transforms and digitizes the acquired images, arranging them in matrix format as pixel elements.
  • Programmable logic controller (PLC): component that determines and executes secondary processing algorithms on the acquired image; these processes support decision-making according to the data obtained and analyzed.

Figures 3 & 4, respectively, characterize the subsystems of the vision system.

Figure 3 Image capture and processing scheme.

Figure 4 Scanned image processing.
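As a minimal illustration of the digitization step just described (Figure 4), the sketch below loads an acquired image as a matrix of pixel elements; the file name is a placeholder, not from the paper.

```python
import cv2

# Placeholder file name; any acquired image of the workbench would do.
image = cv2.imread("part_on_bench.png", cv2.IMREAD_GRAYSCALE)

print(image.shape)   # (rows, columns) -- e.g. (600, 800) for an SVGA sensor
print(image.dtype)   # uint8: each pixel element is an intensity from 0 to 255
print(image[0, 0])   # intensity of the top-left pixel (the camera's zero point)

# Secondary processing (e.g. thresholding to enhance contrast) operates
# directly on this matrix of pixel elements.
_, binary = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)
```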

For good efficiency, the concepts and premises related to image acquisition by industrial vision systems should be evaluated. Among them, we can describe the following:

  • Field of view: the region over which the camera can maintain focus and, consequently, the sharpness of the image to be acquired.
  • Resolution: the measure of the vision system's ability to reproduce the details to be analyzed; it is influenced by lighting and pixel size. The smaller the details to be resolved, the higher the required resolution of the acquired image.
  • Contrast: the ability to distinguish between dark and light regions; lighting has a great influence on the contrast expected for the operation. Good contrast can also compensate when resolution is lower than expected.
  • Distortion: the change in the apparent size of the object in the acquired image; it can be a problem if two objects at different heights share the same pixel/mm calibration. It can be compensated for during image processing.
  • Lighting: one of the most important premises for the success of the application, because cameras do not "see" objects; they only capture the light reflected by them. Lighting is also responsible for the repeatability of the image acquisition process.

In addition to the main factors to be considered in the design of a vision system application, the concepts of light intensity and the conversion of vision data into measurement units must also be addressed. Another aspect intrinsic to this processing is camera calibration, which converts pixels into real-world measurement units. Regarding applications, the main industrial uses of vision systems are inspection and localization, across various industrial sectors. In inspection applications, it is possible to determine product characteristics such as geometric dimensions, presence or absence of the object, orientation of parts, and so on.
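To make the resolution and pixel-to-millimeter conversion concrete, a quick back-of-the-envelope check can be written down. The 200 mm field of view and the 4 pixels-per-feature rule of thumb below are assumptions for illustration; 800 × 600 (SVGA) matches the camera described next.

```python
# Back-of-the-envelope resolution check (assumed field of view, SVGA sensor).
field_of_view_mm = 200.0   # assumed horizontal field of view
sensor_width_px = 800      # SVGA horizontal resolution

mm_per_pixel = field_of_view_mm / sensor_width_px
print(f"Calibration scale: {mm_per_pixel:.3f} mm/pixel")        # 0.250 mm/pixel

# Rule of thumb: a feature should span a few pixels to be detected reliably.
pixels_per_feature = 4                                          # assumption
min_feature_mm = pixels_per_feature * mm_per_pixel
print(f"Smallest reliably detectable feature: {min_feature_mm:.1f} mm")   # 1.0 mm
```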

In localization applications, industrial robots using VGR are assisted in determining the pick position of an object, including through the optimization of secondary processing algorithms: it is possible to evaluate and decide which part should be picked first, optimizing the trajectory to avoid previously known obstacles. This research project uses a Cognex camera, model In-Sight Micro 1400 (code ISM-1400-11), with integrated LED lighting, automatic focus adjustment, PatMax, a 6 mm lens, red light, 800 x 600 (SVGA) resolution, an acquisition rate of 102 frames per second, IP67 rating with lens protection, 4 inputs and 4 outputs of 24 V DC, and an RS-232 connection, as shown in Figure 5.

Figure 5 Camera model used: Cognex, In-Sight micro 1400 model.

A Siemens S7-1200 PLC, model 1212C DC/DC/DC (6ES7 212-1AE40-0XB0), with a Profibus CM 1243-5 card (6GK7 243-5DX30-0XE0), will also be used, as shown in Figure 6.

Figure 6 PLC Model applied: Siemens model S7-1200.

The electrical panel containing the PLC controls the robot and camera. For I/O control there are 32 inputs and 32 outputs, all available at terminals, considering 3-wire sensors and 24 VDC coils. Safety is provided by a small category 4 light barrier and a safety relay that sends the signals to the robot. The panel also has a button box with emergency, reset, barrier reset, cycle start and cycle end buttons, as well as a wireless switch to enable the use of a tablet as a human-machine interface (HMI), as shown in Figure 7.

Figure 7 Electrical Panel for KUKA and ABB manipulator.

The KUKA cell is composed of an Agilus KR 6 R900 six industrial robot, with six spatial axes, a maximum reach of 900 mm, a maximum payload of 6 kg and a KRC-4 (compact) control system. The peripherals are: a SmartPad with a 10 m cable, an X51 interface, and an I/O card with 16 digital inputs (24 VDC) and 16 digital outputs (24 VDC), with a power supply for the I/O signals of up to 24 VDC/4 A. Programming is in the KRL language, with communication via Profinet (master). The dedicated software is Gripper & SpotTech 3.1 and the Micro EMD mastering set, as shown in Figure 8.

Figure 8 KUKA Robotic Cell.

The ABB cell is composed of an IRB-120 robot with six spatial axes, a maximum reach of 580 mm, a maximum payload of 4 kg (supporting up to 6 kg) and an IRC-5 (compact) control system. The peripherals are: a FlexPendant with a 7 m cable and an I/O card with 16 digital inputs (24 VDC) and 16 digital outputs (24 VDC), with a power supply for the I/O signals of up to 24 VDC/4 A. Communication is via a PCI DeviceNet Lean (master) and Profibus-DP (slave) board. The dedicated software comprises: Collision Detection, for anti-collision protection of the robot wrist (with current-spike monitoring of the robot motors); World Zones, which allows monitoring of the area in which the robot's TCP is located during the entire execution of the program; Path Recovery, which stores the trajectory in case of a system interruption and recovers it, returning the robot to the point of the trajectory where it was at the time of the stop; PC Interface, for communication with a PC via Ethernet; and Multitasking, for multiprocessing of tasks on the controller, as shown in Figure 9, along with the vision system (Figure 10).

Figure 9 ABB Robotic Cell.

Figure 10 Integrated vision and lighting system.

Integration in a robotic cell

The architecture implemented in the ABB cell connects all components through industrial automation networks. The camera is connected via a Profinet network and the robot via a Profibus network; between the two, the PLC bridges the networks, as shown in Figure 11.

Figure 11 ABB architecture.

The architecture implemented in the KUKA cell connects all components through industrial automation networks. The camera, PLC and robot are all connected via a Profinet network, as shown in Figure 12.

Figure 12 KUKA architecture.

Validation of work through case studies

  1. Vision guided robot - positioning system for moving a robotic device from image location.
  2. Part handling and inspection by a robot through a vision system.
  3. Analysis of experimental data.

Results and discussion

One of the objectives of the work is to obtain the coordinates and orientation of the object to be picked up, relative to a reference. In addition to the object, the coordinates of a physical reference system are also detected in the image.

An example of the image acquired by the camera is shown in Figure 13, in which we can see the reference coordinate system (in red), the detected part (in green), and the position and orientation of the part (in blue). The acquired data are listed in Table 1.

Figure 13 Coordinate and reference systems.

Item            Description                                        Unit
Coordinate X    Horizontal position of the center of the part      mm
Coordinate Y    Vertical position of the part                      mm
Orientation     Part rotation                                      º
Coordinate X    Horizontal position of the reference coordinate    mm
Coordinate Y    Vertical position of the reference coordinate      mm
Orientation     Rotation of the reference coordinate               º

Table 1 Camera acquisition data

Both camera detections are made relative to the top-left corner of the image, which is the zero point for the camera.

The X and Y reference coordinate system on the table is detected by the camera. This reference is fixed with respect to a known robot coordinate system. After these two items are detected, their coordinates are sent to the PLC, which passes them on to the robot. The coordinates sent by the camera are the coordinates of the part minus the reference coordinates. Before sending, the result of this subtraction is multiplied by 100 so that only integers are transmitted over the network, avoiding reading errors with floating-point (real) variables, as in Table 2.

Item                     Description                                                                 Unit
Relative coordinate X    Horizontal position of the center of the part relative to the reference     mm × 100
Relative coordinate Y    Vertical position of the part relative to the reference                     mm × 100
Relative orientation     Rotation of the part relative to the reference                              º × 100

Table 2 Data sent by camera

Upon receiving the positions mentioned above, the robot divides them by 100 to restore the decimal places. The execution of the project and of the integration takes place as shown in Figure 14.

Figure 14 System execution flowchart.
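As a concrete illustration of the integer-scaling convention described above (Table 2), the following sketch shows both ends of the exchange; the numerical poses and the function names are hypothetical.

```python
def camera_to_network(part_pose, reference_pose):
    """Camera/PLC side: subtract the reference pose from the part pose and
    multiply by 100 so that only integers travel over the network."""
    return tuple(round((p - r) * 100) for p, r in zip(part_pose, reference_pose))

def robot_from_network(words):
    """Robot side: divide by 100 to restore millimetres and degrees."""
    return tuple(w / 100.0 for w in words)

# Hypothetical values for illustration only (x [mm], y [mm], angle [deg]).
part      = (152.37, 88.12, 31.50)
reference = (100.00, 50.00,  0.00)

words = camera_to_network(part, reference)   # -> (5237, 3812, 3150)
pose  = robot_from_network(words)            # -> (52.37, 38.12, 31.5)
print(words, pose)
```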

First, the calibration of the vision system must be configured, in order to align it with the robotic manipulator system so that the coordinates transmitted by the camera match.11,12 The calibration can be performed by several methods, such as capturing a "grid" pattern or measuring features of known dimensions during image processing.13,14 For this application, the edge-to-edge method was chosen: when the grid image is captured and the calibration routine is started, the edges between the squares are recognized, making it possible to observe and select them and to mark the spacing between them. The marking of the edges and of the spacing between them can be observed in Figures 15 & 16, respectively.

Figure 15 Grid image and border marking.
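For readers who want to reproduce the idea outside the Cognex environment, the sketch below performs an analogous grid calibration with OpenCV. The file name, pattern size and square spacing are assumptions; this is not the In-Sight edge-to-edge routine used in the project, only an illustration of the same principle.

```python
import cv2
import numpy as np

# Assumed grid geometry and file name (illustrative only).
GRID_IMAGE = "calibration_grid.png"
PATTERN_SIZE = (9, 6)          # inner corners per row and column
SQUARE_SIZE_MM = 10.0          # known spacing between adjacent edges

img = cv2.imread(GRID_IMAGE, cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, PATTERN_SIZE)
assert found, "calibration grid not detected"

# Refine corner locations to sub-pixel accuracy.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)

# Mean pixel distance between horizontally adjacent corners gives the scale.
grid = corners.reshape(PATTERN_SIZE[1], PATTERN_SIZE[0], 2)
pixel_spacing = np.linalg.norm(np.diff(grid, axis=1), axis=2).mean()
print(f"Scale: {SQUARE_SIZE_MM / pixel_spacing:.4f} mm/pixel")
```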

Next, the calibration was validated using the image distance-measurement function (Function: Distance) together with a real measurement of the grid, as can be observed in Figure 17. Taking the possible measurement uncertainties into account, the compared values are acceptable, and the calibration method of the vision system is therefore validated.14,15

Figure 16 Spacing marking between selectable edges.

After calibrating the system, the configuration of the camera functions was verified. The PatMax® Pattern function was used to define an image pattern and return the position data of both the reference image common to the camera and robot and of the part to be picked. After calibration of the vision system, the process operates as follows: the robot approaches the pick region along a predetermined path. When the application start button is pressed, the PLC requests an image from the camera over the network (trigger). At that moment the camera acquires a photo and starts its analysis.16

The camera then searches the image for the reference system and the object. This reference system consists of axes fixed to the table at a known point. The camera then sends the PLC the difference between the position and orientation values of these two items. Position information is sent in millimeters (mm) thanks to the previous camera calibration, in which pixels are converted to millimeters. With this data, the PLC analyzes the angle at which the part is located. If the part is in a position where the robot cannot grip it, the PLC starts rotating the table; this rotation is predetermined at 90°. After the rotation, a new camera analysis is requested and the process restarts. Once a valid grip position is found, the PLC processes the data and sends the robot the position of the part relative to the reference (difference of coordinates and angle). During application calibration, a base system (or frame) was created in the robot on the reference system detected by the camera; thus, the measurements calculated by the camera represent, for the robot, the position of the block relative to the created coordinate system. With the position and orientation of the part, the robot plans the best interpolated approach route, that is, the direction from which it must approach to pick up the piece. From then on, calculations are made so that the approach is parallel to the object. The robot then approaches and picks up the object, deposits the part at a predetermined point and performs the final movement to finish the cycle, according to the objective proposed and achieved in this research project (Figure 17).

Figure 17 Actual distance found.
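The decision logic of this cycle can be summarized in a short sketch. The 45° reach limit, the data types and the function name are illustrative assumptions; the real logic runs in the PLC and robot controller.

```python
from dataclasses import dataclass

REACHABLE_ANGLE_DEG = 45.0   # assumed limit beyond which the robot cannot grip

@dataclass
class Pose2D:
    x: float       # mm
    y: float       # mm
    angle: float   # degrees

def plan_next_action(part: Pose2D, reference: Pose2D):
    """Return a table-rotation command or the relative pick pose for the robot."""
    rel = Pose2D(part.x - reference.x,
                 part.y - reference.y,
                 part.angle - reference.angle)
    if abs(rel.angle) > REACHABLE_ANGLE_DEG:
        return ("rotate_table", 90.0)    # predetermined 90° step, then re-image
    return ("pick", rel)                 # PLC forwards rel (scaled x100) to the robot

# Example: a part rotated 120° relative to the reference forces one table rotation.
print(plan_next_action(Pose2D(152.4, 88.1, 120.0), Pose2D(100.0, 50.0, 0.0)))
```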

Final conclusion

The positive outcome of the research project was the gripping of an object by the robot, guided by the vision system. The camera read the object and sent its position and orientation to the control system. The robot received the data processed by the PLC and successfully picked up the object regardless of the new, unknown position and orientation it was in. Machine vision gives the robot the ability to respond to changing conditions: vision-guided robots can locate and pick up parts that are incorrectly positioned on a workbench without requiring reprogramming. We also saw that the piece has to be placed on a plane that is neither too large nor too high, but sufficient to stay within the depth of field and field of view of the camera.

The parts used are relatively small and rest on a flat 2D surface. The proposed system consists of a suspended Cognex camera connected to a machine vision processing system, "looking" at the workbench. We used two articulated robotic arms (ABB and KUKA), each offering 6 degrees of freedom and therefore suitable for 2D vision guidance. In the first stage of application and operation of the vision-guided robot, we calibrated the visual coordinate system against the physical coordinate system of the robot. If the two coordinate systems coincide, the task is simple; if they do not match, the problem becomes considerably more complex. Lens distortion, lighting changes and other factors affect how the vision system sees the "real world". The vision system linked the 2D camera coordinate system to the robot's physical coordinate system by placing the camera directly above the bench, perpendicular to its surface, and using a calibration method (typically involving a calibration "target" of defined dimensions). This allowed the machine vision system to extract the relevant positional data from images of the parts placed on the workbench. The image was then transmitted from the overhead camera to the PLC, which ran the image processing program to create a 2D map from the robot's center or from the object's contour, so that the tasks could be performed successfully.

Acknowledgments

Robotics Advanced Institute – I.A.R. São Paulo. Brazil.

Conflicts of interest

The authors declare that there is no conflict of interest.

Funding

None.

References

  1. Galetto M. Advances in large-scale metrology – Review and future trends. CIRP Ann Manuf Technol. 2016;65(2):643–665.
  2. Zhang X, Song Y, Yang Y, et al. Stereo vision based autonomous robot calibration. Rob Auton Syst. 2017;93:43–51.
  3. Kim K, Cho J, Pyo J, et al. Dynamic object recognition using precise location detection and ANN for robot manipulator. 2017 International Conference on Control, Artificial Intelligence, Robotics & Optimization (ICCAIRO). IEEE Xplore; 2018.
  4. Maeda Y, Saito Y. Lighting- and occlusion-robust view-based teaching/playback for model-free robot programming. In: Chen W, Wang H, Hosoda K, editors. 14th International Conference on Intelligent Autonomous Systems, IAS 2016. Springer; 2017;531:939–952.
  5. Berger M, Bachler G, Scherer S. Vision guided bin picking and mounting in a flexible assembly cell. Proceedings of the 13th International Conference on Industrial & Engineering Applications of Artificial Intelligence & Expert Systems IEA/AIE 2000, New Orleans, Louisiana, USA. 2000:109–118.
  6. Kress S. Machine vision makes its mark on the automotive industry. Automotive Design and Production. 2004.
  7. Lee D, Nixon MS. Vision-based finger action recognition by angle detection and contour analysis. ETRI Journal. 2011;33(3):415–422.
  8. Denavit J, Hartenberg RS. A kinematic notation for lower-pair mechanisms based on matrices. Trans ASME J Appl Mech. 1955;23:215–221.
  9. Ding F, Liu C. Applying coordinate fixed Denavit–Hartenberg method to solve the workspace of drilling robot arm. International Journal of Advanced Robotic Systems. 2018;15(4).
  10. Alcala R, Arceo ZG, Baterisna JN, et al. Selective Compliance Articulated Robotic Arm (SCARA): application of inverse kinematics on the control of pick and place using Microsoft Kinect Xbox 360. Res Gate. 2018.
  11. Zeng A, Song S, Yu KT, et al. Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. arXiv preprint. 2018.
  12. Alcala R, Arceo ZG, Baterisna JN, Morada JO, Ramirez JOD. Selective Compliance Articulated Robotic Arm (SCARA): application of inverse kinematics on the control of pick and place using Microsoft Kinect Xbox 360. Res Gate. 2018.
  13. Yousaf RM, Habib HA, Dawood H, et al. A comparative study of various edge detection methods. 14th International Conference on Computational Intelligence and Security (CIS). 2018:96–99.
  14. Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell. 2000;22(11):1330–1334.
  15. Zhang S, Wang C, Chan S. A new high resolution depth map estimation system using stereo vision and kinect depth sensing. J Signal Process Syst. 2015;79(1):19–31.
  16. Jabalameli A, Ettehadi N, Behal A. Edge-based recognition of novel objects for robotic grasping. arXiv preprint. 2018.
Creative Commons Attribution License

©2022 Vitalli, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.