Artificial Intelligence for Robotics

Bin-picking through reinforcement learning

 

What's it good for?
Robot-based bin-picking is no longer trained through physical gripping tests, but entirely in simulation. This significantly reduces the robots' learning time - an important prerequisite for making machine learning viable in industry.

What's new?
Robots learn like children or pets: by being rewarded for doing things right and penalized for doing things wrong. In so-called reinforcement learning, robots are awarded a top position in the ranking for correctly completed tasks, but points are deducted and they are downgraded if they perform them incorrectly. This method could soon be applied in the "Deep Grasping" research project, in which researchers apply machine learning to bin-picking. The necessary training data is generated in a virtual learning environment. The pre-trained neural networks are then transferred to the real robot for subsequent training.
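The reward-and-penalty loop described above can be sketched in a few lines. This is a toy illustration only: the grasp poses, success probabilities, learning rate, and reward values below are invented assumptions, not details of the "Deep Grasping" project, and a tabular value estimate stands in for the pre-trained neural networks.

```python
import random

# Toy sketch of reinforcement learning for grasping: each simulated
# grasp attempt succeeds or fails, and the value estimate for the
# chosen grasp pose is nudged up (reward) or down (penalty).
def train_grasp_values(success_prob, episodes=5000, lr=0.1, seed=0):
    rng = random.Random(seed)
    values = {pose: 0.0 for pose in success_prob}
    for _ in range(episodes):
        pose = rng.choice(list(success_prob))         # try a grasp pose
        reward = 1.0 if rng.random() < success_prob[pose] else -1.0
        values[pose] += lr * (reward - values[pose])  # move toward observed return
    return values

# Two hypothetical grasp poses: one usually succeeds, one usually fails.
values = train_grasp_values({"top_down": 0.9, "sideways": 0.2})
```

After training, the value table ranks the reliable grasp pose above the unreliable one - the same ranking-by-reward idea, in miniature.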

More information about bin-picking

 

Recognizing objects and humans

 

What's it good for?
In order for service robots to move safely in dynamic environments, they must be able to recognize and avoid people and objects. One example is "Paul", alias Care-O-bot 4, which is used in several stores of the electronics retailer Saturn. He welcomes customers, asks them how he can help and leads them to the relevant department. In the House of History in Bonn, he presents selected exhibits to visitors.

What's new?
Based on local image characteristics, cognitive service robots such as Care-O-bot 4 are now able to reliably identify or differentiate between people and objects. Robots that interact with humans, for example, can estimate the age, gender and mood of their counterparts and react accordingly.
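Classification from local image characteristics can be illustrated in miniature: reduce each image region to a small descriptor (here, a histogram over local feature bins) and label a new sample by its nearest stored reference. The 4-bin descriptors and reference values below are invented for illustration; Care-O-bot 4's actual perception pipeline is far more sophisticated.

```python
# Label a descriptor by its nearest reference histogram (squared
# Euclidean distance) - a minimal stand-in for classification based
# on local image characteristics.
def nearest_label(descriptor, references):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda label: dist(descriptor, references[label]))

# Hypothetical 4-bin local-feature histograms (normalized counts).
references = {
    "person": [0.40, 0.30, 0.20, 0.10],
    "object": [0.10, 0.20, 0.30, 0.40],
}

label = nearest_label([0.38, 0.32, 0.18, 0.12], references)
```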

More information about image processing for robots

Sighted assembly robots

 

What's it good for?
With typical assembly tasks such as fixing screws, applying seals or inserting parts with small joining tolerances, considerable time is taken up moving the robot slowly by hand and teaching it the finer points of the joining process with regard to robustness and cycle time. The use of cameras and corresponding image processing during teaching can drastically reduce the time required. Furthermore, in many cases highly qualified robot experts are no longer needed for programming.

What's new?
Assembly details such as screw positions, edges or plug connectors are automatically extracted from the image data by special algorithms and made available to the user for manipulation. Commands such as "Align part to edge", "Move towards edge" or "Rotate around edge" are thus possible. The result of each command is displayed directly in the image, enabling even inexperienced robot users to successfully automate assembly tasks with a robot.
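The geometry behind a command like "Align part to edge" can be sketched simply: given an edge extracted from the image (represented here by just its two endpoints) and the part's current orientation, compute the rotation that aligns the part with the edge. The coordinates, angles, and function names below are illustrative assumptions, not the actual command interface.

```python
import math

# Orientation of an edge given its two endpoints in the image plane.
def edge_angle(p0, p1):
    return math.atan2(p1[1] - p0[1], p1[0] - p0[0])

# Rotation (radians) that aligns the part's orientation with the edge,
# normalized to (-pi, pi] so the robot takes the shorter rotation.
def align_to_edge(part_angle, edge_p0, edge_p1):
    delta = edge_angle(edge_p0, edge_p1) - part_angle
    return math.atan2(math.sin(delta), math.cos(delta))

# Part currently at 10 degrees, edge running at 45 degrees:
rotation = align_to_edge(math.radians(10), (0.0, 0.0), (1.0, 1.0))
print(round(math.degrees(rotation), 1))  # 35.0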

Cloud navigation

 

What's it good for?
In industrial plants, networked and cooperative navigation of automated guided vehicles (AGVs) makes each AGV 'leaner'. Less hardware is required on each vehicle, making the system more economical and effective. Fleet behavior becomes agile, i.e. it adapts dynamically to the current situation, and the fleet can be scaled regardless of vehicle manufacturer.

What's new?
With cloud navigation, all AGVs and stationary sensors are connected to a central cloud infrastructure. The environment detection and obstacle recognition data from each individual vehicle is thus incorporated into a central model of the environment, which all vehicles use for their path planning. Vehicles know about obstacles, for example, without having detected them with their own sensors, and can plan an alternative route straightaway. This enables networked, global path planning for cooperative planning solutions. In certain traffic situations, such as intersections, local paths can be dynamically exchanged and coordinated. Incorporating external sensors ensures that highly accurate sensor information is available at all times.
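The core idea - a shared environment model that every vehicle plans against - can be sketched as follows. One AGV reports obstacles into a central set, and another AGV's path planner avoids them without ever having sensed them itself. The 5x5 grid, the breadth-first planner, and the obstacle positions are simplifying assumptions for illustration.

```python
from collections import deque

# Breadth-first path planning on a small grid against a shared
# obstacle set (the central environment model).
def plan(start, goal, obstacles, size=5):
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no route exists

# AGV 1 reports a blocked aisle into the shared cloud model ...
shared_obstacles = {(2, 0), (2, 1), (2, 2)}
# ... so AGV 2 routes around it without detecting it itself.
path = plan((0, 0), (4, 0), shared_obstacles)
```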

Automatic component analysis with NeuroCAD

 

What's it good for?

The web-based software NeuroCAD automatically analyzes specific component properties, such as how easily a component can be separated or picked up. In this way, product designers can already find out during the planning phase how "automation-friendly" their products are. The software also gives manufacturers of separation devices an internal tool which they can use to prepare quotations or for sales purposes.

What's new?

Until now, assessing whether a component can be handled automatically has always depended on an expert's knowledge and experience. The software NeuroCAD is a tool that automates this assessment with the aid of machine learning methods. Users can upload their STEP files and find out within a few seconds, on a scale of one to ten, how easy or difficult it is to separate a component. In addition, the tool evaluates the gripping surfaces of a part and how easily it can be aligned. Work is currently in progress to provide further information, such as positioning capability. Besides evaluating components, the neural network also states the probability that its evaluation is correct.
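The kind of output described above - a 1-to-10 score plus a confidence value - can be sketched with a single logistic unit. The features, weights, and scoring function below are invented for illustration; the real tool trains a neural network on uploaded STEP data.

```python
import math

# Toy stand-in for a learned scoring model: a weighted feature sum is
# squashed to (0, 1), mapped to a 1..10 score, and the distance from
# the 0.5 decision boundary serves as a confidence value.
def score_part(features, weights, bias=0.0):
    z = sum(w * f for w, f in zip(weights, features)) + bias
    p = 1.0 / (1.0 + math.exp(-z))   # probability of "easy to separate"
    score = 1 + round(9 * p)         # scale of one to ten
    confidence = max(p, 1.0 - p)     # how sure the model is
    return score, confidence

# Hypothetical features: [symmetry, flat_surface_ratio, tangling_risk]
score, confidence = score_part([0.8, 0.9, -0.4], weights=[1.5, 1.0, 2.0])
```

Like NeuroCAD's output, the sketch reports both a score and how confident the model is in that score - here simply how far the prediction lies from the decision boundary.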

Further Information

 

Bin-picking

 

3D Image Processing

 

Automatic component analysis with NeuroCAD