I'm glad to show you a video of my robot in the real world. It was trained with ML-Agents, with real-time meshing by 6D.AI. Currently the robot can move from one point to another and climb over hills.
I couldn't find good examples of magnets, so I created the magnet mechanic myself. My ML-Agent detects the mesh by casting a ray from the free magnet, and the magnet attaches to the mesh when the forward distance between the magnet and the mesh is short enough.
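The attach rule above can be sketched in a few lines. This is only an illustration: the names (`ATTACH_DISTANCE`, `should_attach`) and the simplification of the mesh to a flat plane are mine, not from the project; the real version would use the physics engine's raycast against the reconstructed mesh.

```python
ATTACH_DISTANCE = 0.05  # metres; assumed threshold, not from the project

def distance_to_plane(origin, direction, plane_y=0.0):
    """Distance along `direction` from `origin` to the plane y = plane_y,
    or None if the ray never reaches it. Stands in for a physics raycast
    against the reconstructed mesh."""
    _, oy, _ = origin
    _, dy, _ = direction
    if abs(dy) < 1e-9:
        return None  # ray parallel to the plane
    t = (plane_y - oy) / dy
    return t if t >= 0 else None  # only hits in front of the magnet count

def should_attach(origin, direction):
    """Attach the free magnet when its forward ray hits the mesh
    within ATTACH_DISTANCE."""
    d = distance_to_plane(origin, direction)
    return d is not None and d <= ATTACH_DISTANCE
```

With many such rays pointing in different directions, the brain can compare the returned distances and steer the magnet toward the closest hit.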
I modeled my robot on the Crawler example from ML-Agents, but my robot has two brains. One brain is similar to the Crawler brain: it decides how to move the body.
I made a second brain that decides how to move the magnet that isn't attached to the mesh. After adding it I got a really good result! This brain receives observations from the rays and decides how to rotate around its leg (fast detection of the shortest distance requires many rays pointing in different directions). Once the free magnet completes its task (attaching to the mesh), the brain switches control to the other magnet.
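The hand-off between the two magnets can be sketched as a tiny state machine: the free-magnet brain always controls whichever magnet is currently detached, and attaching one releases the other. The class and field names here are illustrative assumptions, not the project's actual code.

```python
class MagnetHandoff:
    """Minimal sketch of the control hand-off described above:
    the free-magnet brain steers the detached magnet; when it
    attaches, the other magnet releases and takes over as 'free'."""

    def __init__(self):
        # Start with magnet A holding the mesh and magnet B free.
        self.attached = {"magnet_a": True, "magnet_b": False}

    @property
    def controlled(self):
        # The brain controls the magnet that is NOT attached.
        return next(name for name, held in self.attached.items() if not held)

    def on_attached(self, magnet):
        """Called when the free magnet reaches the mesh: it locks on,
        the other magnet releases, and control switches to it."""
        other = "magnet_b" if magnet == "magnet_a" else "magnet_a"
        self.attached[magnet] = True
        self.attached[other] = False
```

Keeping one brain per role (body movement vs. free-magnet placement) rather than one giant brain is what the post describes, and it keeps each observation/action space small.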
In order to climb a ladder or a snowdrift, the robot first has to learn to walk on a similar surface in the Unity Editor. However, it was impossible for it to learn to climb such hills right away. At first it learned to walk on a smooth surface over a short distance. With every increase in the hills' height, its learning metrics dropped heavily. But the robot didn't give up! Training the last version took nearly 40 hours, with 21 agents training together.
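This "smooth surface first, then taller and taller hills" schedule is a curriculum. A minimal sketch of the progression logic, assuming a list of hill heights and a reward threshold (both values are my placeholders; ML-Agents also ships its own curriculum configuration that does this for you):

```python
HEIGHT_LESSONS = [0.0, 0.1, 0.2, 0.4]  # assumed hill heights per lesson, in scene units
REWARD_THRESHOLD = 0.8                 # assumed mean reward needed to advance

def next_lesson(lesson, mean_reward):
    """Advance to the next hill height once the agents handle the
    current one reliably; otherwise keep training at this height.
    A stand-in for ML-Agents' built-in curriculum support."""
    if mean_reward >= REWARD_THRESHOLD and lesson < len(HEIGHT_LESSONS) - 1:
        return lesson + 1
    return lesson
```

Each of the 21 parallel agents would train on terrain generated at `HEIGHT_LESSONS[lesson]`, which matches the observation above that metrics drop after every height increase and then recover.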
I don't see any difficulty in teaching it other fun actions, such as walking on the ceiling or magnetizing two robots together! But it is impossible to continue this work and build an interesting, useful application alone. When I find a team and a budget, I will continue this project. If you are interested in this robot, please contact me; I am open to any suggestions!