Overview
CUDA-Accelerated Physics Simulation
An optimized memory-management strategy inspired by Boxi Xia's work accelerates the physics simulation. Each property of the simulation struct is allocated as a separate array on the GPU (a structure-of-arrays layout), which enables more efficient memory access and lower memory consumption.
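As a rough illustration (not the project's actual code), the sketch below shows a structure-of-arrays state in Python, with CuPy standing in for raw CUDA allocation; the class and field names are hypothetical:

```python
import cupy as cp

# Hypothetical mass-spring state in a structure-of-arrays (SoA) layout:
# each property lives in its own contiguous GPU array instead of an
# array of structs, so neighboring threads read neighboring addresses
# (coalesced access) and no struct padding is wasted.
class MassState:
    def __init__(self, n_masses: int):
        self.pos = cp.zeros((n_masses, 3), dtype=cp.float32)   # positions
        self.vel = cp.zeros((n_masses, 3), dtype=cp.float32)   # velocities
        self.acc = cp.zeros((n_masses, 3), dtype=cp.float32)   # accelerations
        self.mass = cp.ones(n_masses, dtype=cp.float32)        # per-mass weight

state = MassState(10_000)
# An integration step then touches whole property arrays at once on the GPU.
dt = 1e-4
state.vel += state.acc * dt
state.pos += state.vel * dt
```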
An energy-conservation test for the simulator demonstrates the accuracy of the simulation.
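A minimal sketch of such a test, assuming an undamped mass-spring system with gravity along z; the function signature and spring representation are hypothetical:

```python
import cupy as cp

def total_energy(pos, vel, mass, springs, rest_len, k, g=9.81):
    """Kinetic + gravitational + elastic energy of a mass-spring system.
    springs is an (n_springs, 2) array of mass indices."""
    kinetic = 0.5 * cp.sum(mass * cp.sum(vel ** 2, axis=1))
    gravitational = cp.sum(mass * g * pos[:, 2])          # z is up
    d = pos[springs[:, 0]] - pos[springs[:, 1]]
    stretch = cp.linalg.norm(d, axis=1) - rest_len
    elastic = 0.5 * cp.sum(k * stretch ** 2)
    return float(kinetic + gravitational + elastic)

# After simulating N steps with no damping or contact, the relative
# drift should stay near zero:
#   assert abs(total_energy(...) - e0) / e0 < 1e-3
```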
Implicit Encoding and Neural Evolution
A Multi-Layer Perceptron (MLP) is used to implicitly encode the soft robot's morphology and control: the network maps spatial coordinates to per-voxel properties, so the robot is described by the network weights rather than by an explicit voxel grid.
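A minimal PyTorch sketch of this idea, with illustrative layer sizes and outputs (material logits per voxel plus an actuation phase); none of these names come from the project itself:

```python
import torch
import torch.nn as nn

# A small MLP that maps a voxel coordinate (x, y, z) to that voxel's
# material type and actuation phase, so the whole robot is encoded
# implicitly in the network weights.
class ImplicitRobot(nn.Module):
    def __init__(self, in_dim=3, hidden=64, n_materials=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_materials + 1),  # material logits + phase
        )

    def forward(self, xyz):
        out = self.net(xyz)
        material = out[..., :-1].argmax(dim=-1)  # which material per voxel
        phase = torch.tanh(out[..., -1])         # control (actuation phase)
        return material, phase
```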
Positional Encoding
A positional-encoding mechanism embeds spatial information into the MLP's input, which helps the model better capture the morphology of the robot.
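The sketch below assumes the standard sinusoidal (NeRF-style) formulation; the frequency count and helper name are illustrative:

```python
import numpy as np

def positional_encoding(xyz: np.ndarray, n_freqs: int = 6) -> np.ndarray:
    """Expand each coordinate c into [sin(2^k * pi * c), cos(2^k * pi * c)]
    for k = 0 .. n_freqs - 1, so the MLP sees high-frequency spatial detail."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi           # (n_freqs,)
    angles = xyz[..., None] * freqs                     # (..., 3, n_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*xyz.shape[:-1], -1)             # (..., 3 * 2 * n_freqs)

# An (N, 3) batch of voxel coordinates becomes an (N, 36) embedding
# that is fed to the MLP in place of the raw coordinates.
```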
CUDA-Accelerated Neural Network Operations
The matrix multiplications in the forward propagation of the neuroevolution are also accelerated with CUDA.
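Since neuroevolution needs only forward passes (no backpropagation), a whole population's inputs can be evaluated with batched GPU matrix multiplies. The sketch below uses CuPy as a stand-in for the project's hand-written CUDA kernels; the shapes and sizes are illustrative:

```python
import cupy as cp

def forward(x, weights, biases):
    """Batched MLP forward pass on the GPU: every layer is a single
    matrix multiply over the full batch of inputs."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = cp.tanh(h @ W + b)            # hidden layers
    return h @ weights[-1] + biases[-1]   # linear output layer

# Example: 4096 encoded voxel coordinates through a 36-64-64-5 network.
dims = [36, 64, 64, 5]
ws = [cp.random.standard_normal((a, b), dtype=cp.float32) * 0.1
      for a, b in zip(dims[:-1], dims[1:])]
bs = [cp.zeros(b, dtype=cp.float32) for b in dims[1:]]
x = cp.random.standard_normal((4096, 36), dtype=cp.float32)
out = forward(x, ws, bs)                  # shape (4096, 5)
```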
Large Language Model Supervision
Hyperparameter tuning for evolutionary algorithms (EAs) is a time-consuming problem. However, recent Large Language Models (LLMs) such as GPT-4 Turbo provide a large context window, which makes it possible to have an LLM supervise the evolutionary process and adjust the EA's hyperparameters dynamically.
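A hedged sketch of what such a supervision step might look like with the OpenAI Python client; the prompt, metrics, and JSON contract are hypothetical, not the project's actual protocol:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_llm_for_hyperparameters(generation, best_fitness, mean_fitness,
                                diversity, mutation_rate):
    """Hypothetical supervision step: summarize the EA's state and ask
    GPT-4 Turbo to propose an adjusted mutation rate."""
    prompt = (
        f"Generation {generation}: best fitness {best_fitness:.3f}, "
        f"mean fitness {mean_fitness:.3f}, population diversity "
        f"{diversity:.3f}, current mutation rate {mutation_rate:.3f}. "
        'Reply with JSON {"mutation_rate": float} chosen to keep '
        "diversity high while fitness keeps improving."
    )
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content  # parse the JSON downstream
```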
Experiments conducted with the GPT-4 Turbo API show that population diversity is better maintained under LLM supervision.