
Multi-Agent Formation Control

Formation control aims to drive multiple autonomous systems to meet prescribed constraints on their states, so that the overall system exhibits a desired collective behavior as required in specific scenarios. The main challenge in distributed formation control is designing a local control objective for each agent that can be achieved using only measurements of that agent's neighbors. In recent decades, distance- and bearing-based formation control strategies have been extensively investigated, since such measurements can be captured by vision sensors. However, the distance-based approach requires a large amount of sensing, while the bearing-based approach requires agents to be equipped with GNSS devices or to communicate frequently with each other.
 
Motivated by these limitations, we proposed two new graph rigidity theories. The formation control laws designed on top of them are communication-free and GNSS-free, require less sensing, and yield formations with more degrees of freedom than existing approaches.
•  Angle-Constrained Formation Control and Angle Rigidity Theory

To obtain a higher degree of freedom, which is convenient for formation maneuvering, we considered subtended angles as the only constraints defining the desired formation shape and developed angle rigidity theory. In [R1], we defined and studied angle rigidity, a new graph-theoretic tool with applications to formation control and sensor network localization. Angle rigidity theory studies which geometric shapes can be uniquely determined by subtended angles up to translations, rotations, reflections, and uniform scaling. Later, in [R2], we improved the local convergence result to almost global convergence and studied angle-based formation maneuver control.
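As a rough illustration of the angle-only constraints, the following minimal, centralized sketch (Python/NumPy) measures subtended angles and drives an angle-only shape error toward zero by numerical gradient descent. The four-agent setup, the constraint triples, and the desired angle values are assumptions for illustration; this is not the distributed control law analysed in [R1][R2].

import numpy as np

def subtended_angle(p_i, p_j, p_k):
    # Angle at agent i subtended by neighbors j and k (radians).
    u, v = p_j - p_i, p_k - p_i
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Hypothetical angle constraints: (i, j, k, desired angle at agent i).
constraints = [(0, 1, 2, np.pi / 3), (1, 0, 3, np.pi / 3), (2, 0, 3, np.pi / 4)]

def angle_errors(P):
    return np.array([subtended_angle(P[i], P[j], P[k]) - a for i, j, k, a in constraints])

def step(P, gain=0.05, eps=1e-6):
    # Numerical gradient descent on the total squared angle error.
    # Degenerate (coincident/collinear) configurations are not handled,
    # and unlucky initial positions may lead to a local minimum.
    f0 = 0.5 * np.sum(angle_errors(P) ** 2)
    grad = np.zeros_like(P)
    for idx in np.ndindex(P.shape):
        Pp = P.copy()
        Pp[idx] += eps
        grad[idx] = (0.5 * np.sum(angle_errors(Pp) ** 2) - f0) / eps
    return P - gain * grad

P = 2.0 * np.random.rand(4, 2)      # random planar initial positions of 4 agents
for _ in range(2000):
    P = step(P)
print("residual angle errors:", angle_errors(P))

Because the cost depends only on subtended angles, any translation, rotation, reflection, or uniform scaling of a solution is again a solution, which is exactly the extra freedom exploited for formation maneuvering.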

Infinitesimally angle rigid
  • [R1] G. Jing, G. Zhang, H. W. J. Lee, and L. Wang, “Angle-based shape determination theory of planar graphs with application to formation stabilization,” Automatica, vol. 105, pp. 117–129, 2019.
  • [R2] G. Jing and L. Wang, “Multi-agent flocking with angle-based formation shape control,” IEEE Transactions on Automatic Control, vol. 65, no. 2, pp. 817–823, 2020.

•  Weak Rigidity Theory and its Application to Formation Control
To reduce the number of sensing links required in distance-based formation control, we further incorporated subtended angles into the local constraints and developed weak rigidity theory. This theory answers which geometric shapes can be uniquely determined by edge lengths and subtended angles up to translations, rotations, and reflections. In [R3], we gave a comprehensive analysis of weak rigidity theory in arbitrary-dimensional space and applied it to formation control. The proposed control law significantly reduces the number of required sensing links compared with distance- and bearing-based approaches.
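The sketch below (Python/SciPy) illustrates how a desired shape can be specified by a mixture of edge lengths and subtended angles. The constraint values are made up, and the centralized least-squares search is only an illustration of the mixed constraint set, not the distributed stabilization law of [R3].

import numpy as np
from scipy.optimize import least_squares

# Hypothetical mixed constraints for 4 agents in the plane.
dist_cons  = [(0, 1, 1.0), (1, 2, 1.0)]                    # (i, j, desired |p_i - p_j|)
angle_cons = [(1, 0, 2, np.pi / 2), (2, 1, 3, np.pi / 3)]  # (i, j, k, desired angle at i)

def residuals(x):
    P = x.reshape(-1, 2)
    r = [np.linalg.norm(P[i] - P[j]) - d for i, j, d in dist_cons]
    for i, j, k, a in angle_cons:
        u, v = P[j] - P[i], P[k] - P[i]
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        r.append(np.arccos(np.clip(c, -1.0, 1.0)) - a)
    return np.array(r)

# Find a configuration satisfying the mixed constraints from a unit-square guess.
x0 = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
sol = least_squares(residuals, x0)
print("constraint residuals:", residuals(sol.x))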
Formation transformation based on angle
  • [R3] G. Jing, G. Zhang, H. W. J. Lee, and L. Wang, “Weak rigidity theory and its application to formation stabilization,” SIAM Journal on Control and Optimization, vol. 56, no. 3, pp. 2248–2273, 2018.

Sensor Network Localization

Sensor Network Localization (SNL) is the problem of determining the locations of all sensors when the locations of a subset of sensors (called anchors) and relative measurements between some pairs of sensors are available. Based on angle rigidity theory, we investigated how to utilize angle measurements in SNL; we call the corresponding problem ASNL. The main benefit of using subtended angles is the reduction of required sensing and communication costs.

We proposed a necessary and sufficient condition for localizability of ASNL, as well as a class of graphical conditions under which an ASNL problem can be relaxed to a convex program or a decomposed semi-definite program (SDP).

An illustration for CASNL.
•  Utilizing Angles in Sensor Network Localization (SNL)

SNL has been widely studied via centralized and distributed approaches, depending on the specific application and stakeholders’ requirements. Range and bearing are the two most popular measurements adopted in the literature, since they can be captured by vision sensors. In contrast, our use of subtended angles for sensor network localization is lower in cost and more convenient in terms of measurement.

Sensor Network Localization based on angle
•  Angle-Based Sensor Network Localization (ASNL)

Based on angle rigidity theory, we investigated how to utilize angle measurements in SNL; we call the corresponding problem ASNL. As in the formation control scenario, the main benefit of using subtended angles is the reduction of required sensing and communication costs. Unlike the intelligent robots to be controlled in formation problems, sensors are not necessarily subject to specific dynamics constraints; as a result, the number of sensing links required in ASNL can be further reduced compared with formation control. In [R4][R5], we proposed a milder condition for angle rigidity, a necessary and sufficient condition for localizability of ASNL, and a class of graphical conditions under which an ASNL problem can be relaxed to a convex program or a decomposed semi-definite program (SDP).
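To make the problem concrete, here is a toy ASNL instance solved by a centralized nonlinear least-squares fit (Python/SciPy). The anchor positions, free-sensor positions, and measured angle triples are all assumptions, and this is not the SDP relaxation or the localizability analysis of [R4][R5].

import numpy as np
from scipy.optimize import least_squares

def angle(p_i, p_j, p_k):
    # Angle at sensor i subtended by sensors j and k (radians).
    u, v = p_j - p_i, p_k - p_i
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Hypothetical planar instance: three anchors with known positions and two
# free sensors (ids 3 and 4) to be localized from subtended-angle data.
anchors = {0: np.array([0.0, 0.0]), 1: np.array([2.0, 0.0]), 2: np.array([1.0, 2.0])}
true_free = {3: np.array([0.8, 0.9]), 4: np.array([1.8, 1.1])}
triples = [(3, 0, 1), (3, 2, 4), (4, 1, 2), (4, 0, 3), (0, 1, 3), (2, 3, 4)]

pos_true = {**anchors, **true_free}
meas = [angle(pos_true[i], pos_true[j], pos_true[k]) for i, j, k in triples]

def residuals(x):
    pos = {**anchors, 3: x[0:2], 4: x[2:4]}
    return [angle(pos[i], pos[j], pos[k]) - m for (i, j, k), m in zip(triples, meas)]

est = least_squares(residuals, np.array([1.0, 1.0, 1.5, 1.0]))  # rough initial guess
print("estimated positions of sensors 3 and 4:\n", est.x.reshape(2, 2))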

  • [R4] G. Jing, C. Wan, and R. Dai, “Angle-Based Sensor Network Localization,” IEEE Transactions on Automatic Control, vol. 67, no. 2, pp. 840–855, 2022.
  • [R5] G. Jing, C. Wan, and R. Dai, “Angle Fixability and Angle-Based Sensor Network Localization,” IEEE Conference on Decision and Control, pp. 7899–7904, 2019.

Multi-Agent Reinforcement Learning

Achieving distributed reinforcement learning (RL) for large-scale cooperative multi-agent systems (MASs) is challenging because (i) each agent has access to only limited information, and (ii) scalability and sample-efficiency issues emerge due to the curse of dimensionality. We propose a general, computationally efficient distributed framework for cooperative multi-agent reinforcement learning (MARL) that exploits the graph structures involved in the problem.

We introduce several graphs that couple the agents in MARL, based on which we propose scalable distributed RL approaches.

•  Hierarchical Reinforcement Learning

In [R6], we proposed a hierarchical RL scheme to resolve the computational bottleneck that conventional RL methods face in large-scale linear quadratic regulator (LQR) problems. The hierarchy follows from partitioning the agents into multiple clusters and then learning the controller in two steps: (i) learn a local controller for each cluster by solving multiple decoupled small-sized LQR problems independently; (ii) obtain a global controller by solving a least-squares problem determined by the inter-cluster couplings. This hierarchical strategy has two main benefits: (i) it drastically reduces learning time, and (ii) the resulting controller inherits a special structure from the graph embedded in the cost function, which reduces communication costs compared with the conventional optimal controller. Moreover, the clustering can be optimized to minimize either the number of communication links or the sub-optimality gap.
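A model-based toy version of step (i) is sketched below (Python/SciPy). The cluster partition and dynamics matrices are assumptions, and in [R6] this step is model-free and learned from data; step (ii) would then assemble a global controller from these local gains.

import numpy as np
from scipy.linalg import solve_continuous_are

# Step (i): solve a small LQR problem independently for each cluster of agents.
clusters = {
    "c1": (np.array([[0.0, 1.0], [-1.0, -0.5]]), np.array([[0.0], [1.0]])),
    "c2": (np.array([[0.5]]),                    np.array([[1.0]])),
}

local_gains = {}
for name, (A, B) in clusters.items():
    Q, R = np.eye(A.shape[0]), np.eye(B.shape[1])
    P = solve_continuous_are(A, B, Q, R)               # cluster-level Riccati solution
    local_gains[name] = np.linalg.solve(R, B.T @ P)    # cluster-level LQR gain

print(local_gains)
# Step (ii) in [R6] assembles a global controller from these local gains by
# solving a least-squares problem induced by the inter-cluster couplings.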

•  Policy Gradient via Graph-Induced Local Value Functions

By considering the different graphs embedded in the MARL problem, we develop a local value function (LVF) for each agent, such that the LVF plays the same role as the global value function in policy gradient algorithms. We propose asynchronous [R7] and synchronous [R8] distributed RL algorithms based on this idea. Simulations show that our algorithms scale to large-scale MASs significantly better than centralized and consensus-based distributed RL algorithms.
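The following sketch (Python/NumPy) shows the basic flavor of a zeroth-order local policy update, in which each agent perturbs only its own parameters and ascends its LVF. The synthetic local_value function, the two-point gradient estimator, and the sequential update sweep are assumptions for illustration, not the algorithms of [R7][R8].

import numpy as np

# local_value is a synthetic placeholder; in [R7][R8] the LVF is induced by
# the graphs coupling the agents and is estimated from sampled trajectories.
def local_value(theta_agent, theta_others):
    return -np.sum(theta_agent ** 2) - 0.1 * sum(np.sum(t ** 2) for t in theta_others)

def zeroth_order_step(thetas, i, lr=0.05, sigma=0.1, samples=20):
    theta = thetas[i]
    others = [t for j, t in enumerate(thetas) if j != i]
    grad = np.zeros_like(theta)
    for _ in range(samples):
        u = np.random.randn(*theta.shape)                 # random Gaussian direction
        df = (local_value(theta + sigma * u, others)
              - local_value(theta - sigma * u, others)) / (2.0 * sigma)
        grad += df * u                                    # two-point gradient estimate
    return theta + lr * grad / samples                    # ascend the local value

thetas = [np.random.randn(3) for _ in range(4)]           # 4 agents with toy parameters
for _ in range(200):
    for i in range(len(thetas)):                          # sequential sweep over agents
        thetas[i] = zeroth_order_step(thetas, i)
print([np.round(t, 3) for t in thetas])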

Hierarchical learning approach
  • [R6] G. Jing, H. Bai, J. George, and A. Chakrabortty, “Model-free optimal control of linear multi-agent systems via decomposition and hierarchical approximation,” IEEE Transactions on Control of Network Systems, doi: 10.1109/TCNS.2021.3074256, 2021.
  • [R7] G. Jing, H. Bai, J. George, A. Chakrabortty, and P. K. Sharma, “Asynchronous distributed reinforcement learning for LQR control via zeroth-order block coordinate descent,” IEEE Transactions on Automatic Control, conditionally accepted, 2023.
  • [R8] G. Jing, H. Bai, J. George, A. Chakrabortty, and P. K. Sharma, “Distributed Multi-Agent Reinforcement Learning Based on Graph-Induced Local Value-Functions,” arXiv preprint arXiv:2202.13046.

Robotics Motion Planning

Motion planning for robots in complex environments is critical for successful task execution. Generally, motion planning refers to the problem of providing an optimal, realizable trajectory for a robot to track (so as to achieve the task goal) while meeting environment constraints. For different types of robots, the motion planning problem takes different forms depending on the application scenario. Specific problems include dynamics modeling, formulation of environment constraints, and optimal decision making. Related techniques mainly involve nonconvex optimization and machine learning.

We are currently working on grasp planning for dexterous hands, formation maneuvering of multi-robot systems, and motion planning of network-structured origami robots.

Six-crease origami tessellation
•  Design and Transformation Control of Triangulated Origami Tessellation

Origami is a traditional art of paper folding. It has attracted extensive attention due to its self-folding mechanism, shape-morphing capability, and deployable structures. We develop network-based methods for designing and controlling a three-dimensional (3D) triangulated origami tessellation to approximate multiple surfaces. The desired surfaces are represented by sets of discrete nodes, and the origami tessellation to be designed is composed of triangles. The tessellation design problem is then formulated as an optimization problem that minimizes the distance between the origami triangle vertices and the discrete nodes, subject to developability and rigid-foldability constraints.
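A toy version of this formulation is sketched below (Python/SciPy). The triangulated patch, target nodes, and common edge length are assumptions; the developability constraint of [R9] is omitted for brevity, keeping only fixed edge lengths as a simple stand-in for rigid faces.

import numpy as np
from scipy.optimize import minimize

# Fit the vertices of a tiny triangulated patch (2 rigid triangles sharing an
# edge) to assumed target surface nodes while keeping every edge length fixed.
edges = [(0, 1), (1, 2), (0, 2), (1, 3), (2, 3)]
rest_len = 1.0                                    # assumed common edge length
targets = np.array([[0, 0, 0], [1, 0, 0.2], [0.3, 0.9, 0.1], [1.2, 1.0, 0.4]], float)

def objective(x):
    V = x.reshape(-1, 3)
    return np.sum((V - targets) ** 2)             # distance to the target nodes

def edge_constraints(x):
    V = x.reshape(-1, 3)
    return np.array([np.linalg.norm(V[i] - V[j]) - rest_len for i, j in edges])

x0 = targets.flatten()                            # start from the targets themselves
res = minimize(objective, x0, constraints={"type": "eq", "fun": edge_constraints})
print("fitted vertices:\n", res.x.reshape(-1, 3))
print("edge-length violations:", edge_constraints(res.x))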

  • [R9] G. Jing, C. Wan, R. Dai, and M. Mehran, “Design and Transformation Control of Triangulated Origami Tessellation: A Network Perspective,” IEEE Transactions on Network Science and Engineering, doi: 10.1109/TNSE.2023.3303260, 2023.