Powered by the GEVF™ Intelligence Engine
Auto-Mining Digital Twin is a next-generation platform for rapid, computation-ready modelling of underground mining methods. Built upon over a decade of research in 3D mining modelling, dynamic simulation, multimodal data fusion, and AI-driven analytics, the system transforms traditional geometry-centric workflows into structured, intelligence-enabled spatial infrastructures.
Early work in Unity-based dynamic simulation and automated mining method modelling established the foundation for digital representation of underground layouts. Subsequent advancements introduced standardised 3D modelling techniques for metallic underground mines and patented automation mechanisms for digital mining method generation. This technical lineage evolved into a data-driven visualisation framework integrating analytics and visual intelligence within a unified spatial environment.
At the core of Auto-Mining Digital Twin lies the GEVF™ (Grid Everything Visual-Fusion) Intelligence Engine—a scalable, grid-based spatial architecture that converts CAD layouts into structured, topology-aware models. Unlike conventional digital twins that focus primarily on visualisation, GEVF produces computation-ready digital substrates capable of supporting predictive analytics and decision support.
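The CAD-to-grid conversion described above can be sketched in miniature: polyline drive segments are rasterised into occupied cells, and adjacency between cells gives the topology-aware structure. This is a minimal illustrative sketch, not the GEVF engine's actual API; the names `GridModel`, `rasterise_segment`, and `neighbours` are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class GridModel:
    """Minimal grid substrate: occupied cells plus 4-neighbour topology.
    Illustrative only -- not the GEVF engine's real data model."""
    cell_size: float
    cells: set = field(default_factory=set)  # (i, j) indices of excavated cells

    def rasterise_segment(self, p, q):
        """Mark every cell a straight drive segment passes through (naive sampling)."""
        (x0, y0), (x1, y1) = p, q
        steps = max(1, int(max(abs(x1 - x0), abs(y1 - y0)) / self.cell_size) * 2)
        for k in range(steps + 1):
            t = k / steps
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            self.cells.add((int(x // self.cell_size), int(y // self.cell_size)))

    def neighbours(self, cell):
        """Topology query: excavated cells directly adjacent to `cell`."""
        i, j = cell
        return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (i + di, j + dj) in self.cells]

# Usage: two connected drives, as might come from a CAD polyline export.
model = GridModel(cell_size=1.0)
model.rasterise_segment((0.0, 0.0), (5.0, 0.0))   # horizontal drive
model.rasterise_segment((5.0, 0.0), (5.0, 4.0))   # crosscut
print(len(model.cells), model.neighbours((5, 0)))  # → 10 [(4, 0), (5, 1)]
```

Once a layout is expressed this way, connectivity and reachability queries become set operations over cell indices rather than geometric intersection tests, which is what makes the substrate "computation-ready" rather than purely visual.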
The system enables automated construction of mining layouts from CAD drawings or parameter inputs, dramatically reducing modelling time and enabling agile design iteration. The resulting grid-based model serves as a unified carrier for geological, geotechnical, and operational data streams, forming the foundation for predictive analytics and decision support.
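The idea of the grid as a "unified carrier" for heterogeneous data streams can be sketched as cell-indexed layers: each excavated cell carries named attribute sets from different sources. The layer names and values below are hypothetical examples, not the platform's actual schema.

```python
# Hypothetical sketch: grid cells as unified carriers for multiple data streams.
# Layer names ("geology", "geotech", "operations") and values are illustrative.
grid_data = {}  # (i, j) cell index -> {layer name: attributes}

def attach(cell, layer, attributes):
    """Fuse one data stream's attributes onto a single grid cell."""
    grid_data.setdefault(cell, {})[layer] = attributes

attach((5, 0), "geology", {"lithology": "granite"})
attach((5, 0), "geotech", {"rqd_percent": 78})
attach((5, 0), "operations", {"last_survey": "2024-06-01"})

# A downstream analytic can now query one cell for every fused stream at once.
print(sorted(grid_data[(5, 0)].keys()))  # → ['geology', 'geotech', 'operations']
```

Because every stream is keyed to the same cell indices, predictive models can consume co-located geological, geotechnical, and operational features without a separate spatial-join step.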
Building on demonstrated capabilities in multimodal data fusion, deep learning-based rock mass analytics, immersive visualisation, and generative AI modelling, Auto-Mining Digital Twin represents a transition from static representation to dynamic spatial intelligence.
Designed for deep underground operations where complexity and geohazard risk demand structured integration and rapid response, the platform establishes a scalable digital backbone for next-generation intelligent mining systems.
Liang, R., Xu, S., Shen, Q., & An, L.
This paper explores the dynamic simulation of mining methods using the Unity3D engine...
Liang, R., Xu, S., Hou, P., & Zhu, C.
Discusses advanced 3D modeling techniques tailored for complex underground metallic deposits...
Liang, R., Huang, C., Zhang, C., Li, B., Saydam, S., & Canbulat, I.
Investigates how integrating visualization with analytics can enhance decision-making in mining digitalization...
Liang, R., Huang, C., Zhang, C., Li, B., Saydam, S., & Canbulat, I.
Proposes a structured framework for managing data diagrams to support fusion of mining data analytics...
Xu, S., Ma, J., Liang, R., Zhang, C., Li, B., Saydam, S., & Canbulat, I.
Utilizes deep learning for identifying drill core features and automating rock quality designation (RQD)...
Liang, R.
Comprehensive doctoral research on building systems for data-driven visualization in mining...
Liang, R., Zhang, C., Huang, C., Li, B., Saydam, S., Canbulat, I., & Munsamy, L.
Presents a method for fusing multimodal data sources to predict geological hazards effectively...
Liang, R., Zhang, C., Li, B., Saydam, S., Canbulat, I., & Munsamy, L.
Framework for developing data-driven visual models to enhance analysis in underground spaces...
Sepasgozar, S. M. E., Khan, A. A., Shirowzhan, S., Romero, J. S. G., Pettit, C., Zhang, C., ... Liang, R.
Overview of immersive technologies and digital twins applied to education and training in construction...
Xu, H., Zlatanova, S., Liang, R., & Canbulat, I.
Research on using Generative AI models to predict wildfire spread in 2D and 3D environments...
Xu, H., Zlatanova, S., Liang, R., & Canbulat, I.
Introduces a voxel-based simulator for 3D wildfire propagation designed for HPC environments...
Xu, H., Zlatanova, S., Liang, R., & Canbulat, I.
A study on using Agentic AI and mixed reality for community-based collaborative fire management...
Xu, S., Liang, R., Hou, P., Zhou, K., Li, F., Zhu, C., & Chen, Y.
Xu, S., Liang, R., Li, F., Li, R., Yang, Z., Ma, J., & Huang, M.