This talk focuses on extracting invariant features for the segmentation of 3D models of mining sites. The image data is generated by stitching together geotagged images captured by a drone, and a 3D model is then produced by stereo reconstruction using structure from motion. This reconstruction yields a dense 3D point cloud with corresponding RGB data. Point clouds offer only a limited number of established features, and because mining-site terrain is mountainous, the imagery loses considerable information even within the RGB space. In this presentation, we discuss how to best utilize both the 3D and RGB data to extract features with high discriminability and invariance using machine learning. This approach yields accurate segmentation of mining sites. A segmented site can provide key insights, such as ore deposit locations, drill-hole deviations, and safety and hazard warnings.
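As a rough illustration of combining geometry with colour (this is a generic sketch, not the method presented in the talk), one common family of point-cloud features comes from the eigenvalues of each point's local neighbourhood covariance, which can be concatenated with the RGB values to form a per-point descriptor:

```python
import numpy as np

def local_geometric_features(points, k=10):
    """Per-point eigenvalue features (linearity, planarity, sphericity)
    computed from the k nearest neighbours. These capture local shape
    information that RGB alone cannot provide."""
    n = len(points)
    # Pairwise squared distances (brute force; fine for an illustration).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    nn = np.argsort(d2, axis=1)[:, :k]  # indices of k nearest neighbours
    feats = np.zeros((n, 3))
    for i in range(n):
        nbrs = points[nn[i]]
        cov = np.cov(nbrs.T)
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = ev / max(ev.sum(), 1e-12)
        feats[i] = [
            (l1 - l2) / max(l1, 1e-12),  # linearity
            (l2 - l3) / max(l1, 1e-12),  # planarity
            l3 / max(l1, 1e-12),         # sphericity
        ]
    return feats

# Synthetic stand-in for a reconstructed cloud: XYZ plus RGB per point.
rng = np.random.default_rng(0)
points = rng.random((200, 3))
rgb = rng.random((200, 3))

# One 6-dimensional descriptor per point: geometry + colour.
descriptors = np.hstack([local_geometric_features(points), rgb])
print(descriptors.shape)  # (200, 6)
```

In practice, descriptors like these would feed a learned classifier; a production pipeline would also use a spatial index (e.g. a k-d tree) rather than brute-force distances.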