The ongoing race toward an autonomous era has driven the development of High Definition (HD) maps. To extend the vision of self-driving vehicles and guarantee safety, HD maps provide detailed information about on-road environments with precise locations and semantic meaning. However, one main challenge in producing such maps is the massive amount of manual annotation required, which is time-consuming and laborious. Automating the extraction of information from the sheer amount of data collected by mobile LiDAR scanners and cameras is therefore of utmost concern. In this study, a workflow for automatically building traffic sign HD maps is proposed. First, traffic islands, traffic signs, signals, and poles are extracted from LiDAR point clouds using PointNet. The traffic sign points are then clustered with the DBSCAN algorithm so that geometric information can be obtained. Next, the point clouds in each traffic sign cluster are projected onto the corresponding MMS images for classification. The semantic attribute is obtained with a GoogLeNet classifier and determined by a proposed mechanism, a modified signal-to-noise ratio (SNR), which ensures that the class with the most classified images is significant enough for the cluster to be assigned that type. An output text file containing the precise coordinates of the traffic sign center and of the bottom-left and top-right corners of the sign's bounding box, along with the sign type, is generated for further use in HD maps. In the final stage, an evaluation is performed to assess the accuracy of the resulting geolocations.
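The clustering and geometry step described above can be sketched as follows. This is a minimal illustration using scikit-learn's DBSCAN on the extracted traffic sign points; the `eps` and `min_samples` values are hypothetical placeholders, not the parameters used in the study, and the bounding box is taken as the axis-aligned extent of each cluster.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_sign_points(points, eps=0.5, min_samples=10):
    """Group traffic-sign points into per-sign clusters with DBSCAN.

    `points` is an (N, 3) array of LiDAR coordinates; label -1 marks
    noise points, which are discarded. `eps` and `min_samples` are
    illustrative values only.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return {lbl: points[labels == lbl] for lbl in set(labels) if lbl != -1}


def sign_geometry(cluster):
    """Return the center, bottom-left, and top-right of a cluster's
    axis-aligned bounding box, as written to the output text file."""
    bottom_left = cluster.min(axis=0)
    top_right = cluster.max(axis=0)
    center = (bottom_left + top_right) / 2.0
    return center, bottom_left, top_right
```

In practice each resulting cluster corresponds to one physical sign, and its bounding box corners and center supply the geometric attributes of the map record.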
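The per-cluster type decision can be illustrated with a small sketch. The abstract does not define the modified SNR exactly, so the ratio below, the count of the most frequent class over the counts of all other classes, and the threshold value are assumptions used only to convey the idea that the winning class must dominate the per-image predictions before the cluster is assigned that type.

```python
from collections import Counter


def decide_sign_type(predicted_labels, snr_threshold=2.0):
    """Assign a cluster the majority class only if it clearly dominates.

    `predicted_labels` holds the GoogLeNet prediction for each projected
    image of the cluster. The ratio computed here (top-class count over
    the remaining counts) is an illustrative stand-in for the paper's
    modified SNR, not its actual definition; returns None when no class
    is dominant enough.
    """
    counts = Counter(predicted_labels)
    top_class, top_count = counts.most_common(1)[0]
    rest = sum(counts.values()) - top_count
    snr = top_count / rest if rest else float("inf")
    return top_class if snr >= snr_threshold else None
```

A cluster whose images are classified 8:2 in favor of one type would be accepted under this rule, while an ambiguous 5:5 split would be rejected rather than labeled arbitrarily.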