Abstract: We propose a novel deep neural network that estimates the six-degrees-of-freedom (6DoF) pose and complete shape of unseen objects from point cloud data. Our key idea is to train a network that performs well on real images captured by a consumer RGB-D camera using only 3D models of the target category. To do so, we employ two ideas. The first is modeling intra-category shape variations with active shape models, which can deform a shape using a small number of parameters. The second is applying effective filtering processes to the training data, converting each 3D object model into a point cloud that simulates sensor measurements. We evaluated our method on NOCS REAL275, a widely used benchmark dataset for category-level pose estimation, and confirmed its superiority over conventional methods in terms of both shape recovery and pose estimation. Our code is available at https://github.com/sakizuki/asm-net.
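The active shape model idea underlying the first contribution can be sketched as follows: each shape is represented as a mean shape plus a linear combination of principal deformation modes, so a few coefficients control the full geometry. This is a toy NumPy illustration with random data, not the authors' implementation; the shape count, point count, and number of modes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 20 shapes, each with 50 3D points, flattened to vectors.
shapes = rng.normal(size=(20, 50 * 3))

# Mean shape and PCA (via SVD of the centered data) give the deformation basis.
mean_shape = shapes.mean(axis=0)
_, _, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
k = 3                      # keep only a few deformation modes
basis = vt[:k]             # (k, 150) principal components

# Deform: generate a new shape from k low-dimensional parameters.
params = np.array([0.5, -0.2, 0.1])
new_shape = (mean_shape + params @ basis).reshape(50, 3)
print(new_shape.shape)  # (50, 3)
```

Because the network only has to regress the few coefficients in `params` (plus pose), the shape-completion problem becomes low-dimensional.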