Amazon SageMaker provides new built-in TensorFlow image classification algorithm

Amazon has added a new built-in TensorFlow image classification algorithm to Amazon SageMaker. The supervised learning algorithm supports transfer learning for the many pre-trained models available on TensorFlow Hub. It takes an image as input and outputs a probability for each of the class labels. The pre-trained models can be fine-tuned with transfer learning even when a large number of training images is not available. The algorithm is available through the SageMaker built-in algorithms as well as through the SageMaker JumpStart UI inside SageMaker Studio.

Transfer learning is a machine learning technique in which knowledge from a model trained on one task is reused to build another model. When transfer learning is used, a classification layer is attached to the pre-trained TensorFlow Hub model, sized to the number of class labels in your training data. The classification layer consists of a dropout layer and a dense, fully connected layer with an L2 regularizer, initialized with random weights. The dropout rate of the dropout layer and the L2 regularization factor for the dense layer are hyperparameters used in model training. The network can then be fine-tuned on the new training data, either with the pre-trained model included or only the top classification layer, as sketched below.
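As an illustration, here is a minimal Keras sketch of what such a classification head could look like on top of a TensorFlow Hub feature-vector model. The Hub URL, class count, dropout rate, and L2 factor below are placeholder values for illustration, not the algorithm's actual defaults.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative values; in SageMaker these are set as training hyperparameters.
NUM_CLASSES = 5      # number of class labels in the training data
DROPOUT_RATE = 0.2   # dropout rate of the dropout layer
L2_FACTOR = 1e-4     # L2 regularization factor for the dense layer

# A pre-trained TF Hub feature extractor (example URL; any image
# feature-vector model follows the same pattern).
base = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
    trainable=False,  # freeze pre-trained weights; train only the new head
)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    base,
    # The classification head: a dropout layer plus a dense, fully connected
    # layer with an L2 regularizer, initialized with random weights.
    tf.keras.layers.Dropout(DROPOUT_RATE),
    tf.keras.layers.Dense(
        NUM_CLASSES,
        kernel_regularizer=tf.keras.regularizers.l2(L2_FACTOR),
        activation="softmax",
    ),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Setting `trainable=True` on the Hub layer instead would fine-tune the pre-trained model together with the new classification head.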

The Amazon SageMaker TensorFlow image classification algorithm is a supervised learning algorithm that supports transfer learning with many pre-trained models from TensorFlow Hub. The algorithm takes an input image and generates a probability for each provided class label. Training datasets must consist of images in .jpg, .jpeg, or .png format.
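For context, here is a hedged sketch of how a training job for this built-in algorithm can be launched with the SageMaker Python SDK, following the JumpStart artifact-retrieval pattern. The model ID, role, bucket paths, and instance type are illustrative placeholders, not required values.

```python
from sagemaker import hyperparameters, image_uris, model_uris, script_uris
from sagemaker.estimator import Estimator

# Example JumpStart model ID; other TF Hub models follow the same pattern.
model_id, model_version = (
    "tensorflow-ic-imagenet-mobilenet-v2-100-224-classification-4", "*"
)
instance_type = "ml.p3.2xlarge"

# Retrieve the training container, training scripts, and pre-trained model.
train_image = image_uris.retrieve(
    region=None, framework=None, image_scope="training",
    model_id=model_id, model_version=model_version, instance_type=instance_type,
)
train_source = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training"
)
train_model = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="training"
)

# Default hyperparameters can be fetched and selectively overridden.
hps = hyperparameters.retrieve_default(
    model_id=model_id, model_version=model_version
)
hps["epochs"] = "5"

estimator = Estimator(
    image_uri=train_image,
    source_dir=train_source,
    model_uri=train_model,
    entry_point="transfer_learning.py",
    role="<your-sagemaker-execution-role>",  # placeholder
    instance_count=1,
    instance_type=instance_type,
    hyperparameters=hps,
)
# Training images organized by class, e.g. s3://.../<class_label>/<image>.jpg
estimator.fit({"training": "s3://<your-bucket>/<training-data-prefix>/"})
```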

Image classification can be run in two modes: full training and transfer learning. In full training mode, the network is initialized with random weights and trained from scratch on user data. In transfer learning mode, the network is initialized with pre-trained weights and only the top fully connected layer is initialized with random weights; the entire network is then fine-tuned on the new data. In this mode, training works even with a smaller dataset, because the network already comes pre-trained and can therefore be used when sufficient training data is not available.
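The difference between the two modes comes down to how the network's weights are initialized. The sketch below illustrates this with a stock Keras MobileNetV2 standing in for the algorithm's internals, an assumption made for illustration rather than the algorithm's actual implementation:

```python
import tensorflow as tf

def build_model(num_classes: int, transfer_learning: bool) -> tf.keras.Model:
    # Full training: weights=None -> random initialization, train from scratch.
    # Transfer learning: weights="imagenet" -> pre-trained initialization.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3),
        include_top=False,  # drop the original 1000-class ImageNet classifier
        weights="imagenet" if transfer_learning else None,
        pooling="avg",
    )
    # The top fully connected layer is always initialized with random weights.
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, outputs)

# Transfer learning mode: pre-trained weights everywhere except the new,
# randomly initialized top layer; the whole network is then fine-tuned.
model = build_model(num_classes=10, transfer_learning=True)
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```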

Deep learning has revolutionized the field of image classification and has achieved great performance. Various deep learning networks such as ResNet, DenseNet, Inception, and others have been developed to be highly accurate for image classification. At the same time, efforts have been made to collect the labeled image data that is essential for training these networks. ImageNet is one such large dataset, containing more than 11 million images across roughly 11,000 categories. Once a network is trained on ImageNet data, it can be generalized to other datasets through simple retraining or fine-tuning. In this transfer learning approach, a network is initialized with pre-trained weights, which are then fine-tuned for an image classification task on a different dataset.

Sharon D. Cole