annahigh.blogg.se

Task2vec







TASK2VEC: Task Embedding for Meta-Learning

Alessandro Achille (UCLA and AWS), Michael Lam (AWS), Rahul Tewari (AWS), Avinash Ravichandran (AWS), Subhransu Maji (UMass and AWS), Charless Fowlkes (UCI and AWS), Stefano Soatto (UCLA and AWS), Pietro Perona (Caltech and AWS)

Abstract

We introduce a method to provide vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations. Given a dataset with ground-truth labels and a loss function defined over those labels, we process images through a “probe network” and compute an embedding based on estimates of the Fisher Information Matrix associated with the probe network parameters. This provides a fixed-dimensional embedding of the task that is independent of details such as the number of classes and does not require any understanding of the class label semantics. We demonstrate that this embedding is capable of predicting task similarities that match our intuition about semantic and taxonomic relations between different visual tasks (e.g., tasks based on classifying different types of plants are similar). We also demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a new task. We present a simple meta-learning framework for learning a metric on embeddings that is capable of predicting which feature extractors will perform well. Selecting a feature extractor with the task embedding obtains performance close to that of the best available feature extractor, while costing substantially less than exhaustively training and evaluating on all available feature extractors.

Introduction

The success of Deep Learning hinges in part on the fact that models learned for one task can be used on other related tasks. Yet, no general framework exists to describe and learn relations between tasks. We introduce the TASK2VEC embedding, a technique to represent tasks as elements of a vector space based on the Fisher Information Matrix. The norm of the embedding correlates with the complexity of the task, while the distance between embeddings captures semantic similarities between tasks. When other natural distances are available, such as the taxonomical distance in biological classification, we find that the embedding distance correlates positively with it.

Given a task defined by a dataset D = {(x_i, y_i)}, i = 1, ..., N, of labeled samples, we feed the data through a pre-trained reference convolutional neural network, which we call a “probe network”, and compute the diagonal Fisher Information Matrix (FIM) of the network filter parameters to capture the structure of the task. Since the architecture and weights of the probe network are fixed, the FIM provides a fixed-dimensional representation of the task which is independent of, e.g., how many classes it has. This representation simultaneously encodes the “difficulty” of the task, statistics of the input domain, and which features extracted by the probe network are discriminative for the task.
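To make this construction concrete, here is a minimal PyTorch-style sketch of a diagonal-FIM task embedding. It is an illustration under simplifying assumptions rather than the authors' exact estimator: the probe_network and loader arguments are placeholders, and the batch-level empirical Fisher used here is a coarser approximation than the paper's estimate (which, for instance, is computed with labels sampled from the model's own predictions).

```python
import torch
import torch.nn.functional as F

def task2vec_embedding(probe_network, loader, device="cpu"):
    """Rough diagonal-FIM task embedding (illustrative sketch).

    probe_network: pre-trained classifier, assumed already adapted to the
        task's label set (e.g. with a re-trained final layer).
    loader: iterable of (images, labels) batches drawn from the task.
    Returns one flat vector of per-parameter Fisher estimates.
    """
    probe_network = probe_network.to(device).eval()
    params = [p for p in probe_network.parameters() if p.requires_grad]
    fisher = [torch.zeros_like(p) for p in params]
    n_batches = 0

    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        loss = F.cross_entropy(probe_network(images), labels)
        # Squared gradients of the per-batch loss approximate the diagonal
        # of the (empirical) Fisher Information Matrix.
        grads = torch.autograd.grad(loss, params)
        for f, g in zip(fisher, grads):
            f.add_(g.detach() ** 2)
        n_batches += 1

    # Fixed-dimensional embedding: concatenate the averaged diagonal entries.
    return torch.cat([(f / max(n_batches, 1)).flatten() for f in fisher])
```

Because the probe network's architecture and weights never change, the returned vector has the same dimension for every task, regardless of how many classes the task has.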


Our task embedding can be used to reason about the space of tasks and to solve meta-tasks. As a motivating example, we study the problem of selecting the best pre-trained feature extractor to solve a new task. This is particularly valuable when there is insufficient data to train or fine-tune a generic model, and transfer of knowledge is essential. To select an appropriate pre-trained model, we design a joint embedding of models and tasks in the same vector space. We formulate this as a meta-learning problem where the objective is to find an embedding such that models whose embeddings are close to a task exhibit good performance on that task; a minimal sketch of this selection rule follows the next paragraph.

We present large-scale experiments on a library of 1,460 fine-grained classification tasks constructed from existing computer vision datasets. These tasks vary in level of difficulty and have orders-of-magnitude variation in training set size, mimicking the heavy-tailed distribution of real-world tasks. Our experiments show that using TASK2VEC to select an expert from a collection of 156 feature extractors outperforms the standard practice of fine-tuning a generic model trained on ImageNet.
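As an illustration of this selection meta-task, the following sketch picks the expert whose embedding lies closest to a new task's embedding. The symmetric cosine distance and the expert_embeddings dictionary are stand-ins introduced only for this example; the paper instead learns the metric (and an asymmetric notion of distance) jointly with the model embedding.

```python
import torch

def cosine_distance(a, b, eps=1e-8):
    # Symmetric cosine distance between two embedding vectors.
    return 1.0 - torch.dot(a, b) / (a.norm() * b.norm() + eps)

def select_expert(task_embedding, expert_embeddings):
    """Return the name of the expert whose embedding is closest to the task.

    expert_embeddings: dict mapping an expert's name to an embedding vector
    (for instance, the embedding of the task that expert was trained on).
    """
    return min(expert_embeddings,
               key=lambda name: cosine_distance(task_embedding, expert_embeddings[name]))

# Hypothetical usage (names and embeddings are illustrative):
# experts = {"birds": e_birds, "plants": e_plants, "aircraft": e_aircraft}
# best = select_expert(task2vec_embedding(probe, new_task_loader), experts)
```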








