Questions tagged [representation-learning]

For questions related to feature learning (also known as representation learning), a set of techniques that learn features from raw data. It is similar to feature engineering, except that in feature learning the features are learned rather than handcrafted.

For more details, see e.g. https://en.wikipedia.org/wiki/Feature_learning or Representation Learning: A Review and New Perspectives by Yoshua Bengio, Aaron Courville, and Pascal Vincent.
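
As a minimal illustration of that contrast (assuming NumPy and scikit-learn, neither of which is prescribed by the tag), the sketch below puts a handcrafted feature next to features learned from the data by PCA, one of the simplest representation learners:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy "raw data": 100 flattened 8x8 grayscale images.
X = np.random.rand(100, 64)

# Handcrafted feature: a human decides that mean intensity matters.
handcrafted = X.mean(axis=1, keepdims=True)      # shape (100, 1)

# Learned features: PCA discovers directions of maximal variance
# directly from the data, with no human-designed rule.
learned = PCA(n_components=8).fit_transform(X)   # shape (100, 8)

print(handcrafted.shape, learned.shape)
```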

44 questions
99
votes
3 answers

What is self-supervised learning in machine learning?

What is self-supervised learning in machine learning? How is it different from supervised learning?
6
votes
1 answer

Is there any theoretical work on representation in machine learning?

In AI, how knowledge is represented is a crucial topic. In traditional knowledge representation and reasoning, substantial work has focused on logic-based and graph-based knowledge representation methods. In contrast, in machine learning, knowledge…
6
votes
2 answers

How to understand the concept of self-supervised learning in AI?

I am new to self-supervised learning and it all seems a little magical at the moment. The only way I can get an intuitive understanding is to assume that, for real-world problems, features are still embedded at a per-object level. For example, to…
5
votes
2 answers

What is feature embedding in the context of convolutional neural networks?

What are feature embeddings in the context of convolutional neural networks? Is it related to bottleneck features or feature vectors?
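
A common reading of "feature embedding" here is the activation vector taken just before the classifier of a pretrained CNN (often called bottleneck features). A minimal sketch, assuming PyTorch and a recent torchvision; the ResNet-18 backbone is an arbitrary choice, not one the question specifies:

```python
import torch
from torchvision import models

# Replace the final classification layer with identity so the network
# returns the 512-d pooled activation instead of class scores; that
# vector is one common meaning of a "feature embedding".
cnn = models.resnet18(weights=None)   # load pretrained weights in practice
cnn.fc = torch.nn.Identity()
cnn.eval()

image_batch = torch.randn(4, 3, 224, 224)   # dummy images
with torch.no_grad():
    embeddings = cnn(image_batch)           # shape (4, 512)
print(embeddings.shape)
```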
5
votes
1 answer

Where do feature extraction and representation learning differ?

Feature selection is the process of selecting a subset of features that contribute the most. Feature extraction produces new features that are not actually present in the given set of features. Representation learning is the process of learning…
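
A small sketch of the first two notions, assuming scikit-learn (the dataset and the particular estimators are illustrative only): selection keeps a subset of the original columns, while extraction builds new ones:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Feature selection: keep a subset of the original columns.
selected = SelectKBest(f_classif, k=5).fit_transform(X, y)

# Feature extraction: build new features (linear combinations here)
# that were not present as columns in the original data.
extracted = PCA(n_components=5).fit_transform(X)

print(selected.shape, extracted.shape)   # both (200, 5), different meaning
```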
5
votes
2 answers

What are examples of approaches to dimensionality reduction of feature vectors?

Given a pre-trained CNN model, I extract feature vectors, each with several thousand elements, for the images in a reference dataset and a query dataset. I would like to apply some technique to reduce the feature vector dimension to speed up cosine…
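
The question does not name a method, but a common first attempt is PCA fitted on the reference features and applied to both sets; a minimal sketch, assuming scikit-learn and random stand-in vectors:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
reference = rng.normal(size=(5000, 2048))   # stand-in for CNN features
query = rng.normal(size=(10, 2048))

# Fit the reduction on the reference set only, then apply it to both sets
# so reference and query vectors live in the same reduced space.
pca = PCA(n_components=256).fit(reference)
ref_small = pca.transform(reference)
qry_small = pca.transform(query)

scores = cosine_similarity(qry_small, ref_small)   # (10, 5000) similarity matrix
print(scores.shape)
```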
4
votes
2 answers

What is the difference between representation and embedding?

As I searched for these two terms, I found that they are somewhat similar: as I understand it, both try to create a vector from raw data. But what is the difference between these two terms?
4
votes
2 answers

Is there any proper literature on the types of features that different layers of a deep neural network learn?

Let's consider a deep convolutional network. It seems that there is some consensus on the following notions: 1. Shallow layers tend to recognise more low-level features such as edges and curves. 2. Deeper layers tend to recognise more high-level…
3
votes
1 answer

Why are different images of the same person, under some restrictions, on a 50-dimensional manifold?

In this lecture (starting from 1:31:00), the professor says that the set of all images of a person lies on a low-dimensional surface (compared to the set of all possible images). And he says that the dimension of that surface is 50 and that they…
3
votes
2 answers

How do we know that the neurons of an artificial neural network start by learning small features?

I'd like to ask how we know that neural networks start by learning small, basic features or "parts" of the data and then use them to build up more complex features as we go through the layers. I've heard this a lot and seen it in videos like…
3
votes
1 answer

How to generate labels for self-supervised training?

I've been reading a lot lately about self-supervised learning, and I don't understand very well how to generate the desired label for a given image. Let's say that I have an image classification task, and I have very little labeled data. How can I…
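
One classic way to generate labels for free is a pretext task such as rotation prediction, where the label is simply the transformation applied to the unlabeled image. A minimal NumPy sketch (the helper make_rotation_task is illustrative, not a standard API):

```python
import numpy as np

def make_rotation_task(images):
    """Given unlabeled images of shape (N, H, W), return (rotated, labels)
    where the label is the index of the applied rotation:
    0 -> 0 deg, 1 -> 90 deg, 2 -> 180 deg, 3 -> 270 deg.
    The labels come for free from the transformation itself."""
    rotated, labels = [], []
    for img in images:
        k = np.random.randint(4)
        rotated.append(np.rot90(img, k))
        labels.append(k)
    return np.stack(rotated), np.array(labels)

unlabeled = np.random.rand(16, 32, 32)        # pretend these are real photos
x_pretext, y_pretext = make_rotation_task(unlabeled)
print(x_pretext.shape, y_pretext[:8])
```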
3
votes
1 answer

Does self-supervised learning require auxiliary tasks?

Self-supervised learning algorithms provide labels automatically. But it is not clear what else is required for an algorithm to fall under the category "self-supervised". Some say self-supervised learning algorithms learn on a set of auxiliary…
2
votes
1 answer

When should I use feature learning as opposed to feature engineering?

With the advancement of deep learning and a few other automated feature learning techniques, manual feature engineering has started to become obsolete. Any suggestions on when to use manual feature engineering, feature learning, or a combination of the…
2
votes
1 answer

Self-supervised Learning - measure distribution on n-sphere

Most self-supervised learning methods (SimCLR, MoCo, BYOL, SimSiam, SwAV, MS BYOL, etc.) distribute the extracted features (after the encoder + projection/prediction head) on a hypersphere (n-sphere). The loss function then uses the…
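
A minimal sketch of the hypersphere step those methods share, assuming PyTorch (batch size and feature dimension are arbitrary here): the projection-head outputs are L2-normalized, after which dot products are the cosine similarities the loss consumes:

```python
import torch
import torch.nn.functional as F

# Suppose z1, z2 are projection-head outputs for two augmented views of the
# same batch of images (the usual SimCLR/MoCo-style setup).
z1 = torch.randn(8, 128)
z2 = torch.randn(8, 128)

# Project the features onto the unit hypersphere: after this, every vector
# has norm 1, so dot products equal cosine similarities.
z1 = F.normalize(z1, dim=1)
z2 = F.normalize(z2, dim=1)

cosine = z1 @ z2.T                 # (8, 8) pairwise similarities
positives = cosine.diagonal()      # matching views should score high after training
print(positives)
```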
2
votes
1 answer

Deep embeddings: which is common practice, Euclidean space or normalised (hypersphere)?

There are many ways to create a "deep embedding"---by which I mean to project an input data point into a vector in a feature space, where this projection is learnt. To be useful, this vector should encode something semantic about the data. In…
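
One fact that often settles the Euclidean-vs-hypersphere choice: once embeddings are L2-normalized, squared Euclidean distance and cosine similarity induce the same nearest-neighbour ranking. A small NumPy check of that identity:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=64)
v = rng.normal(size=64)

# Normalise both embeddings onto the unit hypersphere.
u /= np.linalg.norm(u)
v /= np.linalg.norm(v)

# For unit vectors, ||u - v||^2 = 2 - 2 * cos(u, v), so Euclidean distance
# and cosine similarity order neighbours identically.
squared_euclidean = np.sum((u - v) ** 2)
cosine = u @ v
print(np.isclose(squared_euclidean, 2 - 2 * cosine))   # True
```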