Part 5: Revisiting the concept of the similarity index and introducing a new player. The conclusion to this series.
A relationship is about two things: first, appreciating the similarities, and second, respecting the differences.
For a trained neural network, a matrix of activations M can be written as…
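As a hedged sketch of the usual convention in this literature (assumed here, not necessarily the post's exact notation): with n input examples and p neurons in a layer,

```latex
% Assumed convention, not quoted from the post:
% one row per input example, one column per neuron.
M \in \mathbb{R}^{n \times p}, \qquad
M_{ij} = \text{activation of neuron } j \text{ on input example } i
```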
Part 4: Using neural network representations to understand different types of training dynamics.
What sets you apart might feel like a burden, but it’s not; it’s what makes you great.
Problem-solving through the ages has been driven by the ability to recognize how a new problem fits the knowledge the solver already possesses, and to recall what solution worked in similar cases. Remember those science experiments where we marveled at how rewards and punishments taught chimps to push the right buttons? Now, before you go off to watch YouTube videos on smart chimps, let us…
Part 3: A look into flavors of Canonical Correlation Analysis and their applications to convolutional layers
Accepting help is its own kind of strength.
In previous parts of this series, I talked about the internal representations learned by neural networks, and how Canonical Correlation Analysis (CCA) emerged as a potential candidate for comparing the internal representations of different networks. Now let us see how a variant of CCA was put to use in practice.
Existing applications of CCA included the study of brain activity and the training of multilingual word embeddings in language models. In the year…
Part 2: Canonical Correlation Analysis (CCA) and its use to measure representation similarities of neural networks
Our similarities bring us to common ground.
In a past post, I talked about what an internal representation learned by a neural network is, and why researchers are focused on exploring aspects of representational similarity.
Now I would like to talk about Canonical Correlation Analysis (CCA) and how it emerged as a tool of choice for measuring the representation similarities of neural networks. Introduced in 1936 by Harold Hotelling, CCA is a statistical method that investigates relationships between two…
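As a hedged illustration of the idea (a minimal NumPy sketch, not the code from this series): the canonical correlations between two sets of activations can be computed by whitening each matrix with an SVD and then taking the singular values of the product of their orthonormal bases.

```python
# Minimal CCA sketch between two activation matrices (an illustration,
# not the post's implementation). X and Y hold one row per input example.
import numpy as np

def cca_correlations(X, Y, eps=1e-10):
    """Return the canonical correlations between the columns of X and Y."""
    # Center each view.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Whiten each view via its SVD: the left singular vectors form an
    # orthonormal basis for the column space of each matrix.
    Ux, Sx, _ = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, _ = np.linalg.svd(Y, full_matrices=False)
    # Drop directions with numerically zero variance.
    Ux = Ux[:, Sx > eps * Sx.max()]
    Uy = Uy[:, Sy > eps * Sy.max()]
    # The singular values of Ux^T Uy are the canonical correlations.
    rho = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return np.clip(rho, 0.0, 1.0)

# Toy usage: correlated random "activations" from two hypothetical layers.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 10))                       # shared signal
X = Z @ rng.normal(size=(10, 20)) + 0.1 * rng.normal(size=(500, 20))
Y = Z @ rng.normal(size=(10, 30)) + 0.1 * rng.normal(size=(500, 30))
print(cca_correlations(X, Y)[:5])                    # leading values near 1
```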
Part 1: Understanding what makes up the internal representations of deep learning networks, and their significance
It is always the small pieces that make the big picture.
Neural networks: deep or shallow, feedforward or driven by feedback, with memory or with gates. Neural networks, fired by neurons, solving problems and helping you make decisions. Different neural networks are employed to answer different types of problems. So what are the representations that define these deep networks? Neural networks create patterns of activations from the input data, learn those patterns as they train on a problem, and solve tasks once they are trained…
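To make that concrete, here is a minimal sketch (a hypothetical toy model, not one from the series) of capturing such a pattern of activations in PyTorch: a forward hook records one layer's outputs over a batch of inputs, yielding exactly the kind of examples-by-neurons matrix the later parts of this series compare.

```python
# Hypothetical toy example: record one layer's activations with a hook.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook the hidden ReLU layer (index 1 in this toy model).
model[1].register_forward_hook(save_activation("hidden"))

x = torch.randn(128, 32)      # 128 input examples
model(x)                      # forward pass fills `activations`
M = activations["hidden"]     # activation matrix: examples x neurons
print(M.shape)                # torch.Size([128, 64])
```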
New to research. Old to the world. Doctoral Scholar.