The people in your life should lessen your stress and not be the cause of it.

Part 5: Revisiting the concept of similarity indices and introducing a new player. The conclusion to this series.

Photo by Engin Akyurt from Pexels

A relationship is about two things. First, appreciating the similarities, and second, respecting the differences.

Similarity Indices and Their Invariance Properties

For a trained neural network, a matrix of activations M can be written as:
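The preview trails off before the formula. In the standard setup from the similarity-index literature (a completion on my part, not this post's own equation), the activation matrix stacks one layer's responses over a set of examples:

% M collects the activations of a layer with p neurons over n input examples;
% row i holds the activation vector that example x_i produces at that layer.
M = \begin{pmatrix} f(x_1)^\top \\ f(x_2)^\top \\ \vdots \\ f(x_n)^\top \end{pmatrix} \in \mathbb{R}^{n \times p}

A similarity index then takes two such matrices, say M and M' from two different networks (or two layers of the same network), and returns a score of how alike the learned representations are.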


Part 4: Using neural network representations to understand different types of training dynamics.

Photo by Karolina Grabowska from Pexels

What sets you apart might feel like a burden, but it’s not; it’s what makes you great.

Problem-solving through the ages has been driven by the ability to recognize how a problem fits the knowledge the solver already possesses, and to recall what solution worked in similar cases. Remember those science experiments where we marveled at how rewards and punishments taught chimps to push the right buttons? Now, before you go off to watch YouTube videos of smart chimps, let us…


Part 3: A look into flavors of Canonical Correlation Analysis and their applications to convolutional layers

Photo by Julia Volk from Pexels

Accepting help is its own kind of strength.

In the previous parts of this series, I talked about the internal representations learned by neural networks, and how Canonical Correlation Analysis (CCA) emerged as a potential candidate for comparing the internal representations of different networks. Now let us see how a variant of CCA was put to use.
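The preview doesn't name the variant, but the best-known one is SVCCA (Singular Vector CCA), which first denoises each activation matrix with an SVD and only then applies CCA. Here is a minimal Python sketch of that idea; the function name, component counts, and toy data are my own illustration, not code from the series:

import numpy as np
from sklearn.cross_decomposition import CCA

def svcca_similarity(X, Y, keep=20, n_components=10):
    """Rough SVCCA: denoise each activation matrix with a truncated SVD,
    then measure how well the reduced subspaces align under CCA."""
    # Center each neuron's activations across the n examples.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)

    # Step 1 (SV): keep only the top singular directions of each matrix.
    Ux, Sx, _ = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, _ = np.linalg.svd(Y, full_matrices=False)
    Xr = Ux[:, :keep] * Sx[:keep]
    Yr = Uy[:, :keep] * Sy[:keep]

    # Step 2 (CCA): find maximally correlated directions between the two.
    cca = CCA(n_components=n_components)
    Xc, Yc = cca.fit_transform(Xr, Yr)
    corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(n_components)]
    return float(np.mean(corrs))

# Two toy "layers" observed on the same 1000 inputs: 64 and 48 neurons.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
Y = X[:, :48] + 0.1 * rng.normal(size=(1000, 48))  # a noisy view of X
print(svcca_similarity(X, Y))  # close to 1.0 for similar representations

For two networks computing similar functions, the mean canonical correlation lands near 1; for unrelated representations it comes out much lower.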

Existing applications of CCA have included the study of brain activity and the training of multilingual word embeddings in language models. In the year…


Part 2: Canonical Correlation Analysis (CCA) and its use in measuring the representation similarities of neural networks

Photo by Designecologist from Pexels

Our similarities bring us to a common ground.

In a past post, I talked about what an internal representation learned by a neural network is, and why researchers are focused on exploring aspects of representation similarity.

Now I would like to talk about Canonical Correlation Analysis (CCA) and how it emerged as a tool of choice for measuring the representation similarities of neural networks. Introduced in 1936 by Harold Hotelling, CCA is a statistical method that investigates relationships between two…
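Since the preview cuts off mid-sentence (CCA relates two sets of variables), here is a small, self-contained sketch of plain CCA with scikit-learn; the toy data with a shared latent signal is invented purely for illustration:

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(42)
n = 500

# Two sets of variables driven by one shared latent signal plus noise.
latent = rng.normal(size=(n, 1))
X = np.hstack([latent + 0.5 * rng.normal(size=(n, 1)) for _ in range(5)])
Y = np.hstack([latent + 0.5 * rng.normal(size=(n, 1)) for _ in range(4)])

# CCA finds linear projections of X and Y that are maximally correlated.
cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)

first_corr = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"First canonical correlation: {first_corr:.3f}")  # close to 1.0

Because the two views share one latent driver, the first canonical pair recovers it and its correlation approaches 1, while the remaining pairs pick up only noise.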


Part 1: Understanding what makes up the internal representations of deep learning networks and their significance

It is always the small pieces that make the big picture.

What are representations learned by neural networks?

Neural networks, deep or shallow, feedforward or driven by feedback, with memories or with gates. Neural networks, fired by neurons, solving problems and helping you make decisions. Different neural networks are employed to solve different types of problems. So what are the representations that define these deep networks? Neural networks create patterns of activations from the input data, learn these patterns as they train on problems, and solve tasks once they are trained…
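To make "patterns of activations" concrete, here is a minimal sketch of how one might capture a layer's activation matrix in PyTorch; the tiny architecture and layer names are stand-ins of my own, not anything from this series:

import torch
import torch.nn as nn

# A tiny feedforward network; a stand-in, not any model from the series.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Each row of the stored matrix is one example's activation pattern.
        activations[name] = output.detach()
    return hook

# Watch the outputs of the two hidden (ReLU) layers.
model[1].register_forward_hook(save_activation("hidden1"))
model[3].register_forward_hook(save_activation("hidden2"))

x = torch.randn(100, 10)             # 100 input examples
model(x)
print(activations["hidden1"].shape)  # torch.Size([100, 32])

Matrices like activations["hidden1"] (examples by neurons) are exactly the objects the later parts of this series compare across networks.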

Gatha Varma

New to research. Old to the world. Doctoral Scholar.
