Hint: It is not just about AI-powered computational biology.
Everything is in a state of metamorphosis. Thou thyself art in everlasting change and in corruption to correspond; so is the whole universe. — Marcus Aurelius
AlphaFold is the recent culmination of research under way at Alphabet’s DeepMind since 2016. It is a machine learning approach that can predict the 3-D structure a protein will adopt based solely on its amino acid sequence. Now let me break this statement down into easily understandable parts.
Part 3 of 3: Privacy: For the people, by the people. Also, against them?
Mere roles we play. On the stage called Life. Be courageous and rejoice!
In the previous two parts of this series, I laid bare the inner workings of the systems that bring privacy to your table on a silver tray called Differential Privacy (DP). We have seen how, despite being one of the strongest privacy provisions, it can get caught up in real-world conditions. We also saw how different aspects of being a service user, like your session activities or longer-term temporal behavior, can also be protected…
Part 2 of 3: The nitty-gritty of what differential privacy means for you as a user.
I never trust people’s assertions, I always judge of them by their actions. — Ann Radcliffe
Earlier, I talked about how differential privacy (DP) decouples the information fetched from a dataset from the presence of any one person’s record in it. By default, we assume that differential privacy provides a survey respondent or a service user with plausible deniability about having been part of the recorded data at all. This deniability is a powerful guarantee and an assurance that the user would not…
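The formal mechanisms come later in the series, but the plausible-deniability idea can be sketched with classic Warner-style randomized response; the helper names and parameters below are my own illustrative choices, not the post’s:

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.5) -> bool:
    """With probability p_truth answer honestly, otherwise answer with a
    fresh fair coin flip. Any single "yes" is deniable, because it may
    have come from the coin rather than from the respondent."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_rate(responses: list[bool], p_truth: float = 0.5) -> float:
    """Invert the noise: E[observed yes] = p_truth * rate + (1 - p_truth) / 2."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) / 2) / p_truth

random.seed(0)
truth = [i < 300 for i in range(1_000)]          # true "yes" rate: 30%
noisy = [randomized_response(t) for t in truth]  # what the surveyor sees
estimate = estimate_rate(noisy)                  # population rate survives
```

Each individual answer is deniable, yet the aggregate estimate lands near the true 30% rate; that tension between per-person noise and population-level accuracy is exactly what the series unpacks.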
Part 1 of 3: What level of information-hiding guarantee does differential privacy offer?
My ham and pineapple pizza was burnt. They need to cook at aloha temperature.
You are sitting at your desk trying to stave off that post-lunch stupor. Ping! A random survey email about people’s pizza preferences shows up. It is a crisp five-minute survey that should surely get you back into work mode. You breeze through the questions about how you like your pizza: its crust, the preferred place to get it from, and toppings. There are these boxes that you…
If you can’t explain it simply, you don’t understand it well enough — Albert Einstein
When you have a trained model on your hands, you cannot simply crack it open to see its inner workings. For that, you need an explanation process. An explanation of a trained model seeks to produce something understandable to human users by combining the input and the output. For this, the explanation process has three components: the input, the model’s output for that input, and an internal state of the model. …
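As a minimal sketch of that three-component plumbing (the class and function names here are my own, not the post’s, and the “explanation” itself is deliberately toy-grade):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ExplanationInput:
    """The three ingredients an explanation process consumes."""
    model_input: Any     # the input fed to the trained model
    model_output: Any    # the model's prediction for that input
    internal_state: Any  # e.g. activations recorded on that input

def explain(ev: ExplanationInput) -> str:
    """Toy 'explanation': report which internal unit fired hardest.
    Real methods (saliency maps, SHAP, ...) are far richer; this only
    shows how input, output, and internal state come together."""
    acts = ev.internal_state
    top = max(range(len(acts)), key=lambda i: abs(acts[i]))
    return (f"Prediction {ev.model_output!r} was driven most by "
            f"internal unit {top} (activation {acts[top]:.2f}).")

message = explain(ExplanationInput([1, 2], "cat", [0.1, -0.9, 0.3]))
```

The point is structural: whatever the actual method, it maps this triple to something a human can read.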
Part 5 of 5: Revisiting the concept of a similarity index and introducing a new player.
A relationship is about two things. First, appreciating the similarities, and second, respecting the differences.
For a trained neural network, a matrix of activations M can be written as:
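In the common convention from the representation-similarity literature (my assumption for this sketch, not necessarily the post’s), M stacks one row per neuron and one column per datapoint, so that comparing two networks means comparing two such matrices over the same inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy "trained" layer: 4 neurons, each a fixed linear map plus ReLU.
W = rng.normal(size=(4, 8))    # 4 neurons, 8 input features
X = rng.normal(size=(8, 100))  # 100 datapoints pushed through the layer

# Activation matrix M: row i holds neuron i's response on every
# datapoint, giving M a shape of (neurons, datapoints).
M = np.maximum(W @ X, 0.0)     # ReLU activations
```

A neuron is thus identified with its vector of responses across the dataset, which is what similarity indices operate on.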
Part 4 of 5: Using neural network representations to understand different types of training dynamics.
What sets you apart might feel like a burden, but it’s not; it’s what makes you great.
Problem-solving through the ages has been driven by the ability to see how a problem fits the knowledge the problem solver already possesses, and whether they remember what solution was used in such cases. Remember those science experiments where we marveled at how rewards and punishments taught chimps to push the right buttons? Now, before you go off to watch YouTube videos of smart chimps…
Part 3 of 5: A look into flavors of Canonical Correlation Analysis and their applications to convolutional layers
Accepting help is its own kind of strength.
In the previous parts of this series, I talked about the internal representations learned by neural networks, and how Canonical Correlation Analysis (CCA) emerged as a potential candidate for comparing the internal representations of different neural networks. Now let us see how a variant of CCA was put to use in the scheme of things.
Part 2 of 5: Canonical Correlation Analysis (CCA) and its use to measure representation similarities of neural networks
Our similarities bring us to common ground.
In a past post, I talked about what an internal representation learned by a neural network is, and why researchers are focused on exploring aspects of their similarity.
Now I would like to talk about the technique of Canonical Correlation Analysis (CCA) and how it emerged as a tool of choice for measuring the representation similarities of neural networks. Introduced in 1936 by Harold Hotelling, CCA is a statistical method that investigates relationships…
New to research. Old to the world. Doctoral Scholar.