Interactive Tools for machine learning, deep learning, and math

Content

Deep Learning

Data

Interpretability, Fairness

Math


Deep Learning

Transformer Explainer

Transformer Explainer is an interactive visualization tool designed to help anyone learn how Transformer-based models like GPT work. It runs a live GPT-2 model right in your browser, allowing you to experiment with your own text and observe in real time how internal components and operations of the Transformer work together to predict the next tokens.


exBERT

"exBERT is a tool to help humans conduct flexible, interactive investigations and formulate hypotheses for the model-internal reasoning process, supporting analysis for a wide variety of Hugging Face Transformer models. exBERT provides insights into the meaning of the contextual representations and attention by matching a human-specified input to similar contexts in large annotated datasets."


BertViz

"BertViz is a tool for visualizing attention in the Transformer model, supporting most models from the transformers library (BERT, GPT-2, XLNet, RoBERTa, XLM, CTRL, MarianMT, etc.). It extends the Tensor2Tensor visualization tool by Llion Jones and the transformers library from HuggingFace."

CNN Explainer

An interactive visualization system designed to help non-experts learn about Convolutional Neural Networks (CNNs). It runs a pre-trained CNN in the browser and lets you explore its layers and operations.

Play with GANs in the Browser

Explore Generative Adversarial Networks directly in the browser with GAN Lab. It offers several features that support interactive experimentation:

  • Interactive hyperparameter adjustment

  • User-defined data distribution

  • Slow-motion mode

  • Manual step-by-step execution

  • Source: GAN Lab

ConvNet Playground

ConvNet Playground is an interactive visualization tool for exploring Convolutional Neural Networks applied to the task of semantic image search.
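ConvNet Playground itself is a hosted tool, but the underlying idea can be sketched in a few lines: embed images with a pretrained CNN and rank them by cosine similarity. The model choice, file names, and preprocessing below are illustrative assumptions, not the tool's own code.

```python
# Hypothetical sketch of CNN-based semantic image search: embed images
# with a pretrained backbone and rank a gallery by cosine similarity.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # use pooled features as the embedding
backbone.eval()

def embed(path: str) -> np.ndarray:
    """Return an L2-normalized feature vector for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        v = backbone(x).squeeze(0).numpy()
    return v / np.linalg.norm(v)

# Rank a small gallery against a query image (paths are placeholders).
gallery = ["img1.jpg", "img2.jpg", "img3.jpg"]
query_vec = embed("query.jpg")
scores = {p: float(embed(p) @ query_vec) for p in gallery}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```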

Distill: Exploring Neural Networks with Activation Atlases

Feature inversion to visualize millions of activations from an image classification network leads to an explorable activation atlas of features the network has learned. This can reveal how the network typically represents some concepts.

A visual introduction to Machine Learning

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.

Interactive Deep Learning Playground

New to Deep Learning? Tinker with a Neural Network in your browser.

Initializing neural networks

Initialization can have a significant impact on convergence when training deep neural networks. Simple initialization schemes can accelerate training, but they require care to avoid common pitfalls. In this post, the deeplearning.ai team explains how to initialize neural network parameters effectively.
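As a minimal illustration of the kind of schemes the post covers, here is a NumPy sketch of Xavier/Glorot and He initialization (layer sizes are arbitrary example values):

```python
# Two common initialization schemes, written in plain NumPy.
import numpy as np

def xavier_init(fan_in: int, fan_out: int) -> np.ndarray:
    """Glorot/Xavier uniform: keeps activation variance roughly constant (tanh/sigmoid layers)."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))

def he_init(fan_in: int, fan_out: int) -> np.ndarray:
    """He normal: variance scaled by 2/fan_in, usually paired with ReLU layers."""
    return np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)

W1 = xavier_init(784, 256)
W2 = he_init(256, 10)
print(W1.std(), W2.std())
```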

Embedding Projector

It's increasingly important to understand how data is being interpreted by machine learning models. To translate the things we understand naturally (e.g. words, sounds, or videos) into a form that algorithms can process, we often use embeddings: mathematical vector representations that capture different facets (dimensions) of the data. In this interactive, you can use several algorithms (PCA, t-SNE, UMAP) to explore these embeddings in your browser.
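A small sketch of the kind of dimensionality reduction the Embedding Projector performs, using scikit-learn's PCA and t-SNE on placeholder vectors (real embeddings would come from a trained model):

```python
# Project high-dimensional embeddings to 2-D with PCA and t-SNE.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

embeddings = np.random.randn(500, 128)   # placeholder: 500 items, 128-dim embeddings

pca_2d = PCA(n_components=2).fit_transform(embeddings)
tsne_2d = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(embeddings)

print(pca_2d.shape, tsne_2d.shape)        # (500, 2) (500, 2)
```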

OpenAI Microscope

The OpenAI Microscope is a collection of visualizations of every significant layer and neuron of eight important vision models.

Data

Atlas Data Exploration

Atlas allows you to explore real, up-to-date data from sources like social media, news, and academic journals curated by the Nomic team.


Interpretability, Fairness

The Language Interpretability Tool

The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.

You can use LIT to ask and answer questions like:

  • What kind of examples does my model perform poorly on?

  • Why did my model make this prediction? Can the prediction be attributed to adversarial behavior, or to undesirable priors from the training set?

  • Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?

  • Source: LIT

What if

The What-If Tool lets you visually probe the behavior of trained machine learning models, with minimal coding.


Measuring diversity

A PAIR Explorable on measuring diversity.

"Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a page of white men, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels."

Math

Sage Interactions

This is a collection of pages demonstrating the use of the interact command in Sage. It should be easy to just scroll through and copy/paste examples into Sage notebooks.

Examples include Algebra, Bioinformatics, Calculus, Cryptography, Differential Equations, Drawing Graphics, Dynamical Systems, Fractals, Games and Diversions, Geometry, Graph Theory, Linear Algebra, Loop Quantum Gravity, Number Theory, Statistics/Probability, Topology, Web Applications.
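For a flavor of what these pages contain, a minimal interact example might look like the following (to be run in a Sage notebook, not plain Python; the plotted function and slider range are arbitrary):

```python
# A minimal Sage interact: a slider re-plots sin(k*x) as k changes.
@interact
def plot_sine(k=slider(1, 10, step_size=1, default=1)):
    show(plot(sin(k * x), (x, -pi, pi)))
```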

Probability Distributions

by Simon Ward-Jones. A visual 👀 tour of probability distributions (a short scipy sketch follows the list below).

  • Bernoulli Distribution

  • Binomial Distribution

  • Normal Distribution

  • Beta Distribution

  • LogNormal Distribution

  • Source: SW-J
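For readers who also want to poke at these distributions numerically, here is a short scipy.stats sketch; the parameters are arbitrary example values.

```python
# Sample from and evaluate the distributions listed above.
from scipy import stats

bernoulli = stats.bernoulli(p=0.3)
binomial  = stats.binom(n=10, p=0.3)
normal    = stats.norm(loc=0.0, scale=1.0)
beta      = stats.beta(a=2.0, b=5.0)
lognormal = stats.lognorm(s=0.5)

print(bernoulli.pmf(1))         # P(X = 1) = 0.3
print(binomial.pmf(3))          # P(X = 3) for n=10, p=0.3
print(normal.pdf(0.0))          # standard normal density at 0
print(beta.mean(), beta.var())  # mean and variance of Beta(2, 5)
print(lognormal.rvs(size=5))    # five random draws
```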

Bayesian Inference

by Simon Ward-Jones. Explaining the basics of Bayesian inference with the example of flipping a coin.
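A minimal worked version of the coin-flip example, using the conjugate Beta-Bernoulli update (the flip counts and uniform prior below are illustrative, not taken from the post):

```python
# Bayesian inference for a coin's bias with a conjugate Beta prior.
heads, tails = 7, 3                 # observed data: 10 flips
prior_a, prior_b = 1.0, 1.0         # Beta(1, 1) = uniform prior on the bias

# With a conjugate prior, the posterior is again a Beta distribution.
post_a = prior_a + heads
post_b = prior_b + tails

posterior_mean = post_a / (post_a + post_b)
print(f"Posterior: Beta({post_a}, {post_b}), mean bias = {posterior_mean:.3f}")  # ~0.667
```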

Seeing Theory: Probability and Stats

A visual introduction to probability and statistics.

Interactive Gaussian Process Visualization

"A Gaussian process can be thought of as an extension of the multivariate normal distribution to an infinite number of random variables covering each point on the input domain. The covariance between function values at any two points is given by the evaluation of the kernel of the Gaussian process. For an in-depth explanation, read this excellent distill.pub article and then come back to this interactive visualisation!"
