
Implementation of "Attention Is All You Need" in Sonnet/TensorFlow

Architecture: the encoder-decoder Transformer (Figure 1 of the paper).

Paper: https://arxiv.org/abs/1706.03762
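
For quick reference, the core operation behind the modules described below is scaled dot-product attention, run in parallel over several heads. The formulas, as given in the paper:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\,W^{O},
\qquad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})
```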

Usage

  1. Install the requirements: `pip install -r requirements.txt`
  2. Install Sonnet (e.g. `pip install dm-sonnet`)
  3. Run `run_micro_services.sh`

Organisation of the repository

Transformer's architecture is composed of blocks. Using Sonnet makes the implementation modular and reusable. I tried to keep the blocks as decoupled as possible, following the paper (two illustrative sketches follow the list below):

  • attention/algorithms/transformers: Creates the tf.contrib.learn.Experiment and the tf.contrib.data.Dataset (see the second sketch after this list)

  • attention/modules/cores: Implementation of the core blocks of the Transformer, such as MultiHeadAttention and PointWiseFeedForward (see the first sketch after this list)

  • attention/modules/decoders: Implementation of a Decoder block and a Decoder

  • attention/modules/encoders: Implementation of an Encoder block and an Encoder

  • attention/models: Implementation of the full Transformer block; this module is responsible for creating the Encoder and the Decoder

  • attention/services: Micro-services that create the dataset or train the model

  • attention/utils: Utility classes (recursive namespace, mocking objects)

  • attention/*/tests/: Tests for each module, algorithm, and micro-service
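
To give a feel for the Sonnet style these modules follow, here is a minimal sketch of a multi-head attention block written as a snt.AbstractModule. The class name, shapes, and the omission of masking and dropout are simplifications of mine; the repository's actual MultiHeadAttention lives in attention/modules/cores.

```python
import sonnet as snt  # Sonnet v1, as used with TensorFlow 1.x
import tensorflow as tf


class MultiHeadAttentionSketch(snt.AbstractModule):
    """Hypothetical, simplified multi-head attention in Sonnet v1 style."""

    def __init__(self, num_heads, head_size, name="multi_head_attention_sketch"):
        super(MultiHeadAttentionSketch, self).__init__(name=name)
        self._num_heads = num_heads
        self._head_size = head_size

    def _build(self, queries, keys, values):
        # Project to num_heads * head_size; BatchApply maps Linear over time steps.
        total = self._num_heads * self._head_size
        q = snt.BatchApply(snt.Linear(total, name="q_proj"))(queries)  # [B, Tq, H*D]
        k = snt.BatchApply(snt.Linear(total, name="k_proj"))(keys)     # [B, Tk, H*D]
        v = snt.BatchApply(snt.Linear(total, name="v_proj"))(values)   # [B, Tk, H*D]

        def split_heads(x):
            # [B, T, H*D] -> [B, H, T, D]
            shape = tf.shape(x)
            x = tf.reshape(x, [shape[0], shape[1], self._num_heads, self._head_size])
            return tf.transpose(x, [0, 2, 1, 3])

        q, k, v = split_heads(q), split_heads(k), split_heads(v)

        # Scaled dot-product attention, computed independently per head.
        logits = tf.matmul(q, k, transpose_b=True)                     # [B, H, Tq, Tk]
        logits /= tf.sqrt(tf.cast(self._head_size, tf.float32))
        heads = tf.matmul(tf.nn.softmax(logits), v)                    # [B, H, Tq, D]

        # Merge the heads back together and apply the output projection.
        heads = tf.transpose(heads, [0, 2, 1, 3])                      # [B, Tq, H, D]
        shape = tf.shape(heads)
        merged = tf.reshape(heads, [shape[0], shape[1], total])
        return snt.BatchApply(snt.Linear(total, name="out_proj"))(merged)
```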
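
And here is a rough sketch of how a tf.contrib.data pipeline and a tf.contrib.learn.Experiment fit together. These contrib APIs are TensorFlow 1.x only (removed in TF 2), and the toy model_fn is a stand-in of mine; the repository builds the Transformer there instead.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x


def input_fn():
    # Toy data; tf.contrib.data was the pre-1.4 home of the tf.data pipeline.
    x = np.random.rand(128, 4).astype(np.float32)
    y = (x.sum(axis=1, keepdims=True) > 2.0).astype(np.float32)
    dataset = tf.contrib.data.Dataset.from_tensor_slices((x, y))
    features, labels = dataset.repeat().batch(16).make_one_shot_iterator().get_next()
    return features, labels


def model_fn(features, labels, mode):
    # Tiny stand-in model; the repository wires the Transformer in here instead.
    logits = tf.layers.dense(features, 1)
    loss = tf.losses.sigmoid_cross_entropy(labels, logits)
    train_op = tf.train.AdamOptimizer(1e-3).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)


# The Experiment pairs an Estimator with train/eval input functions.
experiment = tf.contrib.learn.Experiment(
    estimator=tf.estimator.Estimator(model_fn=model_fn),
    train_input_fn=input_fn,
    eval_input_fn=input_fn,
    train_steps=200,
    eval_steps=1)
experiment.train_and_evaluate()
```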

Training tasks implemented

  • Copy inputs (see the data sketch below)
  • Dialogue generation
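
For the copy task, the target sequence is simply the input sequence, which makes it a cheap sanity check that the model and pipeline learn at all. A minimal sketch of such data (hypothetical helper, not the repository's services code):

```python
import numpy as np


def make_copy_batch(batch_size=32, seq_len=10, vocab_size=100, seed=0):
    """Hypothetical copy-task batch: the model must reproduce its input."""
    rng = np.random.RandomState(seed)
    # Reserve 0 for padding and draw token ids from [1, vocab_size).
    inputs = rng.randint(1, vocab_size, size=(batch_size, seq_len))
    targets = inputs.copy()  # the target is literally the input
    return inputs, targets


inputs, targets = make_copy_batch()
assert (inputs == targets).all()
```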

Road Map

  • Code modules
  • Test modules
  • Construct input function
  • Build Estimator
  • Run estimator
  • Plug into a workflow
  • Add validation queue
  • Iterate over model improvements
