Merge pull request mozilla#3614 from lissyx/remove-tc-docs
Fix mozilla#3607: Remove doc refs to TC
lissyx authored Apr 8, 2021
2 parents faccbbe + 2718249 commit ab134af
Showing 3 changed files with 10 additions and 32 deletions.
14 changes: 7 additions & 7 deletions doc/BUILDING.rst
@@ -16,10 +16,12 @@ It is required to use our fork of TensorFlow since it includes fixes for common

If you'd like to build the language bindings or the decoder package, you'll also need:

.. _swig-dep:

* `SWIG >= 3.0.12 <http://www.swig.org/>`_.
Unfortunately, SWIG support for NodeJS / ElectronJS after 10.x is a bit behind, and while patches have been proposed upstream, they have not yet been merged.
* `SWIG >= 4.0 <http://www.swig.org/>`_.
Unfortunately, SWIG support for NodeJS / ElectronJS after 10.x is a bit behind, but the required patches have been merged upstream and are included as of SWIG 4.1.
The proper prebuilt patched version of SWIG (covering Linux, Windows and macOS) should get installed under `native_client/ <native_client/>`_ as soon as you build any bindings that require it.
Prebuilt versions for Linux, macOS and Windows are `available (look for ds-swig*.tar.gz) <https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3>`_.

* `node-pre-gyp <https://github.com/mapbox/node-pre-gyp>`_ (for Node.JS bindings only)

@@ -141,7 +143,7 @@ This will create the package ``deepspeech-VERSION.tgz`` in ``native_client/javas
Install the CTC decoder package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To build the ``ds_ctcdecoder`` package, you'll need the general requirements listed above (in particular SWIG). The command below builds the bindings using eight (8) processes for compilation. Adjust the parameter accordingly for more or less parallelism.
To build the ``ds_ctcdecoder`` package, you'll need the general requirements listed above (in particular :ref:`SWIG <swig-dep>`). The command below builds the bindings using eight (8) processes for compilation. Adjust the parameter accordingly for more or less parallelism.

.. code-block::
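
   # A hedged sketch of the build commands; the "bindings" target and the
   # NUM_PROCESSES variable are assumptions about the native_client/ctcdecode Makefile.
   cd native_client/ctcdecode
   make bindings NUM_PROCESSES=8
   # install the resulting wheel
   pip install dist/*.whl
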
@@ -159,9 +161,7 @@ architectures, and you might find some help in our `discourse <https://discourse

Feedback on improving this section or usage on other architectures is welcome.

First, you need to build SWIG from scratch.
Since `SWIG >= 3.0.12 <http://www.swig.org/>`_ does not include our patches, please use
https://github.com/lissyx/swig/tree/taskcluster for building SWIG from source.
First, you need to build SWIG from scratch. See :ref:`SWIG dep <swig-dep>` for details.
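
A minimal sketch of a from-source SWIG build (generic autotools steps; the install prefix is only an example, and you may need autotools and PCRE installed first):

.. code-block::

   git clone https://github.com/swig/swig.git && cd swig
   ./autogen.sh && ./configure --prefix=$HOME/ds-swig
   make && make install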

You can supply your prebuilt SWIG using ``SWIG_DIST_URL``
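
For illustration only (the exact Makefile integration and asset name may differ; pick a ds-swig*.tar.gz matching your host from the releases page):

.. code-block::

   SWIG_DIST_URL=https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/ds-swig.linux.amd64.tar.gz \
     make -C native_client/ctcdecode bindings NUM_PROCESSES=8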

@@ -357,4 +357,4 @@ Feedback on improving this is welcome: how it could be exposed in the API, how
much performance gain you get in your applications, how you had to change
the model to make it work with a delegate, etc.

See :ref:`the support / contact details <support>`
See :ref:`the support / contact details <support>`
6 changes: 2 additions & 4 deletions doc/TRAINING.rst
@@ -241,11 +241,9 @@ Making a mmap-able model for inference
The ``output_graph.pb`` model file generated in the above step will be loaded into memory when running inference.
This results in extra loading time and memory consumption. One way to avoid this is to read the data directly from disk.

TensorFlow has tooling to achieve this: it requires building the target ``//tensorflow/contrib/util:convert_graphdef_memmapped_format`` (binaries are produced by our TaskCluster for some systems, including Linux/amd64 and macOS/amd64); use the ``util/taskcluster.py`` tool to download it:
TensorFlow has tooling to achieve this: it requires building the target ``//tensorflow/contrib/util:convert_graphdef_memmapped_format``. We recommend you build it from `TensorFlow r1.15 <https://github.com/tensorflow/tensorflow/tree/r1.15/>`_.

.. code-block::

   $ python3 util/taskcluster.py --source tensorflow --artifact convert_graphdef_memmapped_format --branch r1.15 --target .

For convenience, builds for Linux and macOS are `available (look for the file named convert_graphdef_memmapped_format) <https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3>`_.
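
A sketch of building it yourself, assuming a Bazel-configured checkout of the r1.15 branch:

.. code-block::

   $ git clone --branch r1.15 https://github.com/tensorflow/tensorflow.git
   $ cd tensorflow
   $ ./configure
   $ bazel build -c opt //tensorflow/contrib/util:convert_graphdef_memmapped_format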

Producing a mmap-able model is as simple as:
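
.. code-block::

   # a hedged sketch; --in_graph/--out_graph are the tool's standard flags and the
   # file names assume the default output_graph.pb produced by training
   $ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm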

22 changes: 1 addition & 21 deletions doc/USING.rst
@@ -174,27 +174,7 @@ See the :ref:`TypeScript client <js-api-example>` for an example of how to use t
Using the command-line client
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To download the pre-built binaries for the ``deepspeech`` command-line (compiled C++) client, use ``util/taskcluster.py``\ :

.. code-block:: bash

   python3 util/taskcluster.py --target .

or if you're on macOS:

.. code-block:: bash

   python3 util/taskcluster.py --arch osx --target .

Also, if you need binaries different from current master, like ``v0.2.0-alpha.6``\ , you can use ``--branch``\ :

.. code-block:: bash

   python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "."

The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``deepspeech`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.

Alternatively you may manually download the ``native_client.tar.xz`` from the [releases](https://github.com/mozilla/DeepSpeech/releases).
To download the pre-built binaries for the ``deepspeech`` command-line (compiled C++) client, use one of the ``native_client.tar.xz`` files from the `releases page <https://github.com/mozilla/DeepSpeech/releases>`_.
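
A hedged example of fetching and unpacking one of those archives (the asset name below is illustrative; pick the one matching your OS and architecture from the release page):

.. code-block:: bash

   curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/native_client.amd64.cpu.linux.tar.xz
   tar -xJf native_client.amd64.cpu.linux.tar.xz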

Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.
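
A sketch of that command, assuming the v0.9.3 model and scorer file names:

.. code-block:: bash

   ./deepspeech --model deepspeech-0.9.3-models.pbmm --scorer deepspeech-0.9.3-models.scorer --audio my_audio_file.wav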

