
Cherry-pick a few commits to the release 1.3 branch (#12375)
* Add a tutorial for control flow operators. (#12340)

* the first version.

* fix.

* add to test.

* fix.

* fix.

* fix

* fix.

* fix.

* add title.

* add link

* fix.

* Update ONNX API docs references (#12317)

* update onnx API references

* update descriptions

* [MXAPPS-581] Disable an additional long test in the SD nightly (#12343)

* Disable an additional test in the SD nightly that also runs over the
  timeout.

* Documentation update related to sparse support (#12367)

* Update sparse.md

* Update sparse.md

* Update csr.md

* Update row_sparse.md

* Update train.md
Roshrini authored and szha committed Aug 27, 2018
1 parent f0c0a97 commit 05b6dc3
Showing 12 changed files with 416 additions and 37 deletions.
22 changes: 11 additions & 11 deletions docs/api/python/contrib/onnx.md
@@ -22,18 +22,17 @@ This document describes all the ONNX-MXNet APIs.
.. autosummary::
:nosignatures:
mxnet.contrib.onnx.import_model
mxnet.contrib.onnx.get_model_metadata
mxnet.contrib.onnx.import_to_gluon
mxnet.contrib.onnx.export_model
mxnet.contrib.onnx.onnx2mx.import_model
mxnet.contrib.onnx.onnx2mx.import_to_gluon
mxnet.contrib.onnx.mx2onnx.export_model
```
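For orientation, a minimal sketch of how these import APIs are typically called (assuming MXNet 1.3+ with the `onnx` package installed, and that `mxnet.contrib.onnx` re-exports the functions listed above; `model.onnx` is a placeholder file name):

```python
import mxnet as mx

# Import an ONNX model file as an MXNet symbol plus parameters.
# 'model.onnx' is a placeholder for any valid ONNX model file.
sym, arg_params, aux_params = mx.contrib.onnx.import_model('model.onnx')

# Or load the same file directly into a Gluon SymbolBlock.
net = mx.contrib.onnx.import_to_gluon('model.onnx', ctx=mx.cpu())
```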

## ONNX Tutorials

```eval_rst
.. toctree::
:maxdepth: 1
/tutorials/onnx/super_resolution.md
/tutorials/onnx/export_mxnet_to_onnx.md
/tutorials/onnx/inference_on_onnx_model.md
@@ -43,19 +42,20 @@ This document describes all the ONNX-MXNet APIs.
## ONNX Examples

* Face Recognition with [ArcFace](https://github.com/onnx/models/tree/master/models/face_recognition/ArcFace)
* Image Classification with [MobileNet](https://github.com/onnx/models/tree/master/models/image_classification/mobilenet), [ResNet](https://github.com/onnx/models/tree/master/models/image_classification/resnet), [SqueezeNet](https://github.com/onnx/models/tree/master/models/image_classification/squeezenet), [VGG](https://github.com/onnx/models/tree/master/models/image_classification/vgg)

## API Reference

<script type="text/javascript" src='../../../_static/js/auto_module_index.js'></script>

```eval_rst
.. automodule:: mxnet.contrib.onnx.import_model
.. automodule:: mxnet.contrib.onnx.get_model_metadata
.. automodule:: mxnet.contrib.onnx.import_to_gluon
.. automodule:: mxnet.contrib.onnx.export_model
.. automodule:: mxnet.contrib.onnx.onnx2mx.import_model
:members: import_model, get_model_metadata
.. automodule:: mxnet.contrib.onnx.onnx2mx.import_to_gluon
:members: import_to_gluon
.. automodule:: mxnet.contrib.onnx.mx2onnx.export_model
:members: export_model
```
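Going the other way, `export_model` serializes a saved MXNet model to ONNX. A hedged sketch, assuming a model previously saved as `resnet-symbol.json` / `resnet-0000.params` (placeholder file names) that takes a single 1x3x224x224 float32 input:

```python
import numpy as np
import mxnet as mx

# Convert a saved MXNet model (symbol + params files) to an ONNX file.
# File names and the input shape are placeholders for this sketch.
onnx_file = mx.contrib.onnx.export_model(
    sym='resnet-symbol.json',
    params='resnet-0000.params',
    input_shape=[(1, 3, 224, 224)],
    input_type=np.float32,
    onnx_file_path='resnet.onnx')
print(onnx_file)  # path to the generated ONNX file
```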

<script>auto_index("api-reference");</script>
10 changes: 3 additions & 7 deletions docs/api/python/ndarray/sparse.md
@@ -16,7 +16,7 @@ This document lists the routines of the *n*-dimensional sparse array package:
```

The `CSRNDArray` and `RowSparseNDArray` API, defined in the `ndarray.sparse` package, provides
imperative sparse tensor operations on **CPU**.
imperative sparse tensor operations.

A `CSRNDArray` inherits from `NDArray` and represents a two-dimensional, fixed-size array in compressed sparse row format.
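
For illustration, a small sketch of building a `CSRNDArray` from its compressed sparse row components (assuming MXNet 1.3+; the values are arbitrary):

```python
import mxnet as mx

# A 3x4 matrix with three non-zero entries in CSR form:
# `data` holds the non-zero values, `indices` their column ids,
# and `indptr` marks where each row's values start in `data`.
data = [1.0, 2.0, 3.0]
indices = [1, 0, 2]
indptr = [0, 1, 2, 3]
a = mx.nd.sparse.csr_matrix((data, indices, indptr), shape=(3, 4))
print(a.stype)      # 'csr'
print(a.asnumpy())  # dense view of the same 3x4 matrix
```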

@@ -63,16 +63,13 @@ A detailed tutorial is available at

```eval_rst
.. note:: ``mxnet.ndarray.sparse.RowSparseNDArray`` and ``mxnet.ndarray.sparse.CSRNDArray`` DO NOT support the ``mxnet.gluon`` high-level interface yet.
.. note:: ``mxnet.ndarray.sparse`` is similar to ``mxnet.ndarray`` in some aspects, but the differences are not negligible. For instance:
- Only a subset of operators in ``mxnet.ndarray`` have specialized implementations in ``mxnet.ndarray.sparse``.
Operators such as Convolution and broadcasting do not have sparse implementations yet.
- Only a subset of operators in ``mxnet.ndarray`` have efficient sparse implementations in ``mxnet.ndarray.sparse``.
- If an operator does not appear in the ``mxnet.ndarray.sparse`` namespace, it does not have an efficient sparse implementation yet. If sparse inputs are passed to such an operator, it converts them to the dense format and falls back to the existing dense implementation (a short example follows at the end of this file's diff).
- The storage types (``stype``) of a sparse operator's outputs depend on the storage types of its inputs.
By default, operators not available in ``mxnet.ndarray.sparse`` infer the "default" (dense) storage type for their outputs.
Please refer to the [API Reference](#api-reference) section for further details on specific operators.
- GPU support for ``mxnet.ndarray.sparse`` is experimental. Only a few sparse operators, such as ``sparse.dot``, are supported on GPU.
.. note:: ``mxnet.ndarray.sparse.CSRNDArray`` is similar to ``scipy.sparse.csr_matrix`` in some respects, but they differ in a few ways:
@@ -559,7 +556,6 @@ We summarize the interface for each class in the following sections.
sgd_update
sgd_mom_update
adam_update
ftrl_update
adagrad_update
```

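To make the storage-type and fallback rules from the note above concrete, a short sketch (assuming MXNet 1.3+; the exact fallback warning text may vary by version):

```python
import mxnet as mx

csr = mx.nd.array([[0, 1], [2, 0]]).tostype('csr')
dense = mx.nd.ones((2, 2))

# elemwise_add has a sparse implementation: csr + csr stays csr.
print(mx.nd.elemwise_add(csr, csr).stype)     # 'csr'

# sparse.dot also has one; csr * dense produces a dense output.
print(mx.nd.sparse.dot(csr, dense).stype)     # 'default'

# Broadcasting ops have no sparse implementation, so the csr input is
# converted to dense and the dense kernel is used (a storage-fallback
# warning may be printed).
print(mx.nd.broadcast_add(csr, dense).stype)  # 'default'
```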
7 changes: 3 additions & 4 deletions docs/api/python/symbol/sparse.md
@@ -16,7 +16,7 @@ This document lists the routines of the sparse symbolic expression package:
```

The `Sparse Symbol` API, defined in the `symbol.sparse` package, provides
sparse neural network graphs and auto-differentiation on CPU.
sparse neural network graphs and auto-differentiation.

The storage type of a variable is specified by its `stype` attribute.
The storage type of a symbolic expression is inferred based on the storage types of the variables and the operators.
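
A brief sketch of how that inference plays out, assuming MXNet 1.3+ (shapes and values are arbitrary):

```python
import mxnet as mx

# Declare a variable with an explicit sparse storage type.
x = mx.sym.var('x', stype='csr')
w = mx.sym.var('w')              # dense weight
y = mx.sym.sparse.dot(x, w)      # stype of y is inferred from x, w, and dot

# Evaluate the expression with matching NDArrays.
x_nd = mx.nd.array([[0, 1], [2, 0]]).tostype('csr')
w_nd = mx.nd.ones((2, 3))
out = y.eval(ctx=mx.cpu(), x=x_nd, w=w_nd)[0]
print(out.stype)  # 'default': dot(csr, dense) produces a dense result
```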
@@ -43,12 +43,11 @@ array([ 1., 1.],
.. note:: Most operators provided in ``mxnet.symbol.sparse`` are similar to those in
``mxnet.symbol``, although there are a few differences:
- Only a subset of operators in ``mxnet.symbol`` have specialized implementations in ``mxnet.symbol.sparse``.
Operators such as reduction and broadcasting do not have sparse implementations yet.
- Only a subset of operators in ``mxnet.symbol`` have efficient sparse implementations in ``mxnet.symbol.sparse``.
- If an operator does not appear in the ``mxnet.symbol.sparse`` namespace, it does not have an efficient sparse implementation yet. If sparse inputs are passed to such an operator, it converts them to the dense format and falls back to the existing dense implementation.
- The storage types (``stype``) of a sparse operator's outputs depend on the storage types of its inputs.
By default, operators not available in ``mxnet.symbol.sparse`` infer the "default" (dense) storage type for their outputs.
Please refer to the API reference section for further details on specific operators.
- GPU support for ``mxnet.symbol.sparse`` is experimental.
```

