Comparison of deep learning software

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by TheRealPascalRascal (talk | contribs) at 13:30, 22 March 2017. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The following table compares notable software frameworks, libraries, and computer programs for deep learning.


Deep learning software by name

| Software | Creator | Software license[a] | Open source | Platform | Written in | Interface | OpenMP support | OpenCL support | CUDA support | Automatic differentiation[1] | Has pretrained models | Recurrent nets | Convolutional nets | RBM/DBNs | Parallel execution (multi-node) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Apache Singa | Apache Incubator | Apache 2.0 | Yes | Linux, Mac OS X, Windows | C++ | Python, C++, Java | No | Yes | Yes | ? | Yes | Yes | Yes | Yes | Yes |
| Deeplearning4j | Skymind engineering team; Deeplearning4j community; originally Adam Gibson | Apache 2.0 | Yes | Linux, Mac OS X, Windows, Android (cross-platform) | C, C++ | Java, Scala, Clojure, Python (Keras) | Yes | On roadmap[2] | Yes[3] | Computational graph | Yes[4] | Yes | Yes | Yes | Yes[5] |
| Dlib | Davis King | Boost Software License | Yes | Cross-platform | C++ | C++ | Yes | No | Yes | Yes | Yes | No | Yes | Yes | Yes |
| Keras | François Chollet | MIT license | Yes | Linux, Mac OS X, Windows | Python | Python | Only if using Theano as backend | Under development for the Theano backend (on roadmap for the TensorFlow backend) | Yes | Yes | Yes[6] | Yes | Yes | Yes | Yes[7] |
| Microsoft Cognitive Toolkit (CNTK) | Microsoft Research | MIT license[8] | Yes | Windows, Linux[9] (OS X via Docker on roadmap) | C++ | Python, C++, command line,[10] BrainScript[11] (.NET on roadmap[12]) | Yes[13] | No | Yes | Yes | Yes[14] | Yes[15] | Yes[15] | No[16] | Yes[17] |
| MXNet | Distributed (Deep) Machine Learning Community | Apache 2.0 | Yes | Linux, Mac OS X, Windows,[18][19] AWS, Android,[20] iOS, JavaScript[21] | Small C++ core library | C++, Python, Julia, Matlab, JavaScript, Go, R, Scala, Perl | Yes | On roadmap[22] | Yes | Yes[23] | Yes[24] | Yes | Yes | Yes | Yes[25] |
| Neural Designer | Artelnics | Proprietary | No | Linux, Mac OS X, Windows | C++ | Graphical user interface | Yes | No | No | ? | ? | No | No | No | ? |
| OpenNN | Artelnics | GNU LGPL | Yes | Cross-platform | C++ | C++ | Yes | No | No | ? | ? | No | No | No | ? |
| TensorFlow | Google Brain team | Apache 2.0 | Yes | Linux, Mac OS X, Windows[26] | C++, Python | Python (C/C++ public API only for executing graphs[27]) | No | On roadmap[28][29] | Yes | Yes[30] | Yes[31] | Yes | Yes | Yes | Yes |
| Theano | Université de Montréal | BSD license | Yes | Cross-platform | Python | Python | Yes | Under development[32] | Yes | Yes[33][34] | Through Lasagne's model zoo[35] | Yes | Yes | Yes | Yes[36] |
| Torch | Ronan Collobert, Koray Kavukcuoglu, Clement Farabet | BSD license | Yes | Linux, Mac OS X, Windows,[37] Android,[38] iOS | C, Lua | Lua, LuaJIT,[39] C, utility library for C++/OpenCL[40] | Yes | Third-party implementations[41][42] | Yes[43][44] | Through Twitter's Autograd[45] | Yes[46] | Yes | Yes | Yes | Yes[47] |
| Wolfram Mathematica | Wolfram Research | Proprietary | No | Windows, Mac OS X, Linux, cloud computing | C++ | Command line, Java, C++ | No | Yes | Yes | Yes | Yes[48] | Yes | Yes | Yes | Yes |


  a. ^ Licenses here are summaries and are not intended to be complete statements of the licenses. Some libraries may internally use other libraries under different licenses.
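The "Automatic differentiation" column above indicates whether a framework can compute gradients of user-defined expressions mechanically, rather than requiring hand-written derivative code. The frameworks in the table typically implement reverse-mode AD over a computational graph; as a concept illustration only (not the method of any particular library), the simpler forward-mode variant can be sketched in plain Python with dual numbers:

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative sketch only: the libraries in the table implement far
# more general (usually reverse-mode, graph-based) AD.

class Dual:
    """Represents a + b*eps with eps**2 == 0; `deriv` carries df/dx."""

    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate df/dx at x in a single forward pass through f."""
    return f(Dual(x, 1.0)).deriv


# d/dx (x**2 + 3x) = 2x + 3, so at x = 2.0 this prints 7.0
print(derivative(lambda x: x * x + 3 * x, 2.0))
```

Seeding the input with `deriv=1.0` propagates exact derivatives through every arithmetic step, which is what distinguishes AD from both symbolic and numerical (finite-difference) differentiation.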

References

  1. ^ Atilim Gunes Baydin; Barak A. Pearlmutter; Alexey Andreyevich Radul; Jeffrey Mark Siskind (20 Feb 2015). "Automatic differentiation in machine learning: a survey". arXiv:1502.05767 [cs.LG].
  2. ^ "Support for Open CL · Issue #27 · deeplearning4j/nd4j". GitHub.
  3. ^ "N-Dimensional Scientific Computing for Java".
  4. ^ Chris Nicholson; Adam Gibson. "Deeplearning4j Models".
  5. ^ "Deeplearning4j on Spark". Deeplearning4j.
  6. ^ https://keras.io/applications/
  7. ^ "Does Keras support using multiple GPUs? · Issue #2436 · fchollet/keras". GitHub.
  8. ^ "CNTK/LICENSE.md at master · Microsoft/CNTK · GitHub". GitHub.
  9. ^ "Setup CNTK on your machine". GitHub.
  10. ^ "CNTK usage overview". GitHub.
  11. ^ "BrainScript Network Builder". GitHub.
  12. ^ ".NET Support · Issue #960 · Microsoft/CNTK". GitHub.
  13. ^ "How to train a model using multiple machines? · Issue #59 · Microsoft/CNTK". GitHub.
  14. ^ https://github.com/Microsoft/CNTK/issues/140#issuecomment-186466820
  15. ^ a b "CNTK - Computational Network Toolkit". Microsoft Corporation.
  16. ^ https://github.com/Microsoft/CNTK/issues/534
  17. ^ "Multiple GPUs and machines". Microsoft Corporation.
  18. ^ "Releases · dmlc/mxnet". Github.
  19. ^ "Installation Guide — mxnet documentation". ReadTheDocs.
  20. ^ "MXNet Smart Device". ReadTheDocs.
  21. ^ "MXNet.js". Github.
  22. ^ "Support for other Device Types, OpenCL AMD GPU · Issue #621 · dmlc/mxnet". GitHub.
  23. ^ http://mxnet.readthedocs.io/
  24. ^ "Model Gallery". GitHub.
  25. ^ "Run MXNet on Multiple CPU/GPUs with Data Parallel". GitHub.
  26. ^ https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html
  27. ^ "TensorFlow C++ Session API reference documentation". TensorFlow.
  28. ^ "tensorflow/roadmap.md at master · tensorflow/tensorflow · GitHub". GitHub.
  29. ^ "OpenCL support · Issue #22 · tensorflow/tensorflow". GitHub.
  30. ^ https://www.tensorflow.org/
  31. ^ https://github.com/tensorflow/models
  32. ^ "Using the GPU — Theano 0.8.2 documentation".
  33. ^ http://deeplearning.net/software/theano/library/gradient.html
  34. ^ https://groups.google.com/d/msg/theano-users/mln5g2IuBSU/gespG36Lf_QJ
  35. ^ "Recipes/modelzoo at master · Lasagne/Recipes · GitHub". GitHub.
  36. ^ "Using multiple GPUs — Theano 0.8.2 documentation".
  37. ^ https://github.com/torch/torch7/wiki/Windows
  38. ^ "GitHub - soumith/torch-android: Torch-7 for Android". GitHub.
  39. ^ "Torch7: A Matlab-like Environment for Machine Learning" (PDF).
  40. ^ "GitHub - jonathantompson/jtorch: An OpenCL Torch Utility Library". GitHub.
  41. ^ "Cheatsheet". GitHub.
  42. ^ "cltorch". GitHub.
  43. ^ "Torch CUDA backend". GitHub.
  44. ^ "Torch CUDA backend for nn". GitHub.
  45. ^ https://github.com/twitter/torch-autograd
  46. ^ "ModelZoo". GitHub.
  47. ^ https://github.com/torch/torch7/wiki/Cheatsheet#distributed-computing--parallel-processing
  48. ^ http://blog.stephenwolfram.com/2017/03/the-rd-pipeline-continues-launching-version-11-1/