    espace - Curtin’s institutional repository

    How can deep rectifier networks achieve linear separability and preserve distances?

    Access Status: Fulltext not available
    Authors: An, Senjian; Boussaid, F.; Bennamoun, M.
    Date: 2015
    Type: Conference Paper
    Citation: An, S. and Boussaid, F. and Bennamoun, M. 2015. How can deep rectifier networks achieve linear separability and preserve distances?, pp. 514-523.
    Source Title: 32nd International Conference on Machine Learning, ICML 2015
    ISBN: 9781510810587
    School: School of Electrical Engineering, Computing and Mathematical Science (EECMS)
    URI: http://hdl.handle.net/20.500.11937/69676
    Collection: Curtin Research Publications
    Abstract

    This paper investigates how the hidden layers of deep rectifier networks can transform two or more pattern sets so that they become linearly separable while preserving distances to a guaranteed degree, and proves the universal classification power of such distance-preserving rectifier networks. Because the nonlinear transformation performed by the hidden layers is nearly isometric, the margin of the linear separating plane in the output layer and the margin of the nonlinear separating boundary in the original data space are closely related, so maximum-margin classification in the input data space can be achieved approximately via maximum-margin linear classifiers in the output layer. The generalization performance of such distance-preserving deep rectifier networks is therefore justified by the distance-preserving properties of their hidden layers and the maximum-margin property of the linear classifiers in the output layer.
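
    The idea summarised in the abstract can be illustrated numerically. The sketch below is not the paper's construction: it uses randomly initialised rectifier (ReLU) layers, scikit-learn's make_circles toy data set, and LinearSVC as the maximum-margin linear classifier, all of which are illustrative assumptions. It maps two pattern sets that no linear classifier can separate in the input space into a feature space where a maximum-margin linear classifier does separate them, and then measures how strongly pairwise distances are distorted by the hidden layers.

        # Illustrative sketch only: random ReLU layers and scikit-learn's LinearSVC
        # stand in for the paper's constructed, distance-preserving rectifier layers.
        import numpy as np
        from scipy.spatial.distance import pdist
        from sklearn.datasets import make_circles
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        # Two concentric pattern sets: not linearly separable in the 2-D input space.
        X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)

        def relu_layer(H, out_dim, rng):
            """One rectifier layer h = max(0, H W + b) with random Gaussian weights."""
            in_dim = H.shape[1]
            W = rng.normal(scale=1.0 / np.sqrt(in_dim), size=(in_dim, out_dim))
            b = rng.normal(scale=0.1, size=out_dim)
            return np.maximum(0.0, H @ W + b)

        # Two stacked hidden rectifier layers play the role of the nonlinear transform.
        H = relu_layer(X, 256, rng)
        H = relu_layer(H, 256, rng)

        # Maximum-margin linear classifiers: one in the input space (baseline),
        # one in the output space of the hidden layers.
        baseline = LinearSVC(C=10.0, max_iter=10000).fit(X, y).score(X, y)
        rectified = LinearSVC(C=10.0, max_iter=10000).fit(H, y).score(H, y)
        print("linear classifier in input space:   ", baseline)   # around 0.5 (chance)
        print("linear classifier after ReLU layers:", rectified)  # typically near 1.0

        # Distance preservation up to a global rescaling: relative distortion of
        # pairwise distances between the input space and the hidden representation.
        d_in, d_out = pdist(X), pdist(H)
        scale = np.median(d_out / np.maximum(d_in, 1e-12))
        distortion = np.abs(d_out - scale * d_in) / np.maximum(scale * d_in, 1e-12)
        print("median relative distance distortion:", np.median(distortion))

    In the paper the hidden-layer weights are constructed so that separability and distance preservation hold with guarantees; with random weights, as here, the separation and the low distortion are only empirical, which is exactly the gap the paper's analysis addresses.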

    Related items

    Showing items related by title, author, creator and subject.

    • Contractive rectifier networks for nonlinear maximum margin classification
      An, Senjian; Hayat, M.; Khan, S.; Bennamoun, M.; Boussaid, F.; Sohel, F. (2015)
      © 2015 IEEE. To find the optimal nonlinear separating boundary with maximum margin in the input data space, this paper proposes Contractive Rectifier Networks (CRNs), wherein the hidden-layer transformations are restricted ...
    • Sign constrained rectifier networks with applications to pattern decompositions
      An, Senjian; Ke, Q.; Bennamoun, M.; Boussaid, F.; Sohel, F. (2015)
      © Springer International Publishing Switzerland 2015. In this paper we introduce sign constrained rectifier networks (SCRN), demonstrate their universal classification power and illustrate their applications to pattern ...
    • Investigation of activation functions in deep belief network
      Lau, M.; Lim, Hann (2017)
      © 2017 IEEE. Deep Belief Network (DBN) is made up of stacked Restricted Boltzmann Machine layers associated with global weight fine-tuning for pattern recognition. However, DBN suffers from vanishing gradient problem due ...