Exploiting layerwise convexity of rectifier networks with sign constrained weights

Research output: Contribution to journal › Article

Abstract

By introducing sign constraints on the weights, this paper proposes sign constrained rectifier networks (SCRNs), whose training can be solved efficiently by the well-known majorization–minimization (MM) algorithms. We prove that the proposed two-hidden-layer SCRNs, which have negative weights in both the second hidden layer and the output layer, are capable of separating any number of disjoint pattern sets. Furthermore, the proposed two-hidden-layer SCRNs can decompose the patterns of each class into several clusters so that each cluster is convexly separable from all the patterns of the other classes. This provides a means to learn the pattern structures and analyse the discriminant factors between different classes of patterns. Experimental results are provided to show the benefits of sign constraints in improving classification performance and the efficiency of the proposed MM algorithm.
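
The two main ingredients in the abstract can be made concrete with small sketches. First, the geometric claim: a two-hidden-layer rectifier network with negative weights in its second hidden layer and output layer can separate a class formed by a union of convex regions, with each second-layer unit acting as a membership test for one convex cluster. The sketch below hand-builds such a network in plain NumPy for two disjoint boxes; the boxes, weights and margins are illustrative assumptions, not the parameterization trained in the paper.

# Illustrative sketch (assumed, not the paper's code): a hand-built
# two-hidden-layer rectifier network whose second hidden layer and output
# layer carry only non-positive weights, separating a class given as the
# union of two convex boxes.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Class A = union of two axis-aligned boxes, each the intersection of four
# halfspaces w_i . x + b_i <= 0 (hence convexly separable on its own):
# box 1 = [0, 1] x [0, 1], box 2 = [3, 4] x [0, 1].
# First hidden layer (unconstrained signs): u_i = relu(w_i . x + b_i)
# measures how badly x violates halfspace i.
W1 = np.array([[-1.,  0.], [ 1.,  0.], [ 0., -1.], [ 0.,  1.],   # box 1
               [-1.,  0.], [ 1.,  0.], [ 0., -1.], [ 0.,  1.]])  # box 2
b1 = np.array([0., -1., 0., -1.,   # box 1: 0 <= x <= 1, 0 <= y <= 1
               3., -4., 0., -1.])  # box 2: 3 <= x <= 4, 0 <= y <= 1

# Second hidden layer (non-positive weights): unit k fires only when the
# total violation of box k's halfspaces stays below the margin t, i.e.
# when x lies (approximately) inside box k.
t = 0.5
W2 = np.array([[-1., -1., -1., -1.,  0.,  0.,  0.,  0.],
               [ 0.,  0.,  0.,  0., -1., -1., -1., -1.]])
b2 = np.array([t, t])

# Output layer (negative weights): o(x) = t/2 - v_1 - v_2 is positive
# outside both boxes and negative inside either one, so sign(o) separates
# class A (o < 0) from everything else (o > 0).
w3 = np.array([-1., -1.])
b3 = t / 2

def scrn(x):
    u = relu(W1 @ x + b1)   # violations of the individual halfspaces
    v = relu(W2 @ u + b2)   # per-cluster membership scores
    return w3 @ v + b3      # negative-weight output

for x in ([0.5, 0.5], [3.5, 0.5], [2.0, 0.5]):   # in box 1, in box 2, outside
    print(x, scrn(np.array(x)))                  # -0.25, -0.25, +0.25

Second, the training side. A majorization–minimization step minimizes a surrogate that upper-bounds the objective and is tight at the current iterate, so the objective can never increase. The toy below applies MM to L1 regression through a quadratic majorizer; it illustrates only the MM principle, not the paper's specific surrogate for sign-constrained training.

# Illustrative MM sketch (assumed, not the paper's algorithm): solve the
# L1 regression problem min_w sum_i |a_i . w - y_i| by repeatedly
# minimizing the quadratic majorizer |r| <= r^2 / (2 |r_t|) + |r_t| / 2,
# which is tight at the current residual r_t. Each MM step is then a
# closed-form reweighted least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

w = np.zeros(3)
for _ in range(100):
    r = np.abs(A @ w - y) + 1e-8              # residual magnitudes |r_t|
    Aw = A / r[:, None]                       # rows reweighted by 1 / |r_t|
    w = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # minimize the majorizer
print(w)                                      # approaches [1.0, -2.0, 0.5]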

Original language: English
Pages (from-to): 419-430
Number of pages: 12
Journal: Neural Networks
Volume: 105
DOI: https://doi.org/10.1016/j.neunet.2018.06.005
Publication status: Published - 1 Sep 2018

Cite this

@article{4adee4af92004a6d9c79dc9388605d85,
title = "Exploiting layerwise convexity of rectifier networks with sign constrained weights",
abstract = "By introducing sign constraints on the weights, this paper proposes sign constrained rectifier networks (SCRNs), whose training can be solved efficiently by the well known majorization–minimization (MM) algorithms. We prove that the proposed two-hidden-layer SCRNs, which exhibit negative weights in the second hidden layer and negative weights in the output layer, are capable of separating any number of disjoint pattern sets. Furthermore, the proposed two-hidden-layer SCRNs can decompose the patterns of each class into several clusters so that each cluster is convexly separable from all the patterns from the other classes. This provides a means to learn the pattern structures and analyse the discriminant factors between different classes of patterns. Experimental results are provided to show the benefits of sign constraints in improving classification performance and the efficiency of the proposed MM algorithm.",
keywords = "Geometrically interpretable neural network, Rectifier neural network, The majorization–minimization algorithm",
author = "Senjian An and Farid Boussaid and Mohammed Bennamoun and Ferdous Sohel",
year = "2018",
month = "9",
day = "1",
doi = "10.1016/j.neunet.2018.06.005",
language = "English",
volume = "105",
pages = "419--430",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier",
}

Exploiting layerwise convexity of rectifier networks with sign constrained weights. / An, Senjian; Boussaid, Farid; Bennamoun, Mohammed; Sohel, Ferdous.

In: Neural Networks, Vol. 105, 01.09.2018, p. 419-430.

Research output: Contribution to journal › Article

TY - JOUR
T1 - Exploiting layerwise convexity of rectifier networks with sign constrained weights
AU - An, Senjian
AU - Boussaid, Farid
AU - Bennamoun, Mohammed
AU - Sohel, Ferdous
PY - 2018/9/1
Y1 - 2018/9/1
N2 - By introducing sign constraints on the weights, this paper proposes sign constrained rectifier networks (SCRNs), whose training can be solved efficiently by the well known majorization–minimization (MM) algorithms. We prove that the proposed two-hidden-layer SCRNs, which exhibit negative weights in the second hidden layer and negative weights in the output layer, are capable of separating any number of disjoint pattern sets. Furthermore, the proposed two-hidden-layer SCRNs can decompose the patterns of each class into several clusters so that each cluster is convexly separable from all the patterns from the other classes. This provides a means to learn the pattern structures and analyse the discriminant factors between different classes of patterns. Experimental results are provided to show the benefits of sign constraints in improving classification performance and the efficiency of the proposed MM algorithm.
AB - By introducing sign constraints on the weights, this paper proposes sign constrained rectifier networks (SCRNs), whose training can be solved efficiently by the well known majorization–minimization (MM) algorithms. We prove that the proposed two-hidden-layer SCRNs, which exhibit negative weights in the second hidden layer and negative weights in the output layer, are capable of separating any number of disjoint pattern sets. Furthermore, the proposed two-hidden-layer SCRNs can decompose the patterns of each class into several clusters so that each cluster is convexly separable from all the patterns from the other classes. This provides a means to learn the pattern structures and analyse the discriminant factors between different classes of patterns. Experimental results are provided to show the benefits of sign constraints in improving classification performance and the efficiency of the proposed MM algorithm.
KW - Geometrically interpretable neural network
KW - Rectifier neural network
KW - The majorization–minimization algorithm
UR - http://www.scopus.com/inward/record.url?scp=85048860948&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2018.06.005
DO - 10.1016/j.neunet.2018.06.005
M3 - Article
VL - 105
SP - 419
EP - 430
JO - Neural Networks
JF - Neural Networks
SN - 0893-6080
ER -