Discriminative Bayesian Dictionary Learning for Classification

    Research output: Contribution to journal › Article

    20 Citations (Scopus)

    Abstract

    © 2016 IEEE.
    We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels with the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments on face and action recognition, and on object and scene-category classification, using five public datasets, and compared the results with state-of-the-art discriminative sparse representation approaches. The experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
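
    The generative structure the abstract describes can be illustrated with a small sketch. This is not the authors' implementation: the hyperparameters `a`, `b`, the toy sizes, and the stand-in classifier `W` are all assumptions made for illustration. It shows the finite approximation of the Beta Process (atom selection probabilities `pi_k ~ Beta(a/K, b(K-1)/K)`), the Bernoulli selection indicators that produce sparse codes, and the final linear-classifier stage operating on those codes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Finite approximation of the Beta Process over K dictionary atoms:
    # pi_k ~ Beta(a/K, b*(K-1)/K); as K grows this approaches the Beta Process.
    K, d, n = 64, 16, 100        # atoms, feature dim, samples (toy sizes, assumed)
    a, b = 1.0, 1.0              # Beta Process hyperparameters (assumed values)
    pi = rng.beta(a / K, b * (K - 1) / K, size=K)  # atom selection probabilities

    # Dictionary atoms drawn from a Gaussian base measure, normalized to unit norm.
    D = rng.standard_normal((d, K))
    D /= np.linalg.norm(D, axis=0, keepdims=True)

    # Bernoulli indicators z and Gaussian weights s give sparse codes x = z * s.
    Z = rng.random((K, n)) < pi[:, None]           # z_ki ~ Bernoulli(pi_k)
    S = rng.standard_normal((K, n))
    X = Z * S                                      # sparse representation codes

    # Observations generated by the model: y_i = D x_i + noise.
    Y = D @ X + 0.01 * rng.standard_normal((d, n))

    # Classification stage: a linear classifier maps sparse codes to class scores.
    C = 5                                          # number of classes (assumed)
    W = rng.standard_normal((C, K))                # stand-in for the learned classifier
    labels = (W @ X).argmax(axis=0)                # predicted class per sample
    ```

    With small Beta parameters each `pi_k` concentrates near zero, so most indicators in `Z` are off and the codes `X` are sparse, which is the mechanism that lets the model prune unused atoms and infer the dictionary size.
    
    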
    Original language: English
    Pages (from-to): 2374-2388
    Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Volume: 38
    Issue number: 12
    Early online date: 11 Feb 2016
    DOI: 10.1109/TPAMI.2016.2527652
    Publication status: Published - 1 Dec 2016


    Cite this

    @article{bef79966a68045c7a5b309d551e18bfe,
    title = "Discriminative Bayesian Dictionary Learning for Classification",
    abstract = "{\circledC} 2016 IEEE. We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels with the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments on face and action recognition, and on object and scene-category classification, using five public datasets, and compared the results with state-of-the-art discriminative sparse representation approaches. The experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.",
    author = "Naveed Akhtar and F. Shafait and Ajmal Mian",
    year = "2016",
    month = "12",
    day = "1",
    doi = "10.1109/TPAMI.2016.2527652",
    language = "English",
    volume = "38",
    pages = "2374--2388",
    journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
    issn = "0162-8828",
    publisher = "IEEE, Institute of Electrical and Electronics Engineers",
    number = "12",

    }
