
Visual classification of dumpsters with capsule networks

dc.contributor.author: Garcia-Espinosa, Francisco J.
dc.contributor.author: Concha, David
dc.contributor.author: Pantrigo, Juan J.
dc.contributor.author: Cuesta-Infante, Alfredo
dc.date.accessioned: 2023-09-19T14:20:18Z
dc.date.available: 2023-09-19T14:20:18Z
dc.date.issued: 2022
dc.identifier.citation: Garcia-Espinosa, F.J., Concha, D., Pantrigo, J.J. et al. Visual classification of dumpsters with capsule networks. Multimed Tools Appl 81, 31129–31143 (2022). https://doi.org/10.1007/s11042-022-12899-9
dc.identifier.issn: 1573-7721
dc.identifier.uri: https://hdl.handle.net/10115/24379
dc.description: Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This research has been supported by the Spanish Government research funding RTI2018-098743-B-I00 (MICINN/FEDER) and the Comunidad de Madrid research funding grant Y2018/EMT-5062. We would also like to thank Ecoembes for providing the EcoDID-2017 database of dumpster images for this work.
dc.description.abstract: Garbage management is an essential task in the everyday life of a city. In many countries, dumpsters are owned and deployed by the public administration. An updated what-and-where list is at the core of the decision-making process when deciding whether to remove or renew them. Moreover, it may provide extra information to other analytics in a smart-city context. In this paper, we present a capsule network-based architecture to automate the visual classification of dumpsters. We propose different network hyperparameter settings, such as reducing the convolutional kernel size and increasing the number of convolution layers. We also evaluate several data augmentation strategies, such as crop and flip image transformations. We succeed in reducing the number of network parameters by 85% with respect to the best previous method, thus decreasing the required training time and making the whole process suitable for low-cost and embedded software architectures. In addition, the paper provides an extensive experimental analysis, including an ablation study that illustrates the contribution of each component of the proposed method. Our proposal is compared with the state-of-the-art method, which is based on a Google Inception V3 architecture pretrained on ImageNet. Experimental results show that our proposal achieves 95.35% accuracy, 2.35% above the previous best method.
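The crop and flip augmentation strategies named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' actual pipeline; the `augment` helper, its parameters, and the 50% flip probability are all assumptions for the example.

```python
import numpy as np

def augment(image, crop_size, rng):
    """Random-crop then randomly horizontal-flip an H x W x C image array.

    Hypothetical helper illustrating crop-and-flip data augmentation;
    the function name and parameters are assumptions, not the paper's API.
    """
    h, w = image.shape[:2]
    ch, cw = crop_size
    # Pick a random top-left corner so the crop stays inside the image.
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    crop = image[top:top + ch, left:left + cw]
    # Flip roughly half of the samples horizontally.
    if rng.random() < 0.5:
        crop = crop[:, ::-1]
    return crop

rng = np.random.default_rng(0)
img = np.arange(4 * 4 * 3).reshape(4, 4, 3)
out = augment(img, (3, 3), rng)
print(out.shape)  # (3, 3, 3)
```

Applied at training time, each epoch sees a slightly different view of every dumpster image, which is what lets a small network generalize without more labeled data.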
dc.language.iso: eng
dc.publisher: ACS
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Capsule networks
dc.subject: Image recognition
dc.subject: Garbage management
dc.subject: Smart cities
dc.title: Visual classification of dumpsters with capsule networks
dc.type: info:eu-repo/semantics/article
dc.identifier.doi: 10.1007/s11042-022-12899-9
dc.rights.accessRights: info:eu-repo/semantics/openAccess




Except where otherwise noted, this item's license is described as Attribution 4.0 International.