
dc.contributor.author: Shahabian Alashti, Mohamad Reza
dc.contributor.author: Bamorovat Abadi, Mohammad
dc.contributor.author: Holthaus, Patrick
dc.contributor.author: Menon, Catherine
dc.contributor.author: Amirabdollahian, Farshid
dc.date.accessioned: 2023-10-25T15:00:01Z
dc.date.available: 2023-10-25T15:00:01Z
dc.date.issued: 2023-04-28
dc.identifier.citation: Shahabian Alashti, M. R., Bamorovat Abadi, M., Holthaus, P., Menon, C. & Amirabdollahian, F. 2023, Lightweight human activity recognition for ambient assisted living. In: ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions. IARIA, Venice, Italy, 24/04/23.
dc.identifier.citation: conference
dc.identifier.isbn: 978-1-68558-078-0
dc.identifier.other: ORCID: /0000-0001-8450-9362/work/145463331
dc.identifier.other: ORCID: /0000-0003-2072-5845/work/145463518
dc.identifier.uri: http://hdl.handle.net/2299/26987
dc.description: © 2023, IARIA.
dc.description.abstract: Ambient assisted living (AAL) systems aim to improve the safety, comfort, and quality of life of the populations they serve, with specific attention given to prolonging personal independence during later stages of life. Human activity recognition (HAR) plays a crucial role in enabling AAL systems to recognise and understand human actions. Multi-view human activity recognition (MV-HAR) techniques are particularly useful for AAL systems as they can use information from multiple sensors to capture different perspectives of human activities and can help to improve the robustness and accuracy of activity recognition. In this work, we propose a lightweight activity recognition pipeline that utilises skeleton data from multiple perspectives to combine the advantages of multi-view sensing and lightweight skeleton-based representation, and thereby enhance an assistive robot's perception of human activity. The pipeline includes data sampling, input data type and representation, and classification methods. Our method modifies a classic LeNet classification model (M-LeNet) and uses a Vision Transformer (ViT) for the classification task. Experimental evaluation on a multi-perspective dataset of human activities in the home (RH-HAR-SK) compares the performance of these two models and indicates that combining camera views can improve recognition accuracy. Furthermore, our pipeline provides a more efficient and scalable solution in the AAL context, where bandwidth and computing resources are often limited.
dc.format.extent: 415940
dc.language.iso: eng
dc.publisher: IARIA
dc.relation.ispartof: ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions
dc.title: Lightweight human activity recognition for ambient assisted living
dc.contributor.institution: Centre for Computer Science and Informatics Research
dc.contributor.institution: Adaptive Systems
dc.contributor.institution: School of Physics, Engineering & Computer Science
dc.contributor.institution: Department of Computer Science
dc.contributor.institution: Centre for Future Societies Research
dc.date.embargoedUntil: 2023-04-28
rioxxterms.type: Other
herts.preservation.rarelyaccessed: true
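
Illustrative note: the abstract describes classifying human activities from multi-view skeleton data with a modified LeNet-style model. The sketch below is not the authors' M-LeNet or the RH-HAR-SK pipeline; it is a minimal, hypothetical example of a small LeNet-style CNN over a skeleton-keypoint tensor, where the joint count (17), sampled frame count (32), and class count (10) are assumptions chosen only for illustration.

# Minimal sketch (assumptions labelled above): a LeNet-style classifier over
# skeleton keypoints arranged as a 2-channel (x, y) "image" of joints x frames.
import torch
import torch.nn as nn

class SkeletonLeNetSketch(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two small conv + pool stages, in the spirit of LeNet.
        self.features = nn.Sequential(
            nn.Conv2d(2, 6, kernel_size=3, padding=1),   # (2, 17, 32) -> (6, 17, 32)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> (6, 8, 16)
            nn.Conv2d(6, 16, kernel_size=3, padding=1),  # -> (16, 8, 16)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> (16, 4, 8)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 8, 120),
            nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: a batch of 4 skeleton sequences, each with (x, y) coordinates for
# 17 joints over 32 sampled frames; skeletons from several camera views could
# be stacked or concatenated along the channel dimension before this step.
model = SkeletonLeNetSketch(num_classes=10)
logits = model(torch.randn(4, 2, 17, 32))  # -> shape (4, 10)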

