
dc.contributor.author: Bamorovat Abadi, Mohammad
dc.contributor.author: Shahabian Alashti, Mohamad Reza
dc.contributor.author: Holthaus, Patrick
dc.contributor.author: Menon, Catherine
dc.contributor.author: Amirabdollahian, Farshid
dc.date.accessioned: 2023-11-01T15:30:01Z
dc.date.available: 2023-11-01T15:30:01Z
dc.date.issued: 2023-04-28
dc.identifier.citation: Bamorovat Abadi, M., Shahabian Alashti, M. R., Holthaus, P., Menon, C. & Amirabdollahian, F. 2023, RHM: Robot House Multi-view Human Activity Recognition Dataset. In ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions. IARIA, Venice, Italy, 24/04/23.
dc.identifier.citation: conference
dc.identifier.isbn: 978-1-68558-078-0
dc.identifier.other: ORCID: /0000-0001-8450-9362/work/145926502
dc.identifier.other: ORCID: /0000-0003-2072-5845/work/145926869
dc.identifier.uri: http://hdl.handle.net/2299/27046
dc.description: © 2023, IARIA.
dc.description.abstract: With the recent rapid development of deep neural networks and large-scale datasets, the Human Action Recognition (HAR) domain is growing quickly in terms of both available datasets and deep models. Despite this, there is a lack of datasets that specifically cover robotics and human-robot interaction. We prepare and introduce a new multi-view dataset to address this gap. The Robot House Multi-View dataset (RHM) contains four views: Front, Back, Ceiling, and Robot. It comprises 14 activity classes with 6701 video clips per view, making a total of 26804 video clips across the four views. Clip lengths range between 1 and 5 seconds, and clips with the same index and class are synchronized across the different views. In the second part of this paper, we consider how well single streams afford activity recognition using established state-of-the-art models. We then assess the affordance of each view based on information-theoretic modelling and mutual information. Furthermore, we benchmark the performance of the different views, thus establishing the strengths and weaknesses of each view with respect to its information content and benchmark performance. Our results lead us to conclude that multi-view and multi-stream activity recognition has the potential to improve activity recognition results. [en]
dc.format.extent: 7
dc.format.extent: 970272
dc.language.iso: eng
dc.publisher: IARIA
dc.relation.ispartof: ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions
dc.title: RHM: Robot House Multi-view Human Activity Recognition Dataset [en]
dc.contributor.institution: School of Physics, Engineering & Computer Science
dc.contributor.institution: Centre for Computer Science and Informatics Research
dc.contributor.institution: Adaptive Systems
dc.contributor.institution: Department of Computer Science
dc.contributor.institution: Centre for Future Societies Research
dc.date.embargoedUntil: 2023-04-28
rioxxterms.type: Other
herts.preservation.rarelyaccessed: true
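
Note on the mutual-information analysis mentioned in the abstract: the following is a minimal Python sketch (not the authors' code) of one common way to estimate the mutual information between per-view clip features and activity labels, in order to compare the information content of the four views. The feature arrays and label vector below are random placeholders, and scikit-learn's mutual_info_classif is assumed as the MI estimator.

    # Minimal sketch: per-view mutual information between clip features
    # and activity labels. Features here are random stand-ins; in practice
    # they would come from a video backbone applied to each view's clips.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    n_clips, n_features, n_classes = 6701, 32, 14  # counts from the abstract

    labels = rng.integers(0, n_classes, size=n_clips)      # activity class per clip
    views = {name: rng.normal(size=(n_clips, n_features))  # placeholder features
             for name in ("front", "back", "ceiling", "robot")}

    for name, feats in views.items():
        # mutual_info_classif returns an MI estimate per feature (in nats);
        # summing gives a rough per-view information score for comparison.
        mi = mutual_info_classif(feats, labels, random_state=0)
        print(f"{name:8s} total MI ~ {mi.sum():.3f} nats")

With real features, a view whose features carry more label information would score higher, which is the kind of per-view comparison the abstract describes.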

