dc.contributor.author | Bamorovat Abadi, Mohammad | |
dc.contributor.author | Shahabian Alashti, Mohamad Reza | |
dc.contributor.author | Holthaus, Patrick | |
dc.contributor.author | Menon, Catherine | |
dc.contributor.author | Amirabdollahian, Farshid | |
dc.date.accessioned | 2023-11-01T15:30:01Z | |
dc.date.available | 2023-11-01T15:30:01Z | |
dc.date.issued | 2023-04-28 | |
dc.identifier.citation | Bamorovat Abadi, M, Shahabian Alashti, M R, Holthaus, P, Menon, C & Amirabdollahian, F 2023, RHM: Robot House Multi-view Human Activity Recognition Dataset. in ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions. IARIA, Venice, Italy, ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions, Venice, Italy, 24/04/23. | |
dc.identifier.citation | conference | |
dc.identifier.isbn | 978-1-68558-078-0 | |
dc.identifier.other | ORCID: /0000-0001-8450-9362/work/145926502 | |
dc.identifier.other | ORCID: /0000-0003-2072-5845/work/145926869 | |
dc.identifier.uri | http://hdl.handle.net/2299/27046 | |
dc.description | © 2023, IARIA. | |
dc.description.abstract | With the recent increased development of deep neural networks and dataset capabilities, the Human Action Recognition (HAR) domain is growing rapidly in terms of both available datasets and deep models. Despite this, there is a lack of datasets specifically covering robotics and human-robot interaction. We prepare and introduce a new multi-view dataset to address this gap. The Robot House Multi-View dataset (RHM) contains four views: Front, Back, Ceiling, and Robot. It comprises 14 classes with 6701 video clips per view, for a total of 26804 video clips across the four views. Clip lengths range between 1 and 5 seconds. Clips with the same number and class are synchronized across the different views. In the second part of this paper, we consider how single streams afford activity recognition using established state-of-the-art models. We then assess the affordance of each view based on information-theoretic modelling and the concept of mutual information. Furthermore, we benchmark the performance of the different views, thus establishing the strengths and weaknesses of each view with respect to its information content and benchmark performance. Our results lead us to conclude that multi-view and multi-stream activity recognition has the added potential to improve activity recognition results. | en |
dc.format.extent | 7 | |
dc.format.extent | 970272 | |
dc.language.iso | eng | |
dc.publisher | IARIA | |
dc.relation.ispartof | ACHI 2023: The Sixteenth International Conference on Advances in Computer-Human Interactions | |
dc.title | RHM: Robot House Multi-view Human Activity Recognition Dataset | en |
dc.contributor.institution | School of Physics, Engineering & Computer Science | |
dc.contributor.institution | Centre for Computer Science and Informatics Research | |
dc.contributor.institution | Adaptive Systems | |
dc.contributor.institution | Department of Computer Science | |
dc.contributor.institution | Centre for Future Societies Research | |
dc.date.embargoedUntil | 2023-04-28 | |
rioxxterms.type | Other | |
herts.preservation.rarelyaccessed | true | |