Show simple item record

dc.contributor.author: Ko, Ming Hsiao
dc.contributor.supervisor: Prof. Svetha Venkatesh
dc.contributor.supervisor: Prof. Geoff West
dc.date.accessioned: 2017-01-30T09:49:15Z
dc.date.available: 2017-01-30T09:49:15Z
dc.date.created: 2009-08-18T06:21:15Z
dc.date.issued: 2009
dc.identifier.uri: http://hdl.handle.net/20.500.11937/384
dc.description.abstract:

Fusion is a fundamental human process that occurs in some form at all levels, from the sense organs, where visual and auditory information is received from the eyes and ears respectively, to the highest levels of decision making, where the brain fuses visual and auditory information to make decisions. Multi-sensor data fusion is concerned with gaining information from multiple sensors by fusing across raw data, features, or decisions. Traditional frameworks for multi-sensor data fusion address fusion only at specific points in time. However, many real-world situations change over time. When a multi-sensor system is used for situation awareness, it is useful not only to know the state or event of the situation at a point in time but, more importantly, to understand the causalities of those states or events as they change over time.

Hence, we propose a multi-agent framework for temporal fusion, which emphasises the time dimension of the fusion process, that is, the fusion of multi-sensor data or events derived over a period of time. The proposed multi-agent framework has three major layers: hardware, agents, and users. Three fusion architectures, centralized, hierarchical, and distributed, are available for organising the group of agents. The temporal fusion process of the proposed framework is elaborated using the information graph. Finally, the core of the proposed temporal fusion framework, the Dynamic Time Warping (DTW) temporal fusion agent, is described in detail.

Fusing multi-sensor data over a period of time is a challenging task, since the data to be fused consist of complex sequences that are multi-dimensional, multimodal, interacting, and time-varying in nature. Additionally, performing temporal fusion efficiently in real time is a further challenge due to the large amount of data to be fused. To address these issues, we propose a DTW temporal fusion agent comprising four major modules: data pre-processing, DTW recogniser, class templates, and decision making. The DTW recogniser is extended in various ways to handle the variability of multimodal sequences acquired from multiple heterogeneous sensors, the problem of unknown start and end points, multimodal sequences of the same class that consequently differ in length locally and/or globally, and the challenges of online temporal fusion.

We evaluate the performance of the proposed DTW temporal fusion agent on two real-world datasets: 1) accelerometer data acquired from performing two hand gestures, and 2) a benchmark dataset acquired from carrying a mobile device and performing pre-defined user scenarios. Performance results of the DTW-based system are compared with those of a Hidden Markov Model (HMM) based system. The experimental results on both datasets demonstrate that the proposed DTW temporal fusion agent outperforms HMM-based systems and can perform online temporal fusion efficiently and accurately in real time.
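To illustrate the core idea behind the DTW recogniser and class-template modules described in the abstract, the sketch below shows the classic dynamic time warping distance with nearest-template classification. This is a minimal illustration of the general technique only; the function names, the Euclidean local cost, and the simple nearest-template decision rule are assumptions for this example, not details taken from the thesis.

```python
import math

def dtw_distance(a, b):
    """Classic DTW distance between two sequences a and b.

    Each element is a tuple of feature values (one frame of
    multi-sensor data); Euclidean distance is the local cost.
    """
    n, m = len(a), len(b)
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(sequence, templates):
    """Nearest-template classification: return the class label whose
    stored template is closest to the query sequence under DTW."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))
```

Because DTW warps the time axis, two sequences of the same class that differ in length or local speed can still align with low cost, which is what makes it suitable for the variable-length multimodal sequences discussed above.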

dc.language: en
dc.publisher: Curtin University
dc.subject: dynamic time warping (DTW)
dc.subject: users
dc.subject: hardware
dc.subject: agents
dc.subject: distributed
dc.subject: decision making
dc.subject: multi-sensor data fusion
dc.subject: sense organs
dc.subject: centralized
dc.subject: hierarchical
dc.subject: multi-agent framework
dc.subject: fusion
dc.subject: brain fusion
dc.subject: data pre-processing
dc.title: Using dynamic time warping for multi-sensor fusion
dc.type: Thesis
dcterms.educationLevel: MSc
curtin.department: Department of Computing
curtin.accessStatus: Open access

