Single-View Food Portion Estimation: Learning Image-to-Energy Mappings Using Generative Adversarial Networks
|dc.identifier.citation||Fang, S. and Shao, Z. and Mao, R. and Fu, C. and Kerr, D. and Boushey, C. and Delp, E. et al. 2018. Single-View Food Portion Estimation: Learning Image-to-Energy Mappings Using Generative Adversarial Networks, in Proceedings of the 25th IEEE International Conference on Image Processing (ICIP): pp. 251-255. Athens: IEEE.|
Due to growing concern about chronic diseases and other diet-related health problems, there is a need for accurate methods to estimate an individual's food and energy intake. Measuring dietary intake accurately is an open research problem. In particular, accurate food portion estimation is challenging because the processes of food preparation and consumption impose large variations on food shapes and appearances. In this paper, we present a food portion estimation method that estimates food energy (kilocalories) from food images using Generative Adversarial Networks (GANs). We introduce the concept of an "energy distribution" for each food image. To train the GAN, we design a food image dataset based on ground truth food labels and segmentation masks for each food image, as well as the energy information associated with the image. Our goal is to learn the mapping from a food image to its food energy. We can then estimate food energy from the energy distribution. We show that an average energy estimation error rate of 10.89% can be obtained by learning the image-to-energy mapping.
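The abstract's final step — turning a predicted "energy distribution" into a kilocalorie estimate and an error rate — can be illustrated with a minimal numpy sketch. This is our own illustration, not code from the paper: we assume the generator outputs a per-pixel map in which each value is the kilocalorie mass attributed to that image location, so the total energy is the sum over the map; the function names and the toy values are hypothetical.

```python
import numpy as np

def energy_from_distribution(energy_map: np.ndarray) -> float:
    """Total food energy (kcal), assuming each pixel of the predicted
    energy distribution holds the kcal mass at that image location."""
    return float(energy_map.sum())

def energy_error_rate(estimated: float, ground_truth: float) -> float:
    """Relative energy estimation error (the paper reports 10.89% on average)."""
    return abs(estimated - ground_truth) / ground_truth

# Toy example: a hypothetical 4x4 energy distribution predicted by the generator.
predicted_map = np.full((4, 4), 12.5)    # 16 pixels x 12.5 kcal = 200 kcal total
estimate = energy_from_distribution(predicted_map)
print(estimate)                          # → 200.0
print(energy_error_rate(estimate, 220.0))  # relative error vs. a 220 kcal ground truth
```

In the paper the distribution itself is learned with a GAN from paired food images and ground-truth energy; this sketch only shows how an estimate and an error rate follow once such a distribution is available.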
|dcterms.source.title||2018 25th IEEE International Conference on Image Processing (ICIP)|
|curtin.department||School of Public Health|
|curtin.accessStatus||Fulltext not available|