Welcome!

Multiview machine learning [SMDW19] arises whenever a model must be learned from data coming from different description spaces. For example, a medical diagnosis might rely on several types of examinations (MRI, ECG, blood analysis, etc.), each constituting a view. These views are assumed to carry different kinds of information about the learning task, i.e. they reveal different types of patterns and regularities.

Multiview learning is a transversal setting of machine learning: it spans unsupervised to supervised learning, inductive to transductive tasks, classification as well as regression, etc. Moreover, there exist many classes of algorithms and theoretical frameworks that currently address this specificity of the input space, ranging from Bayesian inference to kernel-based methods, and from ensemble-based methods to deep learning. Last but not least, multi-view data introduces new tasks for machine learning, for example view completion: an MRI might be missing for a patient for whom a diagnosis is nonetheless required, and an ML task could consist in completing that view for the patient from other patients described in that view and from the patient's own descriptions in the remaining views.
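The view-completion task mentioned above can be illustrated with a deliberately simple sketch (a hypothetical nearest-neighbour imputation on toy data, not a method advocated in the references below): the missing view of a patient is estimated from the patients closest to them in an observed view.

```python
import numpy as np

def complete_view(view_a, view_b, query_a, k=2):
    """Impute a missing view-B description from a view-A description,
    by averaging view B over the k nearest neighbours in view A.
    Illustrative scheme only; names and data are hypothetical."""
    dists = np.linalg.norm(view_a - query_a, axis=1)  # distances in view A
    nearest = np.argsort(dists)[:k]                   # k closest patients
    return view_b[nearest].mean(axis=0)               # average their view B

# Toy data: 4 patients described in two views.
view_a = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
view_b = np.array([[1.0], [1.2], [9.0], [9.2]])

# A new patient observed only in view A, close to the first two patients:
estimate = complete_view(view_a, view_b, np.array([0.05, 0.0]))
print(estimate)  # averages the first two view-B rows: [1.1]
```

Real view-completion methods such as multi-view kernel completion [BKR17] exploit cross-view structure far more carefully; this sketch only makes the task itself concrete.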

On the one hand, many real applications involve multi-view learning, ranging from biology [BKR17, CD15], health [FE17], marketing, computer vision and multimedia [MYGS04, MGYA18], and ecology [GSS+18] to social networks [BAD16], advertisement, signal processing [PIJR16, SWC+17], and social sciences [FZYL14]. Indeed, digital data in every domain is growing exponentially, increasing the need for learning methods that can handle heterogeneous digital views of the same object of interest.

On the other hand, recent years have witnessed new frameworks, theories, and algorithms able to deal with multiple views in many settings, such as multiple kernel learning [BLJ04, HKC18], boosting [CK19], co-regularized approaches [SR08], shared representation learning [WALB15], kernel-based methods [BKR17], etc. In particular, rich deep learning approaches have been published with successful experiments [SSL14, RT17, NKK+11]; these now need to generalize to more than two views and to areas beyond multimedia and computer vision.
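The multiple-kernel-learning idea cited above [BLJ04, HKC18] can be sketched minimally: compute one base kernel per view, then combine them through a convex combination to obtain a single multi-view kernel usable by any kernel method. In actual MKL the combination weights are learned; here they are fixed, and the toy data and weights are purely illustrative.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared distances
    return np.exp(-gamma * d2)

# Two views of the same 3 examples (hypothetical toy data).
X_view1 = np.array([[0.0], [1.0], [2.0]])
X_view2 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# One base kernel per view, combined with fixed convex weights
# (in real MKL these weights would be optimized jointly with the model).
K1 = rbf_kernel(X_view1, gamma=0.5)
K2 = rbf_kernel(X_view2, gamma=0.5)
weights = np.array([0.6, 0.4])
K = weights[0] * K1 + weights[1] * K2

# A convex combination of positive semi-definite kernels stays PSD,
# so K is a valid kernel for, e.g., an SVM.
assert np.all(np.linalg.eigvalsh(K) > -1e-9)
```

The combined matrix `K` is symmetric with unit diagonal, since each base kernel has unit diagonal and the weights sum to one.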

The workshop aims at bringing together people interested in multi-view learning, from dataset providers to machine learning researchers. In this way, researchers have the opportunity to inspect real learning problems related to multi-view learning, while providers of naturally multi-viewed data can become aware of the many existing or potential solutions to their learning tasks. We firmly believe that a workshop at a major machine learning conference is a relevant way to achieve this goal, and we hope that a synergy will emerge between the two communities. From this perspective, we propose to organize a small challenge, a hackathon, on real multiview data, which could be provided by participants, in order to foster this synergy during the workshop.

References

[BAD16] Adrian Benton, Raman Arora, and Mark Dredze. Learning multiview embeddings of Twitter users. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 14–19, 2016.
[BKR17] Sahely Bhadra, Samuel Kaski, and Juho Rousu. Multi-view kernel completion. Machine Learning, 106(5):713–739, 2017.
[BLJ04] Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML ’04, page 6, New York, NY, USA, 2004. ACM.
[CD15] M. Ceci, G. Pio, V. Kuzmanovski, and S. Džeroski. Semi-supervised multi-view learning for gene network reconstruction. PLoS ONE, 10(12), 2015.
[CK19] Cécile Capponi and Sokol Koço. Learning from Imbalanced Datasets with Cross-View Cooperation-Based Ensemble Methods, chapter 7, pages 161–182. Springer International Publishing, 2019.
[FE17] M. Fratello, G. Caiazzo, F. Trojsi, A. Russo, G. Tedeschi, R. Tagliaferri, and F. Esposito. Multi-view ensemble classification of brain connectivity images for neurodegeneration type discrimination. Neuroinformatics, 15(2):199–213, 2017.
[FZYL14] Yixiang Fang, Haijun Zhang, Yunming Ye, and Xutao Li. Detecting hot topics from Twitter: A multiview approach. Journal of Information Science, 40(5):578–593, 2014.
[GSS+18] Garrett B. Goh, Khushmeen Sakloth, Charles Siegel, Abhinav Vishnu, and Jim Pfaendtner. Multimodal deep neural networks using both engineered and learned representations for biodegradability prediction. CoRR, abs/1808.04456, 2018.
[HKC18] Riikka Huusari, Hachem Kadri, and Cécile Capponi. Multi-view metric learning in vector-valued kernel spaces. In International Conference on Artificial Intelligence and Statistics, AISTATS 2018, 9-11 April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, pages 415–424, 2018.
[MGYA18] C. Ma, Y. Guo, J. Yang, and W. An. Learning multi-view representation with LSTM for 3D shape recognition and retrieval. IEEE Transactions on Multimedia, pages 1–1, 2018.
[MYGS04] Jason Meltzer, Ming-Hsuan Yang, Rakesh Gupta, and Stefano Soatto. Multiple view feature descriptors from image sequences via kernel principal component analysis. In Tomás Pajdla and Jiří Matas, editors, Computer Vision – ECCV 2004, pages 215–227, Berlin, Heidelberg, 2004. Springer Berlin Heidelberg.
[NKK+11] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. Multimodal deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 689–696, USA, 2011. Omnipress.
[PIJR16] Seonyoung Park, Jungho Im, Eunna Jang, and Jinyoung Rhee. Drought assessment and monitoring through blending of multi-sensor indices using machine learning approaches for different climate regions. Agricultural and Forest Meteorology, 216:157–169, 2016.
[RT17] D. Ramachandram and G. W. Taylor. Deep multimodal learning: A survey on recent advances and trends. IEEE Signal Processing Magazine, 34(6):96–108, Nov 2017.
[SMDW19] Shiliang Sun, Liang Mao, Ziang Dong, and Lidan Wu. Multiview Machine Learning. Springer, 2019.
[SR08] Vikas Sindhwani and David S. Rosenberg. An RKHS for multi-view learning and manifold co-regularization. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 976–983, New York, NY, USA, 2008. ACM.
[SSL14] Kihyuk Sohn, Wenling Shang, and Honglak Lee. Improved multimodal deep learning with variation of information. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2141–2149. Curran Associates, Inc., 2014.
[SWC+17] Lichao Sun, Yuqi Wang, Bokai Cao, Philip S. Yu, Witawas Srisa-an, and Alex D. Leow. Sequential keystroke behavioral biometrics for mobile user identification via multi-view deep learning. In Yasemin Altun, Kamalika Das, Taneli Mielikäinen, Donato Malerba, Jerzy Stefanowski, Jesse Read, Marinka Žitnik, Michelangelo Ceci, and Sašo Džeroski, editors, Machine Learning and Knowledge Discovery in Databases, pages 228–240, Cham, 2017. Springer International Publishing.
[WALB15] Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1083–1092, Lille, France, 07–09 Jul 2015. PMLR.