Multimodal learning
Multimodal learning, in the context of machine learning, is a type of deep learning that uses a combination of various modalities of data, as often arises in real-world applications. An example of multimodal data is data that combines text (typically represented as a feature vector) with imaging data consisting of pixel intensities and annotation tags. Because these modalities have fundamentally different statistical properties, combining them is non-trivial and requires specialized modelling strategies and algorithms. The resulting model is trained to understand and work with multiple forms of data.
Motivation
Many models and algorithms have been developed to retrieve and classify a single type of data, such as images or text. However, real-world data usually come in different modalities that carry different information. For example, it is very common to caption an image to convey information not presented in the image itself. Similarly, it is sometimes more straightforward to use an image to describe information that may not be obvious from text. As a result, if different words appear in similar images, these words likely describe the same thing; conversely, if a word is used to describe seemingly dissimilar images, these images may represent the same object. Thus, when dealing with multi-modal data, it is important to use a model that can jointly represent the information, so that the model captures the correlation structure between the different modalities. Moreover, it should also be able to recover missing modalities given observed ones (e.g. predicting a possible image object from a text description). The multimodal deep Boltzmann machine satisfies both purposes.
Background: Boltzmann machine
A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks. They are named after the Boltzmann distribution in statistical mechanics. The units in Boltzmann machines are divided into two groups: visible units and hidden units. A general Boltzmann machine allows connections between any units. However, learning with general Boltzmann machines is impractical because the computational time grows exponentially with the size of the machine[citation needed]. A more efficient architecture, the restricted Boltzmann machine, allows connections only between a hidden unit and a visible unit; it is described in the next section.
Restricted Boltzmann machine
A restricted Boltzmann machine[1] is an undirected graphical model with stochastic binary visible variables \mathbf{v} and stochastic binary hidden variables \mathbf{h}. Each visible variable is connected to each hidden variable. The energy function of the model is defined as

E(\mathbf{v}, \mathbf{h}; \theta) = -\sum_{i,j} W_{ij} v_i h_j - \sum_i b_i v_i - \sum_j a_j h_j

where \theta = \{\mathbf{W}, \mathbf{b}, \mathbf{a}\} are the model parameters: W_{ij} represents the symmetric interaction term between visible unit v_i and hidden unit h_j; b_i and a_j are bias terms. The joint distribution of the system is defined as

P(\mathbf{v}, \mathbf{h}; \theta) = \frac{1}{Z(\theta)} \exp\left(-E(\mathbf{v}, \mathbf{h}; \theta)\right)

where Z(\theta) is a normalizing constant. The conditional distributions over the hidden units \mathbf{h} and the visible units \mathbf{v} can be derived as logistic functions of the model parameters:

- P(h_j = 1 \mid \mathbf{v}; \theta) = g\left(\sum_i W_{ij} v_i + a_j\right)
- P(v_i = 1 \mid \mathbf{h}; \theta) = g\left(\sum_j W_{ij} h_j + b_i\right)

where g(x) = \frac{1}{1 + e^{-x}} is the logistic function.
The derivative of the log-likelihood with respect to the model parameters decomposes as the difference between a data-dependent expectation and the model's expectation:

\frac{\partial \log P(\mathbf{v}; \theta)}{\partial W_{ij}} = \mathbb{E}_{P_{\text{data}}}\left[v_i h_j\right] - \mathbb{E}_{P_{\text{model}}}\left[v_i h_j\right]
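As an illustration, the sketch below implements these conditionals and a single contrastive-divergence (CD-1) parameter update in NumPy. CD-1 approximates the model expectation with a one-step reconstruction and is not implied by the equations above; the layer sizes, learning rate, and toy data are arbitrary assumptions.

```python
# Minimal sketch of a binary-binary RBM with one CD-1 update (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))  # W[i, j]
b = np.zeros(n_visible)                                # visible biases
a = np.zeros(n_hidden)                                 # hidden biases

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # E(v, h) = -v^T W h - b^T v - a^T h
    return -v @ W @ h - b @ v - a @ h

def sample_h_given_v(v):
    p = logistic(v @ W + a)          # P(h_j = 1 | v)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_v_given_h(h):
    p = logistic(h @ W.T + b)        # P(v_i = 1 | h)
    return p, (rng.random(p.shape) < p).astype(float)

# One CD-1 update: data-dependent minus reconstruction statistics.
v0 = rng.integers(0, 2, size=n_visible).astype(float)   # toy data vector
ph0, h0 = sample_h_given_v(v0)
pv1, v1 = sample_v_given_h(h0)
ph1, _ = sample_h_given_v(v1)

lr = 0.1
W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
b += lr * (v0 - v1)
a += lr * (ph0 - ph1)
```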
Gaussian-Bernoulli RBM
Gaussian-Bernoulli RBMs[2] are a variant of the restricted Boltzmann machine used for modeling real-valued vectors such as pixel intensities, and are commonly used to model image data. The energy of the Gaussian-Bernoulli RBM is defined as

E(\mathbf{v}, \mathbf{h}; \theta) = \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{i,j} \frac{v_i}{\sigma_i} W_{ij} h_j - \sum_j a_j h_j

where \theta = \{\mathbf{W}, \mathbf{a}, \mathbf{b}, \boldsymbol{\sigma}\} are the model parameters. The joint distribution is defined in the same way as for the restricted Boltzmann machine. The conditional distributions now become

- P(h_j = 1 \mid \mathbf{v}; \theta) = g\left(\sum_i W_{ij} \frac{v_i}{\sigma_i} + a_j\right)
- P(v_i \mid \mathbf{h}; \theta) = \mathcal{N}\left(b_i + \sigma_i \sum_j W_{ij} h_j,\ \sigma_i^2\right)

In the Gaussian-Bernoulli RBM, each visible unit conditioned on the hidden units is thus modeled by a Gaussian distribution.
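The following sketch samples from the two conditionals above; the layer sizes, the random input, and the choice of unit variances (\sigma_i = 1) are illustrative simplifications.

```python
# Sketch of the Gaussian-Bernoulli RBM conditionals from the section above.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)        # means of the Gaussian visibles
a = np.zeros(n_hidden)
sigma = np.ones(n_visible)     # per-visible standard deviations (set to 1 here)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v):
    # P(h_j = 1 | v) = g(sum_i W_ij v_i / sigma_i + a_j)
    p = logistic((v / sigma) @ W + a)
    return (rng.random(p.shape) < p).astype(float)

def sample_v_given_h(h):
    # v_i | h ~ N(b_i + sigma_i * sum_j W_ij h_j, sigma_i^2)
    mean = b + sigma * (W @ h)
    return mean + sigma * rng.standard_normal(n_visible)

v = rng.standard_normal(n_visible)       # e.g. normalized pixel intensities
h = sample_h_given_v(v)
v_reconstructed = sample_v_given_h(h)
```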
Replicated Softmax Model
The Replicated Softmax Model[3] is also a variant of the restricted Boltzmann machine and is commonly used to model word-count vectors in a document. In a typical text-mining problem, let K be the dictionary size and M be the number of words in the document. Let \mathbf{V} be an M \times K binary matrix with v_{ik} = 1 only when the i-th word in the document is the k-th word in the dictionary, and let \hat{v}_k = \sum_{i=1}^{M} v_{ik} denote the count of the k-th dictionary word in the document. The energy of the state \{\mathbf{V}, \mathbf{h}\} for a document containing M words is defined as

E(\mathbf{V}, \mathbf{h}; \theta) = -\sum_{j,k} W_{jk} \hat{v}_k h_j - \sum_k b_k \hat{v}_k - M \sum_j a_j h_j

The conditional distributions are given by

- P(v_{ik} = 1 \mid \mathbf{h}) = \frac{\exp\left(b_k + \sum_j W_{jk} h_j\right)}{\sum_{k'=1}^{K} \exp\left(b_{k'} + \sum_j W_{jk'} h_j\right)}
- P(h_j = 1 \mid \mathbf{V}) = g\left(M a_j + \sum_k W_{jk} \hat{v}_k\right)
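The sketch below samples from these conditionals for a toy bag-of-words document; the dictionary size, weights, and word counts are illustrative assumptions.

```python
# Sketch of the Replicated Softmax conditionals for a bag-of-words document.
import numpy as np

rng = np.random.default_rng(0)
K, F = 8, 4                               # dictionary size, hidden units
W = 0.01 * rng.standard_normal((F, K))    # W[j, k]
b = np.zeros(K)                           # word biases
a = np.zeros(F)                           # hidden biases

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

v_hat = np.array([3, 0, 1, 0, 2, 0, 0, 1], dtype=float)  # word counts
M = v_hat.sum()                                          # document length

# P(h_j = 1 | V) = g(M * a_j + sum_k W_jk v_hat_k)
p_h = logistic(M * a + W @ v_hat)
h = (rng.random(F) < p_h).astype(float)

# Each of the M words is drawn from the same softmax over the dictionary:
# P(word = k | h) proportional to exp(b_k + sum_j W_jk h_j)
p_word = softmax(b + h @ W)
sampled_words = rng.choice(K, size=int(M), p=p_word)
```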
Deep Boltzmann machines
A deep Boltzmann machine[4] has a sequence of layers of hidden units. There are connections only between adjacent hidden layers, and between the visible units and the units of the first hidden layer. The energy function of the system adds layer-interaction terms to the energy function of the restricted Boltzmann machine; for a DBM with three hidden layers (bias terms omitted) it is defined by

E(\mathbf{v}, \mathbf{h}^{(1)}, \mathbf{h}^{(2)}, \mathbf{h}^{(3)}; \theta) = -\sum_{i,j} W^{(1)}_{ij} v_i h^{(1)}_j - \sum_{j,l} W^{(2)}_{jl} h^{(1)}_j h^{(2)}_l - \sum_{l,m} W^{(3)}_{lm} h^{(2)}_l h^{(3)}_m

The joint distribution is

P(\mathbf{v}; \theta) = \frac{1}{Z(\theta)} \sum_{\mathbf{h}^{(1)}, \mathbf{h}^{(2)}, \mathbf{h}^{(3)}} \exp\left(-E(\mathbf{v}, \mathbf{h}^{(1)}, \mathbf{h}^{(2)}, \mathbf{h}^{(3)}; \theta)\right)
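For concreteness, the sketch below evaluates this energy for a DBM with two hidden layers (biases omitted); the layer sizes and random states are illustrative.

```python
# Sketch of the energy of a two-hidden-layer DBM, matching the
# layer-wise interaction terms described above (biases omitted).
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h1, n_h2 = 6, 4, 3
W1 = 0.01 * rng.standard_normal((n_v, n_h1))   # visible to layer-1 weights
W2 = 0.01 * rng.standard_normal((n_h1, n_h2))  # layer-1 to layer-2 weights

def dbm_energy(v, h1, h2):
    # E(v, h1, h2) = -v^T W1 h1 - h1^T W2 h2
    return -(v @ W1 @ h1) - (h1 @ W2 @ h2)

v = rng.integers(0, 2, n_v).astype(float)
h1 = rng.integers(0, 2, n_h1).astype(float)
h2 = rng.integers(0, 2, n_h2).astype(float)
print(dbm_energy(v, h1, h2))
```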
Multimodal deep Boltzmann machines
The multimodal deep Boltzmann machine[5][6] is an image–text bimodal DBM in which the image pathway is modeled as a Gaussian-Bernoulli DBM and the text pathway as a Replicated Softmax DBM; each pathway has one visible layer and two hidden layers. The two pathways are joined by an additional top hidden layer. The joint distribution over the multimodal inputs is defined as

P(\mathbf{v}^m, \mathbf{v}^t; \theta) = \sum_{\mathbf{h}^{(2m)}, \mathbf{h}^{(2t)}, \mathbf{h}^{(3)}} P(\mathbf{h}^{(2m)}, \mathbf{h}^{(2t)}, \mathbf{h}^{(3)}) \left( \sum_{\mathbf{h}^{(1m)}} P(\mathbf{v}^m, \mathbf{h}^{(1m)} \mid \mathbf{h}^{(2m)}) \right) \left( \sum_{\mathbf{h}^{(1t)}} P(\mathbf{v}^t, \mathbf{h}^{(1t)} \mid \mathbf{h}^{(2t)}) \right)

where \mathbf{v}^m and \mathbf{v}^t denote the image and text inputs, \mathbf{h}^{(1m)}, \mathbf{h}^{(2m)} and \mathbf{h}^{(1t)}, \mathbf{h}^{(2t)} are the hidden layers of the image and text pathways, and \mathbf{h}^{(3)} is the shared top layer. The conditional distributions over the visible and hidden units take the same forms as in the corresponding component models: the image visible units are Gaussian given the first image hidden layer, the text visible units follow the replicated softmax given the first text hidden layer, and each hidden unit is a logistic function of the units in its neighbouring layers.
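To illustrate how the shared top layer fuses the two pathways, here is a small NumPy sketch of the conditional for a joint-layer unit given the top hidden layers of both pathways; the weight names (W3m, W3t), layer sizes, and random states are assumptions made for the example.

```python
# Sketch of the fusion step in a bi-modal DBM: the top joint layer receives
# input from the second hidden layer of both the image and text pathways.
import numpy as np

rng = np.random.default_rng(0)
n_h2m, n_h2t, n_joint = 5, 4, 6
W3m = 0.01 * rng.standard_normal((n_h2m, n_joint))  # image pathway -> joint layer
W3t = 0.01 * rng.standard_normal((n_h2t, n_joint))  # text pathway  -> joint layer

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

h2m = rng.integers(0, 2, n_h2m).astype(float)  # top hidden layer, image pathway
h2t = rng.integers(0, 2, n_h2t).astype(float)  # top hidden layer, text pathway

# P(h3_k = 1 | h2m, h2t) = g(sum_j W3m_jk h2m_j + sum_l W3t_lk h2t_l)
p_joint = logistic(h2m @ W3m + h2t @ W3t)
h3 = (rng.random(n_joint) < p_joint).astype(float)
```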
Inference and learning
Exact maximum likelihood learning in this model is intractable, but approximate learning of DBMs can be carried out using a variational approach, where mean-field inference is used to estimate the data-dependent expectations and an MCMC-based stochastic approximation procedure is used to approximate the model's expected sufficient statistics.[7]
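A rough NumPy sketch of the mean-field part of this procedure for a two-hidden-layer binary DBM is shown below; the layer sizes, the fixed number of iterations, and the omission of biases and of the MCMC-based model-expectation term are simplifying assumptions.

```python
# Sketch of mean-field inference for the hidden layers of a two-layer DBM:
# the variational parameters mu1, mu2 are iterated to a fixed point and
# then used as data-dependent statistics.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(v, W1, W2, n_iters=10):
    mu1 = np.full(W1.shape[1], 0.5)   # q(h1_j = 1)
    mu2 = np.full(W2.shape[1], 0.5)   # q(h2_l = 1)
    for _ in range(n_iters):
        mu1 = logistic(v @ W1 + W2 @ mu2)   # layer 1 sees v and layer 2
        mu2 = logistic(mu1 @ W2)            # layer 2 sees layer 1
    return mu1, mu2

rng = np.random.default_rng(0)
W1 = 0.01 * rng.standard_normal((6, 4))
W2 = 0.01 * rng.standard_normal((4, 3))
v = rng.integers(0, 2, 6).astype(float)
mu1, mu2 = mean_field(v, W1, W2)
```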
Application
Multimodal deep Boltzmann machines have been used successfully for classification and missing-data retrieval. Their classification accuracy has been reported to outperform support vector machines, latent Dirichlet allocation and deep belief networks when the models are tested on data with both image and text modalities or with a single modality.[citation needed] A multimodal deep Boltzmann machine can also predict missing modalities given the observed ones with reasonably good precision.[citation needed] More recently, self-supervised learning has produced considerably more powerful multimodal models; OpenAI's CLIP and DALL-E, for example, learn joint image–text representations at scale.
Multimodal deep learning is used for cancer screening – at least one system under development integrates such different types of data.[8][9]
Multimodal transformers
Transformers can also be used or adapted for modalities (input or output) beyond text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch or by fine-tuning. A 2022 study found that Transformers pretrained only on natural language can be fine-tuned on only 0.03% of their parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning.[10] LLaVA is a vision-language model composed of a language model (Vicuna-13B)[11] and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is fine-tuned.[12]
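To illustrate the LLaVA-style connection, here is a minimal NumPy sketch in which frozen vision features are mapped by a single trainable linear layer into the language model's token-embedding space; the dimensions, the random stand-in features, and the variable names are illustrative assumptions, not code from the cited work.

```python
# Minimal sketch of connecting a frozen vision encoder to a language model
# via one linear projection, which is the only trainable component here.
import numpy as np

rng = np.random.default_rng(0)
n_patches, d_vision, d_lm = 256, 1024, 5120   # assumed feature/embedding sizes

vision_features = rng.standard_normal((n_patches, d_vision))  # frozen ViT output
W_proj = 0.01 * rng.standard_normal((d_vision, d_lm))         # trained linear layer
b_proj = np.zeros(d_lm)

image_tokens = vision_features @ W_proj + b_proj     # (n_patches, d_lm)
text_tokens = rng.standard_normal((12, d_lm))        # embedded text prompt (stand-in)
lm_input = np.concatenate([image_tokens, text_tokens], axis=0)  # fed to the LM
```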
Vision transformers[13] adapt the transformer to computer vision by breaking down input images as a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
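The patch tokenization step can be sketched as follows; the 224x224 image, 16x16 patches, and 768-dimensional embedding are typical ViT-Base values used here for illustration, and positional embeddings and the class token are omitted.

```python
# Sketch of vision-transformer patch tokenization: cut the image into
# non-overlapping P x P patches, flatten each, and project to d_model.
import numpy as np

rng = np.random.default_rng(0)
H = 224        # image height
W = 224        # image width
P = 16         # patch size
C = 3          # channels
d_model = 768  # embedding dimension

image = rng.random((H, W, C))

# (H, W, C) -> (H/P, P, W/P, P, C) -> (num_patches, P*P*C)
patches = image.reshape(H // P, P, W // P, P, C)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, P * P * C)

E = 0.01 * rng.standard_normal((P * P * C, d_model))  # patch embedding matrix
tokens = patches @ E                                   # (196, d_model)
```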
Conformer[14] and later Whisper[15] follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
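A sketch of this speech front end is given below; the 25 ms frame and 10 ms hop are common defaults assumed for the example (not values quoted from the Conformer or Whisper papers), and mel filtering is omitted.

```python
# Sketch of the speech front end: compute a log-magnitude spectrogram with a
# short-time Fourier transform, then treat it like an image (patchify, embed).
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(16000)        # 1 s of stand-in 16 kHz audio
frame, hop = 400, 160                      # 25 ms window, 10 ms hop (assumed)
window = np.hanning(frame)

frames = np.stack([signal[i:i + frame] * window
                   for i in range(0, len(signal) - frame + 1, hop)])
spectrogram = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-6)  # (time, freq)

# From here the spectrogram is handled like a one-channel image:
# cut into patches, flattened, projected, and fed to the transformer.
```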
Perceivers[16][17] are a variant of Transformers designed for multimodality.
For image generation, notable architectures are DALL-E 1 (2021), Parti (2022),[18] Phenaki (2023),[19] and Muse (2023).[20] Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image.[21] Parti is an encoder-decoder Transformer, where the encoder processes a text prompt and the decoder generates a token representation of an image.[22] Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted.[20] Phenaki is a text-to-video model: a bidirectional masked transformer conditioned on pre-computed text tokens, whose generated tokens are then decoded to a video.[19]
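To make the iterative masked decoding concrete, below is a toy NumPy sketch of the confidence-based unmasking loop described for Muse; the transformer is replaced by a random stand-in and the unmasking schedule is a simple linear ramp, so this illustrates only the control flow, not the actual model.

```python
# Toy sketch of iterative masked decoding: all image tokens start masked,
# every position is scored, and the most confident predictions are fixed
# each round until no masked positions remain.
import numpy as np

rng = np.random.default_rng(0)
num_tokens, vocab, steps = 16, 1024, 4
tokens = np.full(num_tokens, -1)           # -1 marks a masked position

for step in range(steps):
    masked = np.where(tokens == -1)[0]
    if masked.size == 0:
        break
    # Stand-in for the transformer: random per-position logits over the codebook.
    logits = rng.standard_normal((masked.size, vocab))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    confidence = probs.max(axis=1)
    prediction = probs.argmax(axis=1)
    # Fix the most confident predictions this round; the rest stay masked.
    keep = max(1, masked.size * (step + 1) // steps)
    order = np.argsort(-confidence)[:keep]
    tokens[masked[order]] = prediction[order]
```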
References
- ^ "Restricted Boltzmann Machine" (PDF). 1986. Archived (PDF) from the original on 2016-03-03. Retrieved 2019-08-29.
- ^ "Gaussian-Bernoulli RBM" (PDF). 1994. Archived (PDF) from the original on 2015-07-01. Retrieved 2015-06-14.
- ^ "Replicated Softmax Model" (PDF). 2009a. Archived (PDF) from the original on 2015-10-01. Retrieved 2015-06-14.
- ^ "Deep Boltzmann Machine" (PDF). 2009b. Archived (PDF) from the original on 2016-03-10. Retrieved 2015-06-14.
- ^ "Multimodal Learning with Deep Boltzmann Machine" (PDF). 2012. Archived (PDF) from the original on 2016-03-04. Retrieved 2015-06-14.
- ^ "Multimodal Learning with Deep Boltzmann Machine" (PDF). 2014. Archived (PDF) from the original on 2015-06-21. Retrieved 2015-06-14.
- ^ "Approximations to the Likelihood Gradient" (PDF). 2008. Archived (PDF) from the original on 2016-03-04. Retrieved 2015-06-14.
- ^ Quach, Katyanna. "Harvard boffins build multimodal AI system to predict cancer". The Register. Archived from the original on 20 September 2022. Retrieved 16 September 2022.
- ^ Chen, Richard J.; Lu, Ming Y.; Williamson, Drew F. K.; Chen, Tiffany Y.; Lipkova, Jana; Noor, Zahra; Shaban, Muhammad; Shady, Maha; Williams, Mane; Joo, Bumjin; Mahmood, Faisal (8 August 2022). "Pan-cancer integrative histology-genomic analysis via multimodal deep learning". Cancer Cell. 40 (8): 865–878.e6. doi:10.1016/j.ccell.2022.07.004. ISSN 1535-6108. PMC 10397370. PMID 35944502. S2CID 251456162.
- Teaching hospital press release: "New AI technology integrates multiple data types to predict cancer outcomes". Brigham and Women's Hospital via medicalxpress.com. Archived from the original on 20 September 2022. Retrieved 18 September 2022.
- ^ Lu, Kevin; Grover, Aditya; Abbeel, Pieter; Mordatch, Igor (2022-06-28). "Frozen Pretrained Transformers as Universal Computation Engines". Proceedings of the AAAI Conference on Artificial Intelligence. 36 (7): 7628–7636. doi:10.1609/aaai.v36i7.20729. ISSN 2374-3468.
- ^ "Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | LMSYS Org". lmsys.org. Retrieved 2024-08-11.
- ^ Liu, Haotian; Li, Chunyuan; Wu, Qingyang; Lee, Yong Jae (2023-12-15). "Visual Instruction Tuning". Advances in Neural Information Processing Systems. 36: 34892–34916.
- ^ Dosovitskiy, Alexey; Beyer, Lucas; Kolesnikov, Alexander; Weissenborn, Dirk; Zhai, Xiaohua; Unterthiner, Thomas; Dehghani, Mostafa; Minderer, Matthias; Heigold, Georg; Gelly, Sylvain; Uszkoreit, Jakob (2021-06-03). "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". arXiv:2010.11929 [cs.CV].
- ^ Gulati, Anmol; Qin, James; Chiu, Chung-Cheng; Parmar, Niki; Zhang, Yu; Yu, Jiahui; Han, Wei; Wang, Shibo; Zhang, Zhengdong; Wu, Yonghui; Pang, Ruoming (2020). "Conformer: Convolution-augmented Transformer for Speech Recognition". arXiv:2005.08100 [eess.AS].
- ^ Radford, Alec; Kim, Jong Wook; Xu, Tao; Brockman, Greg; McLeavey, Christine; Sutskever, Ilya (2022). "Robust Speech Recognition via Large-Scale Weak Supervision". arXiv:2212.04356 [eess.AS].
- ^ Jaegle, Andrew; Gimeno, Felix; Brock, Andrew; Zisserman, Andrew; Vinyals, Oriol; Carreira, Joao (2021-06-22). "Perceiver: General Perception with Iterative Attention". arXiv:2103.03206 [cs.CV].
- ^ Jaegle, Andrew; Borgeaud, Sebastian; Alayrac, Jean-Baptiste; Doersch, Carl; Ionescu, Catalin; Ding, David; Koppula, Skanda; Zoran, Daniel; Brock, Andrew; Shelhamer, Evan; Hénaff, Olivier (2021-08-02). "Perceiver IO: A General Architecture for Structured Inputs & Outputs". arXiv:2107.14795 [cs.LG].
- ^ "Parti: Pathways Autoregressive Text-to-Image Model". sites.research.google. Retrieved 2024-08-09.
- ^ a b Villegas, Ruben; Babaeizadeh, Mohammad; Kindermans, Pieter-Jan; Moraldo, Hernan; Zhang, Han; Saffar, Mohammad Taghi; Castro, Santiago; Kunze, Julius; Erhan, Dumitru (2022-09-29). "Phenaki: Variable Length Video Generation from Open Domain Textual Descriptions".
- ^ a b Chang, Huiwen; Zhang, Han; Barber, Jarred; Maschinot, A. J.; Lezama, Jose; Jiang, Lu; Yang, Ming-Hsuan; Murphy, Kevin; Freeman, William T. (2023-01-02). "Muse: Text-To-Image Generation via Masked Generative Transformers". arXiv:2301.00704 [cs.CV].
- ^ Ramesh, Aditya; Pavlov, Mikhail; Goh, Gabriel; Gray, Scott; Voss, Chelsea; Radford, Alec; Chen, Mark; Sutskever, Ilya (2021-02-26), Zero-Shot Text-to-Image Generation, arXiv:2102.12092
- ^ Yu, Jiahui; Xu, Yuanzhong; Koh, Jing Yu; Luong, Thang; Baid, Gunjan; Wang, Zirui; Vasudevan, Vijay; Ku, Alexander; Yang, Yinfei (2022-06-21), Scaling Autoregressive Models for Content-Rich Text-to-Image Generation, arXiv:2206.10789