<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Ren_et_al_2023b</id>
		<title>Ren et al 2023b - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Ren_et_al_2023b"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;action=history"/>
		<updated>2026-05-06T21:04:58Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291208&amp;oldid=prev</id>
		<title>Rimni at 13:15, 31 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291208&amp;oldid=prev"/>
				<updated>2024-01-31T13:15:34Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:15, 31 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l648&quot; &gt;Line 648:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 648:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{| style=&amp;quot;text-align: center; margin:auto;width: 100%;&amp;quot; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{| style=&amp;quot;text-align: center; margin:auto;width: 100%;&amp;quot; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| style=&amp;quot;text-align: center;&amp;quot; |&amp;lt;math&amp;gt;\hbox{RMSE }=\sqrt{\sum _{}^{}{(\hbox{predicted-&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;acutal&lt;/del&gt;})}^{2}/2}&amp;lt;/math&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| style=&amp;quot;text-align: center;&amp;quot; |&amp;lt;math&amp;gt;\hbox{RMSE }=\sqrt{\sum _{}^{}{(\hbox{predicted-&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;actual&lt;/ins&gt;})}^{2}/2}&amp;lt;/math&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|}&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| style=&amp;quot;width: 5px;text-align: right;white-space: nowrap;&amp;quot; | (4)&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| style=&amp;quot;width: 5px;text-align: right;white-space: nowrap;&amp;quot; | (4)&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mw_drafts_scipedia-sc_mwd_:diff:version:1.11a:oldid:291207:newid:291208 --&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291207&amp;oldid=prev</id>
		<title>Rimni at 13:05, 31 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291207&amp;oldid=prev"/>
				<updated>2024-01-31T13:05:29Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:05, 31 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l405&quot; &gt;Line 405:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 405:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==5. Datasets and languages==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==5. Datasets and languages==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the literature we surveyed, there are two types of dataset sources for humor recognition, the first in which is data collected on its own data to the requirements of the task, and the second using public datasets in [[#tab-2|Table 2]]. The first case contain 10 papers, which are [15,17,20,21,24,25,26,&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;[&lt;/del&gt;27,41,42]. Some of this self-collected data comes from Twitter, some from other sites. The remaining papers use the detail of public datasets. The most frequently used public datasets are Passau-SFCH, UR-FUNNY, Pun of the Day and the 16000 One-liner.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the literature we surveyed, there are two types of dataset sources for humor recognition, the first in which is data collected on its own data to the requirements of the task, and the second using public datasets in [[#tab-2|Table 2]]. The first case contain 10 papers, which are [15,17,20,21,24,25,26,27,41,42]. Some of this self-collected data comes from Twitter, some from other sites. The remaining papers use the detail of public datasets. The most frequently used public datasets are Passau-SFCH, UR-FUNNY, Pun of the Day and the 16000 One-liner.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;div class=&amp;quot;center&amp;quot; style=&amp;quot;font-size: 75%;&amp;quot;&amp;gt;'''Table 2'''. Summary of dataset from surveyed articles&amp;lt;/div&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;div class=&amp;quot;center&amp;quot; style=&amp;quot;font-size: 75%;&amp;quot;&amp;gt;'''Table 2'''. Summary of dataset from surveyed articles&amp;lt;/div&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mw_drafts_scipedia-sc_mwd_:diff:version:1.11a:oldid:291206:newid:291207 --&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291206&amp;oldid=prev</id>
		<title>Rimni at 13:04, 31 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291206&amp;oldid=prev"/>
				<updated>2024-01-31T13:04:36Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:04, 31 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l97&quot; &gt;Line 97:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 97:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Xiong et al. [33] proposed a humor identification model (MLSN) using popular humor theory and deep learning approaches for humor task. By incorporating the incoherence, phonetic properties and vagueness of a humorous statement as semantic attributes, the model automatically recognizes whether a statement includes a humorous utterance. The results show that the new model has better humor recognition accuracy and can contribute to discourse understanding research on three datasets.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Xiong et al. [33] proposed a humor identification model (MLSN) using popular humor theory and deep learning approaches for humor task. By incorporating the incoherence, phonetic properties and vagueness of a humorous statement as semantic attributes, the model automatically recognizes whether a statement includes a humorous utterance. The results show that the new model has better humor recognition accuracy and can contribute to discourse understanding research on three datasets.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Christ et al. [34] have three datasets for this Multimodal Sentiment Analysis Challenge (MuSe) 2022, the Passau Spontaneous Football Coach Humor (Passau-SFCH) dataset, the Hume-Reaction dataset, the Ulm-Trier Social Stress Test (Ulm-TSST) dataset. For each sub-challenge, a deep learning recurrent neural network with LSTM cells was employed to determine benchmark on the test divisions. They recorded an area under the curve (AUC) of .8480 for MuSe humor; a mean (from 7 classes) Pearson's correlation coefficient (&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;ð&lt;/del&gt;) of .2801 for MuSe response, as well as a concordance correlation coefficient (CCC) of .4931 and .4761 for valence and arousal in MuSe stress, respectively.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Christ et al. [34] have three datasets for this Multimodal Sentiment Analysis Challenge (MuSe) 2022, the Passau Spontaneous Football Coach Humor (Passau-SFCH) dataset, the Hume-Reaction dataset, the Ulm-Trier Social Stress Test (Ulm-TSST) dataset. For each sub-challenge, a deep learning recurrent neural network with LSTM cells was employed to determine benchmark on the test divisions. 
They recorded an area under the curve (AUC) of .8480 for MuSe humor; a mean (from 7 classes) Pearson's correlation coefficient (&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt; \delta&amp;lt;/math&amp;gt;&lt;/ins&gt;) of .2801 for MuSe response, as well as a concordance correlation coefficient (CCC) of .4931 and .4761 for valence and arousal in MuSe stress, respectively.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To aid humor understanding, they elaborated on incongruity and ambiguity and suggested an internal/external attention neural network (IEANN) for humor recognition [35]. To address incongruity and ambiguity in humorous texts, IEANN combined two different types of attentional mechanisms. At the same time, in order to verify the performance and reliability of the model, extensive experiments were conducted on two humor datasets.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To aid humor understanding, they elaborated on incongruity and ambiguity and suggested an internal/external attention neural network (IEANN) for humor recognition [35]. To address incongruity and ambiguity in humorous texts, IEANN combined two different types of attentional mechanisms. At the same time, in order to verify the performance and reliability of the model, extensive experiments were conducted on two humor datasets.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mw_drafts_scipedia-sc_mwd_:diff:version:1.11a:oldid:291205:newid:291206 --&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291205&amp;oldid=prev</id>
		<title>Rimni at 12:56, 31 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291205&amp;oldid=prev"/>
				<updated>2024-01-31T12:56:50Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 12:56, 31 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l739&quot; &gt;Line 739:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 739:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[24] Chen R., Rau P.-L.P. Deep learning model for humor recognition of different cultures. In Cross-Cultural Design. Experience and Product Design Across Cultures, 13th International Conference, CCD 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I 23 Springer International Publishing, 373-389, 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[24] Chen R., Rau P.-L.P. Deep learning model for humor recognition of different cultures. In Cross-Cultural Design. Experience and Product Design Across Cultures, 13th International Conference, CCD 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I 23 Springer International Publishing, 373-389, 2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[25] Prajapati P., Jaiswal A., Aastha, Shilpi, Neha, Sachdeva N.&amp;#160; Empirical &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Analysis &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Detection Using Deep Learning &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Machine Learning &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Kaggle Corpus, &lt;/del&gt;International Conference on Advancements in Interdisciplinary Research&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;:&lt;/del&gt;300-312&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[25] Prajapati P., Jaiswal A., Aastha, Shilpi, Neha, Sachdeva N.&amp;#160; Empirical &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;analysis &lt;/ins&gt;of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor detection using deep learning &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;machine learning &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;kaggle corpus. AIR 2022: &lt;/ins&gt;International Conference on Advancements in Interdisciplinary Research&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;300-312&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[26] Li D., Rzepka R., Ptaszynski M., Araki K. HEMOS: A novel deep learning-based fine-grained humor detecting method for sentiment analysis of social media&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Information Processing &amp;amp; Management, 57 (6)&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[26] Li D., Rzepka R., Ptaszynski M., Araki K. HEMOS: A novel deep learning-based fine-grained humor detecting method for sentiment analysis of social media&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Information Processing &amp;amp; Management, 57(6)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, 102290, &lt;/ins&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[27] Ziser Y., Kravi E., Carmel D. Humor &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Detection &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Product Question Answering Systems, &lt;/del&gt;Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 519-528, 2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[27] Ziser Y., Kravi E., Carmel D. Humor &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;detection &lt;/ins&gt;in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;product question answering systems. &lt;/ins&gt;Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 519-528, 2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[28] Mahajan R., Zaveri M. Humor identification using affect based content in target text&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Journal of Intelligent &amp;amp; Fuzzy Systems, 39 (1)&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;: &lt;/del&gt;697-708&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[28] Mahajan R., Zaveri M. Humor identification using affect based content in target text&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Journal of Intelligent &amp;amp; Fuzzy Systems, 39(1) 697-708&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[29] Hasan M.K., Lee S., Rahman W., Zadeh A. Humor &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Knowledge Enriched Transformer &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Understanding Multimodal Humor, &lt;/del&gt;In Proceedings of the AAAI &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;conference &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;artificial intelligence&lt;/del&gt;, 12972-12980, 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[29] Hasan M.K., Lee S., Rahman W., Zadeh A. Humor &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;knowledge enriched transformer &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;understanding multimodal humor. &lt;/ins&gt;In Proceedings of the AAAI &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Conference &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Artificial Intelligence&lt;/ins&gt;, 12972-12980, 2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

</feed>
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[30] Yang D., Lavie A., Dyer C., Hovy E. Humor &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Recognition &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Anchor Extraction, &lt;/del&gt;Proceedings of the 2015 &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;conference &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;empirical methods &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;natural language processing&lt;/del&gt;, 2367-2376, 2015.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[30] Yang D., Lavie A., Dyer C., Hovy E. Humor &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;recognition &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor anchor extraction. &lt;/ins&gt;Proceedings of the 2015 &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Conference &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Empirical Methods &lt;/ins&gt;in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Natural Language Processing&lt;/ins&gt;, 2367-2376, 2015.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[31] Xu H., Liu W., Liu J., Li M., Feng Y., Peng Y., Shi Y., Sun X., Wang M. Hybrid &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Multimodal Fusion &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Detection, &lt;/del&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge, 15-21, 2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[31] Xu H., Liu W., Liu J., Li M., Feng Y., Peng Y., Shi Y., Sun X., Wang M. Hybrid &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;multimodal fusion &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor detection. &lt;/ins&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge, 15-21, 2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[32] Hasan M.K., Rahman W., Zadeh A., Zhong J. UR-FUNNY A &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Multimodal Language Dataset &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Understanding Humor, &lt;/del&gt;arXiv preprint arXiv:1904.06618&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[32] Hasan M.K., Rahman W., Zadeh A., Zhong J. UR-FUNNY A &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;multimodal language dataset &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;understanding humor. &lt;/ins&gt;arXiv preprint arXiv:1904.06618&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[33] Xiong S., Wang R., Huang X., Chen Z. Multidimensional &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Latent Semantic Networks &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Text Humor Recognition, &lt;/del&gt;Sensors (Basel), 22 (15)&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[33] Xiong S., Wang R., Huang X., Chen Z. Multidimensional &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;latent semantic networks &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;text humor recognition. &lt;/ins&gt;Sensors (Basel), 22(15)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, 5509, &lt;/ins&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[34] Christ L., Amiriparian S., Baird A., Tzirakis P., Kathan A., Müller N., Stappen L., Meßner E.-M., König A., Cowen A., Cambria E., Schuller B.W. The MuSe 2022 &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Multimodal Sentiment Analysis Challenge, &lt;/del&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge, 5-14, 2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[34] Christ L., Amiriparian S., Baird A., Tzirakis P., Kathan A., Müller N., Stappen L., Meßner E.-M., König A., Cowen A., Cambria E., Schuller B.W. The MuSe 2022 &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;multimodal sentiment analysis challenge. &lt;/ins&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge, 5-14, 2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[35] Fan X., Lin H., Yang L., Diao Y., Shen C., Chu Y., Zou Y. Humor detection via an internal and external neural network&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Neurocomputing, 394 105-111&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[35] Fan X., Lin H., Yang L., Diao Y., Shen C., Chu Y., Zou Y. Humor detection via an internal and external neural network&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Neurocomputing, 394&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;:&lt;/ins&gt;105-111&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[36] Peng-Yu C., Von-Wun S. Humor recognition using deep learning&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Proceedings of the 2018 &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;conference &lt;/del&gt;of the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;north american chapter &lt;/del&gt;of the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;association &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;computational linguistics&lt;/del&gt;: Human &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;language technologies&lt;/del&gt;,&amp;#160; 2018.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[36] Peng-Yu C., Von-Wun S. Humor recognition using deep learning&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Proceedings of the 2018 &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Conference &lt;/ins&gt;of the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;North American Chapter &lt;/ins&gt;of the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Association &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Computational Linguistics&lt;/ins&gt;: Human &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Language Technologies&lt;/ins&gt;,&amp;#160; &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;vol. 2, 113–117, &lt;/ins&gt;2018.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[37] Fan X., Lin H., Yang L., Diao Y., Shen C., Chu Y., Zhang T. Phonetics and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Ambiguity Comprehension Gated Attention Network &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Recognition, &lt;/del&gt;Complexity, 1-9&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[37] Fan X., Lin H., Yang L., Diao Y., Shen C., Chu Y., Zhang T. Phonetics and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;ambiguity comprehension gated attention network &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor recognition. &lt;/ins&gt;Complexity, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;2020:&lt;/ins&gt;1-9&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[38] Chen C., Zhang P. Integrating &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Cross&lt;/del&gt;-modal &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Interactions &lt;/del&gt;via &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Latent Representation Shift &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Multi&lt;/del&gt;-modal &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Detection, &lt;/del&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge, 23-28, 2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[38] Chen C., Zhang P. Integrating &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;cross&lt;/ins&gt;-modal &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;interactions &lt;/ins&gt;via &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;latent representation shift &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;multi&lt;/ins&gt;-modal &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor detection. &lt;/ins&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge, 23-28, 2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[39] Mondal A., Sharma R. Team KGP at SemEval-2021 Task 7: A &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Deep Neural System &lt;/del&gt;to &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Detect, &lt;/del&gt;Proceedings of the 15th &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;international workshop &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;semantic evaluation &lt;/del&gt;(SemEval-2021), &lt;del class=&quot;diffchange diffchange-inline&quot;&gt; &lt;/del&gt;2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[39] Mondal A., Sharma R. Team KGP at SemEval-2021 Task 7: A &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;deep neural system &lt;/ins&gt;to &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;detect. &lt;/ins&gt;Proceedings of the 15th &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;International Workshop &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Semantic Evaluation &lt;/ins&gt;(SemEval-2021), &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;1169–1174, &lt;/ins&gt;2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[40] Wei H., Hui C., Alexander G., Amir Z. Bi-&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Bimodal Modality Fusion &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Correlation&lt;/del&gt;-&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Controlled Multimodal Sentiment Analysis, &lt;/del&gt;Proceedings of the 2021 International Conference on Multimodal Interaction,&amp;#160; 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[40] Wei H., Hui C., Alexander G., Amir Z. Bi-&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;bimodal modality fusion &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;correlation&lt;/ins&gt;-&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;controlled multimodal sentiment analysis. &lt;/ins&gt;Proceedings of the 2021 International Conference on Multimodal Interaction,&amp;#160; &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;6–15, &lt;/ins&gt;2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[41] Arunima J., Monika, Mathur A., Prachi&amp;#160; Automatic &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humour Detection &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Tweets &lt;/del&gt;using &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Soft Computing Paradigms, &lt;/del&gt;2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (Com-IT-Con), India&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[41] Arunima J., Monika, Mathur A., Prachi&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, Sheena Ma. &lt;/ins&gt; Automatic &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humour detection &lt;/ins&gt;in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;tweets &lt;/ins&gt;using &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;soft computing paradigms. &lt;/ins&gt;2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (Com-IT-Con)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, Faridabad&lt;/ins&gt;, India&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, 172-176, &lt;/ins&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[42] Chen P.-Y., Soo V.-W. Humor recognition using deep learning&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 113-117, 2018.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[42] Chen P.-Y., Soo V.-W. Humor recognition using deep learning&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 113-117, 2018.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[43] Burak A.T., Murat G., Ibrahim T. Database for an emotion recognition system based on EEG signals and various computer games–GAMEEMO&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Biomedical Signal Processing Control, 60&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;101951&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[43] Burak A.T., Murat G., Ibrahim T. Database for an emotion recognition system based on EEG signals and various computer games–GAMEEMO&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Biomedical Signal Processing Control, 60&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;101951&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[44] Santiago C., Devamanyu H., Verónica P.-R., Roger Z., Rada M., Poria S. Towards multimodal sarcasm detection&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;arXiv preprint arXiv:.01815&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[44] Santiago C., Devamanyu H., Verónica P.-R., Roger Z., Rada M., Poria S. Towards multimodal sarcasm detection&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;arXiv preprint arXiv:.01815&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291106&amp;oldid=prev</id>
		<title>Rimni at 09:14, 31 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291106&amp;oldid=prev"/>
				<updated>2024-01-31T09:14:55Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 09:14, 31 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l687&quot; &gt;Line 687:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 687:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==References==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==References==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;div class=&amp;quot;auto&amp;quot; style=&amp;quot;text-align: left;width: auto; margin-left: auto; margin-right: auto;font-size: 85%;&amp;quot;&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[1] Ramakristanaiah C., Namratha P., Ganiya R.K., Reddy M.R. A survey on humor detection methods in communications. 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), IEEE, 668-674, 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[1] Ramakristanaiah C., Namratha P., Ganiya R.K., Reddy M.R. A survey on humor detection methods in communications. 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), IEEE, 668-674, 2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mw_drafts_scipedia-sc_mwd_:diff:version:1.11a:oldid:291105:newid:291106 --&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291105&amp;oldid=prev</id>
		<title>Rimni at 14:09, 30 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291105&amp;oldid=prev"/>
				<updated>2024-01-30T14:09:43Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 14:09, 30 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l696&quot; &gt;Line 696:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 696:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Attardo S. Irony as relevant inappropriateness. Journal of Pragmatics, 32(6):793-826. 2000.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Attardo S. Irony as relevant inappropriateness. Journal of Pragmatics, 32(6):793-826. 2000.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[5] Shahe K. Humour styles, personality, and well‐being among Lebanese university students&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;European &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;journal &lt;/del&gt;of Personality, 18 (3): 209-219&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2004.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[5] Shahe K. Humour styles, personality, and well‐being among Lebanese university students&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;European &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Journal &lt;/ins&gt;of Personality, 18(3):209-219&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2004.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[6] Sonja U. The function of self-disclosure on social network sites: Not only intimate, but also positive and entertaining self-disclosures increase the feeling of connection&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Computers in Human Behavior, 45 1-10&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2015.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[6] Sonja U. The function of self-disclosure on social network sites: Not only intimate, but also positive and entertaining self-disclosures increase the feeling of connection&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Computers in Human Behavior, 45&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;:&lt;/ins&gt;1-10&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2015.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[7] Tajfel H., Turner J.C., Austin W.G., Worchel S. An integrative theory of intergroup conflict&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Organizational &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;identity&lt;/del&gt;: A reader, 56 (65): 9780203505984-16&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;1979.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[7] Tajfel H., Turner J.C., Austin W.G., Worchel S. An integrative theory of intergroup conflict&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Organizational &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Identity&lt;/ins&gt;: A reader, 56(65):9780203505984-16&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;1979.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[8] Martin R.A., Ford T.&amp;#160; The psychology of humor: An integrative approach&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, Academic press&lt;/del&gt;.2018.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[8] Martin R.A., Ford T.&amp;#160; The psychology of humor: An integrative approach. &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Academic Press, &lt;/ins&gt;2018.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[9] Giora R. Understanding figurative and literal language: &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;The graded salience hypothesis, &lt;/del&gt;The graded salience hypothesis. 1997.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[9] Giora R. Understanding figurative and literal language: The graded salience hypothesis. &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Cognitive Linguistics, 8(3):183-206, &lt;/ins&gt;1997.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[10] Gibbs R.W.&amp;#160; The poetics of mind: Figurative thought, language, and understanding&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Cambridge University Press&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;.&lt;/del&gt;1994.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[10] Gibbs R.W.&amp;#160; The poetics of mind: Figurative thought, language, and understanding&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Cambridge University Press&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;1994.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[11] Clark H.H.&amp;#160; Using language&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, Cambridge university press&lt;/del&gt;.1996.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[11] Clark H.H.&amp;#160; Using language. &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Cambridge University Press, &lt;/ins&gt;1996.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[12] Purcell D., Brown M.S., Gokmen M. Achmed the dead terrorist and humor in popular geopolitics, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Achmed the dead terrorist and humor in popular geopolitics, 75 373-385. &lt;/del&gt;2010.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[12] Purcell D., Brown M.S., Gokmen M. Achmed the dead terrorist and humor in popular geopolitics&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. GeoJournal 75:373–385&lt;/ins&gt;, 2010.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[13] Mao J., Liu W. A BERT-based &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Approach &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Automatic Humor &lt;/del&gt; &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Detecting &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Scoring, &lt;/del&gt;In [mailto:IberLEF@ IberLEF@] SEPLN, 197-202&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[13] Mao J., Liu W. A BERT-based &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;approach &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;automatic humor &lt;/ins&gt; &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;detecting &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;scoring. &lt;/ins&gt;In [mailto:IberLEF@ IberLEF@] SEPLN, 197-202&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2019.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[14] Heaton J., Givigi S. A &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Deep &lt;/del&gt;CNN &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;System &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Classification &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Emotions Using &lt;/del&gt;EEG &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Signals, &lt;/del&gt;2022 IEEE International Systems Conference (SysCon), 1-7, 2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[14] Heaton J., Givigi S. A &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;deep &lt;/ins&gt;CNN &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;system &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;classification &lt;/ins&gt;of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;emotions using &lt;/ins&gt;EEG &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;signals. &lt;/ins&gt;2022 IEEE International Systems Conference (SysCon)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;,&amp;#160; Montreal, QC, Canada&lt;/ins&gt;, 1-7, 2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[15] Xie J., Tang M., Xiong J. A &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Multimodal &lt;/del&gt;Chinese &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Classification Algorithm Based &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Interactive Attention &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Text &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Speech Fusion, &lt;/del&gt;Available at SSRN 4241914&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[15] Xie J., Tang M., Xiong J. A &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;multimodal &lt;/ins&gt;Chinese &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor classification algorithm based &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;interactive attention &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;text &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;speech fusion. &lt;/ins&gt;Available at SSRN 4241914&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[16] Kathan A., Amiriparian S., Christ L., Triantafyllopoulos A., Müller N., König A., Schuller B.W.&amp;#160; A &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Personalised Approach &lt;/del&gt;to &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Audiovisual Humour Recognition &lt;/del&gt;and its &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Individual&lt;/del&gt;-level &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Fairness, &lt;/del&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;:&lt;/del&gt;29-36&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[16] Kathan A., Amiriparian S., Christ L., Triantafyllopoulos A., Müller N., König A., Schuller B.W.&amp;#160; A &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;personalised approach &lt;/ins&gt;to &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;audiovisual humour recognition &lt;/ins&gt;and its &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;individual&lt;/ins&gt;-level &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;fairness. 
&lt;/ins&gt;Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;29-36&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[17] Godoy F.C.D. Advancing &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor&lt;/del&gt;-&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Focused Sentiment Analysis &lt;/del&gt;through improved contextualized&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;34th Conference on Neural Information Processing Systems (NeurIPS 2020),&amp;#160; 2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[17] Godoy F.C.D. Advancing &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor&lt;/ins&gt;-&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;focused sentiment analysis &lt;/ins&gt;through improved contextualized&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;34th Conference on Neural Information Processing Systems (NeurIPS 2020),&amp;#160; 2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[18] Chauhan D.S., R D.S., Ekbal A., Bhattacharyya P. All-in-One: A &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Deep Attentive Multi&lt;/del&gt;-task &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Learning Framework &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humour&lt;/del&gt;, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Sarcasm&lt;/del&gt;, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Offensive&lt;/del&gt;, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Motivation&lt;/del&gt;, and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Sentiment &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Memes, &lt;/del&gt;Proceedings ofthe 1st Conference ofthe Asia-Pacific Chapter &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;ofthe &lt;/del&gt;Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 281-290, 2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[18] Chauhan D.S., R D.S., Ekbal A., Bhattacharyya P. All-in-One: A &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;deep attentive multi&lt;/ins&gt;-task &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;learning framework &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humour&lt;/ins&gt;, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;sarcasm&lt;/ins&gt;, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;offensive&lt;/ins&gt;, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;motivation&lt;/ins&gt;, and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;sentiment &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;memes. &lt;/ins&gt;Proceedings of the 1st Conference of the Asia-Pacific Chapter &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;of the &lt;/ins&gt;Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 281-290, 2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[19] Xu Z., Xie Y. Attention &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Method Analysis &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Sentiment Analysis &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Level Evaluation, &lt;/del&gt;2022 14th International Conference on Computer Research and Development (ICCRD), 178-185, 2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[19] Xu Z., Xie Y. Attention &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;method analysis &lt;/ins&gt;in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;sentiment analysis &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor level evaluation. &lt;/ins&gt;2022 14th International Conference on Computer Research and Development (ICCRD), 178-185, 2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[20] &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Francesco &lt;/del&gt;Barbieri, Saggion H. Automatic &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Detection &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Irony &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humour &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Twitter, ICCC&lt;/del&gt;. 2014.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[20] Barbieri &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;F.&lt;/ins&gt;, Saggion H. Automatic &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;detection &lt;/ins&gt;of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;irony &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humour &lt;/ins&gt;in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Twitter&lt;/ins&gt;. &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;ICCC, &lt;/ins&gt;2014.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[21] Annamoradnejad I. ColBERT &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Using &lt;/del&gt;BERT &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Sentence Embedding &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Detection, &lt;/del&gt;arXiv preprint arXiv:2004.12765 1.3&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[21] Annamoradnejad I. ColBERT &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;using &lt;/ins&gt;BERT &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;sentence embedding &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor detection. &lt;/ins&gt;arXiv preprint arXiv:2004.12765 1.3&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;,&amp;#160; &lt;/ins&gt;2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[22] Wang C., Xin S., Yi M.&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, Zhang Z.&amp;#160; &lt;/del&gt;Comparative study on deep learning models in humor detection, 2021 International Conference on Neural Networks, Information and Communication Engineering&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[22] Wang C., Xin S., Yi M. Comparative study on deep learning models in humor detection&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. Proceedings Volume 11933&lt;/ins&gt;, 2021 International Conference on Neural Networks, Information and Communication Engineering&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[23] Dario B., Fung P. Deep &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Learning &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Audio &lt;/del&gt;and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Language Features &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Prediction, &lt;/del&gt;Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16),&amp;#160; 2016.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[23] Dario B., Fung P. Deep &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;learning &lt;/ins&gt;of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;audio &lt;/ins&gt;and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;language features &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor prediction. &lt;/ins&gt;Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16),&amp;#160; &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;496–501, &lt;/ins&gt;2016.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[24] Chen R., Rau P.-L.P. Deep &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Learning Model &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Recognition &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Different Cultures, &lt;/del&gt;In Cross-Cultural Design. Experience and Product Design Across Cultures&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;: &lt;/del&gt;13th International Conference, CCD 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I 23 Springer International Publishing, 373-389, 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[24] Chen R., Rau P.-L.P. Deep &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;learning model &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor recognition &lt;/ins&gt;of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;different cultures. &lt;/ins&gt;In Cross-Cultural Design. Experience and Product Design Across Cultures&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;13th International Conference, CCD 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings, Part I 23 Springer International Publishing, 373-389, 2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[25] Prajapati P., Jaiswal A., Aastha, Shilpi, Neha, Sachdeva N. Empirical analysis of humor detection using deep learning and machine learning on Kaggle corpus. International Conference on Advancements in Interdisciplinary Research, 300-312, 2022.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[25] Prajapati P., Jaiswal A., Aastha, Shilpi, Neha, Sachdeva N. Empirical analysis of humor detection using deep learning and machine learning on Kaggle corpus. International Conference on Advancements in Interdisciplinary Research, 300-312, 2022.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mw_drafts_scipedia-sc_mwd_:diff:version:1.11a:oldid:291104:newid:291105 --&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291104&amp;oldid=prev</id>
		<title>Rimni at 12:20, 30 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291104&amp;oldid=prev"/>
				<updated>2024-01-30T12:20:58Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 12:20, 30 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l155&quot; &gt;Line 155:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 155:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Finding a mechanism to adapt to the characteristics of each individual&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Finding a mechanism to adapt to the characteristics of each individual&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Extending the method with imbalanced datasets&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Extending the method with imbalanced datasets&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;the &lt;/del&gt;Passau Spontaneous Football Coach Humour (Passau-SFCH) dataset&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;The &lt;/ins&gt;Passau Spontaneous Football Coach Humour (Passau-SFCH) dataset&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|AUC of audio modality&amp;#160; 0.773;&amp;#160; AUC of video modality 0.925 &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|AUC of audio modality&amp;#160; 0.773;&amp;#160; AUC of video modality 0.925 &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-style=&amp;quot;text-align:left&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-style=&amp;quot;text-align:left&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;!-- diff cache key mw_drafts_scipedia-sc_mwd_:diff:version:1.11a:oldid:291103:newid:291104 --&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291103&amp;oldid=prev</id>
		<title>Rimni at 12:18, 30 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291103&amp;oldid=prev"/>
				<updated>2024-01-30T12:18:50Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 12:18, 30 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l357&quot; &gt;Line 357:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 357:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|CMU-MOSI, Accuracy&amp;#160;  45.0%, F1 84.3%, MAE 0.776;&amp;#160; CMU-MOSEI&amp;#160; Accuracy&amp;#160; 86.2%, &amp;lt;br&amp;gt; F1 86.1%,&amp;#160; MAE&amp;#160; 0.529,&amp;#160; UR-FUNNY,&amp;#160; Accuracy&amp;#160; 71.68%&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|CMU-MOSI, Accuracy&amp;#160;  45.0%, F1 84.3%, MAE 0.776;&amp;#160; CMU-MOSEI&amp;#160; Accuracy&amp;#160; 86.2%, &amp;lt;br&amp;gt; F1 86.1%,&amp;#160; MAE&amp;#160; 0.529,&amp;#160; UR-FUNNY,&amp;#160; Accuracy&amp;#160; 71.68%&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-style=&amp;quot;text-align:left&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-style=&amp;quot;text-align:left&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|[41]&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;;text-align:center&lt;/ins&gt;;&amp;quot;|[41]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Twitter&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Twitter&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|A new CNN model&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|A new CNN model&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l402&quot; &gt;Line 402:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 402:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|New model: Accuracy&amp;#160; &amp;#160; 0.897, Recall 0.903 (16000 One-Liners)&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|New model: Accuracy&amp;#160; &amp;#160; 0.897, Recall 0.903 (16000 One-Liners)&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|}&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==5. Datasets and languages==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==5. Datasets and languages==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291093&amp;oldid=prev</id>
		<title>Rimni at 15:46, 29 January 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291093&amp;oldid=prev"/>
				<updated>2024-01-29T15:46:51Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 15:46, 29 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l677&quot; &gt;Line 677:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 677:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To improve model accuracy, we tend to rely on sophisticated models (deep network hierarchies, large numbers of parameters), or even ensembles of multiple models, which require substantial computational resources and a huge dataset to support such a &amp;quot;big&amp;quot; model. However, when deploying the service, it becomes clear that this &amp;quot;big&amp;quot; model is slow at inference and consumes a lot of memory. Knowledge distillation could therefore be another direction to explore for humor detection.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To improve model accuracy, we tend to rely on sophisticated models (deep network hierarchies, large numbers of parameters), or even ensembles of multiple models, which require substantial computational resources and a huge dataset to support such a &amp;quot;big&amp;quot; model. However, when deploying the service, it becomes clear that this &amp;quot;big&amp;quot; model is slow at inference and consumes a lot of memory. Knowledge distillation could therefore be another direction to explore for humor detection.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=9. Conclusion=&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;=&lt;/ins&gt;=9. Conclusion&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;=&lt;/ins&gt;=&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To our knowledge, this is the first paper reviewing humor detection. There are a number of significant insights from this survey. Studies on humor detection have been carried out almost exclusively for the English language. However, other languages, such as French and Malay, have not yet been explored and should be taken into consideration in future studies. The survey also shows that almost all works on humor detection based on deep learning use attention mechanisms and multimodal techniques. With the emergence of large models, models for humor detection have been newly inspired, and their recognition performance has greatly improved. However, pre-processing is rarely mentioned. In addition, most of the data collected by the authors themselves come from Twitter and were not published publicly. The use of privately generated data results in biased performance, which makes it difficult to compare new research with benchmark research. There are many other areas of application that have yet to be explored, such as Weibo and YouTube.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To our knowledge, this is the first paper reviewing humor detection. There are a number of significant insights from this survey. Studies on humor detection have been carried out almost exclusively for the English language. However, other languages, such as French and Malay, have not yet been explored and should be taken into consideration in future studies. 
The survey also shows that almost all works on humor detection based on deep learning use attention mechanisms and multimodal techniques. With the emergence of large models, models for humor detection have been newly inspired, and their recognition performance has greatly improved. However, pre-processing is rarely mentioned. In addition, most of the data collected by the authors themselves come from Twitter and were not published publicly. The use of privately generated data results in biased performance, which makes it difficult to compare new research with benchmark research. There are many other areas of application that have yet to be explored, such as Weibo and YouTube.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l683&quot; &gt;Line 683:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 683:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Over the past 10 years, remarkable progress has been made in humor recognition. This article surveyed 29 papers and has addressed some significant topics in the research landscape of humor detection. A detailed survey of the literature has been carried out with an emphasis on humor detection, e.g., deep learning-based approaches, techniques (pre-processing, attention mechanisms and multimodality), analysis of datasets, definition of the problem, and humor studies in linguistics. It also discusses, but is not limited to, some promising future directions in the final section of the article.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Over the past 10 years, remarkable progress has been made in humor recognition. This article surveyed 29 papers and has addressed some significant topics in the research landscape of humor detection. A detailed survey of the literature has been carried out with an emphasis on humor detection, e.g., deep learning-based approaches, techniques (pre-processing, attention mechanisms and multimodality), analysis of datasets, definition of the problem, and humor studies in linguistics. It also discusses, but is not limited to, some promising future directions in the final section of the article.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Acknowledgments&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;==&lt;/ins&gt;Acknowledgments&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This work is supported by SICHUAN INTERNATIONAL STUDIES UNIVERSITY 2023 Planning Project (sisu202306)&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This work is supported by SICHUAN INTERNATIONAL STUDIES UNIVERSITY 2023 Planning Project (sisu202306)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;References&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;==&lt;/ins&gt;References&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[1] Ramakristanaiah C., Namratha P., Ganiya R.K., Reddy M.R. A &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Survey &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Detection Methods &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Communications, &lt;/del&gt;2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/del&gt;IEEE, 668-674, 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[1] Ramakristanaiah C., Namratha P., Ganiya R.K., Reddy M.R. A &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;survey &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor detection methods &lt;/ins&gt;in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;communications. &lt;/ins&gt;2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;IEEE, 668-674, 2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[2] Antony K., Panagiotis A. Computational &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Humor Recognition&lt;/del&gt;: A Systematic Literature Review. 2023.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[2] Antony K., Panagiotis A. Computational &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;humor recognition&lt;/ins&gt;: A Systematic Literature Review. &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt; Research Square, &lt;/ins&gt;2023.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] Attardo S.&amp;#160; Humorous texts: A semantic and pragmatic analysis, Walter de Gruyter&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;.&lt;/del&gt;2001.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] Attardo S.&amp;#160; Humorous texts: A semantic and pragmatic analysis&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. Series Humor Research, vol. 6&lt;/ins&gt;, Walter de Gruyter&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2001.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Attardo S. Irony as relevant inappropriateness&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/del&gt;Journal of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;pragmatics&lt;/del&gt;, 32 (6): 793-826. 2000.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Attardo S. Irony as relevant inappropriateness&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Journal of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Pragmatics&lt;/ins&gt;, 32(6):793-826. 2000.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[5] Shahe K. Humour styles, personality, and well‐being among Lebanese university students. European Journal of Personality, 18(3):209-219. 2004.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[5] Shahe K. Humour styles, personality, and well‐being among Lebanese university students. European Journal of Personality, 18(3):209-219. 2004.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291092&amp;oldid=prev</id>
		<title>Rimni: /* 8. Future work */</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Ren_et_al_2023b&amp;diff=291092&amp;oldid=prev"/>
				<updated>2024-01-29T15:18:26Z</updated>
		
		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;8. Future work&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 15:18, 29 January 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l658&quot; &gt;Line 658:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 658:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==8. Future work==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==8. Future work==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Building on the discussion and research presented in the previous sections, we have outlined many directions for future research and identified the unresolved issues. In &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;figure 2，the &lt;/del&gt;most important future areas of research and challenges in humor detection were presented.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Building on the discussion and research presented in the previous sections, we have outlined many directions for future research and identified the unresolved issues. In &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;[[#img-2|Figure 2]], the &lt;/ins&gt;most important future areas of research and challenges in humor detection were presented.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These include unexplored data domains, dataset problems, language problems, hybrid methods, multimodal research, improving the performance of humor detection, mining useful features for humor detection, and knowledge distillation of humor detection models.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These include unexplored data domains, dataset problems, language problems, hybrid methods, multimodal research, improving the performance of humor detection, mining useful features for humor detection, and knowledge distillation of humor detection models.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Image:Draft_Ren_105609095-image2.png|492px]]&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;div id='img-2'&amp;gt;&amp;lt;/div&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin: 0em auto 0.1em auto;border-collapse: collapse;width:auto;&amp;quot; &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|-style=&amp;quot;background:white;&amp;quot;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|style=&amp;quot;text-align: center;padding:10px;&amp;quot;| &lt;/ins&gt;[[Image:Draft_Ren_105609095-image2.png|492px]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|-&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;| style=&amp;quot;background:#efefef;text-align:left;padding:10px;font-size: 85%;&amp;quot;| '''Figure 2'''. Future research directions for humor detection&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|}&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;div class=&amp;quot;center&amp;quot; style=&amp;quot;width: auto; margin-left: auto; margin-right: auto;&amp;quot;&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Figure 2, Future research directions for humor detection&amp;lt;/div&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;As can be seen from &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;figure &lt;/del&gt;2, there are still many fields in need of more detailed investigation and development. For instance, few standard datasets are available for humor detection and evaluation, and those that exist are small; humor detection would benefit from public standards and large datasets. Another problem that emerged from the survey is that almost all of the datasets are in English, while other languages remain largely unexplored in humor detection. One potential future trend is to hybridize several existing methods to overcome the drawbacks of each individual technique.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;As can be seen from &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;[[#img-&lt;/ins&gt;2&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;|Figure 2]]&lt;/ins&gt;, there are still many fields in need of more detailed investigation and development. For instance, few standard datasets are available for humor detection and evaluation, and those that exist are small; humor detection would benefit from public standards and large datasets. Another problem that emerged from the survey is that almost all of the datasets are in English, while other languages remain largely unexplored in humor detection. One potential future trend is to hybridize several existing methods to overcome the drawbacks of each individual technique.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;An open problem is to extract the implicit yet useful features for humor detection. Attention mechanisms are now a common means of identifying significant features; however, they require large amounts of data and incur a high computational cost. More efficient, usable, and less costly methods for feature extraction in humor detection remain to be explored.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;An open problem is to extract the implicit yet useful features for humor detection. Attention mechanisms are now a common means of identifying significant features; however, they require large amounts of data and incur a high computational cost. More efficient, usable, and less costly methods for feature extraction in humor detection remain to be explored.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	</feed>