<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Wang_Zhang_2024a</id>
		<title>Wang Zhang 2024a - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Wang_Zhang_2024a"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;action=history"/>
		<updated>2026-05-06T23:02:10Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314863&amp;oldid=prev</id>
		<title>18066876011 at 03:36, 10 December 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314863&amp;oldid=prev"/>
		<updated>2024-12-10T03:36:37Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;amp;diff=314863&amp;amp;oldid=314862&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>18066876011</name></author>
	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314862&amp;oldid=prev</id>
		<title>18066876011 at 03:03, 10 December 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314862&amp;oldid=prev"/>
		<updated>2024-12-10T03:03:47Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;amp;diff=314862&amp;amp;oldid=314861&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>18066876011</name></author>
	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314861&amp;oldid=prev</id>
		<title>18066876011 at 02:52, 10 December 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314861&amp;oldid=prev"/>
		<updated>2024-12-10T02:52:43Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 02:52, 10 December 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l19&quot; &gt;Line 19:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 19:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Abstract==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Abstract==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The current investigation delineates the efficacy of AI-facilitated detection of athletic postures within the realm of sports training. Employing a synthesis of literature review and empirical methodologies, data were amassed and scrutinized, affirming the study’s validity. The salient outcomes are manifold: (1) The frame difference algorithm efficaciously discerns inter-frame variances, evidencing pronounced adaptability and robustness, thereby enabling the recognition of weightlifting postures. (2) Confronting the challenge of negligible inter-frame disparities inherent in the frame difference algorithm, the research introduces a novel detection technique predicated on the cumulative inter-frame differences, which precisely pinpoints regions of posture alteration in weightlifting athletes. (3) Leveraging the dynamic space model of optical flow, the study ascertains the directional channel predicated on optical flow trajectory analyses, facilitating the identification of three distinct weightlifting postures: squatting, descending, and standing. (4) In alignment with the distinctive postural attributes of weightlifting athletes, a human posture paradigm was formulated, and a BP neural network classifier was deployed for both training and evaluative purposes, culminating in the successful differentiation of athlete from non-athlete entities within the training milieu. (5) The application of AI in posture recognition was extended to the scrutiny of pivotal postures and motions in weightlifting athletes, with experimental findings revealing a 98.21% accuracy rate in the recognition of force-exertion postures via the inter-frame difference method, and a flawless 100% accuracy in the identification of the apex and squatting postures. The enumeration of detected postures—encompassing knee extension, knee flexion, force application, squatting, and standing—through the poselet keyframe extraction approach, corresponded with the video count. Prospectively, AI’s role in athletic posture detection promises to augment coaches’ and athletes’ comprehension of their proficiencies and deficiencies, thereby steering training refinement and bolstering both the efficacy of training and the athletes’ caliber. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The current investigation delineates the efficacy of AI-facilitated detection of athletic postures within the realm of sports training. Employing a synthesis of literature review and empirical methodologies, data were amassed and scrutinized, affirming the study’s validity. The salient outcomes are manifold: (1) The frame difference algorithm efficaciously discerns inter-frame variances, evidencing pronounced adaptability and robustness, thereby enabling the recognition of weightlifting postures. 
(2) Confronting the challenge of negligible inter-frame disparities inherent in the frame difference algorithm, the research introduces a novel detection technique predicated on the cumulative inter-frame differences, which precisely pinpoints regions of posture alteration in weightlifting athletes. (3) Leveraging the dynamic space model of optical flow, the study ascertains the directional channel predicated on optical flow trajectory analyses, facilitating the identification of three distinct weightlifting postures: squatting, descending, and standing. (4) In alignment with the distinctive postural attributes of weightlifting athletes, a human posture paradigm was formulated, and a BP neural network classifier was deployed for both training and evaluative purposes, culminating in the successful differentiation of athlete from non-athlete entities within the training milieu. (5) The application of AI in posture recognition was extended to the scrutiny of pivotal postures and motions in weightlifting athletes, with experimental findings revealing a 98.21% accuracy rate in the recognition of force-exertion postures via the inter-frame difference method, and a flawless 100% accuracy in the identification of the apex and squatting postures. The enumeration of detected postures—encompassing knee extension, knee flexion, force application, squatting, and standing—through the poselet keyframe extraction approach, corresponded with the video count. Prospectively, AI’s role in athletic posture detection promises to augment coaches’ and athletes’ comprehension of their proficiencies and deficiencies, thereby steering training refinement and bolstering both the efficacy of training and the athletes’ &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt; &lt;/ins&gt;caliber. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;'''Keywords''': Sports training, athletic postures, AI-based detection&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;'''Keywords''': Sports training, athletic postures, AI-based detection&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>18066876011</name></author>
	</entry>
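	<!--
	Illustrative sketch (an editor's reading, not the authors' code): the abstract quoted
	in the diff above describes a frame-difference algorithm and a cumulative inter-frame
	difference used to localize posture change. A minimal NumPy version under those
	assumptions; grayscale frames of equal shape are assumed and the threshold `tau` is a
	hypothetical parameter, not taken from the paper.

	import numpy as np

	def frame_difference(prev, curr, tau=25):
	    # Binary motion mask from the absolute inter-frame difference.
	    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
	    return (diff > tau).astype(np.uint8)

	def accumulated_difference(frames, tau=25):
	    # Sum the per-frame masks; large values mark regions of repeated
	    # posture change, per point (2) of the abstract.
	    acc = np.zeros(frames[0].shape, dtype=np.int32)
	    for prev, curr in zip(frames, frames[1:]):
	        acc += frame_difference(prev, curr, tau)
	    return acc
	-->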

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314860&amp;oldid=prev</id>
		<title>18066876011 at 02:48, 10 December 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=314860&amp;oldid=prev"/>
		<updated>2024-12-10T02:48:05Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 02:48, 10 December 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l19&quot; &gt;Line 19:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 19:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Abstract==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Abstract==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The current investigation delineates the efficacy of AI-facilitated detection of athletic postures within the realm of sports training. Employing a synthesis of literature review and empirical methodologies, data were amassed and scrutinized, affirming the study’s validity. The salient outcomes are manifold: (1) The frame difference algorithm efficaciously discerns inter-frame variances, evidencing pronounced adaptability and robustness, thereby enabling the recognition of weightlifting postures. (2) Confronting the challenge of negligible inter-frame disparities inherent in the frame difference algorithm, the research introduces a novel detection technique predicated on the cumulative inter-frame differences, which precisely pinpoints regions of posture alteration in weightlifting athletes. (3) Leveraging the dynamic space model of optical flow, the study ascertains the directional channel predicated on optical flow trajectory analyses, facilitating the identification of three distinct weightlifting postures: squatting, descending, and standing. (4) In alignment with the distinctive postural attributes of weightlifting athletes, a human posture paradigm was formulated, and a BP neural network classifier was deployed for both training and evaluative purposes, culminating in the successful differentiation of athlete from non-athlete entities within the training milieu. (5) The application of AI in posture recognition was extended to the scrutiny of pivotal postures and motions in weightlifting athletes, with experimental findings revealing a 98.21% accuracy rate in the recognition of force-exertion postures via the inter-frame difference method, and a flawless 100% accuracy in the identification of the apex and squatting postures. The enumeration of detected postures—encompassing knee extension, knee flexion, force application, squatting, and standing—through the poselet keyframe extraction approach, corresponded with the video count. Prospectively, AI’s role in athletic posture detection promises to augment coaches’ and athletes’ comprehension of their proficiencies and deficiencies, thereby steering training refinement and bolstering both the efficacy of training and the athletes’ caliber.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The current investigation delineates the efficacy of AI-facilitated detection of athletic postures within the realm of sports training. Employing a synthesis of literature review and empirical methodologies, data were amassed and scrutinized, affirming the study’s validity. The salient outcomes are manifold: (1) The frame difference algorithm efficaciously discerns inter-frame variances, evidencing pronounced adaptability and robustness, thereby enabling the recognition of weightlifting postures. 
(2) Confronting the challenge of negligible inter-frame disparities inherent in the frame difference algorithm, the research introduces a novel detection technique predicated on the cumulative inter-frame differences, which precisely pinpoints regions of posture alteration in weightlifting athletes. (3) Leveraging the dynamic space model of optical flow, the study ascertains the directional channel predicated on optical flow trajectory analyses, facilitating the identification of three distinct weightlifting postures: squatting, descending, and standing. (4) In alignment with the distinctive postural attributes of weightlifting athletes, a human posture paradigm was formulated, and a BP neural network classifier was deployed for both training and evaluative purposes, culminating in the successful differentiation of athlete from non-athlete entities within the training milieu. (5) The application of AI in posture recognition was extended to the scrutiny of pivotal postures and motions in weightlifting athletes, with experimental findings revealing a 98.21% accuracy rate in the recognition of force-exertion postures via the inter-frame difference method, and a flawless 100% accuracy in the identification of the apex and squatting postures. The enumeration of detected postures—encompassing knee extension, knee flexion, force application, squatting, and standing—through the poselet keyframe extraction approach, corresponded with the video count. Prospectively, AI’s role in athletic posture detection promises to augment coaches’ and athletes’ comprehension of their proficiencies and deficiencies, thereby steering training refinement and bolstering both the efficacy of training and the athletes’ caliber. &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;'''Keywords''': Sports training, athletic postures, AI-based detection&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;'''Keywords''': Sports training, athletic postures, AI-based detection&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>18066876011</name></author>
	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304717&amp;oldid=prev</id>
		<title>Rimni at 14:05, 26 June 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304717&amp;oldid=prev"/>
		<updated>2024-06-26T14:05:50Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 14:05, 26 June 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l73&quot; &gt;Line 73:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 73:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin: 0em auto 0.1em auto;border-collapse: collapse;width:auto;&amp;quot; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin: 0em auto 0.1em auto;border-collapse: collapse;width:auto;&amp;quot; &amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-style=&amp;quot;background:white;&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-style=&amp;quot;background:white;&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|style=&amp;quot;text-align: center;padding:10px;&amp;quot;| [[Image:Draft_Wang_260601743-image2.png|&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;600px&lt;/del&gt;]]&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|style=&amp;quot;text-align: center;padding:10px;&amp;quot;| [[Image:Draft_Wang_260601743-image2.png|&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;500px&lt;/ins&gt;]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| style=&amp;quot;background:#efefef;text-align:left;padding:10px;font-size: 85%;&amp;quot;| '''Figure 2'''. Map of athletic postures detection in experimental phase&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| style=&amp;quot;background:#efefef;text-align:left;padding:10px;font-size: 85%;&amp;quot;| '''Figure 2'''. Map of athletic postures detection in experimental phase&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>
	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304716&amp;oldid=prev</id>
		<title>Rimni at 13:57, 26 June 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304716&amp;oldid=prev"/>
		<updated>2024-06-26T13:57:49Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:57, 26 June 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l85&quot; &gt;Line 85:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 85:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the realm of sports training, the formulation of a human posture feature model is instrumental for the identification and verification of weightlifting athletes based on distinct posture characteristics [18]. Such characteristics, encompassing exertion, apex reach, and squatting, are quantified through ratios of contour width to height, human body rectangularity, width-to-length ratio, perimeter squared-to-area ratio, and posture feature angles across four quadrants, culminating in the establishment of a posture feature model tailored for weightlifting athletes [19]. To discern between athlete and non-athlete targets, a BP neural network training strategy is employed, leveraging its renowned efficacy and precision in prediction, calibrated via mean square error (MSE) [20]. Mastery over the BP neural network classifier’s training steps is achieved to facilitate the detection of athletic postures, thereby laying the groundwork for refining sports training methodologies. In the classifier’s testing phase, weightlifting posture data are inputted, necessitating computational output from both hidden and output layers due to the algorithm’s classification nature. Upon completing the test training with all samples, the classifier unveils the results of weightlifting athletic posture recognition alongside the corresponding accuracy rate [21].&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the realm of sports training, the formulation of a human posture feature model is instrumental for the identification and verification of weightlifting athletes based on distinct posture characteristics [18]. Such characteristics, encompassing exertion, apex reach, and squatting, are quantified through ratios of contour width to height, human body rectangularity, width-to-length ratio, perimeter squared-to-area ratio, and posture feature angles across four quadrants, culminating in the establishment of a posture feature model tailored for weightlifting athletes [19]. To discern between athlete and non-athlete targets, a BP neural network training strategy is employed, leveraging its renowned efficacy and precision in prediction, calibrated via mean square error (MSE) [20]. Mastery over the BP neural network classifier’s training steps is achieved to facilitate the detection of athletic postures, thereby laying the groundwork for refining sports training methodologies. In the classifier’s testing phase, weightlifting posture data are inputted, necessitating computational output from both hidden and output layers due to the algorithm’s classification nature. Upon completing the test training with all samples, the classifier unveils the results of weightlifting athletic posture recognition alongside the corresponding accuracy rate [21].&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==3 Results and discussion==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==3&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. &lt;/ins&gt;Results and discussion==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===3.1 Experimental results===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===3.1 Experimental results===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>
	</entry>
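	<!--
	Illustrative sketch (assumptions flagged, not the authors' code): the paragraph quoted
	in the diff above describes a posture feature vector (contour width-to-height ratio,
	rectangularity, width-to-length ratio, perimeter-squared-to-area ratio, and four
	quadrant angles) classified by a BP neural network calibrated with mean square error.
	A minimal forward pass in NumPy; the layer sizes and sigmoid activation are assumptions.

	import numpy as np

	def sigmoid(x):
	    return 1.0 / (1.0 + np.exp(-x))

	rng = np.random.default_rng(0)
	W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # 8 posture features -> hidden layer
	W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)    # hidden layer -> athlete score

	def forward(x):
	    hidden = sigmoid(x @ W1 + b1)          # hidden-layer output
	    return sigmoid(hidden @ W2 + b2)       # output in (0, 1): athlete vs non-athlete

	def mse(pred, target):
	    return np.mean((pred - target) ** 2)   # the calibration metric named in the text
	-->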

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304715&amp;oldid=prev</id>
		<title>Rimni at 13:57, 26 June 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304715&amp;oldid=prev"/>
		<updated>2024-06-26T13:57:30Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:57, 26 June 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l123&quot; &gt;Line 123:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 123:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the intricate milieu of sports training, characterized by a plethora of participants and a heterogeneity of training schemata, objectives, and methodologies, the task of detecting athletic postures is rendered more arduous. A paramount challenge is the homogeneity observed amongst frames. Specifically, in the context of weightlifting training, the presence of barbells may occlude portions of the athlete’s movements, thereby impinging upon the precision of key frame extraction and elucidating the aforementioned issue of frame congruence. The categorization of weightlifting postures during the detection process presents its own set of hurdles, stemming from illogical classification demarcations and pronounced disparities in classification ratios, which in turn affect the veracity of the detection outcomes. Consequently, the augmentation of data through methodological refinement is imperative. This entails the meticulous delineation and annotation of target posture categories, culminating in the genesis of a robust target posture dataset. This dataset, buttressed by the BP neural network, is designed to bolster sports training efficacy and furnish precise guidance for the execution of athletic postures [22].&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the intricate milieu of sports training, characterized by a plethora of participants and a heterogeneity of training schemata, objectives, and methodologies, the task of detecting athletic postures is rendered more arduous. A paramount challenge is the homogeneity observed amongst frames. Specifically, in the context of weightlifting training, the presence of barbells may occlude portions of the athlete’s movements, thereby impinging upon the precision of key frame extraction and elucidating the aforementioned issue of frame congruence. The categorization of weightlifting postures during the detection process presents its own set of hurdles, stemming from illogical classification demarcations and pronounced disparities in classification ratios, which in turn affect the veracity of the detection outcomes. Consequently, the augmentation of data through methodological refinement is imperative. This entails the meticulous delineation and annotation of target posture categories, culminating in the genesis of a robust target posture dataset. This dataset, buttressed by the BP neural network, is designed to bolster sports training efficacy and furnish precise guidance for the execution of athletic postures [22].&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====3.2.2 Applicability of the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Poselet &lt;/del&gt;method====&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====3.2.2 Applicability of the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;poselet &lt;/ins&gt;method====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The poselet keyframe extraction methodology is employed to discern pivotal athletic postures in weightlifting athletes, predicated on the extraction of samples anchored in stringent spatial configurations and the scrutiny of image attributes, thereby facilitating expeditious and precise posture detection. Within the ambit of weightlifting training footage, salient locales and movement magnitudes are pinpointed, and the poselet detector is deployed with stability to dynamically seize detection points. These points are then aggregated for focused examination. The training regimen is analytically segmented into five distinct phases—knee extension, knee flexion, exertion, squatting, and standing (culminating in the highest point)—each meticulously observed for the exertion and velocity of movements, thereby enriching the dataset and culminating in the accurate identification of essential weightlifting postures.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The poselet keyframe extraction methodology is employed to discern pivotal athletic postures in weightlifting athletes, predicated on the extraction of samples anchored in stringent spatial configurations and the scrutiny of image attributes, thereby facilitating expeditious and precise posture detection. Within the ambit of weightlifting training footage, salient locales and movement magnitudes are pinpointed, and the poselet detector is deployed with stability to dynamically seize detection points. These points are then aggregated for focused examination. The training regimen is analytically segmented into five distinct phases—knee extension, knee flexion, exertion, squatting, and standing (culminating in the highest point)—each meticulously observed for the exertion and velocity of movements, thereby enriching the dataset and culminating in the accurate identification of essential weightlifting postures.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>
	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304714&amp;oldid=prev</id>
		<title>Rimni at 13:57, 26 June 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304714&amp;oldid=prev"/>
		<updated>2024-06-26T13:57:06Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:57, 26 June 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l127&quot; &gt;Line 127:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 127:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The poselet keyframe extraction methodology is employed to discern pivotal athletic postures in weightlifting athletes, predicated on the extraction of samples anchored in stringent spatial configurations and the scrutiny of image attributes, thereby facilitating expeditious and precise posture detection. Within the ambit of weightlifting training footage, salient locales and movement magnitudes are pinpointed, and the poselet detector is deployed with stability to dynamically seize detection points. These points are then aggregated for focused examination. The training regimen is analytically segmented into five distinct phases—knee extension, knee flexion, exertion, squatting, and standing (culminating in the highest point)—each meticulously observed for the exertion and velocity of movements, thereby enriching the dataset and culminating in the accurate identification of essential weightlifting postures.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The poselet keyframe extraction methodology is employed to discern pivotal athletic postures in weightlifting athletes, predicated on the extraction of samples anchored in stringent spatial configurations and the scrutiny of image attributes, thereby facilitating expeditious and precise posture detection. Within the ambit of weightlifting training footage, salient locales and movement magnitudes are pinpointed, and the poselet detector is deployed with stability to dynamically seize detection points. These points are then aggregated for focused examination. The training regimen is analytically segmented into five distinct phases—knee extension, knee flexion, exertion, squatting, and standing (culminating in the highest point)—each meticulously observed for the exertion and velocity of movements, thereby enriching the dataset and culminating in the accurate identification of essential weightlifting postures.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===3.2.3 Key &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Points &lt;/del&gt;in the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Application &lt;/del&gt;of the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Poselet Method&lt;/del&gt;===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===3.2.3 Key &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;points &lt;/ins&gt;in the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;application &lt;/ins&gt;of the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;poselet method&lt;/ins&gt;===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Anchored in statistical learning theory, the detection of key weightlifting postures commences with the SVM classifier’s inaugural training, aiming for structural risk minimization and global optimization. Upon the video frame’s input, the foreground is discerned, succeeded by a multi-scale scan. A tally of detection windows ensues, aggregating data to ascertain the optimal key frames, thus refining the SVM posture classifier’s training and mitigating frame similarity concerns [23]. The methodology initiates with the delineation of the histogram of oriented gradients within a rectangular frame, engendering image feature descriptors. The ensuing phase encompasses gradient computation, treating training set samples as discrete channel images. The computation of pixel point direction gradients’ magnitude involves independent assessment of each component, with the component’s maximal value designated as the gradient direction. Subsequent to spatial difference scrutiny, normalization, and feature vector acquisition, the moving images undergo recurrent gradient calculations until the image features are fully materialized.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Anchored in statistical learning theory, the detection of key weightlifting postures commences with the SVM classifier’s inaugural training, aiming for structural risk minimization and global optimization. Upon the video frame’s input, the foreground is discerned, succeeded by a multi-scale scan. A tally of detection windows ensues, aggregating data to ascertain the optimal key frames, thus refining the SVM posture classifier’s training and mitigating frame similarity concerns [23]. The methodology initiates with the delineation of the histogram of oriented gradients within a rectangular frame, engendering image feature descriptors. The ensuing phase encompasses gradient computation, treating training set samples as discrete channel images. The computation of pixel point direction gradients’ magnitude involves independent assessment of each component, with the component’s maximal value designated as the gradient direction. Subsequent to spatial difference scrutiny, normalization, and feature vector acquisition, the moving images undergo recurrent gradient calculations until the image features are fully materialized.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>
	</entry>
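	<!--
	Illustrative sketch (an editor's reading of the HOG step quoted above, not the paper's
	implementation): per-pixel gradient magnitude and orientation are computed, and for
	multi-channel images the component with the largest magnitude supplies the gradient,
	as the paragraph in the diff states. Central differences are an assumed choice.

	import numpy as np

	def hog_gradients(img):
	    img = img.astype(np.float32)
	    gx = np.zeros_like(img)
	    gy = np.zeros_like(img)
	    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]     # horizontal central difference
	    gy[1:-1, :] = img[2:, :] - img[:-2, :]     # vertical central difference
	    mag = np.hypot(gx, gy)                     # gradient magnitude
	    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
	    if img.ndim == 3:                          # keep the dominant channel per pixel
	        idx = mag.argmax(axis=2)
	        i, j = np.indices(idx.shape)
	        mag, ang = mag[i, j, idx], ang[i, j, idx]
	    return mag, ang
	-->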

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304713&amp;oldid=prev</id>
		<title>Rimni: /* 3.2.3 Key Points in the Application of the Poselet Method */</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304713&amp;oldid=prev"/>
		<updated>2024-06-26T13:55:41Z</updated>
		
		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;3.2.3 Key Points in the Application of the Poselet Method&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:55, 26 June 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l170&quot; &gt;Line 170:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 170:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|260&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|260&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Knee &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Bending&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Knee &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;bending&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|260&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|260&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|260&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|260&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l191&quot; &gt;Line 191:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 191:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; rowspan='6' style=&amp;quot;text-align: center;vertical-align: middle;&amp;quot;|Test set&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; rowspan='6' style=&amp;quot;text-align: center;vertical-align: middle;&amp;quot;|Test set&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Knee &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Extension&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Knee &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;extension&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Knee &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Bending&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;vertical-align: top;&amp;quot;|Knee &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;bending&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|&amp;#160; style=&amp;quot;text-align: center;vertical-align: top;&amp;quot;|140&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304712&amp;oldid=prev</id>
		<title>Rimni: /* References */</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Wang_Zhang_2024a&amp;diff=304712&amp;oldid=prev"/>
				<updated>2024-06-26T13:52:03Z</updated>
		
		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;References&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:52, 26 June 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l244&quot; &gt;Line 244:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 244:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[2] Zhang S. Key posture detection in sports videos based on deep learning. Beijing University of Technology, Thesis, 2017.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[2] Zhang S. Key posture detection in sports videos based on deep learning. Beijing University of Technology, Thesis, 2017.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] Fan Q, Rao Q., Huang H. Multitarget &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Flexible Grasping Detection Method &lt;/del&gt;for &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Robots &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Unstructured Environments&lt;/del&gt;. CMES-Computer Modeling in Engineering &amp;amp; Sciences&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, 2023&lt;/del&gt;, 137: 1825-1848.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] Fan Q&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;.&lt;/ins&gt;, Rao Q., Huang H. Multitarget &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;flexible grasping detection method &lt;/ins&gt;for &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;robots &lt;/ins&gt;in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;unstructured environments&lt;/ins&gt;. CMES-Computer Modeling in Engineering &amp;amp; Sciences, 137:1825-1848&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, 2023&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Hou B., Zhou L. Classroom posture detection based on YOLOv4. Modern Education Forum, 4(8):105-107, 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Hou B., Zhou L. Classroom posture detection based on YOLOv4. Modern Education Forum, 4(8):105-107, 2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l254&quot; &gt;Line 254:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 254:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[7] Tompson J., Jain A., LeCun Y., et al. Joint training of a convolutional network and a graphical model for human pose estimation. Advances in Neural Information Processing Systems, 1:1799-1807, 2014.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[7] Tompson J., Jain A., LeCun Y., et al. Joint training of a convolutional network and a graphical model for human pose estimation. Advances in Neural Information Processing Systems, 1:1799-1807, 2014.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[8] Pfister T., Charles J., Zisserman A. Flowing convnets for human pose estimation in videos. 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1913-1921&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, Santiago, CHILE&lt;/del&gt;, Dec 11-18 2015.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[8] Pfister T., Charles J., Zisserman A. Flowing convnets for human pose estimation in videos. 2015 IEEE International Conference on Computer Vision (ICCV)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, Santiago, Chile&lt;/ins&gt;, pp. 1913-1921, Dec 11-18&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, &lt;/ins&gt;2015.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[9] Newell A., Yang K., Deng J. Stacked hourglass networks for human pose estimation. Computer Vision - ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016. Proceedings, Part VIII 14, pp. 483-499, 2016.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[9] Newell A., Yang K., Deng J. Stacked hourglass networks for human pose estimation. Computer Vision - ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016. Proceedings, Part VIII 14, pp. 483-499, 2016.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l262&quot; &gt;Line 262:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 262:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[11] Ke Y. Development of AI-based detection of sports postures on embedded devices. Modern Information Technology, 5(22):92-94+97, 2021.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[11] Ke Y. Development of AI-based detection of sports postures on embedded devices. Modern Information Technology, 5(22):92-94+97, 2021.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[12] Liang P. Research on 3D &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Human Posture Estimation Based &lt;/del&gt;on &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Deep Learning [thesis]&lt;/del&gt;. University of Electronic Science and Technology of China, 2020.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[12] Liang P. Research on 3D &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;human posture estimation based &lt;/ins&gt;on &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;deep learning&lt;/ins&gt;. University of Electronic Science and Technology of China&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;, Thesis&lt;/ins&gt;, 2020.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[13] Ma Z. 3D human posture estimation and action recognition based on deep learning. Anhui University, Thesis, 2023.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[13] Ma Z. 3D human posture estimation and action recognition based on deep learning. Anhui University, Thesis, 2023.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Rimni</name></author>	</entry>

	</feed>