The manifold vector machine (abbreviated MVM) is a machine learning algorithm constructed on the principle of quantum superposition. By analogy with relativity, the spacing between data points from a single data source remains relative, which in turn makes it possible to define a quantum pointer; this can be used to artificially construct the spacetime conformation of a superposed manifold while guaranteeing the mapping relation of the manifold transformation. This fundamentally distinguishes MVM from nonlinear dimensionality reduction and the support vector machine, whose mechanisms rely on changing dimensionality (DIB-N: raise by N dimensions; DDB-N: reduce by N dimensions), whereas MVM leaves the dimensionality unchanged; see [1]. Although its philosophical understanding of the data source is similar in places, MVM is also fundamentally different from Geoffrey Hinton's capsule neural network concept, which is ultimately grounded in dimensionality.
The manifold vector machine, or MVM, rests on two basic assumptions: (1) the spacing between data points of the same source remains relative; and (2) the discrete system admits continuity, so that the full set of inter-point vectors can be traversed repeatedly in a single direction, forming an iterator. Defining one traversal of the dataset as one oscillation $\dot{O}$, we then obtain:
A complete ETE vector set:
$$\vec{A}_0, \vec{A}_1, \vec{A}_2, \ldots, \vec{A}_n, \ldots, \vec{A}_\infty :\equiv \underset{n=0}{\overset{\infty}{\circleddash}} \vec{A}_n \qquad \left(\lVert\vec{A}_n\rVert \neq 0,\ \forall n \in [0, *\infty]\right)$$
such that:
$$\mathrm{Iteration}(d\delta) = \mathrm{Floor}\left(\frac{N \times \dot{o}}{\lVert \mathrm{concat}(\underset{n=0}{\overset{\infty}{\circleddash}} \vec{A}_n)\rVert}\right).$$
$N$: the number of oscillations in one traversal
$\mathrm{Floor}$: the algebraic floor function
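A minimal sketch of how the ETE vector set, the one-directional iterator, and the iteration count might be realized in code. The names `ete_vectors` and `iteration_count`, and the scalar `osc` standing in for $\dot{o}$, are illustrative assumptions, not from the source:

```python
import numpy as np

def ete_vectors(points):
    """Build the end-to-end (ETE) vector set: one vector A_n between each
    pair of consecutive data points, traversed in a single direction."""
    points = np.asarray(points, dtype=float)
    vectors = points[1:] - points[:-1]  # A_0, A_1, ..., A_n
    # Assumption (2) above requires every inter-point vector to be non-degenerate.
    if np.any(np.linalg.norm(vectors, axis=1) == 0):
        raise ValueError("degenerate vector: ||A_n|| must be nonzero")
    return vectors

def iteration_count(vectors, n_oscillations, osc):
    """Iteration(d delta) = Floor(N * o_dot / ||concat(A_0..A_n)||),
    reading concat as flattening the whole vector set into one long vector
    (an interpretive assumption)."""
    concat_norm = np.linalg.norm(np.concatenate(vectors))
    return int(np.floor(n_oscillations * osc / concat_norm))

# Toy usage on four points of a unit square.
pts = [[0, 0], [1, 0], [1, 1], [0, 1]]
print(iteration_count(ete_vectors(pts), n_oscillations=10, osc=1.0))
```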
Then,
$$\frac{dM_T}{d\delta} = \frac{\lVert \vec{\lambda} \cdot M_T \rVert}{d_H},$$
where the Hausdorff distance can be defined as:
$$\exists\, d_{H_{total}} := \max\left\{\min\left\{ d(\vec{A}_0, \vec{A}_\infty),\ d(\vec{A}_\infty, \vec{A}_0) \right\}\right\},$$
$$\exists\, d_{H_{local}} := \max\left\{\min\left\{ d(\vec{A}_{\infty-1}, \vec{A}_\infty),\ d(\vec{A}_\infty, \vec{A}_{\infty-1}) \right\}\right\} = \lVert\vec{A}_n\rVert$$
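A sketch of these two quantities, taking the directed distance $d$ to be the Euclidean distance between the two vectors, an assumption the draft does not spell out:

```python
import numpy as np

def d_H_total(vectors):
    """max{min{d(A_0, A_inf), d(A_inf, A_0)}}: with a symmetric Euclidean d,
    the max/min collapse and this is simply the distance between the first
    and last vectors of the set."""
    vectors = np.asarray(vectors, dtype=float)
    return np.linalg.norm(vectors[0] - vectors[-1])

def d_H_local(vectors):
    """max{min{d(A_{inf-1}, A_inf), d(A_inf, A_{inf-1})}}; the draft equates
    this with ||A_n|| for consecutively chained vectors."""
    vectors = np.asarray(vectors, dtype=float)
    return np.linalg.norm(vectors[-1] - vectors[-2])
```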
Special case:
$$\exists\, d_{H_{total}} := \lVert \vec{A}_0 - \vec{A}_\infty \rVert, \quad \text{iff} \quad \underset{n=0}{\overset{\infty}{\circleddash}} \vec{A}_n = \underset{n=0}{\overset{\infty}{\circledcirc}} \vec{A}_n.$$
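This special case can be tested numerically. A sketch, treating the one-directional traversal as equal to the closed traversal exactly when the point sequence returns to its start (an interpretive assumption):

```python
import numpy as np

def is_closed_chain(points, tol=1e-9):
    """True when the path returns to its starting point, i.e. when the
    open traversal and the closed traversal coincide."""
    points = np.asarray(points, dtype=float)
    return np.linalg.norm(points[0] - points[-1]) < tol
```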
The most effective manifold vector architecture varies from one dataset to another.
Examples:
Singularity: $d_{H_{total}} := \lVert\vec{0}\rVert$ (see gravitational singularity)
Line segment: $d_{H_{total}} := \sum_{n=0}^{\infty} \lVert\vec{A}_n\rVert$ (see array)
Closed oriented/non-oriented manifold: $d_{H_{total}} := \lVert \vec{A}_0 - \vec{A}_\infty \rVert$ (see manifold, and Grover's algorithm), since $\underset{n=0}{\overset{\infty}{\circleddash}} \vec{A}_n \equiv \underset{n=0}{\overset{\infty}{\circledcirc}} \vec{A}_n$, and locally $\mid\varphi_i\rangle = \sum \lVert\vec{A}_k\rVert \times \mid\circledast_k\rangle$
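A sketch of the local state $\mid\varphi_i\rangle$ as an amplitude vector weighted by the segment norms $\lVert\vec{A}_k\rVert$. Representing the kets $\mid\circledast_k\rangle$ by the standard basis, and normalizing so the amplitudes square-sum to 1, are both added assumptions:

```python
import numpy as np

def local_state(vectors):
    """Amplitudes of |phi_i> = sum_k ||A_k|| |k>, modeling the kets
    |circledast_k> as the standard basis of an n-dimensional state space."""
    amplitudes = np.linalg.norm(np.asarray(vectors, dtype=float), axis=1)
    return amplitudes / np.linalg.norm(amplitudes)  # normalization: an assumption
```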
MVM can be used on its own or embedded into larger machine learning strategies such as regression analysis, support vector machines, and deep learning.
For example:
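The following minimal sketch embeds MVM-style summary features into a scikit-learn support vector classifier. The feature extraction in `mvm_features`, built from $d_{H_{total}}$ and $d_{H_{local}}$ per sample path, is an illustrative assumption, not a published MVM interface:

```python
import numpy as np
from sklearn.svm import SVC

def mvm_features(path):
    """Hypothetical per-path features: total and local Hausdorff-style
    distances over the path's ETE vector set."""
    vs = np.diff(np.asarray(path, dtype=float), axis=0)  # A_0, ..., A_n
    return [np.linalg.norm(vs[0] - vs[-1]),  # d_H_total (closed-form special case)
            np.linalg.norm(vs[-1])]          # d_H_local = ||A_n||

# Toy usage: classify random paths by their MVM-style summary features.
paths = [np.random.rand(10, 2) for _ in range(20)]
labels = np.array([0, 1] * 10)
X = np.array([mvm_features(p) for p in paths])
clf = SVC().fit(X, labels)
```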
Category:Artificial intelligence
Category:Machine learning
Category:Quantum computing